QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,493,478 | 9,974,205 | How can I identify changes in stock in a pandas dataframe | <p>I am working with a pandas data frame. This data frame has 3 important columns: <code>AmountOfStock</code>, which indicates the number of available units; <code>ProductType</code>, which is the code of the specified product; and <code>DateTime</code>, which indicates the date and time at which the data was sent to the database.
The database registers the amount of stock of each product every 10 seconds, so some rows would be</p>
<pre><code>1-2023-11-16 10:00:00, ProductA, 30
2-2023-11-16 10:00:00, ProductB, 15
3-2023-11-16 10:00:10, ProductA, 29
4-2023-11-16 10:00:10, ProductB, 15
5-2023-11-16 10:00:20, ProductA, 29
6-2023-11-16 10:00:20, ProductB, 14
</code></pre>
<p>I want to get only the rows in which the quantity of a product changes, plus the initial values. Thus, in the example above, I would be interested in removing the 4th and 5th rows.
Can someone please tell me how to do this?</p>
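One possible approach, sketched below: compare each reading with the previous reading of the same product via a grouped `diff()`; the first reading of each product survives because `diff()` yields `NaN` there, and `NaN != 0` is `True`.

```python
import pandas as pd

# Reproduce the sample rows from the question
df = pd.DataFrame({
    "DateTime": pd.to_datetime([
        "2023-11-16 10:00:00", "2023-11-16 10:00:00",
        "2023-11-16 10:00:10", "2023-11-16 10:00:10",
        "2023-11-16 10:00:20", "2023-11-16 10:00:20",
    ]),
    "ProductType": ["ProductA", "ProductB"] * 3,
    "AmountOfStock": [30, 15, 29, 15, 29, 14],
})

# Keep rows where the stock differs from the previous reading of the
# same product; NaN (first reading per product) compares unequal to 0.
changed = df.groupby("ProductType")["AmountOfStock"].diff().ne(0)
result = df[changed]
print(result)
```

With the sample data this drops exactly the 4th and 5th rows (indices 3 and 4).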
| <python><pandas><database><row><delete-row> | 2023-11-16 09:06:33 | 2 | 503 | slow_learner |
77,493,349 | 17,438,511 | How to launch `python -m MY_MODULE` in debug mode in PyCharm | <p>I have a script that can only be run as <code>python -m path.to.my_script</code>, due to the use of relative imports. Running it as <code>python path/to/my_script.py</code> gives an "Attempted relative import with no known parent package" error.</p>
<p>How can I launch such script in PyCharm debug mode?</p>
| <python><pycharm> | 2023-11-16 08:45:45 | 1 | 647 | Boschie |
77,493,160 | 13,836,083 | Why can't we use a lone starred target in an assignment in Python | <p>I was going through the Python docs on simple assignment.</p>
<p>I found below from the docs.</p>
<blockquote>
<p>Assignment of an object to a target list, optionally enclosed in
parentheses or square brackets, is recursively defined as follows.</p>
<p>If the target list is a single target with no trailing comma,
optionally in parentheses, the object is assigned to that target.</p>
<p>Else:</p>
<p>If the target list contains one target prefixed with an asterisk,
called a “starred” target: The object must be an iterable with at
least as many items as there are targets in the target list, minus
one. The first items of the iterable are assigned, from left to right,
to the targets before the starred target. The final items of the
iterable are assigned to the targets after the starred target. A list
of the remaining items in the iterable is then assigned to the starred
target (the list can be empty).</p>
<p>Else: The object must be an iterable with the same number of items as
there are targets in the target list, and the items are assigned, from
left to right, to the corresponding targets.</p>
</blockquote>
<p>What I understood is that if the target list contains a starred target, then the object on the RHS must be iterable. During assignment, Python first unpacks the object and assigns the items as per the rule above, and then the rest of the values are assigned to the starred target.</p>
<p>Now, considering the above, I kept only a starred target and expected the values on the RHS (which is a tuple) to be assigned to the starred target. However, Python gives a syntax error for this line. I am still trying to understand what I am missing, as it is nowhere mentioned that I can't use a starred target alone.</p>
<pre><code>*a=1,2,3
</code></pre>
<p>But the following works. Please explain why a lone starred target is accepted here:</p>
<pre><code>[*a] = 1,2,3
</code></pre>
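A minimal demonstration of the behaviour described above: a bare starred name is rejected as a standalone assignment target, but becomes valid as soon as it sits inside a target *list* (a trailing comma, brackets, or parentheses make it one).

```python
# A bare starred target is a SyntaxError; wrap the failing line in
# compile() so we can observe the error without crashing this script.
err = None
try:
    compile("*a = 1, 2, 3", "<example>", "exec")
except SyntaxError as e:
    err = e
print("bare starred target:", err.msg)

*a, = 1, 2, 3   # trailing comma makes (*a,) a target list -> works
[b] = [10]      # bracketed single-target list also works
print(a, b)
```

So `[*a] = 1, 2, 3` works because the brackets turn the starred name into a member of a target list, which is the case the docs describe.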
| <python> | 2023-11-16 08:13:47 | 2 | 540 | novice |
77,493,052 | 435,093 | Testing ansible "to_nice_json" filter within Python - "No filter named 'to_nice_json'" | <p>I want to test an Ansible Jinja2 template using a Python script, based on the answer provided here: <a href="https://stackoverflow.com/questions/35407822/how-can-i-test-jinja2-templates-in-ansible">How can I test jinja2 templates in ansible?</a></p>
<p>This code <em>used</em> to work, however I don't remember in which environment. Now, when I run it, I get an error about the filter not being found:</p>
<pre><code>#!/usr/bin/env bash
# python3 -m pip install ansible
python3 <<EOF
import ansible
import jinja2
print(ansible.__version__)
print(jinja2.__version__)
output = jinja2.Template("Hello {{ var | to_nice_json }}!").render(var=f"{{ 'a': 1, 'b': 2 }}")
print(output)
EOF
</code></pre>
<p>This returns:</p>
<pre><code>2.15.6
3.1.2
Traceback (most recent call last):
File "<stdin>", line 7, in <module>
File "/Users/werner/.pyenv/versions/3.11.6/lib/python3.11/site-packages/jinja2/environment.py", line 1208, in __new__
return env.from_string(source, template_class=cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/werner/.pyenv/versions/3.11.6/lib/python3.11/site-packages/jinja2/environment.py", line 1105, in from_string
return cls.from_code(self, self.compile(source), gs, None)
^^^^^^^^^^^^^^^^^^^^
File "/Users/werner/.pyenv/versions/3.11.6/lib/python3.11/site-packages/jinja2/environment.py", line 768, in compile
self.handle_exception(source=source_hint)
File "/Users/werner/.pyenv/versions/3.11.6/lib/python3.11/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "<unknown>", line 1, in template
jinja2.exceptions.TemplateAssertionError: No filter named 'to_nice_json'.
</code></pre>
<p>The <code>jinja2</code> pip dependency is pulled in via <code>ansible</code>.</p>
<p>How would I make my Python code find the right filter, or get jinja2 to load the filter?</p>
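One way to see what's needed, as a sketch: plain `jinja2.Template` uses a default environment that knows nothing about Ansible, so the filter must be registered on an `Environment` explicitly. The `to_nice_json` below is a stand-in mimicking Ansible's filter (the real ones can be pulled from `ansible.plugins.filter.core.FilterModule().filters()` in recent ansible-core versions, though plugin paths may differ by version). Note also that the original script passes `var` as an f-string, so the template would receive a plain string rather than a dict.

```python
import json
import jinja2

def to_nice_json(value, indent=4):
    # Stand-in for Ansible's filter; register the real FilterModule
    # filters instead when ansible-core is available.
    return json.dumps(value, indent=indent, sort_keys=True)

env = jinja2.Environment()
env.filters["to_nice_json"] = to_nice_json

out = env.from_string("Hello {{ var | to_nice_json }}!").render(var={"a": 1, "b": 2})
print(out)
```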
| <python><ansible><jinja2> | 2023-11-16 07:52:55 | 1 | 39,571 | slhck |
77,492,920 | 16,405,935 | Count unique value for each group and subtotal | <p>I have a simple dataframe as below:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'BR_NM': ['HN', 'HN', 'HP'],
'CUS_ID': ['12345', '12345', '12345'],
'ACC_ID': ['12345_1', '12345_2', '12345_3'],
'REGION': ['North', 'North', 'North'],
'CUS_TYPE': ['Individual', 'Individual', 'Individual']})
df
BR_NM CUS_ID ACC_ID REGION CUS_TYPE
HN 12345 12345_1 North Individual
HN 12345 12345_2 North Individual
HP 12345 12345_3 North Individual
</code></pre>
<p>I want to count unique <code>CUS_ID</code> values per <code>BR_NM</code> and then sum those counts per <code>REGION</code>. In my case it's just one customer with three accounts, but because the accounts are spread over two branches I want to count it as two customers. Below is my desired output:</p>
<pre><code>REGION CUS_TYPE North
0 Individual 2
</code></pre>
<p>If I use <code>pivot_table</code> with <code>aggfunc = pd.Series.nunique</code>, it just counts it as 1.</p>
<pre><code>df2 = pd.pivot_table(df, values='CUS_ID', columns='REGION', index='CUS_TYPE', aggfunc=pd.Series.nunique).reset_index()
</code></pre>
<p>Thank you.</p>
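One possible two-stage aggregation, sketched below: count unique customers within each branch first, then sum the branch-level counts per region, so the same customer seen in two branches contributes twice.

```python
import pandas as pd

df = pd.DataFrame({'BR_NM': ['HN', 'HN', 'HP'],
                   'CUS_ID': ['12345', '12345', '12345'],
                   'ACC_ID': ['12345_1', '12345_2', '12345_3'],
                   'REGION': ['North', 'North', 'North'],
                   'CUS_TYPE': ['Individual', 'Individual', 'Individual']})

# Stage 1: unique customers per (region, type, branch)
per_branch = df.groupby(['REGION', 'CUS_TYPE', 'BR_NM'])['CUS_ID'].nunique()

# Stage 2: sum branch counts per (region, type), then pivot REGION to columns
out = (per_branch.groupby(['REGION', 'CUS_TYPE']).sum()
                 .unstack('REGION')
                 .reset_index())
print(out)
```

With the sample data this yields `North == 2`, since the customer appears in both the HN and HP branches.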
| <python><pandas><pivot> | 2023-11-16 07:25:27 | 2 | 1,793 | hoa tran |
77,492,750 | 10,200,497 | Change the background color of every other group in Excel using Pandas | <p>This is my dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
{
'p': ['short', np.nan, 'short', np.nan, np.nan, 'long', 'long', np.nan, np.nan],
's': [13, 13, 14, 15, 100, 1, 1000, 12, 1111]
}
)
</code></pre>
<p>I want to change the background color of every other group (even groups) to a different color in Excel.</p>
<p>This is what I want in Excel:</p>
<p><a href="https://i.sstatic.net/S7KY3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S7KY3.png" alt="enter image description here" /></a></p>
<p>The groups are defined based on <code>p</code>. That is a value in <code>p</code> and all of the <code>NaN</code> values below it, is one group. It is clear in the image above.</p>
<p>This is what I have tried but did not work:</p>
<pre><code>from matplotlib import colors
def colr(x):
y = x.assign(k=x['p'].ne(x['p'].shift()).cumsum())
d = dict(enumerate(colors.cnames))
y[:] = np.broadcast_to(y['k'].map(d).radd('background-color:').to_numpy()[:,None]
,y.shape)
return y.drop("k",1)
df = df.style.apply(colr,axis=None)
df.to_excel('file.xlsx', index=False, engine='openpyxl')
</code></pre>
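A sketch of one possible fix: since each non-NaN value in `p` starts a new group (including two consecutive non-NaN values), `notna().cumsum()` gives the group id directly, and its parity picks one of two colors. The hex colors below are arbitrary placeholder choices.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'p': ['short', np.nan, 'short', np.nan, np.nan, 'long', 'long', np.nan, np.nan],
    's': [13, 13, 14, 15, 100, 1, 1000, 12, 1111],
})

def colr(x):
    # each non-NaN p starts a new group; NaNs belong to the group above
    grp = x['p'].notna().cumsum()
    css = grp.mod(2).map({1: 'background-color: #FFF2CC',   # odd groups
                          0: 'background-color: #D9EAD3'})  # even groups
    # broadcast the per-row style across every column
    return pd.DataFrame(np.broadcast_to(css.to_numpy()[:, None], x.shape),
                        index=x.index, columns=x.columns)

print(colr(df).iloc[:, 0].tolist())
# then, to write the styled frame (requires jinja2 + openpyxl):
# df.style.apply(colr, axis=None).to_excel('file.xlsx', index=False, engine='openpyxl')
```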
| <python><pandas> | 2023-11-16 06:43:47 | 1 | 2,679 | AmirX |
77,492,156 | 13,994,829 | How to calculate the "Engagement Rate" in GA4 connected to BigQuery? | <h2>Description</h2>
<ul>
<li>I have connected the <a href="https://support.google.com/analytics/answer/9823238?hl=en#zippy=%2Cin-this-article" rel="nofollow noreferrer">GA4 to BigQuery</a>.</li>
<li>The GA4 data in BigQuery includes: <code>ga_session_id</code>, <code>page_location</code>, <code>user_pseudo_id</code>, <code>session_engaged</code>, ...</li>
<li>I want to use python client api to query the data from bigquery.</li>
<li>Then use the data from BigQuery to calculate the <code>engagement rate</code> for each page, as in the GA4 report.</li>
</ul>
<h2>Try</h2>
<p>According to the <a href="https://measureschool.com/ga4-metrics/" rel="nofollow noreferrer">definition of <code>engagement rate</code></a>:</p>
<p><code>engagement_rate = engaged_session_nums / session_nums * 100%</code></p>
<p>I have created a dataframe which includes <code>ga_session_id</code>, <code>page_location</code>, <code>user_pseudo_id</code>, and <code>session_engaged</code>; <code>session_engaged</code> is a boolean-like value.</p>
<p>I use the following code to calculate <code>engagement rate</code>:</p>
<p>(<strong>not sure how <code>engaged_session_nums</code> should be calculated</strong>)</p>
<pre class="lang-py prettyprint-override"><code># the session = ga_session_id + user_pseudo_id
df['pseudo_with_session'] = df['ga_session_id'].astype(str) + df['user_pseudo_id']
# not sure engaged_session_nums how to calculate
df['engaged_session_nums'] = df.groupby(['page_location'])['session_engaged'].transform(lambda x: (x == '1').sum())
# calculate the session nums for each page
df['total_session_nums'] = df.groupby('page_location')['pseudo_with_session'].transform('nunique')
df['engaged_rate'] = df['engaged_session_nums'] / df['total_session_nums'] * 100
</code></pre>
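A possible culprit, sketched with toy data: the code above counts engaged *rows*, while the denominator counts unique *sessions*, so pages whose sessions emit many engaged rows can exceed 100%. Counting unique engaged sessions instead keeps the ratio bounded (the sample values here are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "ga_session_id": ["1", "1", "2", "3"],
    "user_pseudo_id": ["u1", "u1", "u1", "u2"],
    "page_location": ["/", "/", "/", "/"],
    "session_engaged": ["1", "0", "0", "1"],
})
df["session"] = df["ga_session_id"] + "_" + df["user_pseudo_id"]

# a session counts as engaged if *any* of its rows has session_engaged == '1'
engaged = (df[df["session_engaged"] == "1"]
           .groupby("page_location")["session"].nunique())
total = df.groupby("page_location")["session"].nunique()
rate = engaged.reindex(total.index, fill_value=0) / total * 100
print(rate)
```

Here session `1_u1` has both an engaged and a non-engaged row but counts once, giving 2 engaged of 3 total sessions.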
<h2>Results</h2>
<p>The result exceeds 100% and is very different from the GA4 report.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>page_location</th>
<th>engagement_rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>/</td>
<td>26.2%</td>
</tr>
<tr>
<td>/en/</td>
<td>117%</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div><h2>Expected (GA4 report)</h2>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>page_location</th>
<th>engagement_rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>/</td>
<td>75.22%</td>
</tr>
<tr>
<td>/en/</td>
<td>73.34%</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div> | <python><dataframe><google-bigquery><google-analytics><google-analytics-4> | 2023-11-16 03:37:43 | 1 | 545 | Xiang |
77,492,144 | 1,942,868 | conditional constraints for django model | <p>I have database model like this</p>
<pre><code>ActionType = (
(1,"PostBack"),
(2,"Uri"),
)
class RichMenu(BaseModel):
key = m.CharField(max_length=64,unique=True)
action_type = m.IntegerField(choices=ActionType,default=1)
url = m.CharField(max_length=255,blank=True,null=True)
name = m.CharField(max_length=255,blank=True,null=True)
</code></pre>
<p>Now I would like to add constraints like this:</p>
<ul>
<li><p>When <code>action_type</code> is 1, <code>url</code> should be null and <code>name</code> should not be null</p>
</li>
<li><p>When <code>action_type</code> is 2, <code>name</code> should be null and <code>url</code> should not be null</p>
</li>
</ul>
<p>Is it possible to make conditional constraints for this case?</p>
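One way this is commonly done, as a sketch (not verified against the question's Django version; the constraint name below is an arbitrary choice): a <code>CheckConstraint</code> in <code>Meta.constraints</code> combining the two cases with <code>Q</code> objects.

```python
from django.db import models as m
from django.db.models import Q

class RichMenu(BaseModel):
    # ... fields as above ...
    class Meta:
        constraints = [
            m.CheckConstraint(
                check=(Q(action_type=1, url__isnull=True, name__isnull=False)
                       | Q(action_type=2, name__isnull=True, url__isnull=False)),
                name="richmenu_action_type_fields",
            ),
        ]
```

This is enforced at the database level on insert/update; form- or serializer-level validation would still be needed for friendly error messages.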
| <python><django><model> | 2023-11-16 03:34:45 | 2 | 12,599 | whitebear |
77,492,026 | 10,018,602 | Query GPT4All local model with Langchain and many .txt files - KeyError: 'input_variables' | <p><code>python 3.8, Windows 10, neo4j==5.14.1, langchain==0.0.336</code></p>
<p>I'm attempting to utilize a local Langchain model (GPT4All) to assist me in converting a corpus of loaded <code>.txt</code> files into a <code>neo4j</code> data structure through querying. I have provided a minimal reproducible example code below, along with the references to the article/repo that I'm attempting to emulate. I have also provided a "context" that should be included in the query, along with all the <code>Document</code> objects. I'm still learning how to use <code>Langchain</code> so I really don't know what I'm doing yet, but the current traceback I'm getting looks like this:</p>
<pre><code>Traceback (most recent call last):
File ".\neo4jmain.py", line xx, in <module>
prompt_template = PromptTemplate(
File "C:\Users\chalu\AppData\Local\Programs\Python\Python38\lib\site-packages\langchain\load\serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic\main.py", line 1102, in pydantic.main.validate_model
File "C:\Users\chalu\AppData\Local\Programs\Python\Python38\lib\site-packages\langchain\schema\prompt_template.py", line 76, in validate_variable_names
if "stop" in values["input_variables"]:
KeyError: 'input_variables'
</code></pre>
<p>As you'll see, I'm literally not defining <code>input_variables</code> anywhere, so I assume this is a default behaviour of Langchain, but again, not sure. I was also getting an error:</p>
<pre><code>LLaMA ERROR: The prompt is 5161 tokens and the context window is 2048!
ERROR: The prompt size exceeds the context window size and cannot be processed.
</code></pre>
<p>...which is clearly a result of the query string itself being too big. I want to be able to query my documents for the answer while providing the model with the Documents to reference. How can I do this? The Langchain documentation is not great for newbies in this space; it's all over the place and lacks many simple use cases, so I'm asking it here.</p>
<pre><code># https://medium.com/neo4j/enhanced-qa-integrating-unstructured-and-graph-knowledge-using-neo4j-and-langchain-6abf6fc24c27
# https://github.com/sauravjoshi23/ai/blob/main/retrieval%20augmented%20generation/integrated-qa-neo4j-langchain.ipynb
# Script to convert a corpus of many text files into a neo4j graph
# Imports
import os
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.llms.gpt4all import GPT4All
from langchain.prompts import PromptTemplate
from transformers import AutoTokenizer
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def bert_len(text):
"""Return the length of a text in BERT tokens."""
tokens = tokenizer.encode(text)
return len(tokens)
def get_files(path: str) -> list:
"""Return a list of all files in a directory, recursively."""
files = []
for file in os.listdir(path):
file_path = os.path.join(path, file)
if os.path.isdir(file_path):
files.extend(get_files(file_path))
else:
files.append(file_path)
return files
# Get the text files
all_txt_files = get_files('data')
raw_txt_files = []
for current_file in all_txt_files:
raw_txt_files.extend(TextLoader(current_file, encoding='utf-8').load())
# Create a text splitter object that will help us split the text into chunks
text_splitter = RecursiveCharacterTextSplitter(
chunk_size = 1024, # 200,
chunk_overlap = 128, # 20
length_function = bert_len,
separators=['\n\n', '\n', ' ', ''],
)
# Split the text into "documents"
documents = text_splitter.create_documents([raw_txt_files[0].page_content])
# Utilizing these Document objects, we want to query the GPT4All model to help us create
# a JSON object that contains the ontology of terms mentioned in the given context,
# while mitigating "max_tokens" error.
# Create a PromptTemplate object that will help us create the prompt for GPT4All(?)
prompt_template = PromptTemplate(
template = """
You are a network graph maker who extracts terms and their relations from a given context.
You are provided with a context chunk (delimited by ```). Your task is to extract the ontology
of terms mentioned in the given context. These terms should represent the key concepts as per the context.
Thought 1: While traversing through each sentence, Think about the key terms mentioned in it.
Terms may include object, entity, location, organization, person,
condition, acronym, documents, service, concept, etc.
Terms should be as atomistic as possible
Thought 2: Think about how these terms can have one on one relation with other terms.
Terms that are mentioned in the same sentence or the same paragraph are typically related to each other.
Terms can be related to many other terms
Thought 3: Find out the relation between each such related pair of terms.
Format your output as a list of json. Each element of the list contains
a pair of terms and the relation between them, like the following:
[Dict("node_1": "A concept from extracted ontology",
"node_2": "A related concept from extracted ontology",
"edge": "relationship between the two concepts, node_1 and node_2 in one or two sentences",
),
Dict("node_1": "A concept from extracted ontology",
"node_2": "A related concept from extracted ontology",
"edge": "relationship between the two concepts, node_1 and node_2 in one or two sentences",
),
Dict(...)]
Context Documents: {documents}
""",
variables = {
"documents": documents,
}
)
# Create a GPT4All object that will help us query the GPT4All model
llm = GPT4All(
model=r"C:\Users\chalu\AppData\Local\nomic.ai\GPT4All\gpt4all-falcon-q4_0.gguf",
n_threads=3,
max_tokens=5162, # <-- attempt to mitigate "max_tokens" error
verbose=True,
)
# Get the response from GPT-4-All
response = llm(prompt_template)
print(response)
</code></pre>
| <python><neo4j><langchain> | 2023-11-16 02:51:54 | 2 | 1,335 | wildcat89 |
77,491,948 | 9,500,955 | Pass a whole dataset contains multiple files to HuggingFace function in Palantir | <p>I am using the pre-trained model from HuggingFace (<a href="https://huggingface.co/dslim/bert-base-NER/tree/main" rel="nofollow noreferrer">dslim/bert-base-NER</a>). Normally when working locally or in a Colab notebook, we can use the code below to load the model:</p>
<pre class="lang-py prettyprint-override"><code>tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
</code></pre>
<p>This code connects to the HuggingFace server to load the model directly. But on Palantir, we cannot do much unless we configure the security.</p>
<p>The workaround approach here is to download all the files and upload them to a dataset. We can pass a folder that contains these files to the HuggingFace function (check this <a href="https://stackoverflow.com/a/64007213/9500955">answer</a>) like below:</p>
<pre class="lang-py prettyprint-override"><code>tokenizer = AutoTokenizer.from_pretrained('./local_model_directory/')
model = AutoModelForTokenClassification.from_pretrained('./local_model_directory/')
</code></pre>
<p>For a model that has only a <code>pickle</code> file, we can easily read that file via a dataset called <code>classifier</code>:</p>
<pre class="lang-py prettyprint-override"><code>@transform(
file_input=Input("/Users/model/classifier")
)
def load_model(file_input):
with file_input.filesystem().open("classifier.pkl", "rb") as f:
model = pickle.load(f)
return model
</code></pre>
<p>My question is: How can we open a dataset and pass a whole directory to this function on the Palantir code repository?</p>
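One workaround, sketched from memory (the Foundry filesystem method names and the dataset path below are assumptions and should be checked against the platform docs): copy every file in the dataset to a local temporary directory, then point <code>from_pretrained</code> at that directory.

```python
import os
import shutil
import tempfile

@transform(
    model_files=Input("/Users/model/bert-base-NER")  # hypothetical dataset path
)
def load_model(model_files):
    local_dir = tempfile.mkdtemp()
    fs = model_files.filesystem()
    for status in fs.ls():  # assumed to yield entries with a .path attribute
        target = os.path.join(local_dir, os.path.basename(status.path))
        with fs.open(status.path, "rb") as src, open(target, "wb") as dst:
            shutil.copyfileobj(src, dst)
    tokenizer = AutoTokenizer.from_pretrained(local_dir)
    model = AutoModelForTokenClassification.from_pretrained(local_dir)
```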
| <python><palantir-foundry><foundry-code-repositories> | 2023-11-16 02:25:18 | 1 | 1,974 | huy |
77,491,941 | 2,850,913 | Llama 2 with Langchain tools | <p>I am trying to follow <a href="https://www.pinecone.io/learn/llama-2/" rel="nofollow noreferrer">this tutorial</a> on using Llama 2 with Langchain tools (you don't have to look at the tutorial all code is contained in this question).</p>
<p>My code is very similar to that in the tutorial, except I am using a local model rather than connecting to Hugging Face, and I am not using bitsandbytes for quantisation since it requires CUDA and I am on macOS.</p>
<p>I am using the unquantised Meta Llama 2 13b chat model <a href="https://huggingface.co/meta-llama/Llama-2-13b-chat-hf" rel="nofollow noreferrer">meta-llama/Llama-2-13b-chat-hf</a>.</p>
<p>The model appears to be outputting JSON correctly but for some reason I am getting "Could not parse LLM output".</p>
<p>Here is the code (I added \ before some triple backticks due to Stack Overflow code formatting). Note I added the following to the prompt: "When Assistant responds with JSON they make sure to enclose the JSON with three back ticks." This resolves an issue where the model was outputting JSON that was not in the correct format (surrounded with backticks and a 'json' tag).</p>
<pre class="lang-py prettyprint-override"><code>import transformers
model_id = './Models/Llama_2/llama_2_13b'
# initialize the model
model = transformers.AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
generate_text = transformers.pipeline(
model=model, tokenizer=tokenizer,
return_full_text=True, # langchain expects the full text
task='text-generation',
# we pass model parameters here too
temperature=0.01, # 'randomness' of outputs, 0.0 is the min and 1.0 the max
max_new_tokens=512, # mex number of tokens to generate in the output
repetition_penalty=1.1 # without this output begins repeating
)
from langchain.llms import HuggingFacePipeline
llm = HuggingFacePipeline(pipeline=generate_text)
from langchain.memory import ConversationBufferWindowMemory
from langchain.agents import load_tools
memory = ConversationBufferWindowMemory(
memory_key="chat_history", k=5, return_messages=True, output_key="output"
)
tools = load_tools(["llm-math"], llm=llm)
from langchain.agents import initialize_agent
# initialize agent
agent = initialize_agent(
agent="chat-conversational-react-description",
tools=tools,
llm=llm,
verbose=True,
early_stopping_method="generate",
memory=memory,
handle_parsing_errors=True
)
# special tokens used by llama 2 chat
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
# create the system message
sys_msg = "<s>" + B_SYS + """Assistant is a expert JSON builder designed to assist with a wide range of tasks.
Assistant is able to respond to the User and use tools using JSON strings that contain "action" and "action_input" parameters.
All of Assistant's communication is performed using this JSON format.
Assistant can also use tools by responding to the user with tool use instructions in the same "action" and "action_input" JSON format. Tools available to Assistant are:
- "Calculator": Useful for when you need to answer questions about math.
- To use the calculator tool, Assistant should write like so:
```json
{{"action": "Calculator",
"action_input": "sqrt(4)"}}
```
When Assistant responds with JSON they make sure to enclose the JSON with three back ticks.
Here are some previous conversations between the Assistant and User:
User: Hey how are you today?
Assistant: ```json
{{"action": "Final Answer",
"action_input": "I'm good thanks, how are you?"}}
\```
User: I'm great, what is the square root of 4?
Assistant: ```json
{{"action": "Calculator",
"action_input": "sqrt(4)"}}
\```
User: 2.0
Assistant: ```json
{{"action": "Final Answer",
"action_input": "It looks like the answer is 2!"}}
\```
User: Thanks could you tell me what 4 to the power of 2 is?
Assistant: ```json
{{"action": "Calculator",
"action_input": "4**2"}}
\```
User: 16.0
Assistant: ```json
{{"action": "Final Answer",
"action_input": "It looks like the answer is 16!"}}
\```
Here is the latest conversation between Assistant and User.""" + E_SYS
new_prompt = agent.agent.create_prompt(
system_message=sys_msg,
tools=tools
)
agent.agent.llm_chain.prompt = new_prompt
instruction = B_INST + " Respond to the following in JSON with 'action' and 'action_input' values " + E_INST
human_msg = instruction + "\nUser: {input}"
agent.agent.llm_chain.prompt.messages[2].prompt.template = human_msg
</code></pre>
<p>Then I prompt it with;</p>
<pre class="lang-py prettyprint-override"><code>agent("hey how are you today?")
</code></pre>
<p>and I get the following;</p>
<pre class="lang-none prettyprint-override"><code>> Entering new AgentExecutor chain...
Assistant: ```json
{"action": "Final Answer",
"action_input": "I'm good thanks, how are you?"}
\```
> Finished chain.
{'input': 'hey how are you today?',
'chat_history': [],
'output': "I'm good thanks, how are you?"}
</code></pre>
<p>Which is great, but when I prompt it with a question that requires use of a tool;</p>
<pre class="lang-py prettyprint-override"><code>agent("what is 4 to the power of 2.1?")
</code></pre>
<p>I get;</p>
<pre class="lang-none prettyprint-override"><code>> Entering new AgentExecutor chain...
Assistant: ```json
{"action": "Calculator",
"action_input": "4**2.1"}
\```
Observation: Answer: 18.37917367995256
Thought:Could not parse LLM output:
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
Observation: Invalid or incomplete response
</code></pre>
<p>and it just gets stuck in a loop repeating "Could not parse LLM output" and "Invalid or incomplete response"</p>
<p>Does anyone know how to fix the "Could not parse LLM output" errors?</p>
<p>Supposedly this code worked for the authors of the tutorial.</p>
<p>I am using Python 3.11.5 with Anaconda, tensorflow 2.15.0, transformers 4.35.2, langchain 0.0.336, on macOS Sonoma.</p>
<p>I updated the code to use the output parser from <a href="https://github.com/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/llama-2/llama-2-70b-chat-agent.ipynb" rel="nofollow noreferrer">here</a>.</p>
<pre class="lang-py prettyprint-override"><code>from langchain.agents import AgentOutputParser
from langchain.agents.conversational_chat.prompt import FORMAT_INSTRUCTIONS
from langchain.output_parsers.json import parse_json_markdown
from langchain.schema import AgentAction, AgentFinish
class OutputParser(AgentOutputParser):
def get_format_instructions(self) -> str:
return FORMAT_INSTRUCTIONS
def parse(self, text: str) -> AgentAction | AgentFinish:
try:
# this will work IF the text is a valid JSON with action and action_input
response = parse_json_markdown(text)
action, action_input = response["action"], response["action_input"]
if action == "Final Answer":
# this means the agent is finished so we call AgentFinish
return AgentFinish({"output": action_input}, text)
else:
# otherwise the agent wants to use an action, so we call AgentAction
return AgentAction(action, action_input, text)
except Exception:
# sometimes the agent will return a string that is not a valid JSON
# often this happens when the agent is finished
# so we just return the text as the output
return AgentFinish({"output": text}, text)
@property
def _type(self) -> str:
return "conversational_chat"
# initialize output parser for agent
parser = OutputParser()
</code></pre>
<p>by adding</p>
<pre class="lang-py prettyprint-override"><code>agent_kwargs={"output_parser": parser}
</code></pre>
<p>to initialise_agent, and the code no longer produces an error but the final output from the LLM is still empty.</p>
<pre class="lang-none prettyprint-override"><code>> Entering new AgentExecutor chain...
Assistant: ```json
{"action": "Calculator",
"action_input": "4**2.1"}
\```
Observation: Answer: 18.37917367995256
Thought:
> Finished chain.
{'input': 'what is 4 to the power of 2.1?',
'chat_history': [HumanMessage(content='hey how are you today?'),
AIMessage(content="I'm good thanks, how are you?")],
'output': ''}
</code></pre>
| <python><langchain><llama> | 2023-11-16 02:23:05 | 1 | 750 | tail_recursion |
77,491,924 | 699,467 | Is it possible to skip loop iterations for itertools.product() if i know the ID of the iteration? | <p>I have the following code:</p>
<pre><code>import math
import numpy as np
import itertools
s = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50]
def variate4(l):
yield from itertools.product(*([l] * 4))
for i in variate4(s):
print(repr(''.join(str(i))))
</code></pre>
<p>The result of the execution is all possible four-element combinations of numbers from the <code>s</code> array.
For example: (12, 32, 46, 50).
As I can see, the results always come in the same order.</p>
<p>Is it possible to generate a combination by its id?
I mean, for example, result number 4856.</p>
<p>I tried this way:</p>
<pre><code>j = 0
for i in variate4(s):
j = j + 1
if j == 4856:
print(repr(''.join(str(i))))
break
</code></pre>
<p>But I still have to wait until all the previous combinations are generated.
I'm trying to achieve an instant jump to the desired iteration.</p>
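This is possible without iterating, as the following sketch shows: `itertools.product(l, repeat=n)` enumerates tuples in odometer order, so the tuple at 0-based index `j` is simply the digits of `j` written in base `len(l)`, most significant first.

```python
import itertools

s = list(range(51))

def product_at(l, repeat, index):
    """Return the index-th (0-based) tuple of itertools.product(l, repeat=repeat)."""
    digits = []
    for _ in range(repeat):
        index, r = divmod(index, len(l))  # peel off base-len(l) digits
        digits.append(l[r])
    return tuple(reversed(digits))        # least-significant digit varies fastest

# cross-check against the generator: id 4855 is the 4856th tuple
expected = next(itertools.islice(itertools.product(s, repeat=4), 4855, None))
print(product_at(s, 4, 4855), expected)
```

The direct computation is O(repeat) regardless of the index, instead of generating all preceding tuples.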
| <python><loops> | 2023-11-16 02:15:03 | 1 | 2,406 | Sir D |
77,491,831 | 748,493 | Pandas DataFrame from a list of strings and n-th order tuples | <p>I am dealing with data that can contain strings, tuples of strings, tuples of tuples of strings, etc., e.g.</p>
<pre><code>values = ['a0'] + [('b0', 'b1')] + [(('c0', 'c1'), ('c2', 'c3'))]
</code></pre>
<p>and I need to construct a dataframe from these values that has as many columns as the maximum number of strings amongst all list elements, i.e. 4 for the above example, with <code>None</code> in the locations where there are fewer than this maximum, e.g.</p>
<p><a href="https://i.sstatic.net/4yuij.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4yuij.png" alt="enter image description here" /></a></p>
<p>The <code>pd.DataFrame</code> constructor behaves differently depending on the order of elements in the list.</p>
<p>For example, <code>pd.DataFrame(['a0'] + [('b0','b1')])</code> results in</p>
<p><a href="https://i.sstatic.net/YJhR2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YJhR2.png" alt="enter image description here" /></a></p>
<p>while <code>pd.DataFrame( [('b0','b1')] + ['a0'])</code> produces</p>
<p><a href="https://i.sstatic.net/Nrhc3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nrhc3.png" alt="enter image description here" /></a></p>
<p>The desired output, in this case, is</p>
<p><a href="https://i.sstatic.net/bCmDr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bCmDr.png" alt="enter image description here" /></a></p>
<p>In general, a tuple might contain elements of mixed order, e.g.
<code>[('d0', ('d1', ('d2', 'd3')))]</code>.</p>
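One sketch that sidesteps the constructor's order-dependence: recursively flatten each element to a flat list of strings first, then let the `DataFrame` constructor pad the shorter rows (pandas fills the gaps with `NaN` rather than literal `None`).

```python
import pandas as pd

def flatten(x):
    # recursively collect leaf strings from arbitrarily nested tuples
    if isinstance(x, tuple):
        return [leaf for part in x for leaf in flatten(part)]
    return [x]

values = ['a0', ('b0', 'b1'), (('c0', 'c1'), ('c2', 'c3'))]
df = pd.DataFrame([flatten(v) for v in values])
print(df)
```

Because `flatten` recurses, mixed-order tuples like `('d0', ('d1', ('d2', 'd3')))` also collapse to a single flat row.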
| <python><pandas><dataframe> | 2023-11-16 01:42:18 | 4 | 522 | Confounded |
77,491,797 | 5,363,840 | How does PyCharm handle virtual environments and their variables set in the activate script? | <p>I'm running into an issue in PyCharm. I'm using a virtual environment and I have added the following line to my activate script:</p>
<pre><code>source /home/me/environment.sh
</code></pre>
<p><code>environment.sh</code> only contains some lines exporting variables, such as:</p>
<pre><code>#!/bin/bash
export USERNAME="GUY"
</code></pre>
<p>When I run the Python interpreter from the terminal in PyCharm, I am able to see these variables by printing out</p>
<p><code>os.environ</code></p>
<p>However, if I attempt to run any Python script in my project using the actual <em>run</em> button in PyCharm, these variables are not available. I have checked the run/debug configuration, and everything looks correct and it is using my venv interpreter. I do see that I can manually enter these variables into the run configuration, but why would they not be recognized from my active virtual environment?</p>
| <python><pycharm><python-venv> | 2023-11-16 01:30:09 | 0 | 462 | Dr.Tautology |
77,491,665 | 12,474,157 | How to Add Authentication Mock to Pytest in a FastAPI Application? | <p>I am working on testing a FastAPI application using pytest and need to integrate authentication mocking. I have an endpoint <code>/roles</code> which requires authentication, but I'm not sure how to correctly add a mock for the authentication in my pytest setup.</p>
<h3>pytest code I have:</h3>
<pre class="lang-py prettyprint-override"><code># pytest code for testing /roles endpoint
import pytest
from fastapi.testclient import TestClient
from unittest.mock import patch, AsyncMock
from main import app
from tests.mock_data import test_roles_data
class MockRoles:
async def setup(self):
pass
async def get_json(self):
return test_roles_data
@pytest.fixture
def client():
with TestClient(app) as c:
yield c
@patch('routers.base.parse_params', return_value=('custom_fields', '2023-01-01', '2023-01-31'))
@patch('services.roles.Roles', return_value=MockRoles())
def test_roles_endpoint(mock_parse_params, mock_roles, client):
response = client.get("/roles", headers={"account": "test_account"})
assert response.status_code == 200
assert response.json() == test_roles_data
</code></pre>
<p>I also have a fake authentication function which I need to integrate into the test. I believe this is the correct way to define it:</p>
<pre class="lang-py prettyprint-override"><code># Fake authentication function
async def fake_authenticate(
apitoken: Optional[str] = None,
account: Optional[str] = None,
sessionHost: Optional[str] = None
):
return {
'apitoken': 'apitoken',
'account': 'account',
'sessionHost': 'sessionHost',
}
</code></pre>
<p>My goal is to mock the authentication process so that the <code>/roles</code> endpoint can be tested without actual authentication. Could someone guide me on how to correctly add this <code>fake_authenticate</code> function to my pytest setup?</p>
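<p>One gotcha I've read about that may be relevant: <code>@patch</code> must target the name where it is <em>looked up</em>, so patching <code>'services.roles.Roles'</code> would not take effect if <code>routers/stats.py</code> did <code>from services.roles import Roles</code> (the target would then need to be <code>'routers.stats.Roles'</code>). A stdlib-only demo with made-up module names:</p>

```python
import sys
from types import ModuleType
from unittest.mock import patch

# two throwaway modules: defs_demo defines Roles, uses_demo imports it by name
defs_demo = ModuleType("defs_demo")
exec("class Roles:\n    kind = 'real'\n", defs_demo.__dict__)
sys.modules["defs_demo"] = defs_demo

uses_demo = ModuleType("uses_demo")
exec("from defs_demo import Roles\n"
     "def make():\n"
     "    return Roles()\n", uses_demo.__dict__)
sys.modules["uses_demo"] = uses_demo

class FakeRoles:
    kind = "fake"

with patch("defs_demo.Roles", FakeRoles):
    patched_at_definition = uses_demo.make().kind   # still 'real': wrong target
with patch("uses_demo.Roles", FakeRoles):
    patched_at_use_site = uses_demo.make().kind     # 'fake': patch where it's looked up
```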
<p><strong>Additional Info:</strong></p>
<ul>
<li>The <code>/roles</code> endpoint in the FastAPI app requires authentication.</li>
<li>I am looking to test the endpoint's response assuming the authentication is successful.</li>
</ul>
<h3>/main.py</h3>
<pre class="lang-py prettyprint-override"><code>async def authenticate(
apitoken: Optional[str] = Header(""),
account: Optional[str] = Header(""),
sessionHost: Optional[str] = Header(""),
):
print(sessionHost)
if (
sessionHost
and sessionHost[-20:] == "mydomain-staging.com"
or sessionHost == "https://apistaging.mydomain.com"
):
url = sessionHost + "/api/jml/templates"
else:
url = PF_API_URL + "/jml/templates"
print(url)
params = {"per_page": 1, "page": 1}
accept = "application/vnd.mydomain+json;version=2"
auth_headers = {
"content_type": "application/json",
"accept": accept,
"account": account,
"apitoken": apitoken,
}
r = await session.get(url, headers=auth_headers, params=params, timeout=30)
if r.status_code != 200:
raise HTTPException(status_code=401, detail="Unauthorized")
app.token_cache[apitoken] = {
"account": account,
"ts": datetime.now(),
}
app.include_router(
stats.router,
dependencies=[Depends(authenticate)],
responses={404: {"description": "Not found"}},
)
</code></pre>
<h3>/routers/stats.py</h3>
<pre class="lang-py prettyprint-override"><code>@router.get("/roles")
async def roles(
request: Request,
account: str = Header(...),
start_date: str = None,
end_date: str = None,
profile_id: int = 0,
):
custom_fields, start_date, end_date = parse_params(
request, start_date, end_date
)
new_role = Roles(
account, start_date, end_date, profile_id, custom_fields=custom_fields
)
try:
await new_role.setup()
result = await new_role.get_json()
except NoDataException:
result = {"Error": "No Data"}
return result
</code></pre>
| <python><authentication><mocking><pytest><fastapi> | 2023-11-16 00:31:16 | 0 | 1,720 | The Dan |
77,491,623 | 12,474,157 | How to Correctly Implement Mocked Authentication in Pytest for FastAPI Application? | <p>I am working with FastAPI and pytest to test an endpoint in my application. I need to mock the authentication dependency, but I'm facing challenges in implementing it correctly. My primary goal is to ensure that the mocked authentication is properly set up so that my tests can run without hitting the actual authentication logic.</p>
<h2>My pytest code</h2>
<pre class="lang-py prettyprint-override"><code>import os
from typing import Optional
from fastapi.testclient import TestClient
from unittest.mock import patch, AsyncMock
from main import app
from tests.mock_data import test_roles_data
os.environ["TEST_ENV"] = "test"
client = TestClient(app)
async def fake_authenticate(
apitoken: Optional[str] = None,
account: Optional[str] = None,
sessionHost: Optional[str] = None
):
return {
'apitoken': 'apitoken',
'account': 'account',
'sessionHost': 'sessionHost',
}
app.dependency_overrides[app.authenticate] = fake_authenticate
@patch('routers.base.parse_params', new_callable=AsyncMock)
@patch('services.roles.Roles.get_json', new_callable=AsyncMock)
def test_roles_endpoint(mock_parse_params, mock_get_json):
mock_get_json.return_value = test_roles_data
mock_parse_params.return_value = ("custom_fields_value", "2023-01-01", "2023-01-31")
response = client.get("/roles", headers={
'apitoken': 'apitoken',
'account': 'account',
'sessionHost': 'sessionHost',
},
params={"start_date": "2023-01-01", "end_date": "2023-01-31"})
assert response.status_code == 200
# assert response.json() == test_roles_data
# Reset the environment variable after tests
del os.environ["TEST_ENV"]
</code></pre>
<p>However, when I run the test, I get the following error:</p>
<pre><code>AttributeError: 'FastAPI' object has no attribute 'authenticate'
</code></pre>
<p>This error occurs at the line where I try to override the dependency with the mocked authentication function. I am certain that the <code>authenticate</code> function exists in my main application and is used correctly in my routes.</p>
<p>I am looking for guidance on how to properly mock the authentication in this pytest setup for my FastAPI application. What is the correct way to implement this so that my endpoint tests can bypass the actual authentication logic?</p>
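<p>From what I can tell, <code>dependency_overrides</code> is a plain dict keyed by the dependency <em>function object</em>, which would explain the error: <code>authenticate</code> is a module-level function in <code>main</code>, not an attribute of the <code>FastAPI</code> instance, so the override would be <code>from main import authenticate; app.dependency_overrides[authenticate] = fake_authenticate</code>. A framework-free sketch of the lookup (names are stand-ins):</p>

```python
import asyncio

async def authenticate():        # stand-in for main.authenticate
    return "real auth"

async def fake_authenticate():
    return "fake auth"

dependency_overrides = {}        # FastAPI keeps a dict shaped like this on the app
dependency_overrides[authenticate] = fake_authenticate

# at request time the framework resolves the dependency roughly like this:
resolved = dependency_overrides.get(authenticate, authenticate)
result = asyncio.run(resolved())  # "fake auth"
```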
<h2>Additional info</h2>
<h3>/main.py</h3>
<pre class="lang-py prettyprint-override"><code>async def authenticate(
apitoken: Optional[str] = Header(""),
account: Optional[str] = Header(""),
sessionHost: Optional[str] = Header(""),
):
print(sessionHost)
if (
sessionHost
and sessionHost[-20:] == "mydomain-staging.com"
or sessionHost == "https://apistaging.mydomain.com"
):
url = sessionHost + "/api/jml/templates"
else:
url = PF_API_URL + "/jml/templates"
print(url)
params = {"per_page": 1, "page": 1}
accept = "application/vnd.mydomain+json;version=2"
auth_headers = {
"content_type": "application/json",
"accept": accept,
"account": account,
"apitoken": apitoken,
}
r = await session.get(url, headers=auth_headers, params=params, timeout=30)
if r.status_code != 200:
raise HTTPException(status_code=401, detail="Unauthorized")
app.token_cache[apitoken] = {
"account": account,
"ts": datetime.now(),
}
app.include_router(
stats.router,
dependencies=[Depends(authenticate)],
responses={404: {"description": "Not found"}},
)
</code></pre>
<h3>/routers/stats.py</h3>
<pre class="lang-py prettyprint-override"><code>@router.get("/roles")
async def roles(
request: Request,
account: str = Header(...),
start_date: str = None,
end_date: str = None,
profile_id: int = 0,
):
custom_fields, start_date, end_date = parse_params(
request, start_date, end_date
)
new_role = Roles(
account, start_date, end_date, profile_id, custom_fields=custom_fields
)
try:
await new_role.setup()
result = await new_role.get_json()
except NoDataException:
result = {"Error": "No Data"}
return result
</code></pre>
| <python><unit-testing><mocking><pytest><fastapi> | 2023-11-16 00:15:35 | 1 | 1,720 | The Dan |
77,491,414 | 3,083,830 | What does n_jobs=-1 do in XGBClassifier from xgboost? | <p>What does <code>n_jobs=-1</code> do when creating an <code>XGBClassifier</code> object?</p>
<pre><code>xgbc = XGBClassifier(n_jobs=-1)
</code></pre>
| <python><scikit-learn><xgboost><xgbclassifier> | 2023-11-15 23:09:12 | 1 | 641 | Ivan Verges |
77,491,346 | 1,934,510 | cannot import name 'Job' from partially initialized module 'models' (most likely due to a circular import) | <p>I'm trying to build a Flask app but I'm receiving this error</p>
<p>ImportError: cannot import name 'Job' from partially initialized module 'models' (most likely due to a circular import)</p>
<p>This is my code:</p>
<p>app.py</p>
<pre><code>from flask import Flask, render_template, redirect, url_for, flash
from flask_sqlalchemy import SQLAlchemy
from flask_wtf import FlaskForm
from wtforms import StringField, TextAreaField, SubmitField
from wtforms.validators import DataRequired
app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///jobs.db"
app.config["SECRET_KEY"] = "your_secret_key"
db = SQLAlchemy(app)
@app.route("/")
def index():
jobs = Job.query.all()
return render_template("index.html", jobs=jobs)
from models import Job
class JobForm(FlaskForm):
title = StringField("Job Title", validators=[DataRequired()])
description = TextAreaField("Job Description", validators=[DataRequired()])
submit = SubmitField("Post Job")
@app.route("/post", methods=["GET", "POST"])
def post_job():
form = JobForm()
if form.validate_on_submit():
job = Job(title=form.title.data, description=form.description.data)
db.session.add(job)
db.session.commit()
flash("Job has been posted!", "success")
return redirect(url_for("index"))
return render_template("post_job.html", title="Post Job", form=form)
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
<p>models.py</p>
<pre><code>from app import db
class Job(db.Model):
id = db.Column(db.Integer, primary_key=True)
title = db.Column(db.String(100), nullable=False)
description = db.Column(db.Text, nullable=False)
def __repr__(self):
return f"Job('{self.title}', '{self.description}')"
</code></pre>
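<p>A minimal stdlib-only reproduction of the cycle, with hypothetical module names, followed by the usual fix of moving the shared object (<code>db</code>) into a third module that both sides import:</p>

```python
import importlib
import pathlib
import sys
import tempfile
import textwrap

root = pathlib.Path(tempfile.mkdtemp())
sys.path.insert(0, str(root))

# Broken layout: app_demo and models_demo import from each other.
(root / "app_demo.py").write_text(textwrap.dedent("""\
    db = "db-object"
    from models_demo import Job    # app needs models...
"""))
(root / "models_demo.py").write_text(textwrap.dedent("""\
    from app_demo import db        # ...and models needs app: a cycle
    Job = f"Job bound to {db}"
"""))

try:
    import models_demo             # whichever side loads first, the cycle bites
    circular_error = ""
except ImportError as exc:
    circular_error = str(exc)      # "...partially initialized module 'models_demo'..."

# The fix: a third module owns the shared object; the cycle disappears.
(root / "extensions_demo.py").write_text('db = "db-object"\n')
(root / "models_fixed.py").write_text(textwrap.dedent("""\
    from extensions_demo import db
    Job = f"Job bound to {db}"
"""))
importlib.invalidate_caches()      # directory listing was cached by the first import
import models_fixed
```

<p>In the Flask case this usually means an <code>extensions.py</code> holding <code>db = SQLAlchemy()</code>, initialised in <code>app.py</code> with <code>db.init_app(app)</code>.</p>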
| <python><flask> | 2023-11-15 22:52:39 | 1 | 8,851 | Filipe Ferminiano |
77,491,336 | 1,663,528 | How to search results of PyMySQL query with IN operator when it returns a tuple of tuples? | <p>I have a MySQL database table of IPv4 addresses I want to search against. I pull it into Python with an SQL query (PyMySQL library) that selects a single column, then call cursor.fetchall(), but the result I loop through is a tuple of tuples.</p>
<pre><code>with conn.cursor() as cur:
    sqlQuery = 'SELECT ipv4_address FROM bad_ips WHERE ipv4_address IS NOT NULL'
cur.execute(sqlQuery)
bad_ips = cur.fetchall()
conn.close()
</code></pre>
<p>When I loop through it, it appears to return a tuple of tuples, showing this sort of thing:</p>
<p>('192.168.1.1',)
('192.168.1.5',)</p>
<p>I want it to return just the IP addresses as expected like this:</p>
<p>'192.168.1.1'
'192.168.1.5'</p>
<p>That way I can look up an IP using:</p>
<pre><code>if str(this_ip_address) in bad_ips:
</code></pre>
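<p>For clarity, this is the transformation I'm after (the tuple literal stands in for <code>cur.fetchall()</code>):</p>

```python
rows = (("192.168.1.1",), ("192.168.1.5",))   # what fetchall() returns: one-element tuples
bad_ips = {row[0] for row in rows}             # unpack each row; a set gives O(1) lookups
matched = "192.168.1.1" in bad_ips             # True
```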
| <python><pymysql> | 2023-11-15 22:50:12 | 1 | 8,579 | Mr Fett |
77,491,117 | 5,519,012 | Using cross join of subquery in python Peewee | <p>I want to convert the following SQL query to Python Peewee:</p>
<pre><code>WITH common_subquery AS (
SELECT
t1.fly_from,
t1.airlines as first_airline,
t1.flight_numbers as first_flight_number,
t1.link_to as first_link,
t1.departure_to,
t1.fly_to AS connection_at,
t2.airlines as second_airline,
t2.flight_numbers as second_flight_number,
t2.link_to as second_link,
t2.fly_to,
t1.arrival_to AS landing_at_connection,
t2.departure_to AS departure_from_connection,
t2.arrival_to,
CAST((julianday(t2.departure_to) - julianday(t1.arrival_to)) * 24 AS INTEGER) AS duration_hours,
t1.discount_price + t2.discount_price AS total_price
FROM
flights AS t1
JOIN flights AS t2 ON t1.flight_hash = t2.flight_hash
WHERE
(t2.fly_from != t1.fly_from)
AND (t1.fly_from != t2.fly_to)
ORDER BY
total_price ASC
)
SELECT
t1.fly_from as source,
t1.first_airline as source_outbound_airline,
t1.first_flight_number as source_outbound_flight_number,
t1.first_link as source_outbound_link,
t1.departure_to as outbound_departure,
t1.landing_at_connection,
t1.connection_at as outbound_connection,
t1.second_airline as connection_outbound_airline,
t1.second_flight_number as connection_outbound_flight_number,
t1.second_link as connection_outbound_link,
t1.departure_from_connection,
t1.arrival_to as destination_arrival,
t1.fly_to as destination,
t2.first_airline as inbound_connection_airline,
t2.first_flight_number as inbound_connection_flight_number,
t2.first_link as inbound_connection_link,
t2.departure_to as return_departure,
t2.landing_at_connection as return_arrival,
t2.connection_at as inbound_connection,
t2.second_airline as inbound_airline,
t2.second_flight_number as inbound_flight_number,
t2.second_link as inbound_link,
t2.departure_from_connection as return_departure_from,
t2.arrival_to as return_destination_arrival,
CEIL((t1.total_price + t2.total_price) / 100.0) * 100 AS round_total_price,
FLOOR((julianday(t2.departure_from_connection) - julianday(t1.arrival_to))) AS days_in_dest
FROM
common_subquery AS t1
CROSS JOIN common_subquery AS t2
WHERE
(julianday(t2.departure_from_connection) - julianday(t1.arrival_to)) BETWEEN 5 AND 8
AND t1.duration_hours < 24
AND t2.duration_hours < 24
AND t1.fly_to = t2.fly_from
AND t1.fly_from like '%TLV%' and t1.fly_to like '%PRG%'
ORDER BY
round_total_price ASC,
t1.duration_hours ASC,
t2.duration_hours ASC;
</code></pre>
<p>The peewee model is -</p>
<pre><code>class Flights(Model):
fly_from = CharField()
fly_to = CharField()
nights = IntegerField()
days_off = IntegerField()
price = IntegerField()
discount_price = IntegerField()
airlines = CharField()
flight_numbers = CharField()
departure_to = DateTimeField()
arrival_to = DateTimeField()
departure_from = DateTimeField()
arrival_from = DateTimeField()
link_to = CharField()
link_from = CharField()
month = IntegerField()
date_of_scan = DateTimeField()
holiday_name = CharField()
special_date = BooleanField(default=False)
is_connection_flight = BooleanField(default=False)
flight_hash = CharField(default="")
class Meta:
database = db
</code></pre>
<p>I was able to recreate the subquery, but the issue is that I am not able to <code>cross join</code> the subquery.</p>
<p>For example:</p>
<pre><code>subquery = Flight.select(...).order_by(...)
</code></pre>
<p>I can't figure out how to cross join it in another query with itself.
I want to get something like -</p>
<pre><code> subquery.select().join(subquery, CROSS)
</code></pre>
<p>is that possible using peewee? I can do it using python, but I want to have that calculations on the database engine side</p>
| <python><sql><peewee> | 2023-11-15 21:59:09 | 1 | 365 | Meir Tolpin |
77,491,078 | 5,036,928 | PyVista: 3D Gaussian Smoothing of PolyData | <p>I would like to replicate the example here <a href="https://docs.pyvista.org/version/stable/examples/01-filter/gaussian-smoothing.html" rel="nofollow noreferrer">https://docs.pyvista.org/version/stable/examples/01-filter/gaussian-smoothing.html</a> using my own data, but applying the <code>gaussian_smooth()</code> method to my <code>ImageData</code> results in <code>MissingDataError: No data available.</code> (though it works for the example). I'm guessing I need to attach my scalar field to the <code>ImageData</code>, but I'm not sure which attribute to use.</p>
<p>Some potentially helpful code:</p>
<pre><code># create a uniform grid to sample the function with
n = 40
x_min, y_min, z_min = [np.min(q) - 0.25*np.absolute(np.min(q)) for q in [tmp[tmp[:,3]==1, 0], tmp[tmp[:,3]==1, 1], tmp[tmp[:,3]==1, 2]]]
x_max, y_max, z_max = [np.max(q) + 0.25*np.absolute(np.max(q)) for q in [tmp[tmp[:,3]==1, 0], tmp[tmp[:,3]==1, 1], tmp[tmp[:,3]==1, 2]]]
grid = pv.ImageData(
dimensions=(n, n, n),
spacing=( (x_max - x_min) / n,
(y_max - y_min) / n,
(z_max - z_min) / n),
origin=(x_min, y_min, z_min),
)
smooth_grid = grid.gaussian_smooth(std_dev=3.0)
</code></pre>
<p>My question: How can I successfully perform a <code>gaussian_smooth</code> on my <code>ImageData</code></p>
| <python><3d><surface><gaussianblur><pyvista> | 2023-11-15 21:51:27 | 1 | 1,195 | Sterling Butters |
77,490,979 | 9,100,431 | Python - Datetime Problem printing '%p' with locale | <p>I have a simple script that fetches a UTC datetime from an API and then I try to parse it to</p>
<pre><code>"%I:%M:%S %p . %d de %B del %Y"
</code></pre>
<p>I get the correct format without a locale set (12:35:01 PM. 15 de November del 2023).
But when I try to get the Spanish month, the %p suddenly disappears.</p>
<pre><code>def get_time():
locale.setlocale(locale.LC_TIME, 'es_ES.utf8')
sharepoint_api = api.SharePointAPI()
last_time_ran = sharepoint_api.get_last_executed_time()
#Change UTC to local timezone
utc_format = "%Y-%m-%dT%H:%M:%SZ"
utc_time = datetime.strptime(last_time_ran, utc_format)
utc_time = utc_time.replace(tzinfo=pytz.UTC)
local_tz = pytz.timezone("America/Monterrey")
local_time = utc_time.astimezone(local_tz)
# Parse to desired format
output_format = "%I:%M:%S %p . %d de %B del %Y"
formatted_date = local_time.strftime(output_format)
return formatted_date
</code></pre>
<p>This prints:</p>
<pre><code>12:35:01 . 15 de Noviembre del 2023
</code></pre>
<p>Different workarounds say I should use a dictionary to translate the months, or parse the date before the locale is set to get the AM/PM marker and then append it after setting the locale.</p>
<p>Isn't there an easier way? Is this a datetime library bug?</p>
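<p>The workaround I've sketched so far; my understanding is that glibc's <code>es_ES</code> locale defines empty AM/PM strings, which would explain <code>%p</code> expanding to nothing rather than this being a <code>datetime</code> bug. Build the marker by hand and leave <code>%B</code> to the active locale:</p>

```python
from datetime import datetime

def fmt(dt):
    # %p comes from the locale's AM/PM strings (empty in es_ES),
    # so compute the marker manually; %B still follows the active locale
    ampm = "AM" if dt.hour < 12 else "PM"
    return dt.strftime("%I:%M:%S ") + ampm + dt.strftime(" . %d de %B del %Y")

stamp = fmt(datetime(2023, 11, 15, 12, 35, 1))   # '12:35:01 PM . 15 de ... del 2023'
```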
| <python><datetime><python-datetime> | 2023-11-15 21:28:41 | 0 | 660 | Diego |
77,490,678 | 11,163,122 | SQL Alchemy v2 is after_transaction_end needed for nested session | <p>In <a href="https://aalvarez.me/posts/setting-up-a-sqlalchemy-and-pytest-based-test-suite/" rel="nofollow noreferrer">Setting Up a SQLAlchemy and Pytest Based Test Suite</a> (published May 2022 with SQLAlchemy v1), for the transactional testing, it uses the below code snippet. Note I cleaned it up a bit for this post:</p>
<pre class="lang-py prettyprint-override"><code># conftest.py
import pytest
# Import a session factory in our app code. Possibly created using
# `sessionmaker()`
from myapp.db import Session
@pytest.fixture(autouse=True)
def session(connection):
transaction = connection.begin()
session = Session(bind=connection)
session.begin_nested()
@event.listens_for(session, "after_transaction_end")
def restart_savepoint(db_session, ended_transaction):
if ended_transaction.nested and not ended_transaction._parent.nested:
session.expire_all()
session.begin_nested()
yield session
Session.remove()
transaction.rollback()
</code></pre>
<p>I am now working with SQLAlchemy v2 in November 2023, and have three questions around the <code>restart_savepoint</code> inner function:</p>
<ul>
<li>Can you explain the <code>ended_transaction.nested and not ended_transaction._parent.nested</code> logic?</li>
<li>Is this <code>restart_savepoint</code> behavior still necessary with SQL Alchemy v2?</li>
<li>Is the <code>Session(bind=connection)</code> still necessary with SQL Alchemy v2?</li>
</ul>
<p>I am thinking the below is equally viable:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(autouse=True)
def session(connection):
with Session() as session, session.begin(nested=True):
yield session
</code></pre>
| <python><sqlalchemy> | 2023-11-15 20:21:38 | 1 | 2,961 | Intrastellar Explorer |
77,490,677 | 1,309,005 | Can I use a symbol other than the equal sign in self-documenting f-strings | <p>Self-documenting f-strings in Python allow you to write a print statement like this:</p>
<pre class="lang-py prettyprint-override"><code>print(f"{result=}")
</code></pre>
<p>This works fine, but the <code>=</code> sign is not just used to tell Python you want a self-documenting f-string...it is also the symbol that gets formatted into the output. Is there any way to get Python to use the <code>:</code> character instead of the <code>=</code> character in the <em>output</em> of self-documenting f-strings, maybe something like this:</p>
<pre class="lang-py prettyprint-override"><code>print(f"{result:}")
</code></pre>
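<p>For reference, the behaviour I'm seeing: the <code>=</code> is fixed by the f-string grammar, and a <code>:</code> after the expression starts a format spec instead of a label, so the closest alternative seems to be spelling the label out:</p>

```python
result = 42
auto = f"{result=}"              # 'result=42': the '=' is both syntax and output
spaced = f"{result = }"          # 'result = 42': whitespace around '=' is preserved
manual = f"result: {result}"     # 'result: 42': a ':' separator needs a manual label
colon = f"{result:}"             # '42': ':' introduces an (empty) format spec
```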
| <python><f-string> | 2023-11-15 20:21:33 | 1 | 1,707 | RSW |
77,490,543 | 3,233,017 | Can TQDM track the output of a subprocess? | <p>As part of a long pipeline (all orchestrated in Python), I'm calling an external program using the subprocess module. This program takes a while to finish, so I'd like to show the user a nice progress bar, to reassure them that it hasn't frozen.</p>
<p>This program also sends a whole lot of information to stdout—much more than I want to show the end user. But the number of lines in this output is consistent.</p>
<p>What's the best way to track the number of lines that have been sent to stdout by a subprocess, for the purpose of updating a real-time status bar (via tqdm)?</p>
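<p>A sketch of the direction I had in mind, in case it helps frame answers; the dummy command stands in for the real external program, and <code>tqdm</code> falls back to a no-op wrapper if it isn't installed:</p>

```python
import subprocess
import sys

try:
    from tqdm import tqdm                    # real progress bar if installed
except ImportError:
    def tqdm(iterable, **kwargs):            # no-op fallback so the sketch still runs
        return iterable

TOTAL_LINES = 5                              # the known, consistent line count
cmd = [sys.executable, "-c", "for i in range(5): print('noisy line', i)"]

seen = 0
with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
    for _line in tqdm(proc.stdout, total=TOTAL_LINES, unit="line"):
        seen += 1                            # count (and discard) each output line
```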
| <python><subprocess><tqdm> | 2023-11-15 19:52:04 | 0 | 3,547 | Draconis |
77,490,435 | 13,086,128 | AttributeError: cython_sources | <p>I am using:</p>
<pre><code>python: 3.12
OS: Windows 11 Home
</code></pre>
<p>I tried to install <code>catboost==1.2.2</code></p>
<p>I am getting this error:</p>
<pre><code>C:\Windows\System32>py -3 -m pip install catboost==1.2.2
Collecting catboost==1.2.2
Downloading catboost-1.2.2.tar.gz (60.1 MB)
---------------------------------------- 60.1/60.1 MB 5.1 MB/s eta 0:00:00
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [135 lines of output]
Collecting setuptools>=64.0
Using cached setuptools-68.2.2-py3-none-any.whl (807 kB)
Collecting wheel
Using cached wheel-0.41.3-py3-none-any.whl (65 kB)
Collecting jupyterlab
Downloading jupyterlab-4.0.8-py3-none-any.whl (9.2 MB)
---------------------------------------- 9.2/9.2 MB 7.8 MB/s eta 0:00:00
Collecting conan<=1.59,>=1.57
Downloading conan-1.59.0.tar.gz (780 kB)
-------------------------------------- 781.0/781.0 kB 4.9 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting async-lru>=1.0.0 (from jupyterlab)
Downloading async_lru-2.0.4-py3-none-any.whl (6.1 kB)
Collecting ipykernel (from jupyterlab)
Downloading ipykernel-6.26.0-py3-none-any.whl (114 kB)
-------------------------------------- 114.3/114.3 kB 6.5 MB/s eta 0:00:00
Collecting jinja2>=3.0.3 (from jupyterlab)
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
-------------------------------------- 133.1/133.1 kB 7.7 MB/s eta 0:00:00
Collecting jupyter-core (from jupyterlab)
Downloading jupyter_core-5.5.0-py3-none-any.whl (28 kB)
Collecting jupyter-lsp>=2.0.0 (from jupyterlab)
Downloading jupyter_lsp-2.2.0-py3-none-any.whl (65 kB)
---------------------------------------- 66.0/66.0 kB 3.7 MB/s eta 0:00:00
Collecting jupyter-server<3,>=2.4.0 (from jupyterlab)
Downloading jupyter_server-2.10.1-py3-none-any.whl (378 kB)
-------------------------------------- 378.6/378.6 kB 4.7 MB/s eta 0:00:00
Collecting jupyterlab-server<3,>=2.19.0 (from jupyterlab)
Downloading jupyterlab_server-2.25.1-py3-none-any.whl (58 kB)
---------------------------------------- 59.0/59.0 kB 3.0 MB/s eta 0:00:00
Collecting notebook-shim>=0.2 (from jupyterlab)
Downloading notebook_shim-0.2.3-py3-none-any.whl (13 kB)
Collecting packaging (from jupyterlab)
Downloading packaging-23.2-py3-none-any.whl (53 kB)
---------------------------------------- 53.0/53.0 kB 2.7 MB/s eta 0:00:00
Collecting tornado>=6.2.0 (from jupyterlab)
Downloading tornado-6.3.3-cp38-abi3-win_amd64.whl (429 kB)
-------------------------------------- 429.2/429.2 kB 9.1 MB/s eta 0:00:00
Collecting traitlets (from jupyterlab)
Downloading traitlets-5.13.0-py3-none-any.whl (84 kB)
---------------------------------------- 85.0/85.0 kB 4.7 MB/s eta 0:00:00
Collecting requests<3.0.0,>=2.25 (from conan<=1.59,>=1.57)
Downloading requests-2.31.0-py3-none-any.whl (62 kB)
---------------------------------------- 62.6/62.6 kB ? eta 0:00:00
Collecting urllib3<1.27,>=1.26.6 (from conan<=1.59,>=1.57)
Downloading urllib3-1.26.18-py2.py3-none-any.whl (143 kB)
-------------------------------------- 143.8/143.8 kB 4.3 MB/s eta 0:00:00
Collecting colorama<0.5.0,>=0.3.3 (from conan<=1.59,>=1.57)
Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting PyYAML<=6.0,>=3.11 (from conan<=1.59,>=1.57)
Downloading PyYAML-6.0.tar.gz (124 kB)
-------------------------------------- 125.0/125.0 kB 3.6 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'error'
error: subprocess-exited-with-error
Getting requirements to build wheel did not run successfully.
exit code: 1
[54 lines of output]
running egg_info
writing lib\PyYAML.egg-info\PKG-INFO
writing dependency_links to lib\PyYAML.egg-info\dependency_links.txt
writing top-level names to lib\PyYAML.egg-info\top_level.txt
Traceback (most recent call last):
File "C:\Users\talta\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\talta\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\talta\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\build_meta.py", line 355, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in _get_build_requires
self.run_setup()
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 288, in <module>
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\__init__.py", line 103, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
self.run_command(cmd)
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\dist.py", line 989, in run_command
super().run_command(command)
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 318, in run
self.find_sources()
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 326, in find_sources
mm.run()
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 548, in run
self.add_defaults()
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 586, in add_defaults
sdist.add_defaults(self)
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\sdist.py", line 113, in add_defaults
super().add_defaults()
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\command\sdist.py", line 251, in add_defaults
self._add_defaults_ext()
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\command\sdist.py", line 336, in _add_defaults_ext
self.filelist.extend(build_ext.get_source_files())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 204, in get_source_files
File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 107, in __getattr__
raise AttributeError(attr)
AttributeError: cython_sources
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
Getting requirements to build wheel did not run successfully.
exit code: 1
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
[notice] A new release of pip is available: 23.1.2 -> 23.3.1
[notice] To update, run: C:\Users\talta\AppData\Local\Programs\Python\Python312\python.exe -m pip install --upgrade pip
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>Any workarounds or solutions?</p>
<p>Comments and answers are much appreciated.</p>
| <python><python-3.x><pip><catboost><python-3.12> | 2023-11-15 19:31:56 | 1 | 30,560 | Talha Tayyab |
77,490,281 | 11,981,718 | How to group 2d spatial grid data based on their elevation using clustering: clusters have to be contiguous | <p><a href="https://i.sstatic.net/W6nfd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W6nfd.png" alt="enter image description here" /></a></p>
<p>I have 2d gridded data that I want to group based on elevation, but the clusters have to be contiguous and contain a minimum number of data points (4). A visual representation of the clustering would look like this:</p>
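<p>In case it clarifies what I'm after, a bare-bones stdlib sketch of the direction I'm considering: bin cells into elevation bands, then label 4-connected components with a flood fill so every cluster is contiguous by construction (merging components smaller than 4 into a neighbour is still missing):</p>

```python
from collections import deque

def label_elevation_clusters(grid, n_bands):
    """Bin cells into elevation bands, then label 4-connected components
    per band; the flood fill guarantees each cluster is contiguous."""
    rows, cols = len(grid), len(grid[0])
    lo = min(min(r) for r in grid)
    hi = max(max(r) for r in grid)
    span = (hi - lo) or 1
    band = [[min(int((v - lo) / span * n_bands), n_bands - 1) for v in r]
            for r in grid]
    labels = [[-1] * cols for _ in range(rows)]
    n_labels = 0
    for i in range(rows):
        for j in range(cols):
            if labels[i][j] != -1:
                continue
            queue = deque([(i, j)])          # BFS flood fill from this seed
            labels[i][j] = n_labels
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] == -1
                            and band[ny][nx] == band[i][j]):
                        labels[ny][nx] = n_labels
                        queue.append((ny, nx))
            n_labels += 1
    return labels, n_labels
```

<p>With numpy/scipy, <code>scipy.ndimage.label</code> applied per elevation band would do the same job far faster.</p>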
<p>Thanks!</p>
<p><a href="https://i.sstatic.net/Q8KwT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q8KwT.png" alt="gridded data with clusters" /></a></p>
| <python><scikit-learn><cluster-analysis> | 2023-11-15 18:59:21 | 1 | 412 | tincan |
77,490,265 | 1,497,199 | How to run celery workers with sequential task execution with "per-child" bounds | <p>The context is that I'm using kubernetes for parallelism and workload balancing, similar to <a href="https://miguescri.com/post/2021-05-22-celery-hpa/" rel="nofollow noreferrer">this</a>. Given this, it is desirable to have no concurrency within the celery workers -- we get that by k8s horizontal scaling. In addition, limiting each worker to sequential task processing ensures that the memory use is constrained.</p>
<p>Given this context, it seems sensible to use the <code>solo</code> pool option when starting the workers. However, I am suspicious that keeping the worker running for an arbitrarily long time and doing all of the work in the same process will result in gradual memory leakage that will cause a task execution to fail.</p>
<p>Celery provides options for starting the worker to address this memory (or other resource) leakage: <code>max-tasks-per-child, max-memory-per-child</code>; these restart the child processes (or threads) that actually do the processing work either periodically or when a memory threshold is reached. The latter helps deal with packages/processes that leak memory.</p>
<p>The seemingly ideal solution would be to start the worker(s) like this:</p>
<pre><code>python -m celery -A worker.app worker -P solo --max-memory-per-child 1048576 --max-tasks-per-child 10
</code></pre>
<p>That way, the worker (in a given pod), just processes tasks sequentially, but also restarts periodically (every 10 jobs) or if the memory bound is exceeded (1048576KiB) in order to ensure that all resources are returned to the OS.</p>
<p>However, I have my doubts that the "per-child" options have an effect on a solo worker -- Will the solo worker re-start if the bounds are exceeded or are these options only relevant for multi-process or multi-threaded workers? [the celery documentation is not helpful]</p>
<p>Overall my goals are this:</p>
<ul>
<li>Use <code>celery</code> for task distribution and processing</li>
<li>Run the workers in separate kubernetes pods to take advantage of that system's ability to scale resources</li>
<li>Have no concurrency in the celery workers so that the pod memory usage is stable and predictable,</li>
<li>Protect against resource (memory) leakage to reduce the chance that tasks fail</li>
</ul>
<p>how can I set up the celery workers to achieve this?</p>
<p>The "per-child" options look like a promising approach if there is a way to confirm that they operate correctly for a <code>solo</code> pool worker.</p>
| <python><kubernetes><celery> | 2023-11-15 18:55:24 | 0 | 8,229 | Dave |
77,490,225 | 4,542,117 | python matrix traversal expanding outwards concentric squares | <p>Let's say I have a 5x5 matrix as follows:</p>
<pre><code>0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
</code></pre>
<p>Based on some location, I am given an index in both the (x,y) coordinate where I want to start building concentric squares outwards of values. A few examples below:</p>
<pre><code>3 3 3 3 3
3 2 2 2 2
3 2 1 1 2
3 2 1 1 2
3 2 2 2 2
</code></pre>
<pre><code>2 2 2 3 4
1 1 2 3 4
1 1 2 3 4
2 2 2 3 4
3 3 3 3 4
</code></pre>
<p>Is there a more automated / function / libraries that can do this easily instead of hard-coding these sort of values?</p>
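Not a dedicated library function, but the rings are just the Chebyshev (chessboard) distance from the start cell, which NumPy broadcasting computes directly. A sketch assuming a single starting cell (the 2x2 seed blocks in the examples above would need a small extra offset on top of this):

```python
import numpy as np

def concentric(shape, row, col):
    # Chebyshev (chessboard) distance from the start cell, offset so the
    # start cell itself gets value 1; each ring outwards increments by 1.
    rows = np.arange(shape[0])[:, None]
    cols = np.arange(shape[1])[None, :]
    return np.maximum(np.abs(rows - row), np.abs(cols - col)) + 1

print(concentric((5, 5), 2, 2))
```

Here `concentric((5, 5), 2, 2)` gives 1 at the centre cell, 2 on the surrounding ring, and 3 on the border.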
| <python><numpy> | 2023-11-15 18:46:56 | 1 | 374 | Miss_Orchid |
77,490,131 | 5,840,173 | Different headers in the same session | <p>After I ran the following python script,</p>
<pre><code>import requests
header_1 = {"Authorization":"Bearer abc123"}
header_2 = {"Authorization":"Bearer def456"}
url = "https://notify-api.line.me/api/notify"
session = requests.session()
a = session.post(url, headers=header_1, files=file, data=data)
print(a.json())
b = session.post(url, headers=header_2, files=file, data=data)
print(b.json())
</code></pre>
<p>I got the following output; the first POST succeeded but the second failed:</p>
<pre><code>{'status': 200, 'message': 'ok'}
{'status': 400, 'message': 'Invalid image.'}
</code></pre>
<p>I don't know why I can't POST twice using different headers. How should I do this?</p>
<p><strong>Another Try 1</strong></p>
<p>I have tried to clear the headers between two POSTs. But it doesn't work.</p>
<pre><code>import requests
header_1 = {"Authorization":"Bearer abc123"}
header_2 = {"Authorization":"Bearer def456"}
url = "https://notify-api.line.me/api/notify"
session = requests.session()
a = session.post(url, headers=header_1, files=file, data=data)
print(a.json())
session.headers.clear() ########## new line
b = session.post(url, headers=header_2, files=file, data=data)
print(b.json())
</code></pre>
<p><strong>Another Try 2</strong></p>
<p>I also have tried to define different sessions. This got the same error message</p>
<pre><code>import requests
header_1 = {"Authorization":"Bearer abc123"}
header_2 = {"Authorization":"Bearer def456"}
url = "https://notify-api.line.me/api/notify"
session_1 = requests.session()
session_2 = requests.session()
a = session_1.post(url, headers=header_1, files=file, data=data)
print(a.json())
b = session_2.post(url, headers=header_2, files=file, data=data)
print(b.json())
</code></pre>
<p><strong>Another Try 3</strong></p>
<p>If I change the order of the POSTs, the first one is OK and the second one gets an error.</p>
<pre><code>import requests
header_1 = {"Authorization":"Bearer abc123"}
header_2 = {"Authorization":"Bearer def456"}
url = "https://notify-api.line.me/api/notify"
session_1 = requests.session()
session_2 = requests.session()
b = session_2.post(url, headers=header_2, files=file, data=data)
print(b.json())
a = session_1.post(url, headers=header_1, files=file, data=data)
print(a.json())
</code></pre>
<p>Can anyone help? Thanks.</p>
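One hypothesis, since the `file` object isn't shown above: the file passed as `files=file` is consumed by the first POST, so the second request uploads an empty body — which would produce "Invalid image." whichever token is used, matching the order-swap result in Try 3. A minimal illustration of the effect (names are illustrative):

```python
from io import BytesIO

buf = BytesIO(b"image-bytes")
first = buf.read()   # the first consumer gets the data
second = buf.read()  # a second read returns nothing
buf.seek(0)          # rewinding restores the content for reuse
third = buf.read()
print(first, second, third)
```

If this is the cause, re-opening the file (or seeking to 0) between the two POSTs would be worth trying.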
| <python><http> | 2023-11-15 18:31:10 | 0 | 511 | bfhaha |
77,490,095 | 688,954 | ModuleNotFoundError when running coverage run -m pytest . in GitHub Actions | <p>I have a very simple project like below</p>
<pre><code>//
├─ src/
│ ├─ sample/
│ │ ├─ __init__.py
│ │ ├─ simple.py
├─ tests/
│ ├─ __init__.py
│ ├─ test_simple.py
</code></pre>
<p>When I run the coverage command below on my local laptop, it runs okay and generates <code>coverage.xml</code></p>
<pre><code>coverage run -m pytest . && coverage xml
</code></pre>
<p>But when I run it in GitHub Actions using the config below, it returns an error</p>
<pre><code> - uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pytest coverage
- name: Run test
run: |
coverage run -m pytest .
coverage xml
</code></pre>
<p>The error is below</p>
<pre><code>Run coverage run -m pytest .
============================= test session starts ==============================
platform linux -- Python 3.11.6, pytest-7.4.3, pluggy-1.3.0
rootdir: /home/runner/work/openapi-splitter/openapi-splitter
collected 0 items / 1 error
==================================== ERRORS ====================================
____________________ ERROR collecting tests/test_simple.py _____________________
ImportError while importing test module '/home/runner/work/openapi-splitter/openapi-splitter/tests/test_simple.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/opt/hostedtoolcache/Python/3.11.6/x64/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_simple.py:7: in <module>
from sample.simple import add_one
E ModuleNotFoundError: No module named 'sample'
=========================== short test summary info ============================
ERROR tests/test_simple.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.11s ===============================
</code></pre>
<p>Why did this happen and how to fix this?</p>
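For what it's worth, a common cause of exactly this local-vs-CI difference is that <code>src/</code> is on the import path locally (e.g. via an installed package, a root-level <code>conftest.py</code>, or <code>PYTHONPATH</code>) but not on the runner. A hedged sketch of the workflow step with <code>PYTHONPATH</code> set — the value <code>src</code> is an assumption based on the layout above:

```yaml
- name: Run test
  env:
    PYTHONPATH: src   # assumption: makes `sample` importable from src/
  run: |
    coverage run -m pytest .
    coverage xml
```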
| <python><pytest><github-actions><code-coverage> | 2023-11-15 18:23:00 | 2 | 4,033 | Petra Barus |
77,490,071 | 6,067,528 | How can I divide these two sparse matrices together? | <p>I am trying to move dense matrix operations to be sparse. I was using numpy broadcasting to divide an array of shape (432,) to (591, 432) when they were dense, but how can do I this with sparse matrices?</p>
<pre><code><591x432 sparse matrix of type '<class 'numpy.int64'>'
with 3876 stored elements in Compressed Sparse Column format>
<1x432 sparse matrix of type '<class 'numpy.int64'>'
with 432 stored elements in COOrdinate format>
</code></pre>
<p>When I try with this dummy data below...</p>
<pre><code>import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
matrix = CountVectorizer().fit_transform(raw_documents=["test sentence.", "test sent 2."]).T
max_w = np.max(matrix, axis=0)
matrix / max_w
</code></pre>
<p>I get <code>ValueError: inconsistent shapes</code>. How can I divide these ?</p>
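One approach that avoids dense division entirely: take the column maxima as a dense 1-D array and call the sparse matrix's `multiply` with its reciprocal, which broadcasts like the dense case. A sketch with synthetic data of the same shape (the zero-guard is an assumption to cover empty columns):

```python
import numpy as np
from scipy import sparse

A = sparse.random(591, 432, density=0.015, format="csc", random_state=0)

max_w = A.max(axis=0).toarray().ravel()   # dense (432,) row of column maxima
max_w[max_w == 0] = 1                     # avoid dividing empty columns by 0
result = A.multiply(1.0 / max_w).tocsc()  # broadcasts like dense division, stays sparse
```

The result matches the dense `A / max_w` on the stored entries while keeping the sparse representation.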
| <python><numpy><scipy> | 2023-11-15 18:18:38 | 1 | 1,313 | Sam Comber |
77,490,002 | 3,782,911 | How to calculate a rolling function of only defined values in pandas/numpy? | <p>I've found <a href="https://stackoverflow.com/questions/69269582/how-to-ignore-nan-when-applying-rolling-with-pandas">How to ignore NaN when applying rolling with Pandas</a></p>
<p>But it didn't help.</p>
<ul>
<li>I have arrays where each line item is a specific value for a specific time index for a specific column.</li>
<li>There may be more than one non-NaN entry per row, but certainly not many.</li>
<li>I want to calculate a function that calculates the gradient according to the defined values, relative to the index.</li>
</ul>
<p>Can't work out how to do it with either numpy or pandas.</p>
<p>Advice would be helpful; pandas <code>dropna</code> and <code>skipna</code> didn't help.</p>
<p>Template for output</p>
<pre><code>fa = np.random.randn(10,4)
mask = np.zeros(40, dtype=bool)
mask[:15] = True
np.random.shuffle(mask)
mask = mask.reshape(10,4)
fa[mask] = np.nan
fa
Out[40]:
array([[ nan, -0.57681061, nan, 0.23047461],
[ 0.26260072, -0.62024175, 0.35678478, nan],
[-0.5781359 , -0.17364336, nan, nan],
[-0.58982883, nan, 0.07114217, 1.03781762],
[-0.03906354, -0.49546887, nan, nan],
[-0.3988263 , 0.21794358, nan, -0.04167338],
[ 0.35731643, -0.80956629, -0.29624602, 2.59351753],
[-0.02804324, nan, nan, nan],
[ nan, 0.75344618, -0.52145898, nan],
[-0.45565981, 0.26946552, nan, 1.64095417]])
idx = pd.date_range("2018-01-01", periods=10, freq="S")
df = pd.DataFrame(fa, index=idx)
## Apply function
df.rolling(3).apply(lambda s: s.sum())
Out[52]:
0 1 2 3
2018-01-01 00:00:00 NaN NaN NaN NaN
2018-01-01 00:00:01 NaN NaN NaN NaN
2018-01-01 00:00:02 NaN -1.370696 NaN NaN
2018-01-01 00:00:03 -0.905364 NaN NaN NaN
2018-01-01 00:00:04 -1.207028 NaN NaN NaN
2018-01-01 00:00:05 -1.027719 NaN NaN NaN
2018-01-01 00:00:06 -0.080573 -1.087092 NaN NaN
2018-01-01 00:00:07 -0.069553 NaN NaN NaN
2018-01-01 00:00:08 -0.126387 NaN NaN NaN
2018-01-01 00:00:09 -0.126387 NaN NaN NaN
## What would be good is to have the output array being based only on the defined values
## as if the NaN's weren't there. So, for example in the below array which is made
## from taking the columns and applying dropna to them before running the
## rolling aggregate function.
2018-01-01 00:00:00 NaN NaN NaN NaN
2018-01-01 00:00:01 NaN NaN NaN NaN
2018-01-01 00:00:02 NaN -1.370696 NaN NaN
2018-01-01 00:00:03 -0.589829 -1.289354 NaN NaN
2018-01-01 00:00:04 -0.039064 -0.451169 NaN NaN
2018-01-01 00:00:05 -0.398826 -1.087092 NaN 1.226619
2018-01-01 00:00:06 0.357316 NaN 0.131681 3.589662
2018-01-01 00:00:07 -0.028043 NaN NaN NaN
2018-01-01 00:00:08 NaN 0.161823 -0.746563 NaN
2018-01-01 00:00:09 -0.455660 0.2133456 NaN 4.192798
</code></pre>
<p>The last line has been formed by doing</p>
<pre><code>df[n].dropna().rolling(3).apply(lambda s: s.sum())
</code></pre>
<p>on each column, and then filled in by hand.</p>
<p>Now the actual function I want to run uses the time index as an input as well, so it's a bit more complicated than this (otherwise it would be easy -- just swap out all the <code>nan</code>'s with <code>0</code>'s and we're done).</p>
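For reference, the per-column dropna idea can be written without the hand-filling by applying it column-wise and reindexing the results back onto the original index; a sketch on toy data (for aggregates that need the timestamps, rolling with a datetime-offset window such as `rolling('3s')` on each dropna'd series is one option):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 2.0, 3.0], "b": [np.nan, 1.0, np.nan, 2.0]})

# Roll over the defined values only, then align results back onto the full index.
out = df.apply(lambda s: s.dropna().rolling(2).sum().reindex(s.index))
print(out)
```

Each column is aggregated as if its NaN rows weren't there, and the outputs land at the rows where values were defined.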
| <python><pandas><numpy> | 2023-11-15 18:07:27 | 0 | 2,795 | David Boshton |
77,489,981 | 458,742 | How should mysql.connector be shared correctly with threads in Django applications? | <p>This concerns a potential solution to <a href="https://stackoverflow.com/questions/77487774/commands-out-of-sync-you-cant-run-this-command-now-on-first-use-of-cursor">my earlier question</a> in which I was getting <code>commands out of sync; you can't run this command now</code> after wrapping mysql cursors in <code>__enter__</code> and <code>__exit__</code>.</p>
<p>I have read elsewhere that Django does not spawn new threads to handle multiple requests, but rather multiple Django processes would normally be spawned in order to achieve parallelism. Despite this advice, I began printing <code>Thread.ident</code> and <code>Thread.native_id</code> and noticed that they were changing.</p>
<p>Based on this, I speculatively wrote something like:</p>
<pre><code>def db_connect ():
t = threading.current_thread()
if not hasattr (t, 'db_connection'):
print (f"New MyDBConnectionClass for {t.ident}~{t.native_id}")
t.db_connection = MyDBConnectionClass ()
return t.db_connection.get_cursor () # This object has a __exit__ which closes the cursor
</code></pre>
<p>along with</p>
<pre><code>class MyDBConnectionClass ():
def __del__(self):
t = threading.current_thread()
print (f"MyDBConnectionClass deleted for {t.ident}~{t.native_id}")
</code></pre>
<p>the view handlers' usage of the cursors is unchanged:</p>
<pre><code>with db_connect() as db:
results = db.all (query, args)
</code></pre>
<p>So far this seems to have fixed the <code>commands out of sync</code> error (and the exit code 245 crash mentioned in the original question).</p>
<p><code>MyDBConnectionClass.__del__</code> is not being called, but various instances are created for a few different <code>ident</code> values.</p>
<p>My current hypothesis is that there is some sort of thread pool going on, and my application was crashing unpredictably because <em>sometimes</em> (often) a view would be handled in the thread which created the initial connection, but sometimes not. This experiment seems to show that giving each thread a distinct connection works, which makes sense, but I am unsatisfied because:</p>
<ul>
<li>the Thread objects are apparently not being deleted (or else something else is preventing <code>MyDBConnectionClass.__del__</code> from being called)</li>
<li>this doesn't agree with the documentation and examples I have read elsewhere</li>
<li>it's messy, and Django has a fairly clean design as far as I have seen -- I think I must be missing something</li>
</ul>
<p>So, what is the correct way to handle mysql connection and cursor objects so that I can do</p>
<pre><code>with my_connection_wrapper.get_cursor() as my_cursor_wrapper:
my_cursor_wrapper.foo ()
</code></pre>
<p>freely in Django views without leaking resources and without causing thread affinity instability issues (assuming that really is the problem)?</p>
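As a side note on the caching itself, `threading.local` is the standard per-thread container and avoids attaching attributes to `Thread` objects directly; a sketch, with a stand-in factory in place of `MyDBConnectionClass`:

```python
import threading

_local = threading.local()

def get_connection(factory):
    # One connection per thread, created lazily on first use.
    if not hasattr(_local, "conn"):
        _local.conn = factory()
    return _local.conn

c1 = get_connection(object)
c2 = get_connection(object)
print(c1 is c2)  # the same thread reuses its connection

other = []
t = threading.Thread(target=lambda: other.append(get_connection(object)))
t.start()
t.join()
print(other[0] is c1)  # a different thread gets its own
```

This only addresses the per-thread storage pattern, not the lifetime/cleanup question, which still depends on how the server manages its thread pool.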
| <python><mysql><django><multithreading> | 2023-11-15 18:04:28 | 1 | 33,709 | spraff |
77,489,961 | 4,256,677 | pdoc3 incorrectly rendered Args section | <p>Recently migrated from an M1 to M2 mac. Previous successful invocation of pdoc was on M1 with Ventura 13.6, same python version. Is there a prerequisite I'm missing, or maybe need to downgrade a dependency, or is this due to an unreported error in my docstrings somewhere else in the module?</p>
<p>Example Source code:</p>
<pre class="lang-py prettyprint-override"><code>"""
Args:
script_path (str, optional): the full s3 or github uri, or local path to the script you want the service to execute. Defaults to "./script.py".
"""
</code></pre>
<p>Correctly rendered <strong>Args</strong> section, from previous invocation e.g.,</p>
<pre class="lang-html prettyprint-override"><code><h2 id="args">Args</h2>
<dl>
<dt><strong><code>script_path</code></strong> :&ensp;<code>str</code>, optional</dt>
<dd>the full s3 or github uri, or local path to the script you want the service to execute. Defaults to "./script.py".</dd>
</dl>
</code></pre>
<p>Incorrectly rendered Args section in current env:</p>
<pre class="lang-html prettyprint-override"><code><p>Args:<br>
script_path (str, optional): the full s3 or github uri, or local path
to the script you want the service to execute. Defaults to "./script.py".
</p>
</code></pre>
<ul>
<li>pdoc version: pdoc <code>0.10.0</code></li>
<li>python version: <code>3.9.16</code> (pyenv)</li>
<li>M2 mac with os x Ventura <code>13.6.2</code></li>
</ul>
| <python><docstring><pdoc> | 2023-11-15 18:00:23 | 1 | 1,179 | varontron |
77,489,899 | 8,869,570 | How to check that a dataframe consists of all 0 entries? | <p>I know one way is to loop through all the columns, e.g.,</p>
<pre><code>for col in df.columns:
assert (df[col] != 0).sum() == 0
</code></pre>
<p>Is there better approach that can operate on the entire dataframe without looping through each individual column?</p>
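Two loop-free variants that check the whole frame at once (sketches; the NumPy one assumes a purely numeric frame):

```python
import pandas as pd

df = pd.DataFrame({"a": [0, 0, 0], "b": [0, 0, 0]})

all_zero = df.eq(0).all().all()            # pandas-level, works column by column
all_zero_np = (df.to_numpy() == 0).all()   # numpy-level, usually fastest
print(all_zero, all_zero_np)
```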
| <python><python-3.x><pandas><dataframe> | 2023-11-15 17:47:26 | 5 | 2,328 | 24n8 |
77,489,728 | 19,276,472 | Helper functions in another file, ModuleNotFoundError when trying to import | <p>I have a simple Python project using scrapy. My file structure looks like this:</p>
<pre><code>top_level_folder
|-scraper
|--spiders
|---help_functions.py
|---<some more files>
|--items.py
|--pipelines.py
|--settings.py
|--<some more files>
</code></pre>
<p>help_functions.py has a couple functions defined, like <code>add_to_items_buffer</code>.</p>
<p>In pipelines.py, I'm attempting to do...</p>
<pre><code>from help_functions import add_to_items_buffer
...
class BlahPipeline:
def process_item(self, item, spider):
...
add_to_items_buffer(item)
...
</code></pre>
<p>When I try to run this, I get <code>ModuleNotFoundError: No module named 'help_functions'</code>. Doing <code>from spiders.help_functions import add_to_items_buffer</code> throws a similar error.</p>
<p>What's going on here? I imagine I'm misunderstanding something fundamental about how Python imports work.</p>
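Since `pipelines.py` sits inside the `scraper` package, a bare `from help_functions import …` searches for a top-level module; importing through the package path (or relatively, `from .spiders.help_functions import …`) is the usual shape. A self-contained sketch of the fully-qualified form, rebuilding a throwaway copy of the layout so it actually runs:

```python
import os
import sys
import tempfile

# Recreate the (hypothetical) layout: scraper/spiders/help_functions.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "scraper", "spiders")
os.makedirs(pkg)
for d in (os.path.join(root, "scraper"), pkg):
    open(os.path.join(d, "__init__.py"), "w").close()
with open(os.path.join(pkg, "help_functions.py"), "w") as f:
    f.write("def add_to_items_buffer(item):\n    return item\n")

sys.path.insert(0, root)  # project root on the path, as scrapy's runner arranges
from scraper.spiders.help_functions import add_to_items_buffer
print(add_to_items_buffer("item"))
```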
| <python><scrapy> | 2023-11-15 17:16:44 | 3 | 720 | Allen Y |
77,489,704 | 2,556,795 | Groupby giving same aggregate value for all groups | <p>I am trying to take mean of each group and trying to assign those to a new column in another dataframe but the first group's mean value is populating across all groups.</p>
<p>Below is my dataframe <code>df1</code></p>
<pre><code>level value
CF 5
CF 4
CF 6
EL 2
EL 3
EL 1
EF 4
EF 3
EF 6
</code></pre>
<p>I am taking the mean of each group and saving it to a new column in another dataframe <code>df2</code>.</p>
<pre class="lang-py prettyprint-override"><code>df2['value'] = df1.groupby(['level'])['value'].transform('mean')
</code></pre>
<p>But this is giving me below result</p>
<pre><code>level value
CF 5.0
EL 5.0
EF 5.0
</code></pre>
<p>which should actually be</p>
<pre><code>level value
CF 5.0
EL 2.0
EF 4.333333
</code></pre>
<p>I get expected result if I am not saving the values to new columnn. I am not sure if this is correct way of assigning group values to new column.</p>
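If the goal is one row per group, an aggregation rather than `transform` matches the expected output — `transform` deliberately returns one value per original row, which is why assigning its 9-row result into a 3-row frame only picks up values aligned with the first rows. A sketch:

```python
import pandas as pd

df1 = pd.DataFrame({
    "level": ["CF", "CF", "CF", "EL", "EL", "EL", "EF", "EF", "EF"],
    "value": [5, 4, 6, 2, 3, 1, 4, 3, 6],
})

# One row per group; sort=False keeps the first-seen group order.
df2 = df1.groupby("level", as_index=False, sort=False)["value"].mean()
print(df2)
```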
| <python><pandas><group-by> | 2023-11-15 17:12:10 | 2 | 1,370 | mockash |
77,489,554 | 8,443,357 | Correct incorrect serialization in Django | <p>After serialization, I tried to call a Django REST API that connects 3 tables, and I got this error: <strong>'Store' object has no attribute 'Storeimage'.</strong> I'm expecting to get the API response by connecting 3 models.</p>
<p>My models</p>
<pre><code>class Store(models.Model):
user = models.ForeignKey(User, default=1, null=True, on_delete=models.SET_NULL)
address = models.ForeignKey(Address, null=True,on_delete=models.CASCADE)
store_name = models.CharField(max_length=255)
store_about = models.CharField(max_length=255,null=True)
store_location = models.CharField( max_length=255)
#store_address = models.TextField()
store_phoneno = models.CharField(max_length=12)
class Storeimage(models.Model):
store = models.ForeignKey(Store, null=True, on_delete=models.SET_NULL)
picture = models.ImageField(upload_to='store_image/%Y/%m/',max_length=255,null=True)
class Address(models.Model):
street = models.TextField()
country = models.ForeignKey(Country, default=1,null=True, on_delete=models.CASCADE)
state = models.ForeignKey(State, default=1,null=True, on_delete=models.CASCADE)
</code></pre>
<p>My serializers:</p>
<pre><code>class StoreGetSerialiser(serializers.ModelSerializer):
#store_product = StoreproductSerializer(many=True)
address = AddressSerializer()
Storeimage = StoreImage(many=True)
owner = UserPublicSerializer(source='user', read_only=True)
class Meta:
model = Store
fields = [
'pk',
'owner',
'store_name',
'Storeimage',
'store_location',
'address',
'store_phoneno',
'store_website',
]
class StoreImage(serializers.ModelSerializer):
class Meta:
model = Storeimage
fields = '__all__'
</code></pre>
<p>My expected output:</p>
<pre><code> {
"pk": 2,
"owner": {
"id": 1
},
"store_name": "",
"store_location": "",
"address": {
"id": 14,
"street": "1",
"country": 1,
"state": 1,
"city": 1
},
"store_phoneno": "",
"store_image": [{"pk":1,"picture":"/location/abcd"},{"pk":2,"picture":"/location/abcd"}]
},
</code></pre>
| <python><django><serialization><django-rest-framework> | 2023-11-15 16:50:35 | 1 | 652 | selvakumar |
77,489,523 | 1,753,640 | Python Regex extract paragraph text between number | <p>I have text as follows and I want to extract just the text</p>
<pre><code>1. foobar
2. foo
3. bar
</code></pre>
<p>The result should be <code>[foobar, foo, bar]</code>.</p>
<p>What python regex will extract the results I want? I tried the following but no luck</p>
<p><code>r'\d+.*?(?=\d|$)'</code></p>
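Assuming each item sits on its own line, anchoring at line starts with the MULTILINE flag and capturing after the number is one option (note that in the attempt above the unescaped <code>.</code> matches any character, so the numeric prefix needs <code>\d+\.</code>):

```python
import re

text = "1. foobar\n2. foo\n3. bar"
items = re.findall(r"^\d+\.\s*(.+)$", text, flags=re.MULTILINE)
print(items)  # ['foobar', 'foo', 'bar']
```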
| <python><regex> | 2023-11-15 16:45:40 | 2 | 385 | user1753640 |
77,488,946 | 6,843,153 | argparse to accept random arguments | <p>I have a Python application that implements <code>argparse</code> with a set of arguments declared:</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--arg1",
default="dev",
choices=["real", "test", "dev"],
help="arg 1"
)
parser.add_argument("--arg2", default="0", help="arg 2")
parser.add_argument(
"--arg3",
nargs="+",
default=["one", "two"],
choices=["one", "two"],
help="arg 3",
)
parser.add_argument("--arg4", action="store_true", help="arg 4")
parser.add_argument("--arg5", action="store_true", help="arg 5")
parser.add_argument("--arg6", action="store_true", help="arg 6")
parser.add_argument("--arg7", default=None, help="arg 7")
args = parser.parse_args()
</code></pre>
<p>If I send an argument that is not defined in these declarations, I get this exception:</p>
<pre><code>error: unrecognized arguments: arg8 value
</code></pre>
<p>Is it possible to tell <code>argparse</code> to accept undeclared arguments?</p>
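`parse_known_args` is the built-in way to accept undeclared arguments: it returns the parsed namespace plus a list of leftovers instead of raising an error. A sketch:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--arg1", default="dev", choices=["real", "test", "dev"])

# parse_known_args collects anything undeclared instead of erroring out.
args, unknown = parser.parse_known_args(["--arg1", "real", "--arg8", "value"])
print(args.arg1, unknown)  # real ['--arg8', 'value']
```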
| <python><argparse> | 2023-11-15 15:23:08 | 1 | 5,505 | HuLu ViCa |
77,488,891 | 2,741,831 | Subquery from pypika documentation not working | <pre><code>from pypika import Query, Table, Field, Tables, Order
history, customers = Tables('history', 'customers')
last_purchase_at = Query.from_(history).select(
history.purchase_at
).where(history.customer_id==customers.customer_id).orderby(
history.purchase_at, order=Order.desc
).limit(1)
q = Query.from_(customers).select(
customers.id, last_purchase_at._as('last_purchase_at')
)
</code></pre>
<p>I have picked the code directly from the documentation of pypika <a href="https://pypika.readthedocs.io/en/latest/2_tutorial.html#joining-tables-and-subqueries" rel="nofollow noreferrer">https://pypika.readthedocs.io/en/latest/2_tutorial.html#joining-tables-and-subqueries</a></p>
<p>yet it gives me the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/user/trash/pikatest/main.py", line 10, in <module>
customers.id, last_purchase_at._as('last_purchase_at')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'Field' object is not callable
</code></pre>
<p>Did I do something wrong? Is this just broken? I already tried simplifying the code or using PostgreSQLQuery, but same result.</p>
| <python><pypika> | 2023-11-15 15:16:33 | 1 | 2,482 | user2741831 |
77,488,835 | 14,723,580 | Gradient descent stuck in a local minimum? | <p>I'm running gradient descent to find a root of a system of nonlinear equations, and I am wondering how to detect whether the method is stuck at a local minimum, because I believe that with the settings I am using this might be the case. My initial values are [-2, -1], with a tolerance of 10^-2 and 20 iterations. One thing I read is that if the residual flat-lines or begins to decrease incredibly slowly, it could be an indicator that the method is stuck at a local minimum, though I am not entirely sure. I have graphed the residual against the iteration number, as well as the iterates for each iteration, and I'm wondering how I might tell whether the method is stuck at a local minimum.</p>
<pre><code>def system(x):
F = np.zeros((2,1), dtype=np.float64)
F[0] = x[0]*x[0] + 2*x[1]*x[1] + math.sin(2*x[0])
F[1] = x[0]*x[0] + math.cos(x[0]+5*x[1]) - 1.2
return F
def jacb(x):
J = np.zeros((2,2), dtype=np.float64)
J[0,0] = 2*(x[0]+math.cos(2*x[0]))
J[0,1] = 4*x[1]
J[1,0] = 2*x[0]-math.sin(x[0]+5*x[1])
J[1,1] = -5*math.sin(x[0]+5*x[1])
return J
iterates, residuals = GradientDescent('system', 'jacb', np.array([[-2],[-1]]), 1e-2, 20, 0);
</code></pre>
<p><a href="https://pastebin.com/cmJn3WxC" rel="nofollow noreferrer">FullGradientDescent.py</a>
<a href="https://pastebin.com/6HYnfh8P" rel="nofollow noreferrer">GradientDescentWithMomentum</a></p>
<p>I usually test with 20 iterations, but I ran 200 to illustrate the slowing down of the residual.
<a href="https://i.sstatic.net/wr8pi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wr8pi.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/bzopG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bzopG.png" alt="enter image description here" /></a></p>
<p><strong>Marat</strong> suggested using GD with momentum.
Code changes:</p>
<pre><code>dn = 0
gamma = 0.8
dn_prev = 0
while (norm(F,2) > tol and n <= max_iterations):
J = eval(jac)(x,2,fnon,F,*fnonargs)
residuals.append(norm(F,2))
dn = gamma * dn_prev+2*(np.matmul(np.transpose(J), F))
dn_prev = dn
lamb = 0.01
x = x - lamb * dn
</code></pre>
<p>Residual using GD with momentum
<a href="https://i.sstatic.net/PP7lN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PP7lN.png" alt="enter image description here" /></a></p>
<p><strong>lastchance</strong> suggested doing a contour plot; this seems to show the behaviour of the algorithm, but it still does not converge.
<a href="https://i.sstatic.net/QoNSg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QoNSg.png" alt="enter image description here" /></a></p>
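One concrete stuck-detector, under the usual least-squares framing g(x) = 0.5·||F(x)||² that gradient descent on Jᵀ F is minimizing: at a non-root local minimum the gradient norm collapses while ||F|| stays above the tolerance. A hedged sketch of that check (the thresholds are illustrative):

```python
import numpy as np

def stuck_at_local_minimum(F, J, grad_tol=1e-6, res_tol=1e-2):
    # The gradient of 0.5*||F(x)||^2 is J^T F; a tiny gradient combined with a
    # large residual means descent has stalled somewhere that is not a root.
    grad = J.T @ F
    return np.linalg.norm(grad) < grad_tol and np.linalg.norm(F) > res_tol

# Stalled: zero gradient but residual of 1
print(stuck_at_local_minimum(np.array([[1.0]]), np.array([[0.0]])))
# Still descending: gradient is nonzero
print(stuck_at_local_minimum(np.array([[1.0]]), np.array([[1.0]])))
```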
| <python><numerical-methods><gradient-descent><root-finding> | 2023-11-15 15:08:09 | 2 | 753 | Krellex |
77,488,813 | 4,966,886 | Gradients and Laplacian of an image using skimage vs. open-cv | <p>I compare the vertical and horizontal gradients and Laplacian of an image using skimage and cv2 with the following code:</p>
<pre><code>import sys
import matplotlib.pyplot as plt
from matplotlib.image import imread
import skimage
import cv2
def plot(ax, img, title):
ax.imshow(img) # cmap = 'gray'
ax.set_title(title)
ax.set_xticks([])
ax.set_yticks([])
img = imread("./strawberry.jpg")
laplacian = cv2.Laplacian(img,cv2.CV_32F)
sobelx = cv2.Sobel(img,cv2.CV_32F,1,0,ksize=3)
sobely = cv2.Sobel(img,cv2.CV_32F,0,1,ksize=3)
fig1 = plt.figure(figsize=(10, 10))
fig1.suptitle('cv2', fontsize=14, fontweight='bold')
ax = fig1.add_subplot(221)
plot(ax, img, 'Original')
ax = fig1.add_subplot(222)
plot(ax, laplacian, 'Laplacian')
ax = fig1.add_subplot(223)
plot(ax, sobelx, 'Sobel X')
ax = fig1.add_subplot(224)
plot(ax, sobely, 'Sobel Y')
fig1.set_tight_layout(True)
laplacian = skimage.filters.laplace(img,ksize=5)
sobelx = skimage.filters.sobel(img, axis=0)
sobely = skimage.filters.sobel(img, axis=1)
fig2 = plt.figure(figsize=(10, 10))
fig2.suptitle('skimage', fontsize=14, fontweight='bold')
ax = fig2.add_subplot(221)
plot(ax, img, 'Original')
ax = fig2.add_subplot(222)
plot(ax, laplacian, 'Laplacian')
ax = fig2.add_subplot(223)
plot(ax, sobelx, 'Sobel X')
ax = fig2.add_subplot(224)
plot(ax, sobely, 'Sobel Y')
fig2.set_tight_layout(True)
plt.show()
</code></pre>
<p>Here are the results:</p>
<p><a href="https://i.sstatic.net/kPEdO.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kPEdO.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/xL6gf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xL6gf.jpg" alt="enter image description here" /></a></p>
<p>So, they are vastly different. Even if the kernels differ, I would not expect such differences. Did I miss something in my script?</p>
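Part of the gap is likely display scaling rather than the filters themselves: skimage converts inputs to floats scaled to [0, 1] before filtering, cv2's CV_32F outputs keep the 0–255 input range, and `imshow` clips float RGB data outside [0, 1]. Rescaling both outputs to [0, 1] before plotting would make them visually comparable — a sketch:

```python
import numpy as np

def norm01(a):
    # Rescale any array into [0, 1] for display; flat arrays map to zeros.
    a = np.asarray(a, dtype=float)
    rng = a.max() - a.min()
    return (a - a.min()) / rng if rng else np.zeros_like(a)

print(norm01(np.array([0.0, 2.0, 4.0])))  # [0.  0.5 1. ]
```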
| <python><opencv><matplotlib><scikit-image><laplacian> | 2023-11-15 15:04:12 | 1 | 306 | user11634 |
77,488,627 | 3,416,725 | How to unit test mocking SQLAlchemy engine in Python for an update query | <p>I have a function called <code>update_worker_data</code>. This simply updates a table in PostgreSQL, however if the row does not exist in the db, it then inserts it. This is checked by getting the <code>rowcount</code> after the query is executed:</p>
<pre><code>def update_worker_data(db: engine, data: List[dict]) -> int:
"""Update the data for the modifiable table.
Args:
----------
* db(engine): db engine
* date(List[dict]): data to either be updated or inserted
Returns:
----------
* row_count(int): amount of rows updated and inserted
"""
update_query = """
UPDATE worker_data
SET "first_col" = %(f_col)s, "second_col" = %(s_col)s, "third_col" = %(t_col)s, "fourth_col" = %(fo_col)s
WHERE alt_id = %(a_id)s
"""
insert_query = """
INSERT INTO worker_data("first_col", "second_col", "third_col", "fourth_col", "alt_id")
VALUES (%(f_col)s, %(s_col)s, %(t_col)s, %(fo_col)s, %(a_id)s);
"""
row_count = 0
for d in data:
params = {
"f_col": d["first_col"],
"s_col": d["second_col"],
"t_col": d["third_col"],
"fo_col": d["fourth_col"],
"a_id": d["alt_id"]
}
with db.connect() as conn:
has_updated = conn.execute(update_query, params).rowcount
has_inserted = 0
if has_updated == 0:
has_inserted = conn.execute(insert_query, params).rowcount
row_count += (has_inserted + has_updated)
return row_count
</code></pre>
<p>My unit test is currently like so:</p>
<pre><code>@patch("api.db.engine")
def test_update_c_data(mock_engine, update_data_dict):
cursor_mock = mock_engine.connect.return_value.__enter__.return_value
cursor_mock.execute.return_value.rowcount = 1
actual_row_count = update_substance_store(mock_engine, update_data_dict)
assert actual_row_count == 4
</code></pre>
<p>When I run this unit test, it asserts true; however, this will only ever execute the first query (<code>update_query</code>) and will never enter the <code>has_updated == 0</code> block, because the return value of <code>.rowcount</code> always equals 1.
So according to the <a href="https://docs.python.org/3.12/library/unittest.mock.html#unittest.mock.Mock.side_effect" rel="nofollow noreferrer">mock docs</a> I should use side_effect. I then create a list of values for the side_effect and use parametrize.
My desired behaviour would then be to run the <code>update_query</code> first and then the <code>insert_query</code>.</p>
<p>I then changed my unit test to use side_effect so it becomes like so:</p>
<pre><code>@pytest.mark.parametrize("expected_row_count", [
([1, 0]),
([0, 1]),
])
@patch("api.db.engine")
def test_update_c_data(mock_engine, expected_row_count, update_data_dict):
cursor_mock = mock_engine.connect.return_value.__enter__.return_value
cursor_mock.execute.return_value.rowcount.side_effect = expected_row_count
actual_row_count = update_substance_store(mock_engine, update_data_dict)
assert actual_row_count == 5
</code></pre>
<p>However when I run this test, it just sets the value to a mock object. Is <code>side_effect</code> the correct way to handle this test case?</p>
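One alternative worth noting: put the side_effect on `execute` itself, returning a distinct result object per call — by the time `.rowcount` is reached it is a plain attribute lookup on a Mock, so a side_effect attached there never fires. A sketch:

```python
from unittest.mock import MagicMock

update_result = MagicMock(rowcount=0)   # UPDATE matched nothing...
insert_result = MagicMock(rowcount=1)   # ...so the INSERT should run next
conn = MagicMock()
conn.execute.side_effect = [update_result, insert_result]

first = conn.execute("UPDATE ...").rowcount
second = conn.execute("INSERT ...").rowcount
print(first, second)  # 0 1
```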
| <python><unit-testing><pytest><pytest-mock> | 2023-11-15 14:38:29 | 0 | 493 | mp252 |
77,488,626 | 6,246,426 | Reformat long list in PyCharm to have one item per line | <p>When I run code formatting in PyCharm, it reformats long lists in this manner:</p>
<pre class="lang-py prettyprint-override"><code>style1 = [
'Email:', 'SSN:', 'Address:', 'Home Phone:',
'Mobile Phone: ', 'DOB:', 'Date of Surgery:',
'Date of Service:', 'Facility of Service:', 'Clinic Number:',
'Employer:', 'Work Phone: ', 'Fax: ', 'Type:', 'IPA:',
'Health Plan:', 'ID #:', 'Claims Address:', 'Group #:',
'Claim # / PO #:', 'Phone:', 'Fax:', 'Contact',
'Adjuster Email', 'Util Review Phone', 'Util Review Fax',
'Doctor:', 'NPI #: ', 'Date of Injury: ', 'Body Parts:',
'Body Part Side:', 'Gender:', 'Diagnosis:', 'Diagnosis 2:',
'Procedure:'
]
</code></pre>
<p>Is there any way to reformat in PyCharm and have one item per line, like so:</p>
<pre class="lang-py prettyprint-override"><code>style2 = [
'Email:',
'SSN:',
'Address:',
'Home Phone:',
'Mobile Phone: ',
'DOB:',
'Date of Surgery:',
'Date of Service:',
'Facility of Service:',
'Clinic Number:',
'Employer:',
'Work Phone: ',
'Fax: ',
'Type:',
'IPA:',
'Health Plan:',
'ID #:',
'Claims Address:',
'Group #:',
'Claim # / PO #:',
'Phone:',
'Fax:',
'Contact',
'Adjuster Email',
'Util Review Phone',
'Util Review Fax',
'Doctor:',
'NPI #: ',
'Date of Injury: ',
'Body Parts:',
'Body Part Side:',
'Gender:',
'Diagnosis:',
'Diagnosis 2:',
'Procedure:'
]
</code></pre>
<p>I was trying to do it with .editorconfig, but have not found a parameter that does the trick yet.</p>
| <python><pycharm> | 2023-11-15 14:38:04 | 0 | 1,208 | Victor Di |
77,488,611 | 1,936,538 | Jupyter Notebook export to HTML + Crontab doesn't show plots | <p>I am automating the execution and HTML export of a Jupyter Notebook.
The Jupyter Notebook was created in VS Code and when I export it from the terminal in Ubuntu (jupyter nbconvert --execute --to html) I get the correct HTML with all the plots.
However, this does not happen when I execute it from crontab, or when I use a terminal connected remotely from another PC via ssh: the plots are not shown. The script runs fine and outputs the print commands, but not the plots.</p>
<p>The code goes like this:</p>
<pre><code>for i in range(15):
(... )
np.log(pd_df+1).join(pd_df_2).plot()
plt.show()
plt.close()
pd_df.join(pd_df_2).plot()
plt.show()
plt.close()
np.clip(pd_df_3, -1, 1).plot()
plt.show()
        plt.close()
(...)
</code></pre>
<p>I tried to include the code below; I have also tried executing it in sh instead of bash.</p>
<pre><code>%matplotlib inline # prior to import command
</code></pre>
<p>I also tried to include the code below, but it did not work either.</p>
<pre><code>import plotly.io as pio
pio.renderers.default = 'notebook'
</code></pre>
| <python><pandas><matplotlib><jupyter-notebook><cron> | 2023-11-15 14:35:52 | 0 | 403 | husvar |
77,488,578 | 3,734,059 | SQLAlchemy>2.0 does not INSERT or UPSERT into MySQL when using text() function | <p>We have recently updated to <code>SQLAlchemy==2.0.23</code>, which requires the usage of <a href="https://docs.sqlalchemy.org/en/20/core/sqlelement.html#sqlalchemy.sql.expression.text" rel="nofollow noreferrer"><code>sqlalchemy.sql.expression.text</code></a> to wrap textual queries for execution.</p>
<p>In older versions, e.g. <code>SQLAlchemy==1.4.49</code>, we could <code>INSERT</code> or <code>UPSERT</code> data using <code>session.execute(my_query)</code>; this no longer works with <code>SQLAlchemy==2.0.23</code> using <code>session.execute(text(my_query))</code>.</p>
<p>Here's a minimal example of the problem:</p>
<pre><code>from sqlalchemy import create_engine, text
from sqlalchemy.orm import create_session
my_query = """
INSERT INTO test (`col_a`, `col_b`)
VALUES (4, 7), (5, 8), (6, 9) as nd
ON DUPLICATE KEY
UPDATE `col_a` = nd.`col_a`, `col_b` = nd.`col_b`
"""
constring = "mysql+pymysql://my_user:my_password@my.host:3306/my_db"
connection = create_engine(constring)
connection.echo = True # enable logging
session = create_session(bind=connection)
cursor = session.execute(text(my_query)) # does not INSERT or UPSERT
</code></pre>
<p>Interestingly, if I run the plain query on the database, it works perfectly and the logging also shows a working query.</p>
<p>Any hints on how to resolve this error?</p>
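<p>One thing worth checking (an assumption about the cause, not confirmed by the snippet): SQLAlchemy 2.0 removed library-level autocommit, so a textual <code>INSERT</code> is rolled back when the connection is returned unless it is explicitly committed. A minimal sketch against an in-memory SQLite database showing the 2.0-style commit-on-success block:</p>

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite+pysqlite:///:memory:")

# engine.begin() opens a transaction and commits on successful exit;
# with a plain connect() or a Session you need an explicit .commit()
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE test (col_a INTEGER, col_b INTEGER)"))
    conn.execute(text("INSERT INTO test (col_a, col_b) VALUES (4, 7)"))

with engine.connect() as conn:
    rows = conn.execute(text("SELECT col_a, col_b FROM test")).all()
print(rows)  # [(4, 7)]
```

<p>With the session in the question, the equivalent would be calling <code>session.commit()</code> after the execute.</p>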
| <python><mysql><sqlalchemy> | 2023-11-15 14:30:52 | 1 | 6,977 | Cord Kaldemeyer |
77,488,576 | 5,309,827 | django userprofile nested relationship is empty in django REST | <p>Hello, I have a UserProfile model in Django, and I want to serialize my User model with its UserProfile nested, to receive it via AJAX in my template, but it comes back empty.</p>
<p>Model UserProfile</p>
<pre><code>class UserProfile(models.Model):
user = models.OneToOneField(User, related_name='profile', on_delete=models.DO_NOTHING)
departamento = models.ForeignKey(Departamento,on_delete=models.PROTECT,related_name="pertenece_a")
class Meta:
db_table = 'UserProfile'
def user_profile(sender, instance, signal, *args, **kwargs):
Userprofile, new = UserProfile.objects.get_or_create(user=instance)
signals.post_save.connect(user_profile, sender=User)
</code></pre>
<p>Serializers</p>
<pre><code>class ProfileSerializer(serializers.Serializer):
class Meta:
model = UserProfile
fields = ["departamento"]
class GestionUsuarioSerializer(serializers.ModelSerializer):
profile = ProfileSerializer(many=True,read_only=True)
class Meta:
model = User
fields = ["id","username","email","first_name","last_name","profile"]
</code></pre>
<p>Result</p>
<p><a href="https://i.sstatic.net/yM3Wx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yM3Wx.png" alt="enter image description here" /></a></p>
| <python><django><django-rest-framework> | 2023-11-15 14:30:38 | 1 | 323 | Jaime |
77,488,275 | 4,120,777 | Filter by list's element in payload | <p>In Qdrant DB I have a payload containing a list. How can I filter the results of a search, limiting them to the ones where the list contains a specific element?</p>
<p>As example, if the set of points is:</p>
<pre class="lang-json prettyprint-override"><code>[
{ "id": 1, "Fruit": ["apple", "banana", "orange"] },
{ "id": 2, "Fruit": ["pear", "orange" ] },
{ "id": 3, "test": "empty" },
{ "id": 4, "Fruit": ["apple", "orange", "pear"] }
]
</code></pre>
<p>And I want to filter the results containing at least an Apple AND an Orange, i.e. the ones with ID 1 and 4.</p>
<p>How can I build such filter?</p>
<p>I already had a look at the documentation <a href="https://qdrant.tech/documentation/concepts/filtering/" rel="nofollow noreferrer">at this link</a>, without success.</p>
<p>My case is not covered in the docs. What I tried is:</p>
<pre><code>Filter(should=[FieldCondition(key='Fruit', match=MatchValue(value='Banana'), range=None, geo_bounding_box=None, geo_radius=None, values_count=None), FieldCondition(key='Fruit', match=MatchValue(value='Apple'), range=None, geo_bounding_box=None, geo_radius=None, values_count=None)], must=None, must_not=None)
</code></pre>
<p>But the result that I know is in the DB does not show up.</p>
<p>Thank you in advance.</p>
| <python><vector-database><qdrant><qdrantclient> | 2023-11-15 13:51:39 | 1 | 2,136 | Vincenzo Lavorini |
77,488,178 | 17,136,258 | Calculating the duration of an event | <p>I have a problem. I have a DataFrame that is full of events (note: it does not have to be sorted by date and can contain multiple IDs).
I would like to "know" how long an event lasted.
The calculation can be represented as follows</p>
<pre><code>Duration of Event1 = Take the ID(Timestamp Event 2 - Timestamp Event 1)
</code></pre>
<p>For example:</p>
<pre><code>Cleaning = ID1234 ( 06.11.2023 14:29- 06.11.2023 14:19) = 10 min
</code></pre>
<p><code>End of Work</code> marks the end; its duration should not be measured. I have tried the code below, but it counts incorrectly. How can I improve it so that I get the desired output?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# Sample DataFrame
data = {
'Timestamp': ['06.11.2023 14:19', '06.11.2023 14:29', '06.11.2023 14:37', '06.11.2023 14:41', '06.11.2023 15:00'],
'Event Date': ['06.11.2023', '06.11.2023', '06.11.2023', '06.11.2023', '06.11.2023'],
'ID': ['1234', '1234', '1234', '1234', '1234'],
'Event Category': ['Working Event', 'Working Event', 'Working Event', 'Failure Event', 'Working Event'],
'What did you do?': ['Cleaning', 'Recording', 'Insert', pd.NA, 'End of Work']
}
df = pd.DataFrame(data)
df['Timestamp'] = pd.to_datetime(df['Timestamp'], format='%d.%m.%Y %H:%M')
df = df.sort_values(by=['ID', 'Event Category', 'Timestamp'])
df['Duration'] = df.groupby(['ID', 'Event Category'])['Timestamp'].diff().dt.seconds.div(60, fill_value=0)
print(df)
</code></pre>
<p>Dataframe</p>
<pre class="lang-py prettyprint-override"><code> Timestamp Event Date ID Event Category What did you do?
0 06.11.2023 14:19 06.11.2023 1234 Working Event Cleaning
1 06.11.2023 14:29 06.11.2023 1234 Working Event Recording
2 06.11.2023 14:37 06.11.2023 1234 Working Event Insert
3 06.11.2023 14:41 06.11.2023 1234 Failure Event <NA>
4 06.11.2023 15:00 06.11.2023 1234 Working Event End of Work
</code></pre>
<p>What I have</p>
<pre class="lang-py prettyprint-override"><code>[OUT]
Timestamp Event Date ID Event Category What did you do? \
3 2023-11-06 14:41:00 06.11.2023 1234 Failure Event <NA>
0 2023-11-06 14:19:00 06.11.2023 1234 Working Event Cleaning
1 2023-11-06 14:29:00 06.11.2023 1234 Working Event Recording
2 2023-11-06 14:37:00 06.11.2023 1234 Working Event Insert
4 2023-11-06 15:00:00 06.11.2023 1234 Working Event End of Work
Duration
3 0.0
0 0.0
1 10.0
2 8.0
4 23.0
</code></pre>
<p>What I want</p>
<pre class="lang-py prettyprint-override"><code> Timestamp Event Date ID Event Category What did you do? Duration
0 06.11.2023 14:19 06.11.2023 1234 Working Event Cleaning 10
1 06.11.2023 14:29 06.11.2023 1234 Working Event Recording 8
2 06.11.2023 14:37 06.11.2023 1234 Working Event Insert 4
3 06.11.2023 14:41 06.11.2023 1234 Failure Event <NA> 19
4 06.11.2023 15:00 06.11.2023 1234 Working Event End of Work -
</code></pre>
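<p>A sketch of one way to get the desired forward-looking duration (the grouping key is an assumption): the duration of each event is the gap to the next row of the same ID, regardless of category, which leaves the final <code>End of Work</code> row empty:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Timestamp": pd.to_datetime(
        ["06.11.2023 14:19", "06.11.2023 14:29", "06.11.2023 14:37",
         "06.11.2023 14:41", "06.11.2023 15:00"],
        format="%d.%m.%Y %H:%M"),
    "ID": ["1234"] * 5,
})
df = df.sort_values(["ID", "Timestamp"])
# duration = time until the next event of the same ID; the last event
# per ID has no successor, so its duration is NaN
df["Duration"] = (
    df.groupby("ID")["Timestamp"].shift(-1) - df["Timestamp"]
).dt.total_seconds().div(60)
print(df["Duration"].tolist())  # [10.0, 8.0, 4.0, 19.0, nan]
```

<p>Note this sorts by timestamp only, instead of by <code>Event Category</code>, which is what produced the unwanted grouping in the attempt above.</p>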
| <python><pandas><dataframe> | 2023-11-15 13:40:28 | 2 | 560 | Test |
77,488,173 | 5,040,775 | Can you create some kind of executable file (e.g., exe) to run Python code on a computer that does not have Python? | <p>I am writing Python code to automatically generate a report. Ultimately, I want other people at my firm to be able to run this easily by using some kind of executable file. None of them have Python on their PC and I don't want them to install it (for operational reasons). Is it even possible to do that?</p>
| <python> | 2023-11-15 13:40:07 | 1 | 3,525 | JungleDiff |
77,488,041 | 22,538,132 | change background color in open3d.visualization.draw_geometries | <p>I want to change the background color in Open3D when I draw geometries using <code>open3d.visualization.draw_geometries</code>, but I can't figure out how to do that, as <a href="http://www.open3d.org/docs/release/python_api/open3d.visualization.draw_geometries.html" rel="nofollow noreferrer">the documentation</a> does not show how to do it.</p>
<p>Can you please tell me how I can change the background color, or show a <code>Skymap</code> example? Thanks in advance.</p>
| <python><colors><background><open3d> | 2023-11-15 13:21:12 | 1 | 304 | bhomaidan90 |
77,488,011 | 22,221,987 | How to fill a dynamically expanding HTML file with templates filled with Python data | <p>I have an HTML template for one message (I receive messages via the <code>Slack-API</code>).
I want to fill the template with information from the API and then stick the filled templates together in a new HTML file (the output file will look like a chat wall with every message from the chat).</p>
<p>Because chats differ in size, I can't make a full template for the whole chat. So I'm trying to expand my output HTML dynamically with my templates filled with data.</p>
<p>I defined some CSS-classes in the output HTML, but they are not so important for this question, so, there is a single message html-example, which i append to the output-html:</p>
<pre><code><div class='message_body'>
<div class="message_header">
<strong class="user_name">Some UserName</strong>
<span class="date">11.12.23 2:22</span>
</div>
<hr>
<ol class="ordered_list">
<li class="message_text">Lorem ipsum dolor sit amet, consectetur...<br />And second line</li>
</ol>
</div>
</code></pre>
<p>I thought to save this template as a Python string variable, then use f-strings to add some data to it, and then just write every filled template string to <code>output.html</code>.</p>
<p>But it doesn't look well optimised to me. Are there any better solutions for dynamically creating HTML with Python-parsed data?</p>
<p>I've heard about <code>Jinja</code> and <code>airum</code>, but I can't choose between them.</p>
<p>In addition, are there any other optimised solutions?</p>
<p><strong>UPD</strong>: Added a chat HTML example.
<a href="https://i.sstatic.net/MNZcb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MNZcb.png" alt="enter image description here" /></a></p>
<p>Every message in this example is created with that html example. <code>Replies</code> - is just an expandable list-menu with same message-templates.</p>
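<p>As a baseline before choosing an engine, the fill-and-concatenate pattern can be sketched with the stdlib <code>string.Template</code> (the fields below are illustrative, not taken from the Slack payload); Jinja2 adds loops, conditionals, autoescaping, and template inheritance on top of this:</p>

```python
from string import Template

# hypothetical single-message template, condensed from the HTML above
message_tpl = Template(
    '<div class="message_body">'
    '<strong class="user_name">$user</strong>'
    '<span class="date">$date</span>'
    '<li class="message_text">$text</li>'
    '</div>'
)

messages = [
    {"user": "Alice", "date": "11.12.23 2:22", "text": "Hello"},
    {"user": "Bob", "date": "11.12.23 2:23", "text": "Hi"},
]
# fill the template once per message, then concatenate into one page
page = "\n".join(message_tpl.substitute(m) for m in messages)
print(page.count("message_body"))  # 2
```

<p>The same loop works with a Jinja engine by swapping <code>substitute</code> for <code>template.render(**m)</code>, or by moving the loop into the template itself.</p>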
| <python><html><python-3.x><parsing><jinja2> | 2023-11-15 13:16:12 | 0 | 309 | Mika |
77,488,002 | 14,244,437 | Make inner joins by using Q objects in Django | <p>I'm customizing a model's Admin change list page in my project.</p>
<p>This model contains a Many to Many field and I want to allow multi-selection filtering.</p>
<p>I've used the <a href="https://github.com/JobDoesburg/django-admin-multi-select-filter" rel="nofollow noreferrer">Django Admin Multi Select Filter package</a> to do this but I noticed that the default behaviour in Django's <code>__in</code> lookup is to be inclusive and not exclusive.</p>
<p>Imagine I have an object A associated with M2M objects 1 and 2 and object B associated with M2M objects 2 and 3. If I make the following query <code>Model.objects.filter(m2m_model__in=[1,2])</code> I will have both objects A and B being retrieved, since the search will use the following condition:</p>
<p><code>Index Cond: (models_model_m2mfield.model_id = ANY ('{1,2}'::bigint[]))</code></p>
<p>I was able to customize the behaviour of the search by doing the following:</p>
<pre><code> for choice in choices:
queryset = queryset.filter(**{self.lookup_kwarg: choice})
</code></pre>
<p>This generates inner joins and returns the value I'm waiting for (only objects matching every option).</p>
<pre><code>INNER JOIN "models_model_m2mfield" ON ("models_m2mmodel"."id" = "models_model_m2mfield"."m2mfield_id") INNER JOIN "models_model_m2mfield" T4 ON ("models_m2mmodel"."id" = T4."m2mfield_id") WHERE ("models_model_m2mfield"."model_id" = 4 AND T4."model_id" = 5)
</code></pre>
<p>I've tried using Q objects to get the same result, but I couldn't make it work. Is it possible to achieve the same behaviour using the Q class?</p>
| <python><django><django-orm> | 2023-11-15 13:14:43 | 1 | 481 | andrepz |
77,487,973 | 1,936,752 | How to deal with off-by-one issues in convolution (Python)? | <p>I'm trying to write a function to add two random variables <code>X1</code> and <code>X2</code>. In my case, they are both uniform random variables from <code>0</code> to <code>a1</code> and <code>0</code> to <code>a2</code>. To compute the random variable <code>Y = X1 + X2</code>, I need to take a convolution of the probability distributions of <code>X1</code> and <code>X2</code>.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import simps
def convolution(f, g, x_range):
delta = (x_range[-1] - x_range[0])/len(x_range)
result = np.convolve(f(x_range), g(x_range), mode = 'full')*delta
return result
# Define uniform distribution for some a > 0. This part can be adapted to arbitrary distributions
def uniform_dist(x, a):
return np.where((x >= 0) & (x <= a), 1/a, 0)
# Set the range of x values, y values and constants
delta = 0.1
x_lim_low = -5
x_lim_upp = 5
a1 = 1
a2 = 1
x_range = np.arange(x_lim_low,x_lim_upp+delta,delta)
y_range = np.arange(2*x_lim_low,2*x_lim_upp+delta,delta)
# Perform convolution
convolution_pdf = convolution(lambda x: uniform_dist(x, a1), lambda x: uniform_dist(x, a2), x_range)
# Find mean of convolution
convolution_mean = np.sum(convolution_pdf*y_range)*delta
</code></pre>
<p>I've tried various combinations but get small errors in the mean. I think this is because the convolution is an array of length <code>2*len(x_range) - 1</code> and it's unclear how to deal with this off-by-one error.</p>
<p>What is the correct way to convolve two variables such that I can compute the mean of the convolution correctly?</p>
| <python><probability><convolution><off-by-one> | 2023-11-15 13:10:22 | 1 | 868 | user1936752 |
77,487,776 | 1,608,765 | Fitting a Rice distribution using scipy | <p>I'm trying to write a fitter for some Rice-distributed data that I have, but it is not working for some, probably stupid, reason.</p>
<p>The distribution gets created fine, and the fitting routine seems to work as I am used to with Gaussians. However, when I fit the curve, I just get nonsense. I can't seem to see where I am going wrong.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import rice
from scipy.optimize import curve_fit
# Custom Rice PDF function
def rice_pdf(x, nu, amplitude, b, scale):
return (x / b) * np.exp(-(x**2 + scale**2) / (2 * b**2)) * amplitude
# Function to fit a Rice distribution to a histogram using curve_fit
def fit_rice_distribution_to_histogram(hist_data, bins):
# Calculate bin centers
bin_centers = (bins[:-1] + bins[1:]) / 2
# Initial guesses for the parameters (nu, amplitude, b, scale)
initial_guess = [8.4, 1.0, 1.0, np.mean(bin_centers)]
# Fit the Rice distribution to the histogram data using curve_fit
params, covariance = curve_fit(rice_pdf, bin_centers, hist_data, p0=initial_guess)
nu, amplitude, b, scale = params
# Create the fitted Rice distribution
fitted_distribution = rice(nu, loc=scale, scale=np.sqrt(b**2 + scale**2))
return fitted_distribution, nu, amplitude, b, scale
# Example usage:
if __name__ == "__main__":
# Parameters for the Rice distribution
nu = 8.5
sigma = 10.5
sample_size = 100
# Calculate b from nu and sigma
b = nu / sigma
# Generate random data points from the Rice distribution
data = rice.rvs(b=b, scale=sigma, size=sample_size)
# Create a histogram of the generated data
hist_data, bins, _ = plt.hist(data, bins=20, density=True, alpha=0.5, label="Generated Data")
plt.xlabel("Value")
plt.ylabel("Probability Density")
# Fit a Rice distribution to the histogram using curve_fit
fitted_distribution, fitted_nu, amplitude, fitted_b, fitted_scale = fit_rice_distribution_to_histogram(hist_data, bins)
# Plot the original histogram and the fitted distribution
x = np.linspace(min(bins), max(bins), 1000)
pdf_values = fitted_distribution.pdf(x)
plt.plot(x, pdf_values, 'r', label="Fitted Rice Distribution")
plt.legend()
plt.show()
# Print fitted parameters
print("Fitted Nu:", fitted_nu)
print("Fitted Amplitude:", amplitude)
print("Fitted b:", fitted_b)
print("Fitted Scale:", fitted_scale)
</code></pre>
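<p>As an aside, the custom <code>rice_pdf</code> above omits the Bessel <code>I0</code> factor of the actual Rice density, so <code>curve_fit</code> is fitting a different function. A sketch letting scipy estimate the parameters directly (pinning <code>loc=0</code> to match how <code>rice.rvs(b, scale)</code> generated the data is an assumption about the intended model):</p>

```python
import numpy as np
from scipy.stats import rice

# generate the same kind of data as above
data = rice.rvs(b=8.5 / 10.5, scale=10.5, size=1000, random_state=0)

# rv_continuous.fit returns (shape..., loc, scale); floc pins loc during MLE
b_hat, loc_hat, scale_hat = rice.fit(data, floc=0)
print(loc_hat)  # 0.0
```

<p>The fitted <code>b_hat</code> and <code>scale_hat</code> can then be plugged straight into <code>rice(b_hat, scale=scale_hat).pdf</code> for plotting against the histogram.</p>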
| <python><scipy><curve-fitting> | 2023-11-15 12:38:53 | 1 | 2,723 | Coolcrab |
77,487,774 | 458,742 | "commands out of sync; you can't run this command now" on first use of cursor | <p>I am using MySQL in Django (Python).</p>
<p>The draft version of this code worked and was stable, but it created cursors without closing them, and I just rewrote it to manage the cursor with enter/exit. Now it is unstable.</p>
<pre><code>class Cursor:
def __init__ (self, db):
self.db = db
self.cursor = db.DB_connection.cursor ()
def __enter__ (self):
return self
def __exit__ (self, ex_type, ex_value, ex_traceback):
if ex_value is not None:
            pass  # log stuff
self.cursor.close ()
self.cursor = None
def __del__ (self):
if self.cursor is not None:
# log stuff
self.cursor.close ()
def one (self, query, args):
self.cursor.execute (query, args)
return self.cursor.fetchone ()
class DB:
def __init__ (self, my_stuff):
self.DB_connection = connect (...)
def get (self):
return Cursor (self)
</code></pre>
<p>and in the application:</p>
<pre><code>with DB_connection.get() as db:
result = db.one ("SELECT ...", ...)
</code></pre>
<p>Sometimes this works as before, but sometimes randomly it will fail calling <code>db.one()</code></p>
<pre><code>_mysql_connector.MySQLInterfaceError: Commands out of sync; you can't run this command now
</code></pre>
<p>this exception is seen in Cursor's <code>__exit__</code></p>
<p>Googling this error tells me that this means an earlier SELECT on that cursor still has un-fetched results. But this is nonsense since <code>with DB_connection.get() as db</code> creates a new cursor.</p>
<p>Also, sometimes the process simply exits without printing any exception info. The Docker log looks like this</p>
<pre><code>www_1 | "GET /test_page HTTP/1.1" 200 16912
docker_www_1 exited with code 245
</code></pre>
<p>These crashes are non-deterministic even though the code in the view which creates the cursor is entirely deterministic.</p>
<p>I have added some print statements which show that there are only ever 0 or 1 cursors simultaneously in existence during the flow of the application.</p>
<p>In the draft version, instead of</p>
<pre><code>with DB_connection.get() as db:
result = db.one ("SELECT ...", ...)
</code></pre>
<p>it would have been something like</p>
<pre><code>db = DB_connection.get()
result = db.one ("SELECT ...", ...)
</code></pre>
<p>And that was very stable, although it leaks the cursor resource. I don't think anything else significantly changed since it was stable, other than wrapping the cursor in enter/exit.</p>
<p>Is this the correct way to use the API?</p>
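<p>Independently of the MySQL issue, the close-on-exit part of the wrapper can be expressed with the stdlib <code>contextlib.closing</code>, which guarantees <code>close()</code> runs whether or not an exception occurred; a sketch with a stand-in cursor:</p>

```python
from contextlib import closing

class FakeCursor:
    """Stand-in for a DB-API cursor, just to show the protocol."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

cur = FakeCursor()
with closing(cur) as c:
    pass  # run queries on c here
print(cur.closed)  # True: closed on exit, exception or not
```

<p>Wrapping each freshly created cursor this way avoids hand-writing <code>__enter__</code>/<code>__exit__</code>/<code>__del__</code> at all.</p>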
| <python><mysql><django><database-cursor> | 2023-11-15 12:38:29 | 0 | 33,709 | spraff |
77,487,393 | 10,522,901 | scikit-learn fit function returns immediately with 0 training samples processed | <p>I am experiencing an issue with <code>scikit-learn</code> version <code>1.3.2</code>, where the <code>fit()</code> function of the <code>MLPClassifier</code> returns almost instantly without processing any training samples. This is evident as the model's <code>t_</code> parameter (indicating the number of training samples seen by the solver during fitting) remains at <code>0</code>.</p>
<p>This problem occurred while attempting to fit a dataset with over 100,000 training samples, where the <code>fit()</code> function returned in less than 10 milliseconds. The following minimal example replicates the issue:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.neural_network import MLPClassifier
X = [[1., 2.], [3., 4.]]
y = [1, 0]
clf = MLPClassifier(solver="lbfgs", alpha=1e-5, hidden_layer_sizes=(5, 2))
clf.fit(X, y)
print(clf.t_) # Outputs: 0
</code></pre>
<p>I have searched but haven't found similar issues reported. Any insights into why this might be happening would be greatly appreciated.</p>
<p>Edit: I observed this behavior consistently across multiple <code>scikit-learn</code> models.</p>
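<p>For what it's worth, the sketch below rests on an assumption about the implementation (worth verifying against your version): <code>t_</code> only appears to be advanced by the stochastic solvers (<code>"sgd"</code>/<code>"adam"</code>), while <code>"lbfgs"</code> leaves it at its initial value, so a zero <code>t_</code> need not mean nothing was fitted:</p>

```python
from sklearn.neural_network import MLPClassifier

X = [[1., 2.], [3., 4.]]
y = [1, 0]
# same toy data as above, but with a stochastic solver so t_ is updated
clf = MLPClassifier(solver="sgd", hidden_layer_sizes=(5,), max_iter=20,
                    random_state=0)
clf.fit(X, y)
print(clf.t_ > 0)  # True: sgd counts the samples it has seen
```

<p>A more direct check that fitting happened at all is inspecting <code>clf.coefs_</code> or the training score, rather than <code>t_</code>.</p>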
| <python><scikit-learn> | 2023-11-15 11:38:02 | 0 | 316 | vegarab |
77,487,354 | 9,353,682 | How to suppress the `unused-property` error in vulture | <p>I have noticed that the <a href="https://github.com/jendrikseipp/vulture" rel="nofollow noreferrer">vulture</a> package started to report a new error, <code>vulture: unused-property</code>.
The <a href="https://github.com/jendrikseipp/vulture/blob/main/README.md" rel="nofollow noreferrer">documentation</a> specifies the following error codes (just like flake8 - <a href="https://flake8.pycqa.org/en/latest/user/error-codes.html" rel="nofollow noreferrer">https://flake8.pycqa.org/en/latest/user/error-codes.html</a>):</p>
<ul>
<li>F401 - <code>unused-import</code></li>
<li>F841 - <code>unused-variable</code></li>
</ul>
<p>There is no code specified for <code>unused-property</code>, though. I would like a code so I can suppress just this error (instead of using <code># noqa</code>, which suppresses all errors on the line), as I use other linters as well.</p>
| <python> | 2023-11-15 11:31:00 | 1 | 722 | Maciek |
77,487,326 | 9,110,646 | Why does a dict comprehension not work when a for-loop does? | <p>When calling the sample function <code>func</code> in this module, why does it throw an exception
when I use a comprehension (it can be toggled with a parameter)? Can someone explain the meaning of the exception? <code>cycle</code> seems to be overwritten and I cannot wrap my head around it.</p>
<h1>Example function with same functionality as loop and comprehension</h1>
<pre><code>import pandas as pd
def func(
x_dict,
keys_list,
start_cycle,
end_cycle,
comprehension=True
):
x_test_dicts = {}
for cycle in range(start_cycle, end_cycle + 1):
print(f"cycle = {cycle}")
if comprehension:
# Fill the dict with comprehension.
x_test_dict = {
f"{key}_input":
x_dict[key].query('cycle == @cycle').values
for key in keys_list
}
else:
# Fill the dict with normal for loop.
x_test_dict = {}
for key in keys_list:
x_test_dict[f"{key}_input"] = \
x_dict[key].query('cycle == @cycle').values
x_test_dicts[cycle] = x_test_dict
return x_test_dicts
</code></pre>
<h1>Creation of test data</h1>
<pre><code>import pandas as pd
import numpy as np
# Create an ID array from 1 to 1000
ids = np.arange(1, 1001)
# Calculate cycle as ID divided by 100
cycles = ids // 100
# Generate random integer values for the remaining columns
# Assuming a range for random integers (e.g., 0 to 100)
col1_int = np.random.randint(0, 101, 1000)
col2_int = np.random.randint(0, 101, 1000)
col3_int = np.random.randint(0, 101, 1000)
# Update the DataFrame with integer values
df = pd.DataFrame({
"ID": ids,
"cycle": cycles,
"col1": col1_int,
"col2": col2_int,
"col3": col3_int
})
df.head() # Display the first few rows of the updated DataFrame
</code></pre>
<h1>Run test cases with functions</h1>
<pre><code>import pandas as pd
df = df.set_index(['ID', 'cycle']) # Use multi-indexing
x_dict = {'Auxin': df} # Create a simple dict with the DataFrame
keys_list = ['Auxin'] # Define a list of keys to work with
# Define ranges for the loop inside `func`
start_cycle = 6
end_cycle = 29
# RUNS SUCCESSFULLY WITHOUT LIST COMPREHENSION
comprehension = False
result = func(
x_dict,
keys_list,
start_cycle,
end_cycle,
comprehension=comprehension
)
print("Worked without dict comprehension!")
# FAILS WITH LIST COMPREHENSION
comprehension = True
result = func(
x_dict,
keys_list,
start_cycle,
end_cycle,
comprehension=comprehension
)
print("Breaks when dict comprehension is used!")
</code></pre>
<h1>The error</h1>
<pre><code>UndefinedVariableError: local variable 'cycle' is not defined
</code></pre>
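<p>For reference, <code>DataFrame.query</code> resolves <code>@name</code> by inspecting the caller's frame, and a comprehension is compiled as its own nested scope (until CPython 3.12's comprehension inlining), so <code>cycle</code> is not visible there. Passing the variable explicitly via <code>local_dict</code> sidesteps the frame inspection; a sketch on toy data:</p>

```python
import pandas as pd

df = pd.DataFrame({"cycle": [1, 1, 2], "val": [10, 20, 30]})

# local_dict supplies the @-variables explicitly, so the comprehension's
# private scope no longer matters
out = {
    cycle: int(
        df.query("cycle == @cycle", local_dict={"cycle": cycle})["val"].sum()
    )
    for cycle in (1, 2)
}
print(out)  # {1: 30, 2: 30}
```

<p>The plain for-loop works because there the loop variable really is a local of the enclosing function's frame.</p>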
| <python><pandas><dictionary><dictionary-comprehension> | 2023-11-15 11:26:51 | 1 | 423 | Pm740 |
77,487,292 | 15,452,601 | How do I quiet mypy when testing inheritance from a generic? | <p>The following MWE constructs a mapping between the typevars used in a generic class and their declared values on an instance:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar, get_args
T = TypeVar("T")
class Derived(Generic[T]):
def method(self, val: T) -> T:
return val
d = Derived[int]()
def get_generic_types_mapping(obj: object) -> dict[type, type]:
if isinstance(obj, Generic):
generic_base = next(
origin
for origin in obj.__orig_bases__
if hasattr(origin, "__origin__") and origin.__origin__ is Generic
)
return {
generic: decorated
for generic, decorated in zip(
get_args(generic_base), get_args(obj.__orig_class__)
)
}
else:
return {}
assert get_generic_types_mapping(d) == {T: int}
assert get_generic_types_mapping(object()) == {}
</code></pre>
<p>This code works fine, but mypy (and pyright) doesn't (don't) like it:</p>
<pre class="lang-bash prettyprint-override"><code>$ mypy t.py
t.py:15: error: Argument 2 to "isinstance" has incompatible type "<typing special form>"; expected "_ClassInfo" [arg-type]
t.py:18: error: "object" has no attribute "__orig_bases__" [attr-defined]
t.py:24: error: "object" has no attribute "__orig_class__"; maybe "__class__"? [attr-defined]
Found 3 errors in 1 file (checked 1 source file)
</code></pre>
<p>This makes sense, as <code>Generic</code> objects really <em>don't</em> have an <code>__orig_bases__</code>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic
assert not hasattr(Generic(), "__orig_bases__")
</code></pre>
<p>How do I tell mypy that <code>obj</code> does have <code>__orig_bases__</code>? [What do I read to understand where it comes from?] Should I be using something other than <code>isinstance</code>?</p>
<p>(I thought of just <code>if hasattr(obj, "__orig_bases__")</code>, but this doesn't remove the error on <code>__orig_class__</code>, and I actually want to raise an error if a generic <em>doesn't</em> have defined types. I could just check for both, but I feel like I'm missing something conceptually.)</p>
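<p>One runtime-compatible way to quiet the checkers is to describe the runtime-only attribute structurally and <code>cast</code> to it (a sketch — the <code>HasOrigClass</code> protocol below is an illustration, not part of the stdlib):</p>

```python
from typing import Generic, Protocol, TypeVar, cast, get_args

T = TypeVar("T")

class HasOrigClass(Protocol):
    # structural type for the attribute CPython sets on Box[int]() instances
    __orig_class__: object

class Box(Generic[T]):
    pass

b = Box[int]()
# the attribute exists at runtime even though `object` lacks it statically,
# so casting to the Protocol satisfies the checker without changing behaviour
print(get_args(cast(HasOrigClass, b).__orig_class__))  # (<class 'int'>,)
```

<p>The same cast works for <code>__orig_bases__</code> on the class object, and raising explicitly when the attribute is missing preserves the "error on unparameterised generics" behaviour.</p>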
| <python><generics><types> | 2023-11-15 11:21:10 | 1 | 6,024 | 2e0byo |
77,487,139 | 929,122 | pyarrow.lib.ArrowTypeError: Expected bytes, got a 'LOB' object - How can I convert CLOBs to Strings at operator level? | <p>I'm trying to copy data from an Oracle database to GCS using Airflow's OracleToGCSOperator:</p>
<pre class="lang-py prettyprint-override"><code>copy_data = OracleToGCSOperator(
task_id='copy_data_task',
oracle_conn_id='my_conn',
sql="SELECT * FROM MY_TABLE",
bucket=MY_BUCKET,
    filename=FILEPATH,
export_format='PARQUET'
)
</code></pre>
<p>When executed, I get the error <code>pyarrow.lib.ArrowTypeError: Expected bytes, got a 'LOB' object</code>.</p>
<p>MY_TABLE has more than 800 columns and only 2 of them CLOB. I assume this is what GCS/parquet doesn't like.</p>
<p>Is there any way I convert the CLOB columns to strings at Operator level?</p>
| <python><oracle-database><google-cloud-platform><airflow><google-cloud-composer-2> | 2023-11-15 10:55:56 | 1 | 437 | drake10k |
77,486,881 | 6,448,412 | Flask-CORS returns random Access-Control-Allow-Origin if Origin request header is not provided | <p>I want to enable CORS in my Flask application with a predefined set of allowed origins, as documented <a href="https://flask-cors.corydolphin.com/en/latest/api.html" rel="nofollow noreferrer">here</a>:</p>
<pre><code>from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
CORS(app, origins=['http://localhost:3000', 'https://app.my_domain.com'])
</code></pre>
<p>The problem is that if I don't specify the <code>Origin</code> header in my request to the server, an arbitrary value for the <code>Access-Control-Allow-Origin</code> response header will be returned.</p>
<p>So for example, if my web application running on <code>https://app.my_domain.com</code> sends a <code>GET</code> request to the backend without specifying the <code>Origin</code> request header, the backend returns the following response header:</p>
<p><code>Access-Control-Allow-Origin: http://localhost:3000</code></p>
<p>This seems not correct to me. How is this mechanism intended to be used?</p>
| <python><flask><cors><flask-cors> | 2023-11-15 10:18:59 | 1 | 398 | Laugslander |
77,486,849 | 1,521,241 | PyDLL does not work across different Python versions | <p>I am using the ctypes PyDLL function to bind a DLL to Python.</p>
<p>From Python side:</p>
<pre><code>import ctypes as _ct
_path = _parent_path(__file__) / "scisuit_pybind"
pydll = _ct.PyDLL(str(_path))
pydll.c_root_bisect.argtypes = [_ct.py_object, _ct.c_double, _ct.c_double, _ct.c_double, _ct.c_int, _ct.c_char_p, _ct.c_bool]
pydll.c_root_bisect.restype = _ct.py_object
</code></pre>
<p>And from C++ side:</p>
<pre><code>#define EXTERN \
extern "C" DLLPYBIND
EXTERN PyObject * c_root_bisect(PyObject * FuncObj,
double a,
double b,
double tol = 1e-5,
int maxiter = 100,
const char* method = "bf",
bool modified = false);
</code></pre>
<p>Cases are:</p>
<ol>
<li><strong>Works:</strong> Compile the DLL with Python 3.10.6 and run it with Python 3.10.6. <strong>Fails</strong> when run with Python 3.11.</li>
<li><strong>Works</strong> Compile the DLL with Python 3.11 and run with 3.11. <strong>Fails</strong> when run with Python 3.10.</li>
</ol>
<p><strong>Error:</strong> <em>self._handle = _dlopen(self._name, mode)</em></p>
<p>What am I missing?</p>
<hr />
<p><strong>EDIT 1</strong>:</p>
<p>A quick fix (maybe not the best one) is to build DLLs (affected one is small, around 170 KB) for 3.x versions (currently I made them for 3.10 and 3.11) and then from Python side:</p>
<pre><code>_DLLname= "scisuit_pybind"
if sys.version_info.minor == 10:
_DLLname= "scisuit_pybind310"
_path = _parent_path(__file__) / _DLLname
</code></pre>
| <python><c++> | 2023-11-15 10:15:25 | 1 | 1,053 | macroland |
77,486,675 | 628,228 | How to validate a TextIO argument? | <p>I am just coming to terms with Python type hinting and I am confused about how to implement argument validation for the following function signature:</p>
<pre class="lang-py prettyprint-override"><code>def read_file(file: Union[str, PathLike, TextIO]) -> str:
</code></pre>
<p>My initial attempt at a pythonic implementation was the following:</p>
<pre class="lang-py prettyprint-override"><code>def read_file(file: Union[str, PathLike, TextIO]) -> str:
try:
with open(file) as fileIO:
return read_file(fileIO)
except TypeError:
return file.read()
</code></pre>
<p>Although this seems like a totally valid pythonic implementation for <code>read_file</code>, the type checker went all over this one. This is understandable since <code>open</code> accepts all of the possible types except <code>TextIO</code>, and there is nothing to narrow the type down (although it is surprising that the <code>except TypeError:</code> is not considered).</p>
<p>So I gave up on "asking for forgiveness" and instead tried to explicitly check the argument:</p>
<pre class="lang-py prettyprint-override"><code>def read_file(file: Union[str, PathLike, TextIO]) -> str:
if isinstance(file, TextIO):
return file.read()
else:
with open(file) as fileIO:
return read_file(fileIO)
</code></pre>
<p>This turns the whole logic around and checks if file is <code>TextIO</code> so the type checker is all happy. The problem is that this actually makes no sense at runtime since <code>TextIO</code> is not actually a base class you can check against, but rather a type that exists only for type hinting.</p>
<p>Now this is where I started to get really confused, since I realized I actually had no idea how to go about checking whether something is a <code>TextIO</code> at runtime. I dug through all kinds of rabbit holes for checking if a variable is either a path-like or file-like, but it all feels like I'm missing something fundamental here. I mean, if the type checker can know ahead of time that something is <code>TextIO</code> then how can it be hard to narrow down the type in the implementation?</p>
<p>This must be something that is done in all kinds of libraries, but most implementations I found use vague checks for <code>read</code> and iterable, etc. perhaps for backwards compatibility, but I'm aiming for python 3.9+ so was hoping that by now there might be a better solution.</p>
<p><strong>NOTE</strong>: As a clarification given comments and existing answers. I am not asking about what is the correct implementation for opening text files. I am also not asking how to use type hinting <em>in general</em>: I understand that you are supposed to use type narrowing to get the type checker happy.</p>
<p><strong>My question is</strong>: what functions or expressions can you use in your code to do type narrowing specifically of <code>TextIO</code> type hints so that the compiler will be happy to discard that type for a variable? <code>try.. except</code> doesn't work, checking if value is instance of <code>TextIOBase</code> or <code>TextIO</code> also doesn't work.</p>
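<p>One workaround that satisfies both the type checker and the runtime (a sketch, and only one of several possible answers) is to narrow on the <em>other</em> members of the union: <code>str</code> and <code>os.PathLike</code> are real runtime classes, so after ruling them out the checker infers the remaining branch must be <code>TextIO</code>:</p>
<pre class="lang-py prettyprint-override"><code>from os import PathLike
from typing import TextIO, Union

def read_file(file: Union[str, PathLike, TextIO]) -> str:
    if isinstance(file, (str, PathLike)):  # real classes, checkable at runtime
        with open(file) as f:
            return f.read()
    # By elimination, the checker narrows `file` to TextIO here.
    return file.read()
</code></pre>
<p><code>os.PathLike</code> supports <code>isinstance</code> via a <code>__subclasshook__</code> that checks for <code>__fspath__</code>, which is why it can serve as the runtime proxy that <code>typing.TextIO</code> cannot.</p>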
| <python><python-typing> | 2023-11-15 09:49:25 | 1 | 4,430 | glopes |
77,486,577 | 6,813,417 | Stop pyspark aggregation if condition triggers | <p>Let's say I want to check if a pyspark dataframe has any constant column. Let's work with the dataframe from <a href="https://stackoverflow.com/questions/52113821/fastest-way-to-know-if-a-column-has-a-constant-value-in-a-pyspark-dataframe">this question</a>:</p>
<pre><code>+----------+----------+
| A | B |
+----------+----------+
| 2.0| 0.0|
| 0.0| 0.0|
| 1.0| 0.0|
| 1.0| 0.0|
| 0.0| 0.0|
| 1.0| 0.0|
| 0.0| 0.0|
+----------+----------+
</code></pre>
<p>Isn't there a way to generate:</p>
<pre><code>+----------+----------+
| A | B |
+----------+----------+
| False| True|
+----------+----------+
</code></pre>
<p>Without having to aggregate/filter the whole A column as proposed in that question's solution? (Say, if I detect two rows aren't equal during aggregation, stop the operation and return False), thus saving time? Does Spark do this internally?</p>
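<p>Spark aggregations don't short-circuit row by row, but the early-exit idea itself is easy to sketch in plain Python (driver-side logic over an iterator, not a Spark API):</p>
<pre><code>def is_constant(values):
    # Scan lazily and stop at the first value that differs from the first one;
    # all() short-circuits, so a non-constant column is detected early.
    it = iter(values)
    try:
        first = next(it)
    except StopIteration:
        return True  # an empty column is trivially constant
    return all(v == first for v in it)
</code></pre>
<p>In PySpark itself, something like <code>df.select("A").distinct().limit(2).count() &lt; 2</code> may let the engine stop once a second distinct value is found, though whether that actually prunes work depends on the physical plan Spark chooses.</p>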
| <python><apache-spark><pyspark> | 2023-11-15 09:35:18 | 1 | 1,058 | Let's try |
77,486,432 | 8,554,611 | `setuptools_scm` includes a committed `.gitignore` in an `sdist` package | <p>I have a flat-layout project like this:</p>
<pre><code>├── project_name
│ └── ...
├── .gitignore
├── pyproject.toml
└── ...
</code></pre>
<p>I follow <a href="https://setuptools.pypa.io/en/latest/userguide/datafiles.html#exclude-package-data" rel="nofollow noreferrer">the <code>setuptools</code> docs</a> to compose the <code>pyproject.toml</code> like this:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ['setuptools>=45', 'setuptools_scm[toml]>=6.2']
[project]
name = 'project_name'
version = '0.0.1'
[tool.setuptools.exclude-package-data]
"*" = [".gitignore"]
</code></pre>
<p>However, when I do</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m build --sdist
</code></pre>
<p>I get the <code>.gitignore</code> file in the resulting <code>*.tar.gz</code> file.</p>
<p>I can forcefully exclude the file using a <code>MANIFEST.in</code>:</p>
<pre><code>exclude .gitignore
</code></pre>
<p>What is the use of the <code>[tool.setuptools.exclude-package-data]</code> section, then? Can I do the job without the <code>MANIFEST.in</code> file?</p>
<p>Do I misuse the section? From what the building process reports, I guess the <code>'*'</code> there means the <code>project_name</code> directory. Is there any config key to exclude the <code>.*</code> files from the root of the <code>sdist</code> package?</p>
<hr />
<p>An MWE for @sinoroc (to be run in a shell):</p>
<pre class="lang-bash prettyprint-override"><code># make a dedicated directory and enter it
mkdir package_name
pushd package_name
# make two empty files (only the first one is mandatory)
touch .gitignore README
# fill pyproject.toml
echo "build-system.requires = ['setuptools', 'setuptools_scm[toml]']" >> pyproject.toml
echo "project = {name = 'package_name', version = '0.0.1'}" >> pyproject.toml
echo "[tool.setuptools_scm]" >> pyproject.toml
# prepare a virtual environment
python -m venv venv
source venv/bin/activate
python -m pip install -U pip setuptools build
# commit .gitignore
git init
git add -f .gitignore
git commit -m "add .gitignore"
# build a package
python -m build --sdist
# list the files in the package
tar --list -f dist/package_name-0.0.1.tar.gz
# exit (optional)
deactivate
popd
</code></pre>
<p>This makes the following file tree:</p>
<pre><code>├── .git
│ └── ...
├── dist
│ └── package_name-0.0.1.tar.gz
├── package_name.egg-info
│ └── ...
├── venv
│ └── ...
├── .gitignore
├── pyproject.toml
└── README
</code></pre>
<p>The <code>pyproject.toml</code> file has the following content:</p>
<pre class="lang-ini prettyprint-override"><code>build-system.requires = ['setuptools', 'setuptools_scm[toml]']
project = {name = 'package_name', version = '0.0.1'}
[tool.setuptools_scm]
</code></pre>
<p>The <code>README</code> and <code>.gitignore</code> files are empty. The latter has to be committed for the MWE to work.</p>
<p>It looks like it's <code>setuptools_scm</code> that's to blame for the inclusion of <code>.gitignore</code>; the file has to be committed for this to happen.</p>
| <python><setuptools><pyproject.toml><setuptools-scm> | 2023-11-15 09:13:01 | 0 | 796 | StSav012 |
77,486,242 | 21,309,333 | Ctypes 2d array of strings in python stores different strings at same memory address | <p>The Python code I have is pretty simple:</p>
<pre class="lang-py prettyprint-override"><code>from ctypes import *
from random import randint

class uni(Union):
    _fields_ = [('p', c_char_p),
                ('a', c_longlong)]

# initializing the array of strings
x = ((c_char_p * 3) * 10)()
for i in range(10):
    for j in range(3):
        x[i][j] = str(randint(100, 999)).encode('utf-8')

# it prints what I expect it to print
for i in range(10):
    for j in range(3):
        print(x[i][j], end=' ')
    print()

print("addresses")
for i in range(10):
    for j in range(3):
        t = uni()
        # getting an integer that points to the string, to print the string's address
        t.p = x[i][j]
        print(hex(t.a), end=' - ')
        print(string_at(t.a), end=' | ')
    print()
</code></pre>
<p>This outputs the following:</p>
<pre><code>b'475' b'912' b'805'
b'107' b'986' b'191'
b'389' b'525' b'921'
b'441' b'869' b'452'
b'505' b'788' b'571'
b'111' b'974' b'758'
b'447' b'975' b'671'
b'322' b'633' b'332'
b'924' b'633' b'174'
b'677' b'611' b'431'
addresses
0x7fdfbbbcad80 - b'475' | 0x7fdfbbbcad80 - b'912' | 0x7fdfbbbcad80 - b'805' |
0x7fdfbbbcad80 - b'107' | 0x7fdfbbbcad80 - b'986' | 0x7fdfbbbcad80 - b'191' |
0x7fdfbbbcad80 - b'389' | 0x7fdfbbbcad80 - b'525' | 0x7fdfbbbcad80 - b'921' |
0x7fdfbbbcad80 - b'441' | 0x7fdfbbbcad80 - b'869' | 0x7fdfbbbcad80 - b'452' |
0x7fdfbbbcad80 - b'505' | 0x7fdfbbbcad80 - b'788' | 0x7fdfbbbcad80 - b'571' |
0x7fdfbbbcad80 - b'111' | 0x7fdfbbbcad80 - b'974' | 0x7fdfbbbcad80 - b'758' |
0x7fdfbbbcad80 - b'447' | 0x7fdfbbbcad80 - b'975' | 0x7fdfbbbcad80 - b'671' |
0x7fdfbbbcad80 - b'322' | 0x7fdfbbbcad80 - b'633' | 0x7fdfbbbcad80 - b'332' |
0x7fdfbbbcad80 - b'924' | 0x7fdfbbbcad80 - b'633' | 0x7fdfbbbcad80 - b'174' |
0x7fdfbbbcad80 - b'677' | 0x7fdfbbbcad80 - b'611' | 0x7fdfbbbcad80 - b'431' |
</code></pre>
<p>How? How does it store different strings at the same address?</p>
<p>Note: I found this when I was debugging a program that passes a 2D array of strings to a C++ shared object. The C++ function is defined as follows:</p>
<pre class="lang-cpp prettyprint-override"><code>extern "C"
void print2d(char*** arr, int len, int inner_len)
{
    std::cout << arr << '\n';   // ok
    std::cout.flush();
    std::cout << *arr << '\n';  // ok
    std::cout.flush();
    std::cout << **arr << '\n'; // this segfaults
}
</code></pre>
<p>If anyone has any suggestions to fix this, I would be glad to hear them.</p>
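<p>A likely explanation (worth verifying): reading <code>x[i][j]</code> from a <code>c_char_p</code> array gives back a <em>fresh</em> Python bytes copy, so <code>t.p</code> ends up pointing at that short-lived temporary; each loop iteration frees it and the next temporary is allocated at the same address. The pointers actually stored in the array can be inspected without creating temporaries, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes as ct
from random import randint

rows, cols = 10, 3
x = ((ct.c_char_p * cols) * rows)()
for i in range(rows):
    for j in range(cols):
        x[i][j] = str(randint(100, 999)).encode("utf-8")

# Reinterpret each row as raw machine pointers instead of round-tripping
# through bytes; these are the addresses the array really holds.
stored = [ct.cast(x[i], ct.POINTER(ct.c_void_p))[j]
          for i in range(rows) for j in range(cols)]
</code></pre>
<p>For the C++ side, note that a <code>(c_char_p * 3) * 10</code> array is one contiguous block of 30 pointers, i.e. effectively <code>char* arr[10][3]</code>; receiving it as <code>char***</code> and dereferencing twice (<code>**arr</code>) reinterprets the first string's characters as a pointer, which is a plausible source of the segfault. Declaring the parameter as <code>char* arr[][3]</code> (or passing <code>char**</code> plus the dimensions) may be what's wanted here.</p>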
| <python><c++><arrays><string><ctypes> | 2023-11-15 08:42:00 | 0 | 365 | God I Am Clown |
77,486,146 | 7,699,037 | Load element as string when it ends with a colon | <p>I have a YAML document where some values end with a colon, something like:</p>
<pre class="lang-yaml prettyprint-override"><code>foo:
- bar
- baz::
</code></pre>
<p>When I load the document with <code>yaml.load</code>, the <code>baz::</code> element gets converted to a dictionary <code>{'baz:' : ''}</code>. However, I would like to read it as a string.</p>
<p>I've tried loading the file with the <code>yaml.BaseLoader</code>, however this did not help. Is there a way to specify that the elements should not be converted to a dict?</p>
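<p>The trailing colon is read as a mapping indicator at the parse level, so no loader flag changes it. Two workarounds (a sketch; the helper function below is ours, not a PyYAML API): quote the scalar in the source, or post-process the one-key-dict artifacts back into strings when the file cannot be edited:</p>
<pre class="lang-py prettyprint-override"><code>import yaml

# Quoting the scalar stops PyYAML from treating the trailing colon as a
# mapping indicator, so the item stays a plain string.
doc_quoted = "foo:\n  - bar\n  - 'baz::'\n"
assert yaml.safe_load(doc_quoted) == {"foo": ["bar", "baz::"]}

def unsplit_colon_items(items):
    # Collapse the {'baz:': None} artifact back into the string 'baz::'.
    out = []
    for it in items:
        if isinstance(it, dict) and len(it) == 1:
            key, val = next(iter(it.items()))
            if val in (None, ""):
                out.append(key + ":")
                continue
        out.append(it)
    return out

doc = "foo:\n  - bar\n  - baz::\n"
assert unsplit_colon_items(yaml.safe_load(doc)["foo"]) == ["bar", "baz::"]
</code></pre>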
| <python><yaml><pyyaml> | 2023-11-15 08:20:37 | 1 | 2,908 | Mike van Dyke |
77,486,094 | 736,662 | Printing using on_start and self in Locust | <p>Having this code:</p>
<pre><code>class MyUser(HttpUser):
    host = "http://localhost"

    def on_start(self):
        self.data = data.sample().to_dict()
        self.ts_value_random = random.randrange(10000)

    @task
    def send_request(self):
        print("Sending request with data: ", self.data)
        print("Random_value: ", self.ts_value_random)
        self.client.get("/hello", data=self.data)
</code></pre>
<p>Why is only "Sending request with data:" printed to console when running i.e. 5 Virtual Users?</p>
| <python><locust> | 2023-11-15 08:10:08 | 0 | 1,003 | Magnus Jensen |
77,485,951 | 7,254,635 | Can any dynamic language write a program like the datastore in Chapter 6 of <Type-Driven Development with Idris>? | <p>The datastore from Chapter 6 of the book:</p>
<pre>
module Main
import Data.Vect
infixr 5 .+.
data Schema = SString | SInt | (.+.) Schema Schema
SchemaType : Schema -> Type
SchemaType SString = String
SchemaType SInt = Int
SchemaType (x .+. y) = (SchemaType x, SchemaType y)
record DataStore where
constructor MkData
schema : Schema
size : Nat
items : Vect size (SchemaType schema)
addToStore : (store : DataStore) -> SchemaType (schema store) -> DataStore
addToStore (MkData schema size store) newitem = MkData schema _ (addToData store)
where
addToData : Vect oldsize (SchemaType schema) -> Vect (S oldsize) (SchemaType schema)
addToData [] = [newitem]
addToData (x :: xs) = x :: addToData xs
setSchema : (store : DataStore) -> Schema -> Maybe DataStore
setSchema store schema = case size store of
Z => Just (MkData schema _ [])
S k => Nothing
data Command : Schema -> Type where
SetSchema : Schema -> Command schema
Add : SchemaType schema -> Command schema
Get : Integer -> Command schema
Quit : Command schema
parsePrefix : (schema : Schema) -> String -> Maybe (SchemaType schema, String)
parsePrefix SString input = getQuoted (unpack input)
where
getQuoted : List Char -> Maybe (String, String)
getQuoted ('"' :: xs)
= case span (/= '"') xs of
(quoted, '"' :: rest) => Just (pack quoted, ltrim (pack rest))
_ => Nothing
getQuoted _ = Nothing
parsePrefix SInt input = case span isDigit input of
("", rest) => Nothing
(num, rest) => Just (cast num, ltrim rest)
parsePrefix (schemal .+. schemar) input
= case parsePrefix schemal input of
Nothing => Nothing
Just (l_val, input') =>
case parsePrefix schemar input' of
Nothing => Nothing
Just (r_val, input'') => Just ((l_val, r_val), input'')
parseBySchema : (schema : Schema) -> String -> Maybe (SchemaType schema)
parseBySchema schema x = case parsePrefix schema x of
Nothing => Nothing
Just (res, "") => Just res
Just _ => Nothing
parseSchema : List String -> Maybe Schema
parseSchema ("String" :: xs)
= case xs of
[] => Just SString
_ => case parseSchema xs of
Nothing => Nothing
Just xs_sch => Just (SString .+. xs_sch)
parseSchema ("Int" :: xs)
= case xs of
[] => Just SInt
_ => case parseSchema xs of
Nothing => Nothing
Just xs_sch => Just (SInt .+. xs_sch)
parseSchema _ = Nothing
parseCommand : (schema : Schema) -> String -> String -> Maybe (Command schema)
parseCommand schema "add" rest = case parseBySchema schema rest of
Nothing => Nothing
Just restok => Just (Add restok)
parseCommand schema "get" val = case all isDigit (unpack val) of
False => Nothing
True => Just (Get (cast val))
parseCommand schema "quit" "" = Just Quit
parseCommand schema "schema" rest
= case parseSchema (words rest) of
Nothing => Nothing
Just schemaok => Just (SetSchema schemaok)
parseCommand _ _ _ = Nothing
parse : (schema : Schema) -> (input : String) -> Maybe (Command schema)
parse schema input = case span (/= ' ') input of
(cmd, args) => parseCommand schema cmd (ltrim args)
display : SchemaType schema -> String
display {schema = SString} item = show item
display {schema = SInt} item = show item
display {schema = (y .+. z)} (iteml, itemr) = display iteml ++ ", " ++
display itemr
getEntry : (pos : Integer) -> (store : DataStore) ->
Maybe (String, DataStore)
getEntry pos store
= let store_items = items store in
case integerToFin pos (size store) of
Nothing => Just ("Out of range\n", store)
Just id => Just (display (index id (items store)) ++ "\n", store)
processInput : DataStore -> String -> Maybe (String, DataStore)
processInput store input
= case parse (schema store) input of
Nothing => Just ("Invalid command\n", store)
Just (Add item) =>
Just ("ID " ++ show (size store) ++ "\n", addToStore store item)
Just (SetSchema schema') =>
case setSchema store schema' of
Nothing => Just ("Can't update schema when entries in store\n", store)
Just store' => Just ("OK\n", store')
Just (Get pos) => getEntry pos store
Just Quit => Nothing
main : IO ()
main = replWith (MkData (SString .+. SString .+. SInt) _ []) "Command: " processInput
</pre>
<p>It can define the schema of a "database" with String and Int at runtime:</p>
<pre>
Command: schema String String Int
OK
Command: add "Rain Dogs" "Tom Waits" 1985
ID 0
Command: add "Fog on the Tyne" "Lindisfarne" 1971
ID 1
Command: get 1
"Fog on the Tyne", "Lindisfarne", 1971
Command: quit
</pre>
<p>I want to know if any dynamic language like Ruby or Python can do the same thing. In a static language, the schema must be defined before compilation.</p>
<p>Thanks!</p>
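<p>In a dynamic language the schema is simply a runtime value. A minimal Python sketch of the same idea (the function names are ours): parse a schema while the program runs, then convert rows against it:</p>
<pre><code>def parse_schema(words):
    # A runtime schema is just a list of type constructors chosen at runtime.
    mapping = {"String": str, "Int": int}
    return [mapping[w] for w in words]

def parse_row(schema, tokens):
    # Convert each token with its column's type; raise on arity mismatch.
    if len(tokens) != len(schema):
        raise ValueError("arity mismatch")
    return tuple(t(tok) for t, tok in zip(schema, tokens))

schema = parse_schema("String String Int".split())
store = [parse_row(schema, ["Rain Dogs", "Tom Waits", "1985"]),
         parse_row(schema, ["Fog on the Tyne", "Lindisfarne", "1971"])]
</code></pre>
<p>The crucial difference from the Idris version: here a schema violation surfaces as a runtime <code>ValueError</code>, whereas <code>SchemaType</code> computes the row type during type checking, so a mismatched <code>add</code> cannot even compile.</p>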
| <python><ruby><idris> | 2023-11-15 07:39:53 | 1 | 1,757 | wang kai |
77,485,901 | 3,139,811 | Pytest - how to find the test result after a test | <p>After a test has been executed I need to collect the result of that test, but I can't find the result in the FixtureRequest object. I can find the test name and some additional data, but nowhere do I see anything that shows whether a test has passed or failed, nor whether there were any exceptions.</p>
<p>example code:</p>
<pre><code>class TestSomething:
    @pytest.mark.test_case_id(99999)
    def test_example(self, api_interface) -> None:
        assert 5 == 5
</code></pre>
<p>and in some other file:</p>
<pre><code>@pytest.fixture
def api_interface(request: FixtureRequest):
</code></pre>
<p>I can see that the request has the test name and a node object, but nowhere do I see any result- or assert-like data.</p>
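<p>The result is not stored on <code>FixtureRequest</code> directly; the documented pytest recipe ("Making test result information available in fixtures") is to capture each phase's <code>TestReport</code> in a <code>pytest_runtest_makereport</code> hook wrapper and attach it to the test item, where a fixture can read it after the test body has run. A sketch (the <code>rep_*</code> attribute names follow the pytest docs; the print is just for demonstration):</p>
<pre><code># conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # report.when is "setup", "call" or "teardown"
    setattr(item, "rep_" + report.when, report)

@pytest.fixture
def api_interface(request):
    yield None  # setup/teardown boundary
    report = getattr(request.node, "rep_call", None)  # result of the test body
    if report is not None:
        print("outcome:", report.outcome, "passed:", report.passed)
</code></pre>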
| <python><pytest> | 2023-11-15 07:30:01 | 2 | 857 | John |
77,485,752 | 4,158,016 | Python pandas add column with values based on condition and pattern | <br>
My pandas DataFrame (originally loaded via the openpyxl engine for Excel processing)
can be described in simple form as
<pre><code>df1 = pd.DataFrame({"col1":["",99,88,np.nan,66,55,np.nan,11,22],"col2":['Catg0','Asset1','Other','Catg1','H & F','Large Item','Catg2','Fragile','Delicate item'],"col3":["",0,0,np.nan,99,155,np.nan,83,115]})
col1 col2 col3
0 Catg0
1 99 Asset1 0
2 88 Other 0
3 NaN Catg1 NaN
4 66 H & F 99
5 55 Large Item 155
6 NaN Catg2 NaN
7 11 Fragile 83
8 22 Delicate item 115
</code></pre>
<p>I am trying to modify it further by adding a new column (col4), taking its value from col2 for rows where the other columns are empty or NaN, and carrying that value forward until the next such row.<br>
The marker rows themselves should be removed after pivoting.<br></p>
<pre><code>   col1   col4           col2  col3
0    99  Catg0         Asset1     0
1    88  Catg0          Other     0
2    66  Catg1          H & F    99
3    55  Catg1     Large Item   155
4    11  Catg2        Fragile    83
5    22  Catg2  Delicate item   115
</code></pre>
<p>My attempt so far only adds a placeholder column:</p>
<pre><code>import pandas as pd
import numpy as np
df1 = pd.DataFrame({"col1":["",99,88,np.nan,66,55,np.nan,11,22],"col2":['Catg0','Asset1','Other','Catg1','H & F','Large Item','Catg2','Fragile','Delicate item'],"col3":["",0,0,np.nan,99,155,np.nan,83,115]})
df1.insert(1, "col4", 'Catg')
</code></pre>
<p>I am trying to find a way to express this pattern- or condition-based logic to populate 'col4' and discard those rows.</p>
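<p>A sketch of one way to get there (assuming the "marker" rows are exactly those where col1 is empty or NaN): take col2 at the marker rows, forward-fill it downward, then drop the markers:</p>
<pre><code>import numpy as np
import pandas as pd

df1 = pd.DataFrame({
    "col1": ["", 99, 88, np.nan, 66, 55, np.nan, 11, 22],
    "col2": ['Catg0', 'Asset1', 'Other', 'Catg1', 'H & F',
             'Large Item', 'Catg2', 'Fragile', 'Delicate item'],
    "col3": ["", 0, 0, np.nan, 99, 155, np.nan, 83, 115],
})

# Marker rows are those where col1 is empty or NaN.
is_marker = df1["col1"].replace("", np.nan).isna()
# Broadcast the category label downward until the next marker.
df1["col4"] = df1["col2"].where(is_marker).ffill()
# Drop the marker rows and reorder the columns.
out = df1.loc[~is_marker, ["col1", "col4", "col2", "col3"]].reset_index(drop=True)
</code></pre>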
| <python><pandas><pivot> | 2023-11-15 06:53:48 | 2 | 450 | itsavy |
77,485,192 | 6,025,866 | Unable to run pm.sample() function using PyMC Python library even though I re-installed the libraries | <p>I am trying to run a Bayesian linear regression, but I am unable to create a posterior distribution using the sample() function from pymc. The code is as follows:</p>
<pre><code>import pandas as pd
from random import randint
# Generate date range
dates = pd.date_range(start="2021-01-01", end="2021-01-30")
data = {
    "date": dates,
    "gcm_direct_Impressions": [randint(10000, 20000) for _ in dates],
    "tv_grps": [randint(30, 50) for _ in dates],
    "tiktok_direct_Impressions": [randint(10000, 15000) for _ in dates],
    "sell_out_quantity": [randint(150, 250) for _ in dates],
}
df = pd.DataFrame(data)
#df.to_csv("dataset.csv", index=False)
max(df['sell_out_quantity'].values)
# Assigning the 'final_data' dataset to a new variable 'data' for further analysis.
import pandas as pd
data = df
# Defining a list of variables for transformation. These include various factors that
# might impact the analysis like trends, seasons, holidays, competitor sales, and different
# marketing channels.
transform_variables = ["gcm_direct_Impressions","tv_grps","tiktok_direct_Impressions"]
# Identifying the channels that have a delay effect. This means the impact of these channels
# on the target variable (like revenue) might not be immediate but delayed.
delay_channels = ["gcm_direct_Impressions","tv_grps","tiktok_direct_Impressions"]
# Listing the media channels that are part of the analysis. These are the channels through
# which advertising or marketing is done.
media_channels = ["gcm_direct_Impressions","tv_grps","tiktok_direct_Impressions"]
# Specifying the control variables. These are the factors that need to be controlled or
# accounted for in the analysis to isolate the effects of the media channels.
#control_variables = ["trend", "season", "holiday", "competitor_sales_B", "events"]
# Defining the target variable for the analysis, which in this case is 'revenue'. This is
# likely the outcome or the dependent variable the analysis aims to predict or explain.
target = "sell_out_quantity"
#!pip install scikit-learn
from sklearn.preprocessing import MinMaxScaler
# Creating a copy of the 'data' dataframe to apply transformations. This ensures that
# the original data remains unchanged.
data_transformed = data.copy()
# Initializing a dictionary to store the MinMaxScaler instances for each feature.
# This will be useful for inverse transformations later.
numerical_encoder_dict = {}
# Looping through each feature in the list of variables to transform.
for feature in transform_variables:
    # Initializing a MinMaxScaler. This scaler transforms each feature to a given range,
    # usually between 0 and 1, which is helpful for normalization.
    scaler = MinMaxScaler()
    # Reshaping the data for the feature into a 2D array, as required by the scaler.
    original = data[feature].values.reshape(-1, 1)
    # Applying the scaler to the feature and transforming the data.
    transformed = scaler.fit_transform(original)
    # Storing the transformed data back into the 'data_transformed' DataFrame.
    data_transformed[feature] = transformed
    # Saving the scaler instance in the dictionary for each feature.
    # This will be used for reversing the transformation if needed.
    numerical_encoder_dict[feature] = scaler
# Placeholder for a potential dependent variable transformation, not utilized here.
dependent_transformation = None
# Scaling the target variable 'revenue' by dividing it by 100,000.
# This kind of scaling might be done to bring the target variable to a smaller range
# or to improve the interpretability of the model's results.
original = data[target].values
data_transformed[target] = original
import pymc as pm
import numpy as np
import pytensor.tensor as tt
# Initializing an empty list to store the mean response from different channels and control variables.
response_mean = []
# Creating a new PyMC3 model context. All the model definitions inside this block
# are part of 'model_2'.
with pm.Model() as model_2:
    # Looping through each channel in the list of delay channels.
    for channel_name in delay_channels:
        print(f"Delay Channels: Adding {channel_name}")
        # Extracting the transformed data for the current channel.
        x = data_transformed[channel_name].values
        # Defining Bayesian priors for the adstock, gamma, and alpha parameters for the current channel.
        adstock_param = pm.Beta(f"{channel_name}_adstock", 2, 2)
        saturation_gamma = pm.Beta(f"{channel_name}_gamma", 2, 2)
        saturation_alpha = pm.Gamma(f"{channel_name}_alpha", 3, 1)

        transformed_X1 = tt.zeros_like(x)
        for i in range(1, len(x)):
            transformed_X1 = tt.set_subtensor(transformed_X1[i], x[i] + adstock_param * x[i - 1])

        transformed_X2 = tt.zeros_like(x)
        for i in range(1, len(x)):
            transformed_X2 = tt.set_subtensor(transformed_X2[i], (transformed_X1[i]**saturation_alpha) / (transformed_X1[i]**saturation_alpha + saturation_gamma**saturation_alpha))

        channel_b = pm.HalfNormal(f"{channel_name}_media_coef", sigma=250)
        response_mean.append(transformed_X2 * channel_b)

    intercept = pm.Normal("intercept", mu=np.mean(data_transformed[target].values), sigma=3)
    sigma = pm.HalfNormal("sigma", 4)
    likelihood = pm.Normal("outcome", mu=intercept + sum(response_mean), sigma=sigma,
                           observed=data_transformed[target].values)
import arviz as az
# Continuing the model context defined previously as 'model_2'.
with model_2:
    # Sampling from the posterior distribution of the model.
    # This is the process where PyMC generates samples that represent the distribution
    # of the parameters given the data and the priors.
    # 'pm.sample()' is the main function to perform this sampling.
    # The parameters of pm.sample() are set to control the sampling process:
    # 1000 samples are drawn after a tuning phase of 1000 iterations.
    # 'target_accept' is set to 0.95, which is the target acceptance rate of the sampler.
    # A higher acceptance rate can help in achieving better convergence but might slow down the sampling.
    # 'return_inferencedata=True' makes the function return an InferenceData object,
    # which is useful for further analysis using ArviZ.
    trace = pm.sample(1000, tune=1000, target_accept=0.95, return_inferencedata=True)

    # Summarizing the trace. This generates a summary of the posterior distribution
    # for each parameter in the model, providing statistics like mean, standard deviation,
    # and the HPD (highest posterior density) interval.
    # This summary is useful for understanding the results of the model and for diagnostics.
    trace_summary = az.summary(trace)
</code></pre>
<p>After running the above code, I get the following error, which I am unable to solve even though I removed and re-installed Jupyter Notebook and all the packages. This is on a Mac M1 machine.</p>
<p>Libraries and their versions</p>
<ul>
<li>pymc (5.6.1)</li>
<li>numpy (1.23.5)</li>
<li>pytensor (2.12.3)</li>
</ul>
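<p>A note on the likely cause (an assumption worth checking): the "bracket nesting level exceeded maximum of 256" failure near the end of the traceback is characteristic of building the graph with a Python-level loop of <code>set_subtensor</code> calls, which unrolls one C expression per observation. Since the carryover above only looks one step back, the loop has a closed form. A NumPy sketch of the equivalent computation (the PyMC model would use the analogous pytensor ops, e.g. slicing plus <code>concatenate</code>):</p>
<pre><code>import numpy as np

def one_lag_adstock(x, a):
    # Matches the loop in the model: y[0] = 0, y[i] = x[i] + a * x[i-1].
    y = np.zeros_like(x, dtype=float)
    y[1:] = x[1:] + a * x[:-1]
    return y

def saturate(y, alpha, gamma):
    # Hill-type saturation used in the model, computed elementwise.
    return y**alpha / (y**alpha + gamma**alpha)
</code></pre>
<p>In the model itself, something like <code>tt.concatenate([tt.zeros(1), x[1:] + adstock_param * x[:-1]])</code> should give the same tensor while keeping the graph size independent of the number of observations.</p>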
<pre><code>
You can find the C code in this temporary file: /var/folders/jw/qb6bs44j0vgfsf52lxw71h300000gn/T/pytensor_compilation_error_v3472mqt
---------------------------------------------------------------------------
CompileError Traceback (most recent call last)
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/vm.py in make_all(self, profiler, input_storage, output_storage, storage_map)
1242 thunks.append(
-> 1243 node.op.make_thunk(node, storage_map, compute_map, [], impl=impl)
1244 )
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/op.py in make_thunk(self, node, storage_map, compute_map, no_recycling, impl)
130 try:
--> 131 return self.make_c_thunk(node, storage_map, compute_map, no_recycling)
132 except (NotImplementedError, MethodNotDefined):
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/op.py in make_c_thunk(self, node, storage_map, compute_map, no_recycling)
95 raise NotImplementedError("float16")
---> 96 outputs = cl.make_thunk(
97 input_storage=node_input_storage, output_storage=node_output_storage
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in make_thunk(self, input_storage, output_storage, storage_map, cache, **kwargs)
1199 init_tasks, tasks = self.get_init_tasks()
-> 1200 cthunk, module, in_storage, out_storage, error_storage = self.__compile__(
1201 input_storage, output_storage, storage_map, cache
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in __compile__(self, input_storage, output_storage, storage_map, cache)
1119 output_storage = tuple(output_storage)
-> 1120 thunk, module = self.cthunk_factory(
1121 error_storage,
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in cthunk_factory(self, error_storage, in_storage, out_storage, storage_map, cache)
1643 cache = get_module_cache()
-> 1644 module = cache.module_from_key(key=key, lnk=self)
1645
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/cmodule.py in module_from_key(self, key, lnk)
1239 location = dlimport_workdir(self.dirname)
-> 1240 module = lnk.compile_cmodule(location)
1241 name = module.__file__
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in compile_cmodule(self, location)
1542 _logger.debug(f"LOCATION {location}")
-> 1543 module = c_compiler.compile_str(
1544 module_name=mod.code_hash,
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/cmodule.py in compile_str(module_name, src_code, location, include_dirs, lib_dirs, libs, preargs, py_module, hide_symbols)
2648 # compile_stderr = compile_stderr.replace("\n", ". ")
-> 2649 raise CompileError(
2650 f"Compilation failed (return status={status}):\n{' '.join(cmd)}\n{compile_stderr}"
CompileError: Compilation failed (return status=1):
/usr/bin/clang++ -dynamiclib -g -O3 -fno-math-errno -Wno-unused-label -Wno-unused-variable -Wno-write-strings -Wno-c++11-narrowing -fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -fPIC -undefined dynamic_lookup -I/Users/adhokshaja/miniconda3/envs/pymc_env/lib/python3.8/site-packages/numpy/core/include -I/Users/adhokshaja/miniconda3/envs/pymc_env/include/python3.8 -I/Users/adhokshaja/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/c_code -L/Users/adhokshaja/miniconda3/envs/pymc_env/lib -fvisibility=hidden -o /Users/adhokshaja/.pytensor/compiledir_macOS-13.2-arm64-arm-64bit-arm-3.8.18-64/tmpjj80nvdi/me35a44294d03835a76b7f9ad569bbbc122b29dc588c89cb224fa59ca0e0ec6cd.so /Users/adhokshaja/.pytensor/compiledir_macOS-13.2-arm64-arm-64bit-arm-3.8.18-64/tmpjj80nvdi/mod.cpp
/Users/adhokshaja/.pytensor/compiledir_macOS-13.2-arm64-arm-64bit-arm-3.8.18-64/tmpjj80nvdi/mod.cpp:25480:32: fatal error: bracket nesting level exceeded maximum of 256
if (!PyErr_Occurred()) {
^
/Users/adhokshaja/.pytensor/compiledir_macOS-13.2-arm64-arm-64bit-arm-3.8.18-64/tmpjj80nvdi/mod.cpp:25480:32: note: use -fbracket-depth=N to increase maximum nesting level
1 error generated.
During handling of the above exception, another exception occurred:
CompileError Traceback (most recent call last)
<ipython-input-6-412f3306e835> in <cell line: 4>()
13 # 'return_inferencedata=True' makes the function return an InferenceData object,
14 # which is useful for further analysis using ArviZ.
---> 15 trace = pm.sample(1000, tune=1000, target_accept=0.95, return_inferencedata=True)
16
17 # Summarizing the trace. This generates a summary of the posterior distribution
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/sampling/mcmc.py in sample(draws, tune, chains, cores, random_seed, progressbar, step, nuts_sampler, initvals, init, jitter_max_retries, n_init, trace, discard_tuned_samples, compute_convergence_checks, keep_warning_stat, return_inferencedata, idata_kwargs, nuts_sampler_kwargs, callback, mp_ctx, model, **kwargs)
651
652 initial_points = None
--> 653 step = assign_step_methods(model, step, methods=pm.STEP_METHODS, step_kwargs=kwargs)
654
655 if nuts_sampler != "pymc":
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/sampling/mcmc.py in assign_step_methods(model, step, methods, step_kwargs)
231 selected_steps.setdefault(selected, []).append(var)
232
--> 233 return instantiate_steppers(model, steps, selected_steps, step_kwargs)
234
235
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/sampling/mcmc.py in instantiate_steppers(model, steps, selected_steps, step_kwargs)
132 args = step_kwargs.get(name, {})
133 used_keys.add(name)
--> 134 step = step_class(vars=vars, model=model, **args)
135 steps.append(step)
136
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/step_methods/hmc/nuts.py in __init__(self, vars, max_treedepth, early_max_treedepth, **kwargs)
178 `pm.sample` to the desired number of tuning steps.
179 """
--> 180 super().__init__(vars, **kwargs)
181
182 self.max_treedepth = max_treedepth
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/step_methods/hmc/base_hmc.py in __init__(self, vars, scaling, step_scale, is_cov, model, blocked, potential, dtype, Emax, target_accept, gamma, k, t0, adapt_step_size, step_rand, **pytensor_kwargs)
107 else:
108 vars = get_value_vars_from_user_vars(vars, self._model)
--> 109 super().__init__(vars, blocked=blocked, model=self._model, dtype=dtype, **pytensor_kwargs)
110
111 self.adapt_step_size = adapt_step_size
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/step_methods/arraystep.py in __init__(self, vars, model, blocked, dtype, logp_dlogp_func, **pytensor_kwargs)
162
163 if logp_dlogp_func is None:
--> 164 func = model.logp_dlogp_function(vars, dtype=dtype, **pytensor_kwargs)
165 else:
166 func = logp_dlogp_func
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/model.py in logp_dlogp_function(self, grad_vars, tempered, **kwargs)
607 if var in input_vars and var not in grad_vars
608 }
--> 609 return ValueGradFunction(costs, grad_vars, extra_vars_and_values, **kwargs)
610
611 def compile_logp(
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/model.py in __init__(self, costs, grad_vars, extra_vars_and_values, dtype, casting, compute_grads, **kwargs)
346 inputs = grad_vars
347
--> 348 self._pytensor_function = compile_pymc(inputs, outputs, givens=givens, **kwargs)
349
350 def set_weights(self, values):
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/pytensorf.py in compile_pymc(inputs, outputs, random_seed, mode, **kwargs)
1194 opt_qry = mode.provided_optimizer.including("random_make_inplace", check_parameter_opt)
1195 mode = Mode(linker=mode.linker, optimizer=opt_qry)
-> 1196 pytensor_function = pytensor.function(
1197 inputs,
1198 outputs,
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/compile/function/__init__.py in function(inputs, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input)
313 # note: pfunc will also call orig_function -- orig_function is
314 # a choke point that all compilation must pass through
--> 315 fn = pfunc(
316 params=inputs,
317 outputs=outputs,
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/compile/function/pfunc.py in pfunc(params, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input, output_keys, fgraph)
365 )
366
--> 367 return orig_function(
368 inputs,
369 cloned_outputs,
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/compile/function/types.py in orig_function(inputs, outputs, mode, accept_inplace, name, profile, on_unused_input, output_keys, fgraph)
1754 )
1755 with config.change_flags(compute_test_value="off"):
-> 1756 fn = m.create(defaults)
1757 finally:
1758 t2 = time.perf_counter()
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/compile/function/types.py in create(self, input_storage, storage_map)
1647
1648 with config.change_flags(traceback__limit=config.traceback__compile_limit):
-> 1649 _fn, _i, _o = self.linker.make_thunk(
1650 input_storage=input_storage_lists, storage_map=storage_map
1651 )
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/basic.py in make_thunk(self, input_storage, output_storage, storage_map, **kwargs)
252 **kwargs,
253 ) -> Tuple["BasicThunkType", "InputStorageType", "OutputStorageType"]:
--> 254 return self.make_all(
255 input_storage=input_storage,
256 output_storage=output_storage,
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/vm.py in make_all(self, profiler, input_storage, output_storage, storage_map)
1250 thunks[-1].lazy = False
1251 except Exception:
-> 1252 raise_with_op(fgraph, node)
1253
1254 t1 = time.perf_counter()
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/utils.py in raise_with_op(fgraph, node, thunk, exc_info, storage_map)
533 # Some exception need extra parameter in inputs. So forget the
534 # extra long error message in that case.
--> 535 raise exc_value.with_traceback(exc_trace)
536
537
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/vm.py in make_all(self, profiler, input_storage, output_storage, storage_map)
1241 # no_recycling here.
1242 thunks.append(
-> 1243 node.op.make_thunk(node, storage_map, compute_map, [], impl=impl)
1244 )
1245 linker_make_thunk_time[node] = time.perf_counter() - thunk_start
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/op.py in make_thunk(self, node, storage_map, compute_map, no_recycling, impl)
129 )
130 try:
--> 131 return self.make_c_thunk(node, storage_map, compute_map, no_recycling)
132 except (NotImplementedError, MethodNotDefined):
133 # We requested the c code, so don't catch the error.
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/op.py in make_c_thunk(self, node, storage_map, compute_map, no_recycling)
94 print(f"Disabling C code for {self} due to unsupported float16")
95 raise NotImplementedError("float16")
---> 96 outputs = cl.make_thunk(
97 input_storage=node_input_storage, output_storage=node_output_storage
98 )
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in make_thunk(self, input_storage, output_storage, storage_map, cache, **kwargs)
1198 """
1199 init_tasks, tasks = self.get_init_tasks()
-> 1200 cthunk, module, in_storage, out_storage, error_storage = self.__compile__(
1201 input_storage, output_storage, storage_map, cache
1202 )
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in __compile__(self, input_storage, output_storage, storage_map, cache)
1118 input_storage = tuple(input_storage)
1119 output_storage = tuple(output_storage)
-> 1120 thunk, module = self.cthunk_factory(
1121 error_storage,
1122 input_storage,
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in cthunk_factory(self, error_storage, in_storage, out_storage, storage_map, cache)
1642 if cache is None:
1643 cache = get_module_cache()
-> 1644 module = cache.module_from_key(key=key, lnk=self)
1645
1646 vars = self.inputs + self.outputs + self.orphans
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/cmodule.py in module_from_key(self, key, lnk)
1238 try:
1239 location = dlimport_workdir(self.dirname)
-> 1240 module = lnk.compile_cmodule(location)
1241 name = module.__file__
1242 assert name.startswith(location)
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in compile_cmodule(self, location)
1541 try:
1542 _logger.debug(f"LOCATION {location}")
-> 1543 module = c_compiler.compile_str(
1544 module_name=mod.code_hash,
1545 src_code=src_code,
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/cmodule.py in compile_str(module_name, src_code, location, include_dirs, lib_dirs, libs, preargs, py_module, hide_symbols)
2647 # difficult to read.
2648 # compile_stderr = compile_stderr.replace("\n", ". ")
-> 2649 raise CompileError(
2650 f"Compilation failed (return status={status}):\n{' '.join(cmd)}\n{compile_stderr}"
2651 )
CompileError: Compilation failed (return status=1):
</code></pre>
| <python><bayesian><pymc> | 2023-11-15 04:00:50 | 0 | 441 | adhok |
77,485,146 | 3,214,482 | vscode cannot reconnect to existing jupyter notebook server when waking my laptop | <p>I recently switched from the browser to VS Code for using Jupyter notebooks, and the IDE-like features (e.g. auto-complete, debugging, etc.) provided by VS Code are really so much better than a browser. But one thing that bothers me is that <strong>whenever I wake my laptop from sleep mode</strong>, it often has issues re-connecting to the existing Jupyter notebook server (in my case localhost:8888). When using the browser, I simply refresh the page and it works. But VS Code does not seem to be able to "reconnect" to the server. I also tried closing/reopening my ipynb notebook, but still no luck. I could just restart the server, but I would lose all my saved results, so it is usually not the best option.</p>
<p>Edit:</p>
<p>"Help: about" in my VScode returns:</p>
<pre><code>
Version: 1.84.2
Commit: 1a5daa3a0231a0fbba4f14db7ec463cf99d7768e
Date: 2023-11-09T10:52:33.687Z (1 wk ago)
Electron: 25.9.2
ElectronBuildId: 24603566
Chromium: 114.0.5735.289
Node.js: 18.15.0
V8: 11.4.183.29-electron.0
OS: Darwin x64 22.3.0
</code></pre>
<p>Following are my jupyter extension versions:</p>
<pre><code>@ijmbarr/jupyterlab_spellchecker v0.2.0 enabled OK
</code></pre>
| <python><visual-studio-code><jupyter-notebook> | 2023-11-15 03:48:02 | 0 | 983 | username123 |
77,485,121 | 15,155,978 | Why is the Python version not shown correctly when using pyenv for Python environments? | <p>I have installed pyenv on macOS Sonoma 14.1.1. I have added the following to the <code>~/.zshrc</code> and <code>~/.bashrc</code> files according to this <a href="https://stackoverflow.com/questions/71188577/having-trouble-switching-python-versions-using-pyenv-global-command">answer</a>:</p>
<pre><code>export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
export PIPENV_PYTHON="$PYENV_ROOT/shims/python"
plugin=(
pyenv
)
eval "$(pyenv init -)"
eval "$(command pyenv init --path)"
eval "$(pyenv virtualenv-init -)"
</code></pre>
<p>The problem is that when I open a new terminal after activating a pyenv environment through <code>pyenv activate 3.10.7</code> and run <code>python --version</code>, I get: <code>zsh: command not found: python</code>. But if I then run these commands:</p>
<pre><code>eval "$(command pyenv init -)"
eval "$(command pyenv init --path)"
</code></pre>
<p>When asking again <code>python --version</code>, <code>Python 3.10.7</code> is shown in the terminal.</p>
<p>I wonder why this is not working correctly after I added the <code>$PATH</code> command in ~/.zshrc and ~/.bashrc files.</p>
| <python><python-3.x><pyenv><macos-sonoma><pyenv-virtualenv> | 2023-11-15 03:39:56 | 0 | 922 | 0x55b1E06FF |
77,485,119 | 3,600,487 | Changing the encryption settings of EC2 EBS devices while creating a launch template using AWS CDK with Python | <p>In my AWS environment, I have an unencrypted AMI. When I launch EC2 instances from it, their EBS volumes are unencrypted, as expected. I'm trying to change the encryption settings of the EBS volumes to be encrypted at the launch time from this unencrypted AMI. I am using AWS CDK (version 2.66.0) with Python (3.9).</p>
<p>I tested the following code and found that I was able to retrieve the AMI from the name saved in <code>ami_name</code> and its <code>BlockDeviceMappings</code>, and to save the devices with the encryption setting changed to <code>true</code> into the list variable <code>ami_block_device_mappings</code>. However, after adding this to the launch template, the CloudFormation template shows an empty list of devices.</p>
<p><strong>Note:</strong> Please change the value of the <code>ami_name</code> variable as applicable for you, if you want to try it.</p>
<p><strong>mystack.py</strong>:</p>
<pre><code>from aws_cdk import (
aws_ec2 as ec2,
Stack,
)
from constructs import Construct
import boto3
# import json
class MyStack(Stack):
def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
super().__init__(scope, construct_id, **kwargs)
ami_name = "MY-TEST-WEB-AMI"
ec2_client = boto3.client("ec2")
# Get ami_id from ami_name
response = ec2_client.describe_images(
Filters=[{"Name": "name", "Values": [ami_name]}],
)
image = response["Images"][0]
ami_id = response["Images"][0]["ImageId"]
# Change EBS storage encryption settings, if any
ami_block_device_mappings = []
if "BlockDeviceMappings" in image:
for mapping in image["BlockDeviceMappings"]:
if "Ebs" in mapping:
mapping["Ebs"]["Encrypted"] = True
ami_block_device_mappings.append({"DeviceName": mapping["DeviceName"], "Ebs": mapping["Ebs"]})
# print(json.dumps(ami_block_device_mappings,indent=4,default=str))
lt = ec2.CfnLaunchTemplate(
self, "LaunchTemplate",
launch_template_data=ec2.CfnLaunchTemplate.LaunchTemplateDataProperty(
image_id=ami_id,
block_device_mappings=ami_block_device_mappings
)
)
</code></pre>
<p><strong>app.py</strong></p>
<pre><code>#!/usr/bin/env python3
import os
import aws_cdk as cdk
from stacks.my_stack import MyStack
app = cdk.App()
MyStack(app, "MyStack")
app.synth()
</code></pre>
<p><strong>Generated Launch Template:</strong></p>
<pre><code>Resources:
LaunchTemplate:
Type: AWS::EC2::LaunchTemplate
Properties:
LaunchTemplateData:
BlockDeviceMappings:
- {}
- {}
ImageId: ami-0589c6ad4ac8694587
</code></pre>
<p>What mistakes am I making here?</p>
<p>If you have a solution using a different <em>class</em> like <code>ec2.LaunchTemplate</code> instead of <code>ec2.CfnLaunchTemplate</code>, that is still fine, but I am looking for a solution in the Python CDK.</p>
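<p>A possible explanation (an assumption, not verified against the CDK internals): the <code>boto3</code> response uses PascalCase keys (<code>"DeviceName"</code>, <code>"Ebs"</code>), while the CDK's Python property classes expect snake_case keyword arguments (<code>device_name=</code>, <code>ebs=</code>), so the raw dicts may be silently dropped. A hypothetical helper to bridge the two before building <code>BlockDeviceMappingProperty</code>/<code>EbsProperty</code> objects could look like this:</p>

```python
import re

def pascal_to_snake(name):
    # "DeviceName" -> "device_name", "KmsKeyId" -> "kms_key_id"
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def convert_keys(obj):
    # Recursively rename dict keys from PascalCase to snake_case
    if isinstance(obj, dict):
        return {pascal_to_snake(k): convert_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [convert_keys(v) for v in obj]
    return obj

mapping = {"DeviceName": "/dev/xvda", "Ebs": {"Encrypted": True, "VolumeSize": 8}}
converted = convert_keys(mapping)
# converted == {"device_name": "/dev/xvda", "ebs": {"encrypted": True, "volume_size": 8}}
```

<p>The converted dict could then (hypothetically) be splatted into the typed properties, e.g. <code>ec2.CfnLaunchTemplate.EbsProperty(**converted["ebs"])</code>, rather than passing the PascalCase dicts through unchanged.</p>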
| <python><amazon-ec2><aws-cloudformation><aws-cdk> | 2023-11-15 03:36:56 | 0 | 1,710 | Rafiq |
77,485,108 | 6,361,531 | Force insert statement in sqlalchemy to use quotes around all columns | <p>Is there a parameter in SQLAlchemy that forces the library to quote all column names when doing an insert through the ORM?</p>
<p>For example,</p>
<pre><code>INSERT INTO DATABASE_A.SCHEMA_A.TABLE_A (date, col1, "col 2") values (:data1, :data2, :data3)
</code></pre>
<p>is the statement that is currently generated if I am using a pandas dataframe with columns <code>date</code>, <code>col1</code> and <code>col 2</code>. Note, <code>col 2</code> is quoted due to the space already. However, I would like to go ahead and quote <code>date</code> and <code>col1</code> because in my database date is a reserved word and needs to be quoted.</p>
<p>Is there an engine or dialect parameter that will enable quoting of all columns, so that the generated insert statement will be:</p>
<pre><code>INSERT INTO DATABASE_A.SCHEMA_A.TABLE_A ("date", "col1", "col 2") values (:data1, :data2, :data3)
</code></pre>
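<p>One hedged sketch (at the Core <code>Table</code> level rather than the pandas <code>to_sql</code> path): SQLAlchemy's <code>Column</code> accepts a <code>quote=True</code> flag that forces quoting of that identifier whenever a statement is rendered, so declaring the table metadata yourself gives per-column control:</p>

```python
from sqlalchemy import Column, Date, MetaData, String, Table, insert

metadata = MetaData()
table = Table(
    "table_a",
    metadata,
    Column("date", Date, quote=True),    # force quoting of the reserved word
    Column("col1", String, quote=True),  # force quoting of an ordinary name
    Column("col 2", String),             # quoted automatically (contains a space)
)

# Render the INSERT with the default dialect just to inspect the quoting
sql = str(insert(table))
```

<p>Whether a single engine-wide "quote everything" switch exists is not confirmed here; the per-column flag is one documented option, and with <code>DataFrame.to_sql</code> you would have to supply such a <code>Table</code> yourself.</p>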
| <python><sqlalchemy> | 2023-11-15 03:32:50 | 1 | 154,219 | Scott Boston |
77,485,101 | 704,262 | SqlAlchemy extending an existing query with additional select columns from raw sql | <p>I'm quite new to SQLAlchemy and Python, and have to fix some bugs in a legacy environment, so please bear with me.</p>
<p><strong>Environment</strong>:
<br/>Python 2.7.18
<br/>Bottle: 0.12.7
<br/>SQLAlchemy: 1.3.24
<br/>MySQL: 5.7</p>
<p><strong>Scenario</strong>: I have this SQL Statement below that pivots a linked table and adds rows dynamically as additional columns in the original table - I constructed it in SQL Workbench and it returns the results I want: All specified columns from table t1 plus the additional columns with values from table t2 appear in the result</p>
<pre><code>SET SESSION group_concat_max_len = 1000000; --this is needed to prevent the group_concat function to cut off after 1024 characters, there are quite a few columns involved that easily exceed the original limit
SET @sql = NULL;
SELECT GROUP_CONCAT(DISTINCT CONCAT(
'SUM(
CASE WHEN custom_fields.FIELD_NAME = "', custom_fields.FIELD_NAME, '" THEN
custom_fields_tags_link.field_value END)
AS "', custom_fields.FIELD_NAME, '"')
) AS t0
INTO @sql
FROM tags t1 LEFT OUTER JOIN custom_fields_tags_link t2 ON t1.id = t2.tag_id JOIN
custom_fields ON custom_fields.id = t2.custom_field_id;
SET @sql = CONCAT('SELECT tags.id AS tags_id,
tags.tag_type AS tags_tag_type, tags.name AS
tags_name, tags.version AS tags_version, ', @sql,
' FROM tags LEFT OUTER JOIN custom_fields_tags_link ON tags.id =
custom_fields_tags_link.tag_id JOIN custom_fields ON custom_fields.id =
custom_fields_tags_link.custom_field_id GROUP BY tags.id');
SELECT @sql;
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
</code></pre>
<p><strong>Problem</strong>: I have an already existing SQLAlchemy session that expands a query to be used for pagination. Currently this query returns all my specified columns from table t1, which is joined with table t2 and custom_fields to get all necessary columns. The missing part is the SQLAlchemy representation of the SELECT GROUP_CONCAT part of the above statement; the rest is all taken care of. Since I have control over and know how the frontend presentation of this table looks, and now also have the raw SQL version in SQL Workbench, I tried to work backwards to get the SQLAlchemy / Python part right by consulting <a href="https://docs.sqlalchemy.org/en/14/orm/queryguide.html#orm-queryguide-selecting-text" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/orm/queryguide.html#orm-queryguide-selecting-text</a> and <a href="https://docs.sqlalchemy.org/en/14/core/sqlelement.html#sqlalchemy.sql.expression.TextClause.columns" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/core/sqlelement.html#sqlalchemy.sql.expression.TextClause.columns</a>, but now I am stuck on how to get this <code>TextClause</code> object converted into a <code>TextualSelect</code> without typing the columns statically in the <code>.columns()</code> function, because I don't know what the column names that the users provide for these custom_fields will be.</p>
<p><strong>Goal</strong>: concat a dynamically created raw SQL statement to my existing SQLAlchemy query to select these dynamically created fields from a linked table so that I have the same result as when I execute this raw SQL statement in a SQL editor</p>
<p><strong>Attempts</strong>:</p>
<pre><code>#session is a app-wide shared MySQL session object that is created via sqlalchemy.orm's sessionmaker, scoped_session function
alchemy = session.query(Tag)
try:
#mainly the next line will be changed (x)
select_custom_fields_columns_stmt = select().from_statement(text(
'''SET SESSION group_concat_max_len = 1000000;
SET @sql = NULL;
SELECT GROUP_CONCAT(DISTINCT CONCAT(
'SUM(
CASE WHEN custom_fields.FIELD_NAME = "', custom_fields.FIELD_NAME, '" THEN
custom_fields_tags_link.field_value END) AS "', custom_fields.FIELD_NAME, '"'))
AS t0
INTO @sql
FROM tags t1 LEFT OUTER JOIN custom_fields_tags_link t2 ON t1.id = t2.tag_id JOIN custom_fields ON custom_fields.id = t2.custom_field_id;''')) #or here after the statement (y)
# next line is my attempt to add the columns that have been generated by the previous function but of course unsuccessful
alchemy = alchemy.add_columns(select_custom_fields_columns_stmt)
except:
logException()
joined_query = alchemy.outerjoin(model.tag.CustomFieldTagLink).join(model.tag.CustomField)
</code></pre>
<p><em>A</em>: This results in this error: <code>AttributeError: 'Select' object has no attribute 'from_statement' </code></p>
<p><em>B</em>: Changing the line (x) above that constructs the query for the additional rows to <code>select_custom_fields_columns_stmt = session.select().from_statement(text(...</code>
--> results in: <code>AttributeError: 'Session' object has no attribute 'select'</code></p>
<p><em>C</em>: adding a <code>.subquery("from_custom_fields")</code> statement at (y) --> results in: <code>AttributeError: 'AnnotatedTextClause' object has no attribute 'alias'</code></p>
<p><em>D</em>: other attempts for (x) substituting <code>select()</code> with <code>session.query()</code> or <code>session.query(Tags)</code> also didn't result in additional columns</p>
<p>What else can I try? Would it be preferable/easier to write the whole raw SQL part in SQLAlchemy and if so, how could I do that?</p>
<p>--</p>
<p><strong>Update & Examples</strong>:</p>
<p>As suggested I have provided a SQLfiddle that has all the relevant information, but doesn't return any results (I am too inexperienced on how to use it):</p>
<p><a href="http://sqlfiddle.com/#!9/90a0c87/2" rel="nofollow noreferrer">http://sqlfiddle.com/#!9/90a0c87/2</a></p>
<p>Also provided a DB-fiddle example with the exact same information to re-create a minimal example and here also the results are returned:</p>
<p><a href="https://dbfiddle.uk/NtPVWU8D" rel="nofollow noreferrer">https://dbfiddle.uk/NtPVWU8D</a></p>
<p><strong>ORM definitions</strong></p>
<p>for the 3 relevant tables, basically the Tags table and the CustomFields table use a CustomFieldTagLink association table to establish the link between these two.</p>
<pre><code>class Tag(Base,Versioned):
__tablename__ = 'tags'
__table_args__ = {'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'}
#base
id = Column(Unicode(255), primary_key=True)
tag_type = Column(UnicodeText)
barcode_id = Column(UnicodeText)
name = Column(UnicodeText)
created_at = Column(Integer, index=True)
# version gets created automatically
# Relationship with the custom fields table - syntactic sugar
custom_fields = relationship("CustomField", secondary='custom_fields_tags_link', back_populates='linked_tags')
custom_field_links = relationship("CustomFieldTagLink", back_populates='tag', cascade="save-update, merge, delete, delete-orphan")
class CustomField(Base, Versioned):
__tablename__ = 'custom_fields'
__table_args__ = {'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'}
id = Column(Integer, Sequence('custom_template_field_id'), primary_key=True)
field_name = Column(Unicode(255), unique=True, nullable=False)
field_type = Column(Unicode(255), server_default='text')
default_value = Column(TEXT(length=4294967295))
linked_tags = relationship("Tag", secondary='custom_fields_tags_link', back_populates='custom_fields')
tag_links = relationship("CustomFieldTagLink", back_populates='custom_field')
class CustomFieldTagLink(Base,Versioned):
'''
Association Table between Tags and Custom Fields
'''
__tablename__ = 'custom_fields_tags_link'
__table_args__ = {'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'}
tag_id = Column(Unicode(255), ForeignKey('tags.id'), primary_key=True)
custom_field_id = Column(Integer, ForeignKey('custom_fields.id'), primary_key=True)
field_value = Column(Unicode(255), nullable=False)
tag = relationship("Tag", back_populates='custom_field_links')
custom_field = relationship("CustomField", back_populates='tag_links')
</code></pre>
<p>I expect the SQL query, when executed, to return a list of Tag items that are later processed and converted to dicts. However, in this instance the task is to add the "pivot table" with the additional custom_fields columns of the tags table to the already existing SQLAlchemy query object, as the execution of that statement happens later (filtering, sorting, pagination, etc. need to be done first).</p>
| <python><sql><mysql><python-2.7><sqlalchemy> | 2023-11-15 03:30:59 | 0 | 784 | hreimer |
77,484,996 | 10,620,003 | Build a df based on another df which only has 1/0 | <p>I have a df which only contains 1s and 0s. I want to build a df with the same shape, keeping only the first two 1s of each run of consecutive 1s.
For example, 0,0,1,1,1,1,0,0 should be converted to 0,0,1,1,0,0,0,0. Here is an example,</p>
<pre><code>import pandas as pd
df_dr = pd.DataFrame()
df_dr['0'] = [0,0]
df_dr['1'] = [1,0]
df_dr['2'] = [1,1]
df_dr['3'] = [1,1]
df_dr['4'] = [0,1]
df_dr['5'] = [1,0]
df_dr['6'] = [1,0]
df_dr['7'] = [1,0]
df_dr['8'] = [1,1]
df_dr['9'] = [0,1]
df_dr['10'] = [0,1]
df_dr['11'] = [0,1]
df_dr['12'] = [0,1]
</code></pre>
<p>and here is the output:</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9 10 11 12
0 0 1 1 0 0 1 1 0 0 0 0 0 0
1 0 0 1 1 0 0 0 0 1 1 0 0 0
</code></pre>
<p>Could you please help me with that? Thanks</p>
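<p>A sketch of one possible approach (assuming the rule is "keep only the first two 1s of every run of consecutive 1s, applied row-wise"): label each run with a <code>cumsum</code> over value changes, then rank positions inside each run:</p>

```python
import pandas as pd

# Same data as df_dr above, rebuilt compactly for a self-contained example
df_dr = pd.DataFrame(
    [[0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0],
     [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1]],
    columns=[str(i) for i in range(13)],
)

def keep_first_two(row):
    s = row.astype(int)
    run_id = (s != s.shift()).cumsum()       # label each run of equal values
    pos_in_run = s.groupby(run_id).cumsum()  # 1, 2, 3, ... within each run of 1s
    return ((s == 1) & (pos_in_run <= 2)).astype(int)

result = df_dr.apply(keep_first_two, axis=1)
# row 0 -> [0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0]
# row 1 -> [0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0]
```

<p>This matches the expected output shown above; runs shorter than two 1s are kept unchanged.</p>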
| <python><pandas> | 2023-11-15 03:00:59 | 1 | 730 | Sadcow |
77,484,926 | 22,371,917 | How to scroll in a nav element using seleniumbase in python | <pre class="lang-html prettyprint-override"><code><nav class="flex h-full w-full flex-col p-2 gizmo:px-3 gizmo:pb-3.5 gizmo:pt-0" aria-label="Menu">
</code></pre>
<p>This is the nav element; the real one is a lot longer and full of divs.</p>
<p>I just want to know how to scroll to the end of the menu.</p>
<p>Edit: there is a loading element that has to be accounted for:</p>
<pre class="lang-html prettyprint-override"><code><svg stroke="currentColor" fill="none" stroke-width="2" viewBox="0 0 24 24" stroke-linecap="round" stroke-linejoin="round" class="animate-spin text-center" height="1em" width="1em" xmlns="http://www.w3.org/2000/svg">
</code></pre>
<p>and under it there are many line elements</p>
<p>top of nav xpath:</p>
<pre class="lang-html prettyprint-override"><code>/html/body/div[1]/div[1]/div[1]/div/div/div/div/nav
</code></pre>
<p>top of svg xpath:</p>
<pre class="lang-html prettyprint-override"><code>/html/body/div[1]/div[1]/div[1]/div/div/div/div/nav/div[2]/div[2]/div[2]/svg
</code></pre>
<p>Do you need the scrollbar HTML?</p>
| <python><html><selenium-webdriver><scroll><seleniumbase> | 2023-11-15 02:35:23 | 2 | 347 | Caiden |
77,484,746 | 9,905,667 | Getting an error when installing tensorflow | <p>I am getting an error when trying to download tensorflow through pip.</p>
<pre><code>PS C:\Users\12158> pip install tensorflow
ERROR: Could not find a version that satisfies the requirement tensorflow (from versions:
none)
ERROR: No matching distribution found for tensorflow
</code></pre>
<p>Very strange. From my research, there seems to be an issue with installing TensorFlow on newer Python versions. Is there an option that does not require downgrading Python?</p>
<pre><code>PS C:\Users\12158> python -c "import struct; print(8 *struct.calcsize('P'))"
64
PS C:\Users\12158>
</code></pre>
<p>My version is Python 3.12.0.</p>
<p>Thanks!</p>
| <python><powershell><tensorflow><pip> | 2023-11-15 01:32:55 | 1 | 726 | SantiClaus |
77,484,738 | 5,635,892 | Loss function doesn't have requires_grad=True in pytorch | <p>Hello, I have the following code (a simplified version of my code, but it reproduces the error):</p>
<pre><code>import numpy as np
from numpy import linalg as LA
import torch
import torch.optim as optim
import torch.nn as nn
def func(x,pars):
a = pars[0]
b = pars[1]
c = pars[2]
d = pars[3]
x = x.int()
H = torch.tensor([[a,b,1],[2,3,c],[4,d,7]])
eigenvalues, eigenvectors = np.linalg.eigh(H)
trans_freq = eigenvalues[x]
return torch.tensor(trans_freq)
x_index = torch.tensor([1,2])
y_vals = torch.tensor([0.5,12])
params = torch.tensor([1.,2.,3.,4.])
params.requires_grad=True
opt = optim.SGD([params], lr=100)
mse_loss = nn.MSELoss()
for i in range(10):
opt.zero_grad()
loss = mse_loss(func(x_index,params),y_vals)
print(x_index.requires_grad)
print(params.requires_grad)
print(y_vals.requires_grad)
print(loss.requires_grad)
loss.backward()
opt.step()
print(loss)
</code></pre>
<p>The output is:</p>
<pre><code>False
True
False
False
</code></pre>
<p>and I am getting this error: <code>RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn</code> from this line: <code>loss.backward()</code>. Indeed, the loss doesn't have <code>requires_grad=True</code>, but why is that the case? (Setting it manually in the for loop doesn't work either.) What should I do? Thank you!</p>
| <python><python-3.x><pytorch><gradient><loss-function> | 2023-11-15 01:29:06 | 2 | 719 | Silviu |
77,484,622 | 9,905,667 | ValueError: 'editdistance/bycython.pyx' doesn't match any files when downloading keras_ocr | <p>When trying to download keras_ocr through pip, I get the following error:</p>
<pre><code>ValueError: 'editdistance/bycython.pyx' doesn't match any files
</code></pre>
<p>I have tried everything I can think of.</p>
<ol>
<li>Upgrading pip</li>
<li>Upgrading python</li>
<li>Installing dependencies</li>
</ol>
| <python><keras><pip> | 2023-11-15 00:47:29 | 1 | 726 | SantiClaus |
77,484,490 | 11,091,148 | Pydantic optional throws `Field required` in nested json list | <p>I have a nested json that I validate with pydantic:</p>
<pre><code>app_dict={'apps': [{'app_id': 'a_1',
'group_id': '123',
'report_id': '456',
'principal_id': 'p_1'},
{'app_id': 'a_2',
'group_id': '789',
'report_id': '987'}]}
class PbiApps(BaseModel):
app_id: ty.Required[pty.StrictStr]
group_id: ty.Required[pty.StrictStr]
report_id: ty.Required[pty.StrictStr]
principal_id: ty.Optional[pty.StrictStr]
class PbiMain(BaseModel):
apps: ty.Optional[ty.List[PbiApps]]
</code></pre>
<p>But if I try to parse it into <code>PbiMain</code> it throws an ValidationError for <code>a_2</code></p>
<pre><code>PbiMain(**app_dict)
ValidationError: 1 validation error for Settings
apps.1.principal_id
Field required [type=missing, input_value={'app_id': 'a_2', '...test', 'report_id': '789'}, input_type=DictConfig]
For further information visit https://errors.pydantic.dev/2.5/v/missing
</code></pre>
<p>I can set <code>principal_id: ty.Optional[pty.StrictStr] = None</code> to make it work, but I would rather have the field not present than of type None.</p>
<p>Is there a way to achieve this?</p>
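<p>For reference, a minimal sketch of the workaround mentioned above (pydantic v2 is assumed, as suggested by the error URL): give the field a <code>None</code> default, then drop <code>None</code> values at serialization time with <code>model_dump(exclude_none=True)</code>, so the key is absent from the output even though the model instance carries a <code>None</code>:</p>

```python
from typing import Optional

from pydantic import BaseModel, StrictStr

class PbiApps(BaseModel):
    app_id: StrictStr
    group_id: StrictStr
    report_id: StrictStr
    principal_id: Optional[StrictStr] = None  # optional, defaults to None

app = PbiApps(app_id="a_2", group_id="789", report_id="987")
payload = app.model_dump(exclude_none=True)  # "principal_id" key is dropped
```

<p><code>model_dump(exclude_unset=True)</code> behaves similarly, but it only drops fields that were never supplied at all rather than every field whose value is <code>None</code>.</p>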
| <python><python-typing><pydantic> | 2023-11-14 23:56:00 | 2 | 526 | Bennimi |
77,484,372 | 437,456 | asyncpg DataError: invalid input for query argument expected str, got int | <p>I admittedly don't understand how asyncpg's codecs work, but it seems to work counter to how I'd expect:</p>
<pre><code>import asyncio
import asyncpg
async def main():
conn = await asyncpg.connect('postgresql://postgres@localhost/test')
print(await conn.fetchval("select $1", 'a')) # this works: prints 'a'
print(await conn.fetchval("select $1", 1)) # invalid input for query argument $1: 1 (expected str, got int)
asyncio.run(main())
</code></pre>
<p>It seems asyncpg wants the parameter to always be a string, but I want it to be an int. Why does this fail?</p>
| <python><postgresql><asyncpg> | 2023-11-14 23:12:40 | 1 | 5,340 | DMac the Destroyer |
77,484,338 | 20,235,789 | How can I mock an imported dependency from my function file? | <p>I'm attempting to mock a config that is being imported in my security file here:</p>
<pre><code>import aiohttp
from fastapi import Header, HTTPException
from .util.config import config
async def get_user_profile_details(
user_profile_id: str, ..., ...
):
user_profile_url = f"{config.entity.ENTITY_BASE_URL}/.../..."
async with aiohttp.ClientSession() as session:
async with session.get(user_profile_url, headers=auth_header) as ...
....
</code></pre>
<p>here is my test setup:</p>
<pre><code>import unittest
from unittest.mock import patch
from aioresponses import aioresponses
from my_module.security import get_user_profile_details
@patch('my_module.util.config.es', autospec=True)
async def test_get_user_profile_details_success(self, mock_config):
mock_config.entity.ENTITY_BASE_URL = "http://mocked-entity-url"
user_profile_id = "123"
...
...
</code></pre>
<p>My test is failing even before it gets to this test function.
It imports <code>get_user_profile_details</code>, which runs its imports (<code>from .util.config import config</code>), importing the config and failing when it tries to create an es instance (also in the config setup):</p>
<pre><code>
class Config(BaseSettings):
...
...
es = ElasticsearchConfig() #this is what I'm trying to mock
entity = EntityAPIConfig()
...
config: Config = Config()
</code></pre>
<p>I've tried several ways.</p>
<ul>
<li><p>mocking the config like you see now</p>
</li>
<li><p><code>@patch('my_module.util.config.config', autospec=True)</code></p>
</li>
<li><p><code>@patch('my_module.util.Config', autospec=True)</code></p>
</li>
<li><p>tried using a:</p>
</li>
</ul>
<pre><code>
async def asyncSetUp(self):
self.aiohttp_mock = aioresponses()
</code></pre>
<p>and mocking the request:</p>
<pre><code> self.aiohttp_mock.get(...
</code></pre>
<p>Here is the error:</p>
<pre><code> es = ElasticsearchConfig()
E google.api_core.exceptions.PermissionDenied: 403 Permission denied on resource project test.
</code></pre>
<p>What is the correct way to mock that config?</p>
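<p>One pattern that may help (a sketch using only the standard library, with the module names taken from the question): because <code>from .util.config import config</code> runs at import time, the fake has to be in <code>sys.modules</code> <em>before</em> the module under test is imported; patching after the import is too late:</p>

```python
import sys
import types
from unittest import mock

# Stub the parent packages and the config module *before* anything
# imports my_module.security (module-level code runs at import time).
for name in ("my_module", "my_module.util"):
    sys.modules.setdefault(name, types.ModuleType(name))

fake_config = types.ModuleType("my_module.util.config")
fake_config.config = mock.MagicMock()
fake_config.config.entity.ENTITY_BASE_URL = "http://mocked-entity-url"
sys.modules["my_module.util.config"] = fake_config
```

<p>Importing <code>my_module.security</code> after this setup would then pick up the mocked <code>config</code>, so the module-level <code>ElasticsearchConfig()</code> call is never executed.</p>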
| <python><python-unittest><python-unittest.mock> | 2023-11-14 23:02:15 | 0 | 441 | GustaMan9000 |
77,484,062 | 836,026 | Read Image from numpy and display it: ValueError: conversion from L to PNG not supported | <p>I have a 2D array of an image (<strong>slice</strong>). The array dimension is (224, 224), and when I read it as an image as shown below, <em><strong>the image mode is shown as "F"</strong></em>. For some reason I need to save it as "PNG" and display it, but I'm getting the error message "ValueError: conversion from L to PNG not supported".</p>
<p>The real image is shown below. See the below code.</p>
<pre><code>from matplotlib import pyplot as plt
from PIL import Image
from matplotlib import cm
# reproducible data
slice = np.ones([224, 224], dtype = float)
print(slice.shape)
img_m3= Image.fromarray(slice)#.convert('RGB')#.convert('PNG')
print("img .mode",img_m3 .mode)
if img_m3 .mode != 'PNG':
img_m3 = img_m3.convert('PNG')
img_m3.save("/tmp/myimageb.png", "PNG")
im = Image.open("/tmp/myimageb.png")
plt.imshow(im )#, vmin=0, vmax=255)
plt.show()
</code></pre>
<p>Update:</p>
<p>I also tried saving it as JPG; there were no errors, but I got a blank image:</p>
<pre><code>if img_m3 .mode != 'RGB':
img_m3 = img_m3.convert('RGB')
img_m3.save("/tmp/myimageb.jpg", 'JPEG')
#plt.imshow(img , cmap='gray', vmin=0, vmax=255)
im = Image.open("/tmp/myimageb.jpg")
plt.imshow(im )#, vmin=0, vmax=255)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/6PAtO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6PAtO.png" alt="original image" /></a></p>
<p>Update2:</p>
<p>I tried the solution from the answer, and now I can see the image, but the quality is degraded. See below: on top is the original image and at the bottom is the result.</p>
<pre><code>import cv2
from matplotlib import pyplot as plt
from PIL import Image

print(slice.shape)
print("slice.max()", slice.max())
print("slice.min()", slice.min())
print("slice.mean()", slice.mean())

# img = Image.fromarray((slice*255).astype(np.uint8)).convert('L')
img = Image.fromarray(slice.astype(np.uint8))
img.save("/tmp/myimageb.png", "PNG")
plt.imshow(img)  # , cmap='gray', vmin=0, vmax=255)
plt.show()

(224, 224)
slice.max() 0.5193082
slice.min() -2.0836544
slice.mean() -0.37065452
</code></pre>
<p><a href="https://i.sstatic.net/3Gacy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Gacy.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/xRMOc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xRMOc.png" alt="enter image description here" /></a></p>
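<p>The degraded Update2 result is expected: the slice spans roughly [-2.08, 0.52], so <code>slice.astype(np.uint8)</code> wraps the negatives and truncates the rest to 0. A hedged sketch (with random stand-in data) of min-max scaling before the cast:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
slice_ = rng.normal(size=(224, 224)).astype(np.float32)  # stand-in for the real slice

# min-max scale to [0, 255] before casting, so negatives don't wrap around
lo, hi = slice_.min(), slice_.max()
arr8 = ((slice_ - lo) / (hi - lo) * 255).astype(np.uint8)

# then hand PIL an 8-bit array, e.g.:
# Image.fromarray(arr8, mode='L').save("/tmp/myimageb.png", "PNG")
print(arr8.dtype, arr8.min(), arr8.max())  # uint8 0 255
```

<p>This preserves the relative intensities instead of clipping them, which should remove the washed-out look.</p>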
| <python><image><matplotlib> | 2023-11-14 21:53:44 | 1 | 11,430 | user836026 |
77,484,060 | 9,415,459 | efficient iteration & application of a function in pandas, polars or torch? Is lazy possible? | <p><strong>Goal</strong>:
Find an efficient/fastest way to iterate over a table by column and run a function on each column, in python or with a python library.</p>
<p><strong>Background</strong>:
I have been exploring methods to improve the speed of my functions. This is because I have two models/algorithms that I want to run: one small, one large (uses torch), and the large one is slow. I have been using the small one for testing. The small model is a seasonal decomposition of each column.</p>
<p><strong>Setup</strong>:</p>
<pre><code>Testing environment: ec2, t2 large. X86_64
Python version: 3.11.5
Polars: 0.19.13
pandas: 2.1.1
numpy: 1.26.0
</code></pre>
<p>demo data in pandas/polars:</p>
<pre><code>rows = 11020
columns = 1578
data = np.random.rand(rows, columns)
df = pd.DataFrame(data)
# df_p = pl.from_pandas(df) # convert if needed.
</code></pre>
<p><strong>Pandas</strong></p>
<p>pandas and dict:</p>
<pre><code>from statsmodels.tsa.seasonal import seasonal_decompose
import pandas as pd
class pdDictTrendExtractor:
    def __init__(self, period: int = 365) -> None:
        self._period = period
        self._model = 'Additive'

    def process_col(self, column_data: pd.Series = None) -> pd.Series:
        self.data = column_data
        result = seasonal_decompose(self.data, model=self._model, period=self._period)
        trend = result.trend.fillna(0).values
        return trend

    @classmethod
    def process_df(cls, dataframe: pd.DataFrame) -> pd.DataFrame:
        trend_data_dict = {}
        for column in dataframe.columns:
            trend_data_dict[column] = cls().process_col(dataframe[column])
        trend_dataframes = pd.DataFrame(trend_data_dict, index=dataframe.index)
        return trend_dataframes

import timeit

start = timeit.default_timer()
trend_tensor = pdDictTrendExtractor.process_df(df)
stop = timeit.default_timer()
execution_time = stop - start
print("Program Executed in " + str(execution_time))
</code></pre>
<p>Program Executed in 14.349091062998923</p>
<p>with list comprehension instead of for loop:</p>
<pre><code>class pdDict2TrendExtractor:
    def __init__(self, period: int = 365) -> None:
        self._period = period
        self._model = 'Additive'

    def process_col(self, column_data: pd.Series = None) -> pd.Series:
        self.data = column_data
        result = seasonal_decompose(self.data, model=self._model, period=self._period)
        trend = result.trend.fillna(0).values
        return trend

    @classmethod
    def process_df(cls, dataframe: pd.DataFrame) -> pd.DataFrame:
        trend_data_dict = {column: cls().process_col(dataframe[column]) for column in dataframe.columns}
        trend_dataframes = pd.DataFrame(trend_data_dict, index=dataframe.index)
        return trend_dataframes
</code></pre>
<p>Program Executed in 14.343959668000025</p>
<p>Class using pandas and torch:</p>
<pre><code>from statsmodels.tsa.seasonal import seasonal_decompose
import torch
import pandas as pd

class pdTrendExtractor:
    def __init__(self, period: int = 365) -> None:
        self._period = period
        self._model = 'Additive'

    # Store data as an instance variable
    def process_col(self, column_data: pd.Series = None) -> torch.Tensor:
        self.data = column_data
        result = seasonal_decompose(self.data, model=self._model, period=self._period)
        trend = result.trend.fillna(0).values
        return torch.tensor(trend, dtype=torch.float32).view(-1, 1)

    @classmethod
    def process_df(cls, dataframe: pd.DataFrame) -> torch.Tensor:
        trend_dataframes = torch.Tensor()
        for column in dataframe.columns:
            trend_data = cls().process_col(dataframe[column])
            trend_dataframes = torch.cat((trend_dataframes, trend_data), dim=1)
        return trend_dataframes

start = timeit.default_timer()
trend_tensor = pdTrendExtractor.process_df(df)
stop = timeit.default_timer()
execution_time = stop - start
print("Program Executed in " + str(execution_time))
</code></pre>
<p>Program Executed in 23.14214362200073</p>
<p>with dict, multiprocessing & list comprehension:
As suggested by @roganjosh & @jqurious below.</p>
<pre><code>from multiprocessing import Pool

class pdMTrendExtractor:
    def __init__(self, period: int = 365) -> None:
        self._period = period
        self._model = 'Additive'

    def process_col(self, column_data: pd.Series = None) -> pd.Series:
        result = seasonal_decompose(column_data, model=self._model, period=self._period)
        trend = result.trend.fillna(0).values
        return trend

    @classmethod
    def process_df(cls, dataframe: pd.DataFrame) -> pd.DataFrame:
        with Pool() as pool:
            trend_data_dict = dict(zip(dataframe.columns, pool.map(cls().process_col, [dataframe[column] for column in dataframe.columns])))
        return pd.DataFrame(trend_data_dict, index=dataframe.index)
</code></pre>
<p>Program Executed in 4.582350738997775, Nice and fast.</p>
<p><strong>Polars</strong></p>
<p>Polars & torch:</p>
<pre><code>class plTorTrendExtractor:
    def __init__(self, period: int = 365) -> None:
        self._period = period
        self._model = 'Additive'

    # Store data as an instance variable
    def process_col(self, column_data: pl.Series = None) -> torch.Tensor:
        self.data = column_data
        result = seasonal_decompose(self.data, model=self._model, period=self._period)
        result.trend[np.isnan(result.trend)] = 0
        trend = result.trend
        return torch.tensor(trend, dtype=torch.float32).view(-1, 1)

    @classmethod
    def process_df(cls, dataframe: pl.DataFrame) -> torch.Tensor:
        trend_dataframes = torch.Tensor()
        for column in dataframe.columns:
            trend_data = cls().process_col(dataframe[column])
            trend_dataframes = torch.cat((trend_dataframes, trend_data), dim=1)
        return trend_dataframes
</code></pre>
<p>Program Executed in 13.813817326999924</p>
<p>polars & lamdba:</p>
<pre><code>start = timeit.default_timer()
df_p = df_p.select([
    pl.all().map_batches(lambda x: pl.Series(seasonal_decompose(x, model="Additive", period=365).trend)).fill_nan(0)
])
stop = timeit.default_timer()
execution_time = stop - start
print("Program Executed in " + str(execution_time))
</code></pre>
<p>Program Executed in 82.5596211330012</p>
<p>I suspect this is written poorly, and that that is the reason it is so slow. I have yet to find a better method.</p>
<p>So far I have tried apply_many, apply, map, map_batches, and map_elements; with_columns vs select; and a few other combinations.</p>
<p>polars only, for loop:</p>
<pre><code>class plTrendExtractor:
    def __init__(self, period: int = 365) -> None:
        self._period = period
        self._model = 'Additive'

    # Store data as an instance variable
    def process_col(self, column_data: pl.Series = None) -> pl.DataFrame:
        self.data = column_data
        result = seasonal_decompose(self.data, model=self._model, period=self._period)
        # Handle missing values by replacing NaN with 0
        result.trend[np.isnan(result.trend)] = 0
        return pl.DataFrame({column_data.name: result.trend})

    @classmethod
    def process_df(cls, dataframe: pl.DataFrame) -> pl.DataFrame:
        trend_dataframes = pl.DataFrame()
        for column in dataframe.columns:
            trend_data = cls().process_col(dataframe[column])
            trend_dataframes = trend_dataframes.hstack(trend_data)
        return trend_dataframes
</code></pre>
<p>Program Executed in 13.34212675299932</p>
<p>with list comprehensions:</p>
<p>I tried with polars and a list comprehension, but had difficulty with the polars syntax.</p>
<p>with a dict & for loop:</p>
<p>Program Executed in 13.743039597999996</p>
<p>with dict & list comprehension:</p>
<pre><code>class plDict2TrendExtractor:
    def __init__(self, period: int = 365) -> None:
        self._period = period
        self._model = 'Additive'

    def process_col(self, column_data: pl.Series = None) -> pl.Series:
        self.data = column_data
        result = seasonal_decompose(self.data, model=self._model, period=self._period)
        result.trend[np.isnan(result.trend)] = 0
        return pl.Series(result.trend)

    @classmethod
    def process_df(cls, dataframe: pl.DataFrame) -> pl.DataFrame:
        trend_data_dict = {column: cls().process_col(dataframe[column]) for column in dataframe.columns}
        trend_dataframes = pl.DataFrame(trend_data_dict)
        return trend_dataframes
</code></pre>
<p>Program Executed in 13.008102383002552</p>
<p>with dict, multiprocessing & list comprehension:
As suggested by @roganjosh & @jqurious below.</p>
<pre><code>from multiprocessing import Pool

class plMTrendExtractor:
    def __init__(self, period: int = 365) -> None:
        self._period = period
        self._model = 'Additive'

    def process_col(self, column_data: pl.Series = None) -> pl.Series:
        result = seasonal_decompose(column_data, model=self._model, period=self._period)
        result.trend[np.isnan(result.trend)] = 0
        return pl.Series(result.trend)

    @classmethod
    def process_df(cls, dataframe: pl.DataFrame) -> pl.DataFrame:
        with Pool() as pool:
            trend_data_dict = dict(zip(dataframe.columns, pool.map(cls().process_col, [dataframe[column] for column in dataframe.columns])))
        return pl.DataFrame(trend_data_dict)
</code></pre>
<p>Program Executed in 4.997288776001369, Nice!.</p>
<p>With LazyFrame?</p>
<p>I can add lazy & collect to the <code>df_p.select()</code> method above but doing this does not improve the time. One of the key issues seems to be that the function that is passed to lazy operations needs to be lazy too. I was hoping that it might run each column in parallel.</p>
<p><strong>current conclusions & notes</strong></p>
<ul>
<li>I am getting a second to half a second of variation for some of the runs.</li>
<li>Pandas with a dict seems to be reasonable. If you care about the index, this can be a good option.</li>
<li>Polars with a dict and list comprehension is the "fastest", but not by much; considering the run-to-run variation, the difference is even smaller.</li>
<li>Both options also have the benefit of not needing additional packages.</li>
<li>There seems to be room for improvement in polars, mostly in terms of better code, but I'm not sure it would improve the time much: the main compute cost is seasonal_decompose itself, which takes ~0.012 seconds per column when run alone.</li>
<li>Open to any feedback on improvements.</li>
<li>Warning: I haven't done full output validation <strong>yet</strong> on the functions above.</li>
<li>How the value is returned from process_col does have a minor impact on speed, as expected, and that is part of what I was tuning here. For example, with polars, returning a raw numpy array was slower, while returning a numpy array declared as <code>pl.Series</code> was about the same speed, with one or two trials being faster (than above).</li>
</ul>
<p><strong>after feedback/added multiprocessing</strong></p>
<ul>
<li>surprise surprise, multiprocessing for the win. This seems to be regardless of pandas or polars.</li>
</ul>
| <python><python-3.x><pandas><pytorch><python-polars> | 2023-11-14 21:53:14 | 1 | 385 | Aaron C |
77,484,040 | 10,664,542 | With Python unittest OOP base & child class, when executed the base class is observed to run the constructor & test_ method more times than expected | <p><strong>Technical areas:</strong></p>
<ul>
<li>Python Programming</li>
<li>Python OOP</li>
<li>Python unittest</li>
</ul>
<hr />
<p><strong>Description:</strong></p>
<p>With Python OOP (unittest scenario), I am a little confused.</p>
<p>I have a base/parent class. I run it and see the constructor called once and a method in the class called once as expected.</p>
<pre><code>import unittest
import datetime

class TestBase(unittest.TestCase):
    """
    All test classes should inherit this to get: self.config
    """
    def __init__(self, *args, **kwargs):
        super(TestBase, self).__init__(*args, **kwargs)
        self.now = str(datetime.datetime.now())
        print('TestBase.__init__(): now=[' + self.now + ']', flush=True)

    def test_canary_base(self):
        """
        A test that is known good and will always pass
        """
        print("TestBase.test_canary_base()", flush=True)
</code></pre>
<hr />
<p>Now I create a child class derived from the base/parent class and I run it.</p>
<pre><code>import datetime

from test.test_base import TestBase

class TestChild(TestBase):
    """
    """
    def __init__(self, *args, **kwargs):
        super(TestChild, self).__init__(*args, **kwargs)
        self.now = str(datetime.datetime.now())
        print('TestChild.__init__(): now=[' + self.now + ']')

    def test_canary_child(self):
        """
        A test that is known good and will always pass
        """
        print("TestChild.test_canary_child()", flush=True)
</code></pre>
<p>I see the <strong>base/parent class constructor called three times</strong> and a <strong>test_</strong> method in the base/parent class called <strong>twice</strong>. This is unexpected.</p>
<hr />
<p>Additionally, I see the <strong>child constructor called twice</strong> and a <strong>test_</strong> method in the class called once.</p>
<p>Why is the derived class constructor called twice?</p>
<hr />
<p>Why, in both the base/parent class and the child class are the constructor and test_ method not called just once?</p>
<hr />
<p>I have a git repo from which actual code can be cloned and run locally to demonstrate the problem (will commit/push in a moment).</p>
<p><strong><a href="https://github.com/devlocalca/python-unittest-oop" rel="nofollow noreferrer">https://github.com/devlocalca/python-unittest-oop</a></strong></p>
<hr />
<p><strong>To reproduce:</strong></p>
<ol>
<li>clone the git repo</li>
<li>setup IDE project</li>
<li>run the base/parent class: test/test_base.py</li>
<li>observe the output</li>
</ol>
<p>You will see the following output:</p>
<pre><code>TestBase.__init__(): now=[2023-11-14 14:32:52.052573]
TestBase.test_canary_base()
</code></pre>
<p>now run the child class:
test/test_child.py</p>
<p>You will see the following output (that I would not expect):</p>
<pre><code>TestBase.__init__(): now=[2023-11-14 14:26:02.137958]
TestBase.__init__(): now=[2023-11-14 14:26:02.137958]
TestChild.__init__(): now=[2023-11-14 14:26:02.137958]
TestBase.__init__(): now=[2023-11-14 14:26:02.137958]
TestChild.__init__(): now=[2023-11-14 14:26:02.137958]
TestBase.test_canary_base()
TestBase.test_canary_base()
TestChild.test_canary_child()
</code></pre>
<hr />
<p>now run a <strong>test_grandchild.py</strong> that inherits from <strong>test_child.py</strong></p>
<hr />
<p>How would this be fixed so that I would see the expected output of the base/parent constructor and <strong>test_</strong> method called only once (along with the child class being called only once).</p>
<p>I believe a working example would help me explore this issue.</p>
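<p>This is standard unittest behaviour rather than a bug: the loader builds ONE TestCase instance per test method, per class it discovers, and TestChild inherits test_canary_base from TestBase. Running test_child.py discovers both classes (TestBase is imported into that module), so the loader builds three instances: TestBase once (1 test) and TestChild twice (2 tests), which is exactly three TestBase.__init__ calls (one direct, two via super()) and two runs of test_canary_base. A self-contained sketch (class names shortened) that counts the instances the loader creates:</p>

```python
import sys
import unittest

class Base(unittest.TestCase):
    def test_canary_base(self):
        pass

class Child(Base):
    def test_canary_child(self):
        pass

def flatten(suite):
    # unittest suites nest: module suite -> per-class suites -> test instances
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from flatten(item)
        else:
            yield item

# the loader instantiates one TestCase object per test method, per class found
suite = unittest.TestLoader().loadTestsFromModule(sys.modules[__name__])
names = sorted(f"{type(t).__name__}.{t._testMethodName}" for t in flatten(suite))
print(names)
# ['Base.test_canary_base', 'Child.test_canary_base', 'Child.test_canary_child']
```

<p>To avoid the inherited test running twice, a common pattern is to keep the shared setup in a base class that does NOT define any test_ methods (or that is not collected, e.g. by not importing it under a test_ name).</p>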
| <python><python-3.x><oop><python-unittest> | 2023-11-14 21:49:38 | 0 | 1,346 | user10664542 |
77,483,936 | 356,875 | How can I convert from string, apply the timezone offset, make naive and convert back to string a Pandas Series Index in Python? | <p>I have a Pandas time-series object with an Index like this:</p>
<pre><code>Index(['2023-05-31T00:05:00+0300', '2023-05-31T00:06:00+0300',
...
'2023-09-15T13:48:00+0300', '2023-09-15T13:49:00+0300'],
dtype='object', length=76106)
</code></pre>
<p>and I need to convert into this:</p>
<pre><code>Index(['2023-05-30T21:05:00', '2023-05-30T21:06:00',
...
'2023-09-15T10:48:00', '2023-09-15T10:49:00'],
dtype='object', length=76106)
</code></pre>
<p>What I am trying to do, effectively, is convert the string to datetime, subtract the timezone offset, make it naive and convert it back to string.</p>
<p>I know how to do this in a lot of (somewhat complicated) steps which will involve converting the series to dictionary, converting the (datetime) keys one by one, making the dictionary a new series, but is there a way to do this in few steps, probably in-place?</p>
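<p>All four steps can be done vectorized on the Index itself, with no dict round-trip; a hedged sketch on a cut-down Index (assuming every offset should be folded into UTC):</p>

```python
import pandas as pd

idx = pd.Index(['2023-05-31T00:05:00+0300', '2023-09-15T13:48:00+0300'])

# parse (tz-aware, normalized to UTC), drop the tz info, format back to strings
naive = pd.to_datetime(idx, utc=True).tz_localize(None)
out = naive.strftime('%Y-%m-%dT%H:%M:%S')
print(list(out))  # ['2023-05-30T21:05:00', '2023-09-15T10:48:00']
```

<p>For a Series <code>s</code> this can be assigned back in place with <code>s.index = pd.to_datetime(s.index, utc=True).tz_localize(None).strftime('%Y-%m-%dT%H:%M:%S')</code>.</p>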
<p>Any help will be greatly appreciated.</p>
| <python><pandas><datetime><timezone> | 2023-11-14 21:28:59 | 1 | 8,468 | xpanta |
77,483,933 | 18,020,941 | Django base.html block overwriting | <p>I want to define, and append to blocks defined in the base.html template.</p>
<p>Say I have the following template</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html>
  <head>
    <title>My Project</title>
    {% block append_to_me %}{% endblock %}
  </head>
  <body>
    {% block content %}{% endblock content %}
  </body>
</html>
</code></pre>
<p>I would then use the following template for my views. My views render some wagtail components, and those components might want to use the append_to_me block.</p>
<p>This goes not only for the <em>wagtail blocks</em>, but also for <strong>plain django template tags</strong></p>
<pre class="lang-html prettyprint-override"><code>{% extends "base.html" %}

{% block content %}
  <h2>Content for My App</h2>
  <p>Stuff etc etc.</p>

  {# I want it to not matter where I use this. #}
  {% my_custom_tag %}
{% endblock content %}
</code></pre>
<p>Where <code>{% my_custom_tag %}</code> would do something like this:</p>
<pre class="lang-py prettyprint-override"><code>@register.simple_tag(takes_context=True)
def my_custom_tag(context):
    objects = Preload.objects.all()
    for object in objects:
        append_to_header_block(object.html)
<p>Wagtail block example:</p>
<pre class="lang-py prettyprint-override"><code>class MyBlock(blocks.StructBlock):
    title = blocks.CharBlock()
    content = blocks.RichTextBlock()

    class Meta:
        template = 'myapp/myblock.html'
<p>myapp/myblock.html</p>
<pre class="lang-html prettyprint-override"><code>{% add_to_header IMG "img/my-img.jpeg" %}
...
</code></pre>
<p>I also want to keep the content of previous calls to the <code>add_to_header</code> function, so as not to overwrite the previous content.</p>
<p>I just cannot figure out how I would implement this, because there are a few issues:</p>
<ul>
<li>Order of evaluation, I am pretty sure the content of base.html gets rendered before any of the other templates.
<ul>
<li>Maybe this can be fixed by <em>somehow</em> overwriting the <code>append_to_me</code> block from anywhere; and calling <code>block.super</code> every time? <em>Somehow</em> would be my question.</li>
</ul>
</li>
<li>Wagtail blocks might not even know they are not used in base.html due to their versatility.</li>
</ul>
<p>Any ideas as to how I would implement this?
I am not even sure if this is possible, but I would love to hear your thoughts on this.</p>
| <python><django><django-templates><wagtail><templatetags> | 2023-11-14 21:28:02 | 0 | 1,925 | nigel239 |
77,483,917 | 2,823,719 | Scopes confusion using SMTP to send email using my Gmail account with XOAUTH2 | <p>My application has an existing module I use for sending emails that accesses the SMTP server and authorizes using a user (email address) and password. Now I am trying to use Gmail to do the same using my Gmail account, which, for the sake of argument, we say is booboo@gmail.com (it's actually something different).</p>
<p>First, I created a Gmail application. On the consent screen, which was a bit confusing, I started to add scopes that were either "sensitive" or "restricted". If I wanted to make the application "production" I was told that it had to go through a verification process and I had to produce certain documentation. This was not for me as I, the owner of this account, am only trying to connect to it for the sake of sending emails programmatically. I them created credentials for a desktop application and downloaded it to file <em>credentials.json</em>.</p>
<p>Next I acquired an access token with the following code:</p>
<pre class="lang-py prettyprint-override"><code>from google_auth_oauthlib.flow import InstalledAppFlow
SCOPES = ['https://mail.google.com/']
def get_initial_credentials(*, token_path, credentials_path):
    flow = InstalledAppFlow.from_client_secrets_file(credentials_path, SCOPES)
    creds = flow.run_local_server(port=0)
    with open(token_path, 'w') as f:
        f.write(creds.to_json())

if __name__ == '__main__':
    get_initial_credentials(token_path='token.json', credentials_path='credentials.json')
</code></pre>
<p>A browser window opens up saying that this is not a verified application and I am given a chance to go "back to safety" but I click on the Advanced link and eventually get my token.</p>
<p>I then try to send an email with the following code:</p>
<pre class="lang-py prettyprint-override"><code>import smtplib
from email.mime.text import MIMEText
import base64
import json
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
SCOPES = ['https://www.googleapis.com/auth/gmail.send']
def get_credentials(token_path):
    with open(token_path) as f:
        creds = Credentials.from_authorized_user_info(json.load(f), SCOPES)
    if not creds.valid:
        creds.refresh(Request())
        with open(token_path, 'w') as f:
            f.write(creds.to_json())
    return creds

def generate_OAuth2_string(access_token):
    auth_string = f'user=booboo\1auth=Bearer {access_token}\1\1'
    return base64.b64encode(auth_string.encode('utf-8')).decode('ascii')

message = MIMEText('I need lots of help!', "plain")
message["From"] = 'booboo@gmail.com'
message["To"] = 'booboo@gmail.com'
message["Subject"] = 'Help needed with Gmail'

creds = get_credentials('token.json')
xoauth_string = generate_OAuth2_string(creds.token)

with smtplib.SMTP('smtp.gmail.com', 587) as conn:
    conn.starttls()
    conn.docmd('AUTH', 'XOAUTH2 ' + xoauth_string)
    conn.sendmail('booboo', ['booboo@gmail.com'], message.as_string())
</code></pre>
<p>This works but note that I used a different scope <strong><a href="https://www.googleapis.com/auth/gmail.send" rel="nofollow noreferrer">https://www.googleapis.com/auth/gmail.send</a></strong> instead of the <strong><a href="https://mail.google.com/" rel="nofollow noreferrer">https://mail.google.com/</a></strong> I used to obtain the initial access token.</p>
<p>I then edited the application to add the scope <strong><a href="https://www.googleapis.com/auth/gmail.send" rel="nofollow noreferrer">https://www.googleapis.com/auth/gmail.send</a></strong>. That required me to put the application in testing mode. I did not understand the section to add "test users", that is I had no idea what I could have/should have entered here. I then generated new credentials and a new token as above. Then when I go to send my email, I see (debugging turned on):</p>
<pre class="lang-None prettyprint-override"><code>...
reply: b'535-5.7.8 Username and Password not accepted. Learn more at\r\n'
reply: b'535 5.7.8 https://support.google.com/mail/?p=BadCredentials l19-20020ac84a93000000b0041b016faf7esm2950068qtq.58 - gsmtp\r\n'
reply: retcode (535); Msg: b'5.7.8 Username and Password not accepted. Learn more at\n5.7.8 https://support.google.com/mail/?p=BadCredentials l19-20020ac84a93000000b0041b016faf7esm2950068qtq.58 - gsmtp'
...
</code></pre>
<p>But I never sent up a password, but rather the XOAUTH2 authorization string. I don't know whether this occurred because I hadn't added test users. For what it's worth, I do not believe that this new token had expired yet and therefore it was not refreshed.</p>
<p>I didn't try it, but had I made the application "production", would it have worked? Again, I don't want to have to go through a whole verification process with Gmail. Unfortunately, I don't have a specific question other than I would like to define an application with the more restricted scope and use that, but it seems impossible without going through this verification. Any suggestions?</p>
| <python><oauth-2.0><gmail><google-oauth><smtp-auth> | 2023-11-14 21:24:30 | 1 | 45,536 | Booboo |
77,483,914 | 9,021,875 | How to maintain a pool of processes | <p>I have a project in which I run multiple processes to do multiple jobs, the jobs are distributed via a queue.</p>
<pre><code>queue = multiprocessing.Queue()
process = multiprocessing.spawn(
    run_alg,
    args=(queue,),
    nprocs=process_num,
    join=False,
)
process.join()
</code></pre>
<p>However, it is possible for the processes to stop working or to crash mid-operation.
I want to create a pool of x processes where every time a process is shut down, a new process will be created to take its place until the queue is empty.</p>
<p>Is it possible, how?</p>
| <python><multiprocessing><python-multiprocessing><multiprocessing-manager> | 2023-11-14 21:23:28 | 0 | 1,839 | Yedidya kfir |
77,483,889 | 5,036,928 | PyVista: Getting mesh indices (triangles) | <p>I was originally using <a href="https://github.com/pmneila/PyMCubes" rel="nofollow noreferrer">https://github.com/pmneila/PyMCubes</a> which conveniently outputs the mesh points and indices but have since moved to PyVista since for whatever reason PyMCubes applied some sort of weird transformation that isn't ideal. How can I extract the same information from PyVista?</p>
<p>For a simple sphere:</p>
<pre><code>def sphere(x, y, z):
    scalar = 8 * np.ones(len(x*y*z))
    dist = (x**2 + y**2 + z**2)**(1/2)
    scalar[(dist >= 1**2) | (dist <= 0.75**2) | (z <= 0)] = 0
    return scalar

# create a uniform grid to sample the function with
n = 100
x_min, y_min, z_min = -2, -2, -2
grid = pv.ImageData(
    dimensions=(n, n, n),
    spacing=(abs(x_min) / n * 2, abs(y_min) / n * 2, abs(z_min) / n * 2),
    origin=(x_min, y_min, z_min),
)
x, y, z = grid.points.T
values = sphere(x, y, z)

fig = go.Figure([go.Scatter3d(x=x, y=y, z=z, name='inner', mode='markers', marker=dict(size=values)),])
# fig.show(renderer='browser')

mesh = grid.contour([1], values, method='marching_cubes').smooth()
dist = np.linalg.norm(mesh.points, axis=1)
mesh.plot(scalars=dist, smooth_shading=True, specular=1, opacity=0.3, cmap="plasma", show_scalar_bar=False)

triangulated = mesh.extract_surface()
triangles = triangulated.surface_indices().reshape(-1, 3)
vertices = mesh.points
</code></pre>
<p>The vertices and mesh points I have grabbed definitely do not correspond to each other since plotting them independently of PyVista generates garbage.</p>
| <python><3d><mesh><pyvista> | 2023-11-14 21:16:45 | 0 | 1,195 | Sterling Butters |
77,483,721 | 8,998,172 | Is it possible to continuously substract values from two different columns in a pandas dataframe? | <p>I have two pandas dataframes; one dataframe contains orders, the other one contains items that are in stock.</p>
<pre><code># Sample DF 1 - Orders
# +--------------+------------+---------------+----------------+
# | T-Shirt Size | Ordered By | Shop Location | Order Quantity |
# +--------------+------------+---------------+----------------+
df_1 = pd.DataFrame(
    columns=['size', 'ordered_by', 'shop_location', 'order_quantity'],
    data=[['L', 'Tom', 'London', 1],
          ['M', 'Alice', 'Manchester', 1],
          ['S', 'Alice', 'Manchester', 1],
          ['S', 'Georgia', 'Newcastle', 1],
          ['L', 'Bart', 'Manchester', 3],
          ['M', 'Bob', 'Manchester', 1],
          ['L', 'Toby', 'London', 2]]
)
</code></pre>
<pre><code># Sample DF 2 - Stock
# +--------------+---------------+--------+-------+
# | T-Shirt Size | Shop Location | Stock | Price |
# +--------------+---------------+--------+-------+
df_2 = pd.DataFrame(
    columns=['size', 'shop_location', 'stock', 'price'],
    data=[['S', 'London', '5', '7.99'],
          ['M', 'London', '9', '7.99'],
          ['L', 'London', '3', '8.99'],
          ['XL', 'London', '7', '8.99'],
          ['S', 'Manchester', '2', '7.99'],
          ['M', 'Manchester', '2', '7.99'],
          ['L', 'Manchester', '15', '8.99'],
          ['XL', 'Manchester', '8', '8.99'],
          ['S', 'Newcastle', '2', '7.99'],
          ['M', 'Newcastle', '11', '7.99'],
          ['L', 'Newcastle', '4', '8.99'],
          ['XL', 'Newcastle', '1', '8.99']]
)
</code></pre>
<p>After merging the dataframes, I would like to deduct the order amount from the amount of items that are in stock in a specific shop location in an ongoing fashion.</p>
<p>I'm getting as far as this:</p>
<pre><code>df_merge = pd.merge(
    df_1,
    df_2,
    how='left',
    left_on=['shop_location', 'size'],
    right_on=['shop_location', 'size'],
)

df_merge['stock'] = df_merge['stock'].astype(int) - df_merge['order_quantity'].astype(int)
df_merge
</code></pre>
<pre><code> size ordered_by shop_location order_quantity stock price
0 L Tom London 1 2 8.99
1 M Alice Manchester 1 1 7.99
2 S Alice Manchester 1 1 7.99
3 S Georgia Newcastle 1 1 7.99
4 L Bart Manchester 3 12 8.99
5 M Bob Manchester 1 1 7.99
6 L Toby London 2 1 8.99
</code></pre>
<p>Obviously, this does what it is supposed to and simply deducts the value in the order_quantity column from the value in the stock column.</p>
<p>However, what I would like to achieve is something like this:</p>
<pre><code> size ordered_by shop_location order_quantity stock price
...
1 M Alice Manchester 1 1 7.99
...
5 M Bob Manchester 1 0 7.99
...
</code></pre>
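<p>What is described here is a running balance: subtract the cumulative ordered quantity per (shop_location, size) group, rather than each row's own quantity. A hedged sketch on a cut-down frame (orders assumed sorted chronologically, as in the question):</p>

```python
import pandas as pd

df_merge = pd.DataFrame({
    "shop_location": ["Manchester", "Manchester", "Manchester"],
    "size": ["M", "S", "M"],        # Alice's M, Alice's S, Bob's M
    "order_quantity": [1, 1, 1],
    "stock": [2, 2, 2],             # starting stock joined from df_2
})

# running deduction: cumulative orders per (shop, size), subtracted from stock
running = df_merge.groupby(["shop_location", "size"])["order_quantity"].cumsum()
df_merge["stock"] = df_merge["stock"] - running
print(df_merge["stock"].tolist())  # [1, 1, 0]
```

<p>Applied after the merge above, the same <code>groupby(...).cumsum()</code> should give Alice's Manchester M order a remaining stock of 1 and Bob's a remaining stock of 0, as in the desired output.</p>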
| <python><pandas> | 2023-11-14 20:44:10 | 1 | 1,005 | holger |
77,483,713 | 5,013,143 | How can I change from the interpreter a variable value which is contained in a module of a python project? | <p>Say I have a project with a main.py file like this</p>
<pre><code>from pyFiles.module1 import v
from pyFiles.module1 import myClass1
from pyFiles.module2 import myClass2

def main():
    # main function
    print("start up!")

if __name__ == "__main__":
    main()
<p>a folder "pyFiles" with an <strong>init</strong>.py file and two modules:</p>
<p>module1.py:</p>
<pre><code>x = 2

class myClass1():
    def __init__(self, a, b, c):
        self.a = a + x
        self.b = b
        self.c = c

    def method1(self):
        return self.a + self.b + self.c
</code></pre>
<p>module2.py</p>
<pre><code>from .module1 import myClass1

class myClass2(myClass1):
    def __init__(self, a, b, c):
        super().__init__(a, b, c)
        self.a = 2*a
        self.b = 2*b
        self.c = 2*c

    def method2(self):
        return self.a * self.b * self.b
</code></pre>
<p>The issue I have is with that <code>x</code> in module1.py, which I want the user to be able to change from the interpreter, in a manner like this or similar:</p>
<p>once the user types</p>
<pre><code>x = 12
</code></pre>
<p>then that x is changed for ALL modules, from 2 to 12. How can I do it?</p>
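<p>The short answer is to rebind the attribute on the module object (<code>import pyFiles.module1 as m1; m1.x = 12</code>) rather than doing <code>from pyFiles.module1 import x</code>, which only copies the value into the importer's namespace. A self-contained sketch that fakes the module at runtime (hypothetical stand-in mirroring module1.py) to show the lookup semantics:</p>

```python
import sys
import types

# build a stand-in for pyFiles/module1.py in memory
src = """
x = 2

class MyClass1:
    def __init__(self, a):
        self.a = a + x       # x is read when __init__ runs, not frozen at import

    def current_x(self):
        return x             # looked up in the module's globals on every call
"""
module1 = types.ModuleType("module1")
exec(src, module1.__dict__)
sys.modules["module1"] = module1

import module1  # the same object we registered above

module1.x = 12                   # rebind the module attribute: every reader sees 12
obj = module1.MyClass1(1)
print(obj.a, obj.current_x())    # 13 12
```

<p>The caveat: any module that did <code>from pyFiles.module1 import x</code> holds its own name bound to the old value; such code must read <code>module1.x</code> (or call a getter) to see updates, and objects that copied <code>x</code> into <code>self.a</code> at construction time keep the value from when they were built.</p>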
| <python> | 2023-11-14 20:43:22 | 1 | 7,483 | Stefano Fedele |
77,483,427 | 267,482 | How to implement a optional last arguments for a command with Typer in Python? | <p>How do I achieve the syntax similar to</p>
<pre><code>my_app --named_arg1=val1 -- misc_arg1 misc_arg2 ...
</code></pre>
<p>or the same without <code>--</code>?</p>
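<p>One commonly used route in Typer (treat the exact spelling as an assumption to verify against the Typer docs, since Typer builds on Click) is a command declared with <code>context_settings={"allow_extra_args": True, "ignore_unknown_options": True}</code> plus a <code>ctx: typer.Context</code> parameter; the trailing values then show up in <code>ctx.args</code>. The underlying parsing behaviour, including the <code>--</code> separator, can be sketched with stdlib argparse:</p>

```python
import argparse

parser = argparse.ArgumentParser(prog="my_app")
parser.add_argument("--named_arg1")
parser.add_argument("misc", nargs="*")  # zero or more trailing positionals

# the first "--" ends option parsing; everything after it lands in `misc`
ns = parser.parse_args(["--named_arg1=val1", "--", "misc_arg1", "misc_arg2"])
print(ns.named_arg1, ns.misc)  # val1 ['misc_arg1', 'misc_arg2']
```

<p>The same call without the <code>--</code> separator (<code>["--named_arg1=val1", "misc_arg1", "misc_arg2"]</code>) parses identically, since the bare tokens are collected as positionals either way.</p>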
| <python><typer> | 2023-11-14 19:47:28 | 1 | 18,954 | bobah |
77,483,100 | 3,611,472 | Tensorflow terribly slow on Mac Studio M1 Ultra | <p>I have a Mac Studio M1 Ultra and I am trying to train a simple RNN to forecast time series.</p>
<p>The code is the following</p>
<pre><code>import numpy as np
import keras
import tensorflow as tf

def generate_time_series(batch_size, n_steps):
    freq1, freq2, offsets1, offsets2 = np.random.rand(4, batch_size, 1)
    time = np.linspace(0, 1, n_steps)
    series = 0.5 * np.sin((time - offsets1) * (freq1 * 10 + 10))   # wave 1
    series += 0.2 * np.sin((time - offsets2) * (freq2 * 20 + 20))  # + wave 2
    series += 0.1 * (np.random.rand(batch_size, n_steps) - 0.5)    # + noise
    return series[..., np.newaxis].astype(np.float32)

np.random.seed(42)

n_steps = 50
series = generate_time_series(10000, n_steps + 1)
X_train, y_train = series[:7000, :n_steps], series[:7000, -1]
X_valid, y_valid = series[7000:9000, :n_steps], series[7000:9000, -1]
X_test, y_test = series[9000:, :n_steps], series[9000:, -1]

# Implementing a simple RNN
np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([keras.layers.SimpleRNN(1, input_shape=[None, 1])])
optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=0.005)
model.compile(loss="mse", optimizer=optimizer)
history = model.fit(X_train, y_train, epochs=20,
                    validation_data=(X_valid, y_valid))
</code></pre>
<p>On my Mac M1, I have installed <code>tensorflow</code> 2.13.0 following the instructions <a href="https://developer.apple.com/metal/tensorflow-plugin/" rel="nofollow noreferrer">here</a> and I have <code>keras</code> 2.13.1. Python version is 3.11.</p>
<p>When I run the code, I get the following output during training</p>
<pre><code>2023-11-14 17:55:24.172843: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
Epoch 1/20
219/219 [==============================] - 148s 676ms/step - loss: 0.4310 - val_loss: 0.2155
Epoch 2/20
219/219 [==============================] - 143s 651ms/step - loss: 0.1627 - val_loss: 0.1514
Epoch 3/20
219/219 [==============================] - 142s 649ms/step - loss: 0.1462 - val_loss: 0.1488
Epoch 4/20
219/219 [==============================] - 141s 644ms/step - loss: 0.1474 - val_loss: 0.1475
Epoch 5/20
219/219 [==============================] - 142s 649ms/step - loss: 0.1477 - val_loss: 0.1508
Epoch 6/20
219/219 [==============================] - 142s 649ms/step - loss: 0.1006 - val_loss: 0.0617
...
</code></pre>
<p>which shows that it takes ~140s per epoch.</p>
<p>however, if I run the same code on Google Colaboratory, I get</p>
<pre><code>Epoch 1/20
219/219 [==============================] - 6s 24ms/step - loss: 0.0485 - val_loss: 0.0173
Epoch 2/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0132 - val_loss: 0.0118
Epoch 3/20
219/219 [==============================] - 4s 17ms/step - loss: 0.0119 - val_loss: 0.0113
Epoch 4/20
219/219 [==============================] - 4s 18ms/step - loss: 0.0116 - val_loss: 0.0110
Epoch 5/20
219/219 [==============================] - 5s 23ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 6/20
219/219 [==============================] - 2s 8ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 7/20
219/219 [==============================] - 2s 8ms/step - loss: 0.0114 - val_loss: 0.0109
</code></pre>
<p>which is 100x faster!</p>
<p>Why such a difference?</p>
<p>I have read that TensorFlow has some performance problems on Apple Silicon. Is there any way I can reach such performance?</p>
| <python><tensorflow><keras><apple-m1> | 2023-11-14 18:42:16 | 0 | 443 | apt45 |