| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,435,040
| 1,906,494
|
Add Python to an existing Docker image
|
<p>I have a docker-compose file like this:</p>
<pre><code>version: '3'
services:
namenode:
image: bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8
container_name: namenode
volumes:
- ./hdfs/namenode:/hadoop/dfs/name
environment:
- CLUSTER_NAME=hive
env_file:
- ./hadoop-hive.env
ports:
- "50070:50070"
</code></pre>
<p>but after <code>docker exec -it namenode /bin/bash</code>
I cannot find Python in it.
How do I add Python to these containers?</p>
<p>I tried installing Python after logging into the containers (<code>apt-get install</code>), but it fails to fetch the required files from deb.debian:</p>
<p>Failed to fetch <a href="http://deb.debian.org/debian/pool/main/p/python3.4/python3.4-minimal_3.4.2-1_amd64.deb" rel="nofollow noreferrer">http://deb.debian.org/debian/pool/main/p/python3.4/python3.4-minimal_3.4.2-1_amd64.deb</a> 404 Not Found</p>
<p>Thanks</p>
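<p>A hedged sketch of one common fix: bake Python into a derived image rather than installing inside a running container. The base image is built on an end-of-life Debian release, so its apt sources have moved to the archive; the archive URLs and the package name (<code>python3</code> vs <code>python</code>) are assumptions to verify against the image's actual release.</p>

```dockerfile
# Untested sketch: extend the existing image and repoint apt at the
# Debian archive before installing Python.
FROM bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8
RUN sed -i 's|deb.debian.org|archive.debian.org|g; s|security.debian.org|archive.debian.org|g' /etc/apt/sources.list \
    && apt-get update \
    && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*
```

<p>The compose service can then point at the derived image (for example via <code>build:</code> instead of <code>image:</code>), and the change survives container restarts.</p>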
|
<python><docker>
|
2023-06-08 19:14:01
| 0
| 2,703
|
vijay shanker
|
76,434,958
| 15,524,510
|
Pandas calculate "bars since high"
|
<p>I am trying to calculate a rolling "bars since high" number that resets as new highs are made in Pandas. I can calculate the rolling highs but not the number of rows since that happened.</p>
<p>For example:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([0,1,2,3,10,3,4,5,25],columns=['price'])
df['high'] = df['price'].rolling(window=100000,min_periods=1).max()
</code></pre>
<p>in this case, the desired output would be:</p>
<pre><code>df['barssincehigh'] = [0,0,0,0,0,1,2,3,0]
</code></pre>
<p>But I can't think of a way of calculating number of rows since the most recent high.</p>
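<p>One possible sketch, assuming the window is effectively expanding (as in the example, where <code>window=100000</code> covers the whole frame): mark the rows that tie or set the running high, then count rows within each "since last high" group.</p>

```python
import pandas as pd

df = pd.DataFrame([0, 1, 2, 3, 10, 3, 4, 5, 25], columns=['price'])

# True wherever the price ties or sets the running high
is_high = df['price'] >= df['price'].cummax()

# each new high starts a new group; cumcount() is then "rows since high"
df['barssincehigh'] = df.groupby(is_high.cumsum()).cumcount()

print(df['barssincehigh'].tolist())  # [0, 0, 0, 0, 0, 1, 2, 3, 0]
```

<p>For a genuinely bounded rolling window the grouping trick no longer applies directly, but the same idea works with the index of the rolling <code>idxmax</code>.</p>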
|
<python><pandas><rolling-computation>
|
2023-06-08 19:02:24
| 1
| 363
|
helloimgeorgia
|
76,434,784
| 17,523,352
|
OpenCV: not authorized to capture video - mac python
|
<p>I'm on macOS Monterey 12.5.1, and I try to execute this code in python with IDLE, to capture my camera video:</p>
<pre><code>import cv2
cap = cv2.VideoCapture(0)
</code></pre>
<p>And that returns an error:</p>
<pre class="lang-none prettyprint-override"><code>OpenCV: not authorized to capture video (status 0), requesting...
Process ended with exit code -9.
</code></pre>
<p>I went to System Preferences -> Security and Privacy -> Privacy -> Camera.
I want to enable camera access for IDLE, but the IDLE app doesn't appear in the list of applications, and I can't add any apps.</p>
<p>I don't know how to enable the camera permission for IDLE or how to resolve the problem.</p>
|
<python><macos><opencv>
|
2023-06-08 18:35:10
| 0
| 342
|
RadoTheProgrammer
|
76,434,462
| 727,238
|
Is there a way to enforce StringIO.seek() for non-ascii sequence to operate on bytes instead of characters (the same as for text file)?
|
<p>When implementing a unit test for a function that searches for something in a large text file, I ran into a problem with <code>io.StringIO.seek()</code>. Unlike normal text file streams, it operates on characters rather than bytes, so I'm unable to place the file pointer in the middle of a multi-byte character. Is there any workaround for this behavior?
Is there any elegant way to create a text file stream containing specific contents without creating an actual file?</p>
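<p>One workaround sketch that answers both questions at once: wrap an in-memory <code>io.BytesIO</code> in <code>io.TextIOWrapper</code>. That gives a text stream whose <code>seek()</code>/<code>tell()</code> use the same byte-based opaque cookies as a real text file, without touching the filesystem. As with a real file, seeking into the middle of a multi-byte character surfaces a decode error on the next read.</p>

```python
import io

data = "caf\u00e9 \u2013 text with multi-byte characters"
buffer = io.BytesIO(data.encode("utf-8"))

# TextIOWrapper gives file-like text semantics over in-memory bytes:
# seek()/tell() behave like open(..., "r"), not like StringIO
stream = io.TextIOWrapper(buffer, encoding="utf-8")

pos = stream.tell()       # opaque byte-based position, like a real file
assert stream.read() == data
stream.seek(pos)          # rewinding works with the saved cookie
assert stream.read() == data
```
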
|
<python><python-3.x><unit-testing>
|
2023-06-08 17:45:52
| 0
| 2,071
|
ardabro
|
76,434,382
| 21,420,742
|
How to look X days back from a date to check a column in Python
|
<p>I have two datasets</p>
<p>df1</p>
<pre><code> ID Name StartDate EndDate Team
101 Adam 1/2/2022 3/2/2022 Sales
101 Adam 3/2/2022 7/3/2022 Sales
101 Adam 7/3/2022 12/31/2022 Sales
102 Beth 4/20/2022 6/10/2022 Tech
102 Beth 6/10/2022 8/10/2022 Advisor
102 Beth 8/10/2022 12/31/2022 Advisor
103 Carl 5/20/2022 8/25/2022 Sales
103 Carl 8/25/2022 9/25/2022 HR
103 Carl 9/25/2022 12/31/2022 HR
.....
150 Sue 3/4/2022 6/4/2022 HR
</code></pre>
<p>df2</p>
<pre><code>RequestTeam Manager_ID Person_replaced Request_Date
Tech 107 Beth 7/10/2022
Sales 137 Carl 10/1/2022
.....
Advisor 112 Claire 11/22/2022
</code></pre>
<p>The goal is to look X days back from the <code>Request_Date</code> (I am using 30 days back as an example) and see if the person that has been requested was a part of the new team, using <code>StartDate</code> as the field of reference.</p>
<p>Desired Output</p>
<pre><code>RequestTeam Manager_ID Person_replaced ID Request_Date Team_prior_Request PriorDate
Tech 107 Beth 101 6/10/2022 Tech 5/10/2022
Sales 137 Carl 103 10/1/2022 HR 9/1/2022
</code></pre>
<p>First I merged the datasets together on <code>ID</code> made sure to convert the dates to a datetime for those that weren't already. Then I tried to create a new field that would show dates 30 days back by row. All code is below:</p>
<p><code>df3 = pd.merge(df1, df2, left_on= "ID", right_on= "Manager_ID", how = "left")</code></p>
<p><code>df3["Request_Date"] = pd.to_datetime(df3["Request_Date"])</code></p>
<p><code>df3["days prior"] = df3["Request_Date"] - pd.Timedelta(days = 30)</code></p>
<p><code>df3["Team_prior_Request"] = np.where(((df3["days prior"] <= df3["StartDate"]) & (df3["StartDate"] < df3["Request_Date"])), df3["Team"], df3["RequestTeam"])</code></p>
<p>I hope this is enough information, I am trying something new and trying to explain it right. Thank you in advance.</p>
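<p>A possible sketch, with hypothetical reconstructions of subsets of the two frames. Because the sample <code>Manager_ID</code> values don't line up with <code>ID</code>, this merges on the replaced person's name instead, and keeps the assignment whose <code>StartDate</code>/<code>EndDate</code> interval contains the look-back date. The boundary convention below is one choice that reproduces the desired sample; adjust the comparisons to the actual rules.</p>

```python
import pandas as pd

# hypothetical reconstruction of a subset of df1 and df2
df1 = pd.DataFrame({
    "ID": [102, 102, 102, 103, 103, 103],
    "Name": ["Beth", "Beth", "Beth", "Carl", "Carl", "Carl"],
    "StartDate": ["4/20/2022", "6/10/2022", "8/10/2022",
                  "5/20/2022", "8/25/2022", "9/25/2022"],
    "EndDate": ["6/10/2022", "8/10/2022", "12/31/2022",
                "8/25/2022", "9/25/2022", "12/31/2022"],
    "Team": ["Tech", "Advisor", "Advisor", "Sales", "HR", "HR"],
})
df2 = pd.DataFrame({
    "RequestTeam": ["Tech", "Sales"],
    "Manager_ID": [107, 137],
    "Person_replaced": ["Beth", "Carl"],
    "Request_Date": ["7/10/2022", "10/1/2022"],
})

for col in ("StartDate", "EndDate"):
    df1[col] = pd.to_datetime(df1[col])
df2["Request_Date"] = pd.to_datetime(df2["Request_Date"])
df2["PriorDate"] = df2["Request_Date"] - pd.Timedelta(days=30)

# merge on the person, then keep the assignment that was active
# at the look-back date
m = df2.merge(df1, left_on="Person_replaced", right_on="Name", how="left")
active = (m["StartDate"] < m["PriorDate"]) & (m["PriorDate"] <= m["EndDate"])
out = m.loc[active, ["RequestTeam", "Manager_ID", "Person_replaced", "ID",
                     "Request_Date", "Team", "PriorDate"]]
out = out.rename(columns={"Team": "Team_prior_Request"})
print(out)
```
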
|
<python><python-3.x><pandas><dataframe><datetime>
|
2023-06-08 17:36:51
| 0
| 473
|
Coding_Nubie
|
76,434,311
| 4,075,155
|
How to get the logits of the model with a text classification pipeline from HuggingFace?
|
<p>I need to use <code>pipeline</code> in order to get the tokenization and inference from the <code>distilbert-base-uncased-finetuned-sst-2-english</code> model over my dataset.</p>
<p>My data is a list of sentences, for recreation purposes we can assume it is:</p>
<p><code>texts = ["this is the first sentence", "of my data.", "In fact, thats not true,", "but we are going to assume it", "is"]</code></p>
<p>Before using <code>pipeline</code>, I was getting the logits from the model outputs like this:</p>
<pre><code>with torch.no_grad():
logits = model(**tokenized_test).logits
</code></pre>
<p>Now I have to use pipeline, so this is the way I'm getting the model's output:</p>
<pre><code> selected_model = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(selected_model)
model = AutoModelForSequenceClassification.from_pretrained(selected_model, num_labels=2)
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
print(classifier(texts))
</code></pre>
<p>which gives me:</p>
<p><code>[{'label': 'POSITIVE', 'score': 0.9746173024177551}, {'label': 'NEGATIVE', 'score': 0.5020197629928589}, {'label': 'NEGATIVE', 'score': 0.9995120763778687}, {'label': 'NEGATIVE', 'score': 0.9802979826927185}, {'label': 'POSITIVE', 'score': 0.9274746775627136}]</code></p>
<p>And I can't get the <code>logits</code> field anymore.</p>
<p>Is there a way to get the <code>logits</code> instead of the <code>label</code> and <code>score</code>? Would a custom pipeline be the best and/or easiest way to do it?</p>
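<p>Two hedged options. Text-classification pipelines accept a <code>function_to_apply</code> argument; passing <code>"none"</code> should skip the softmax so the returned <code>score</code> values are raw logits (e.g. <code>classifier(texts, top_k=None, function_to_apply="none")</code>). Alternatively, for a two-class head, the logit difference is recoverable from the reported probability, since softmax gives <code>p1/p0 = exp(z1 - z0)</code>:</p>

```python
import math

def logit_margin(p_winning):
    """Recover z_win - z_other from a 2-class softmax probability."""
    return math.log(p_winning / (1.0 - p_winning))

# sanity check against a known softmax: logits [0, 1] give
# p1 = e / (1 + e), whose margin should be 1.0
p1 = math.e / (1.0 + math.e)
print(round(logit_margin(p1), 6))  # 1.0
```

<p>Note this only yields the difference of logits, not their absolute values, which softmax discards.</p>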
|
<python><huggingface-transformers><sentiment-analysis><huggingface><large-language-model>
|
2023-06-08 17:26:56
| 1
| 2,380
|
Lucas Azevedo
|
76,434,183
| 13,991,234
|
PolyData with nonconvex polygons
|
<p>Just to replicate the error, let's consider the succession of points describing a nonconvex polygon:</p>
<pre><code>[0. , 0. , 0. ],
[1. , 0. , 0. ],
[1. , 0.5, 0. ],
[0.5, 0.5, 0. ],
[0.5, 1. , 0. ],
[1. , 1. , 0. ],
[1. , 1.5, 0. ],
[0. , 1.5, 0. ]
</code></pre>
<p>This data should represent a nonconvex polygon looking like <a href="https://i.sstatic.net/9XGRp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9XGRp.png" alt="c shape polygon" /></a></p>
<p>But when trying to set this up using PyVista's <code>PolyData</code> constructor, as:</p>
<pre><code>import numpy as np
import pyvista
b = 0.5
c = 0.5
points = np.array([
[0, 0, 0],
[1, 0, 0],
[1, b, 0],
[1-b, b, 0],
[1-b, b+c, 0],
[1, b+c, 0],
[1, 2*b+c, 0],
[0, 2*b+c, 0],
])
face = np.concatenate([
    [8],            # the face has 8 vertices
    np.arange(8),   # indices of the 8 points above
])
polygon = pyvista.PolyData(
points,
face)
polygon.plot()
</code></pre>
<p>I somehow get a distorted version of it:</p>
<p><a href="https://i.sstatic.net/o6AJi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o6AJi.png" alt="c shaped polygon with a triangular artifact in the hole of the "c"" /></a></p>
<p>Is there something I'm missing from pyvista documentation?</p>
|
<python><convex><pyvista>
|
2023-06-08 17:03:03
| 1
| 438
|
Franco Milanese
|
76,434,151
| 10,952,140
|
Conditionally modify import statements
|
<p>I have a collection of .py files in the following file structure:</p>
<pre><code>packagename\
mod_1.py
mod_2.py
script_1.py
etc.
</code></pre>
<p>These files do two things:</p>
<ol>
<li>They're bundled into an application (a JMP/JSL add-in, although not relevant here). In this application <code>script_1.py</code> is called, and it contains import statements like <code>import mod_1</code>. <code>mod_1.py</code> in turn imports <code>mod_2.py</code> as <code>import mod_2</code>. This is fine.</li>
<li>The files are also bundled up into a python package called <code>packagename</code>. In this context when I want to import <code>mod_2.py</code> inside <code>mod_1.py</code> I call <code>import packagename.mod_2</code>. This is also fine.</li>
</ol>
<p>I presently implement this in <code>mod_1.py</code>, very painfully, as:</p>
<pre><code>if 'addin' in __file__:
import mod_2
else:
import packagename.mod_2
</code></pre>
<p>This works but I've got lots and lots of modules (more than just mod_1, mod_2). Is there a way to do something like this (pseudocode)?:</p>
<pre><code>if 'addin' in __file__:
import_prefix = ''
else:
import_prefix = 'packagename.'
import import_prefix + mod_2
</code></pre>
<p>This answer seems to cover half of what I'm looking for, but not everything:
<a href="https://stackoverflow.com/questions/12229580/python-importing-a-sub-package-or-sub-module">Python: importing a sub‑package or sub‑module</a></p>
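<p>One common sketch uses <code>importlib.import_module</code>, which takes the module path as a string, so the prefix can be computed once and reused for every module (the names are illustrative):</p>

```python
import importlib

def load(name, prefix=""):
    """Import `name`, optionally under a package prefix."""
    return importlib.import_module(prefix + name)

# computed once per process, reused everywhere:
#   prefix = "" if "addin" in __file__ else "packagename."
#   mod_2 = load("mod_2", prefix)

# stdlib demonstration that the string-based import works
json_mod = load("json")
print(json_mod.__name__)  # json
```

<p>A <code>try: import mod_2 / except ImportError: from packagename import mod_2</code> fallback is another common pattern, and if the add-in itself can be made a package, relative imports (<code>from . import mod_2</code>) may remove the need for a prefix entirely.</p>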
|
<python>
|
2023-06-08 16:59:31
| 1
| 3,680
|
Greg
|
76,433,917
| 889,053
|
What functional paradigm is this
|
<p>I am writing some python code and I am implementing a pattern that I have done repeatedly and in numerous languages. I keep thinking this is some sort of functional paradigm, and that this can be implemented functionally (without use of a <code>for</code> loop), but I am not seeing it. <code>Map</code> takes a series and returns a new series. <code>Reduce</code> takes a series and reduces it to a single result. This is neither of those.</p>
<p>The idea of this function below is to repeatedly apply a transformation to a single string. In this case:</p>
<ol>
<li>Replace all uuids</li>
<li>then replace all number groups between 5-9 digits</li>
<li>There will be more transforms</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import re

def to_template(message):
mappers = [
('[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}', '{.uuid}'),
(' [0-9]{5,9}.', ' {.request_id}.')
]
result = message
for pattern, replacement in mappers:
result = re.sub(pattern, replacement, result)
return result
</code></pre>
<p><strong>The questions:</strong></p>
<ol>
<li>Can this be implemented functionally?</li>
<li>Is there a known functional paradigm for this?</li>
</ol>
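<p>The loop above is exactly a fold: the string is the accumulator and each (pattern, replacement) pair is folded in, composing one endomorphism <code>str -&gt; str</code> per transform. In Python that is <code>functools.reduce</code>, a sketch:</p>

```python
import re
from functools import reduce

MAPPERS = [
    ('[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}', '{.uuid}'),
    (' [0-9]{5,9}.', ' {.request_id}.'),
]

def to_template(message):
    # fold the transforms over the message: the string is the accumulator,
    # one substitution applied per step
    return reduce(lambda acc, m: re.sub(m[0], m[1], acc), MAPPERS, message)

print(to_template("req 123456."))  # req {.request_id}.
```

<p>So the answer to question 2 is: this is a fold (reduce) over a list of functions, equivalently the composition of the transforms applied to the initial string.</p>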
|
<python><functional-programming>
|
2023-06-08 16:22:48
| 1
| 5,751
|
Christian Bongiorno
|
76,433,888
| 6,500,048
|
Error when trying to apply a function to a dataframe
|
<p>I've written a function that takes in a string and a dictionary, then matches the key string to the keys in the dictionary. Fairly standard stuff, but I can't work out how to apply it to a DataFrame. Here is the function I want to apply:</p>
<pre><code>def key_check(d={}, key=''):
new = {}
if not key:
return
if key in d:
new[key] = d[key]
else:
new[key] = ''
return new
</code></pre>
<p>And I'm attempting to apply it like this:</p>
<pre><code>df["centre"] = df["centre"].apply(key_check, axis=1, args=df["name"])
</code></pre>
<p>I'm getting the error below but don't understand what could be ambiguous: the <code>name</code> column is either a string or a null value, and the <code>centre</code> column holds a dictionary. How can I fix this?</p>
<pre><code>
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/series.py:4433, in Series.apply(self, func, convert_dtype, args, **kwargs)
4323 def apply(
4324 self,
4325 func: AggFuncType,
(...)
4328 **kwargs,
4329 ) -> DataFrame | Series:
4330 """
4331 Invoke function on values of Series.
4332
(...)
4431 dtype: float64
4432 """
-> 4433 return SeriesApply(self, func, convert_dtype, args, kwargs).apply()
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/apply.py:1065, in SeriesApply.__init__(self, obj, func, convert_dtype, args, kwargs)
1055 def __init__(
1056 self,
1057 obj: Series,
(...)
1061 kwargs,
1062 ):
1063 self.convert_dtype = convert_dtype
-> 1065 super().__init__(
1066 obj,
1067 func,
1068 raw=False,
1069 result_type=None,
1070 args=args,
1071 kwargs=kwargs,
1072 )
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/apply.py:119, in Apply.__init__(self, obj, func, raw, result_type, args, kwargs)
117 self.obj = obj
118 self.raw = raw
--> 119 self.args = args or ()
120 self.kwargs = kwargs or {}
122 if result_type not in [None, "reduce", "broadcast", "expand"]:
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/generic.py:1527, in NDFrame.__nonzero__(self)
1525 @final
1526 def __nonzero__(self):
-> 1527 raise ValueError(
1528 f"The truth value of a {type(self).__name__} is ambiguous. "
1529 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
1530 )
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
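<p>A hedged sketch of the likely cause: <code>Series.apply</code> has no <code>axis</code> argument, and passing a whole <code>Series</code> as <code>args</code> is what triggers the truth-value check (the traceback shows pandas evaluating <code>args or ()</code>). Operating row-wise over both columns avoids it entirely (the frame below is a hypothetical stand-in):</p>

```python
import pandas as pd

# hypothetical stand-in: `centre` holds dicts, `name` holds lookup keys
df = pd.DataFrame({
    "centre": [{"a": 1, "b": 2}, {"x": 9}],
    "name": ["a", "missing"],
})

def key_check(d, key):
    if not key:
        return None
    return {key: d.get(key, "")}

# row-wise: DataFrame.apply with axis=1 sees both columns at once
df["centre"] = df.apply(lambda row: key_check(row["centre"], row["name"]), axis=1)
print(df["centre"].tolist())  # [{'a': 1}, {'missing': ''}]
```
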
|
<python><dataframe><apply>
|
2023-06-08 16:18:26
| 2
| 1,279
|
iFunction
|
76,433,883
| 4,442,337
|
Pandas read_csv dropping duplicate header rows?
|
<p>I have multiple CSVs in the cloud which I have to download as bytes. Those CSVs all share the same format, so I'm always expecting the same data layout.</p>
<pre><code>itemid timestamp y y_lower
43406 2023-05-29T16:00:00 27.61612174350883 4.7486855702091635
43406 2023-05-29T16:00:00 27.61612174350883 4.7486855702091635
43406 2023-05-29T16:00:00 27.61612174350883 4.7486855702091635
43406 2023-05-29T16:00:00 27.61612174350883 4.7486855702091635
itemid timestamp y y_lower
43406 2023-05-29T16:00:00 27.61612174350883 4.7486855702091635
43406 2023-05-29T16:00:00 27.61612174350883 4.7486855702091635
itemid timestamp y y_lower
43406 2023-05-29T16:00:00 27.61612174350883 4.7486855702091635
43406 2023-05-29T16:00:00 27.61612174350883 4.7486855702091635
43406 2023-05-29T16:00:00 27.61612174350883 4.7486855702091635
43406 2023-05-29T16:00:00 27.61612174350883 4.7486855702091635
</code></pre>
<pre class="lang-py prettyprint-override"><code> dataset_bytes_array, dataset_metadata = download_object_directory_bytes(
dataset_storage.bucket_name, prefix=f'{dataset_storage.object_path}/datasets',
)
dataset_bytes_data = b''.join(dataset_bytes_array)
</code></pre>
<p>After obtaining the final bytes array, I create a Pandas dataframe in the following way:</p>
<pre class="lang-py prettyprint-override"><code> dataset_df = pd.read_csv(
BytesIO(dataset_bytes_data), on_bad_lines='warn', keep_default_na=False, dtype=object,
)
</code></pre>
<p>I thought that <code>on_bad_lines</code> could help me skip the duplicate header rows but this doesn't seem to happen. Is there a very generic way to drop duplicate header rows?</p>
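<p>One generic sketch: after reading with <code>dtype=object</code>, a repeated header is simply a row whose values equal the column names, so it can be masked out. (The synthetic sample below is comma-separated; the real files may use a different separator.)</p>

```python
from io import BytesIO

import pandas as pd

data = (b"itemid,timestamp,y\n"
        b"43406,2023-05-29T16:00:00,27.61\n"
        b"itemid,timestamp,y\n"  # duplicate header from a joined file
        b"43406,2023-05-29T16:00:00,27.61\n")

df = pd.read_csv(BytesIO(data), dtype=object, keep_default_na=False)

# drop any row that is literally a repeat of the header
df = df[~(df == list(df.columns)).all(axis=1)].reset_index(drop=True)
print(len(df))  # 2
```

<p>An alternative is to parse each downloaded object into its own frame and <code>pd.concat</code> them, which sidesteps the joined-bytes headers altogether.</p>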
|
<python><pandas>
|
2023-06-08 16:17:54
| 1
| 2,191
|
browser-bug
|
76,433,838
| 34,934
|
Django: how to get a URL view name from a request?
|
<p>In Django url configs, you can use url "view names" to aid various activities like redirects. Seen here in this example <code>urls.py</code> file (the final <code>name</code> arguments):</p>
<pre class="lang-py prettyprint-override"><code>urlpatterns = [
path('', views.home_view, name='home'),
path('search/', views.search_view, name='search'),
....
</code></pre>
<p>Now, in my Django view function, I would like to inspect the current <code>request</code> object to discover the current url "view name" for the current request. This could be useful to create a general-purpose "handler" function that could be called from multiple views (or better yet to use in a view decorator).</p>
<p>This would allow your general purpose function to redirect to the current page on a POST request for example. But to redirect via <code>redirect()</code> requires knowing the "view name" you want to redirect to.</p>
<p>How can I find the "view name" by inspecting the current <code>request</code>?</p>
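<p>Django exposes this on the request once URL resolution has run: <code>request.resolver_match</code> is a <code>ResolverMatch</code> whose <code>url_name</code> is the <code>name=</code> from <code>urls.py</code> (and <code>view_name</code> includes any namespace). A framework-free sketch of the accessor, with a stand-in request object for illustration:</p>

```python
from types import SimpleNamespace

def current_url_name(request):
    """Return the urls.py `name` for the matched view, or None."""
    match = getattr(request, "resolver_match", None)  # set by Django's URL resolver
    return match.url_name if match else None

# stand-in for an actual HttpRequest, for demonstration only
fake_request = SimpleNamespace(resolver_match=SimpleNamespace(url_name="search"))
print(current_url_name(fake_request))  # search
```

<p>Inside a real view the general-purpose handler can then do <code>redirect(request.resolver_match.view_name)</code> on POST.</p>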
|
<python><django>
|
2023-06-08 16:12:18
| 1
| 11,347
|
Todd Ditchendorf
|
76,433,688
| 73,382
|
Sending data to Azure Event Hub using Synapse Spark
|
<p>I'm working in Synapse Analytics Studio using PySpark; while I was able to read the Event Hub messages, I am not able to produce messages. I get an error when using the save method.</p>
<pre class="lang-py prettyprint-override"><code>### %pip install azure-eventhub
import json
connectionString = "Endpoint=sb://::hidden::"
ehConf = { }
ehConf['eventhubs.connectionString'] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString)
# Create the positions
startingEventPosition = {
"offset": -1,
"seqNo": -1, #not in use
"enqueuedTime": None, #not in use
"isInclusive": True
}
ehConf["eventhubs.startingPosition"] = json.dumps(startingEventPosition)
df = spark.read.format("eventhubs").options(**ehConf).load()
display(df)
</code></pre>
<p>Works... I can see the messages complete with sequenceNumber, body, et al</p>
<p>Now, I want to write to the event hub from a data frame.</p>
<pre class="lang-py prettyprint-override"><code>df1 = spark.read.parquet(silver_path) # confirmed to have data via display(df1)
df1 \
.select(struct(*[c for c in df1.columns]).alias("body")) \
.write \
.format("eventhubs") \
.options(**ehConf) \
.save()
</code></pre>
<p>Results in:</p>
<pre><code>Py4JJavaError: An error occurred while calling o4089.save.
: java.lang.NoSuchMethodError: org.apache.spark.sql.AnalysisException.<init>(Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/Option;)V
at org.apache.spark.sql.eventhubs.EventHubsWriter$.validateQuery(EventHubsWriter.scala:58)
at org.apache.spark.sql.eventhubs.EventHubsWriter$.write(EventHubsWriter.scala:70)
at org.apache.spark.sql.eventhubs.EventHubsSourceProvider.createRelation(EventHubsSourceProvider.scala:124)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:47)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:108)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:111)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:183)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:97)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:108)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:104)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:104)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:88)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:82)
at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:136)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:901)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:415)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:382)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:249)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:750)
</code></pre>
|
<python><azure><apache-spark><pyspark><azure-synapse>
|
2023-06-08 15:56:07
| 1
| 1,583
|
Rob Koch
|
76,433,682
| 5,858,752
|
How to check if a function is called for the first time or a subsequent call?
|
<p>I have a function</p>
<pre><code>def compute_daily_quantities():
# compute some quantities, one of which is the variable, value
</code></pre>
<p>After the first call to <code>compute_daily_quantities</code>, we need to add a constant, <code>c</code> to value.</p>
<p>How can I check if it's the first or a subsequent call to <code>compute_daily_quantities</code>?</p>
<p>I come from a C++ background, and in C++, we can introduce a static variable within the function to check. I know you can do something similar in Python, but are there other ways in which this can be done in Python?</p>
<p>The solution I was envisioning is:</p>
<pre><code>def compute_daily_quantities():
# compute some quantities, one of which is the variable, value
if not hasattr(compute_daily_quantities, "is_first_call"):
compute_daily_quantities.is_first_call = True
else:
value += c
compute_daily_quantities.is_first_call = False
</code></pre>
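<p>Besides the function attribute shown above, common Python alternatives include a closure over a counter, a default mutable argument, or a class with <code>__call__</code>. A closure sketch (the computed <code>value</code> and the constant <code>c</code> are placeholders for the real quantities):</p>

```python
import itertools

def make_compute_daily_quantities(c):
    calls = itertools.count()

    def compute_daily_quantities():
        value = 100  # placeholder for the real computed quantity
        if next(calls) > 0:  # every call after the first
            value += c
        return value

    return compute_daily_quantities

compute_daily_quantities = make_compute_daily_quantities(c=5)
print([compute_daily_quantities() for _ in range(3)])  # [100, 105, 105]
```

<p>The closure keeps the call state out of the module namespace, which also makes the function easy to reset in tests: just call the factory again.</p>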
|
<python><static-variables>
|
2023-06-08 15:55:00
| 2
| 699
|
h8n2
|
76,433,662
| 3,611,472
|
Python GET request not matching CURL output
|
<p>I am working with the SAXO OpenAPI. I need to authenticate using some non-confidential credentials as a first step of the authentication flow.</p>
<p>The following CURL request works fine and returns the response that I expect</p>
<pre><code>curl -X GET -H “Content-Type: application/x-www-form-urlencoded” "https://sim.logonvalidation.net/authorize?response_type=code&client_id=37656c896c424f8cab64774e603bc927&state=y90dsygas98dygoidsahf8sa&redirect_uri=http%3A%2F%2Flocalhost%2FmyAppTest"
</code></pre>
<p>However, when I try to do the same request in Python, I get a status code 400 signaling an error. This is my Python code:</p>
<pre><code>import requests
def authenticate(config: dict = None):
if config is None:
return False
endpoint = config['AuthorizationEndpoint']
head = {'Content-Type': 'application/x-www-form-urlencoded'}
data = {'response_type': 'code',
'client_id': config['client_id'],
'state': 'y90dsygas98dygoidsahf8sa',
'redirect_uri': config['url']}
response = requests.request('GET', url=endpoint, data=data, headers=head)
print(response)
config = {'AuthorizationEndpoint': 'https://sim.logonvalidation.net/authorize',
'client_id': '37656c896c424f8cab64774e603bc927',
'url': 'http://localhost/myAppTest'}
authenticate(config)
</code></pre>
<p>I thought that the python code should have done the same request as in CURL, but apparently something is different. Can you tell me in what they differ?</p>
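<p>The difference: in the curl command the fields live in the URL's query string, while <code>requests.request('GET', ..., data=...)</code> sends them as a request body, which the endpoint rejects. Passing them as <code>params=</code> instead (i.e. <code>requests.get(endpoint, params=data, headers=head)</code>) should reproduce the curl URL. A stdlib sketch showing the encoding curl was using:</p>

```python
from urllib.parse import urlencode

params = {
    "response_type": "code",
    "client_id": "37656c896c424f8cab64774e603bc927",
    "state": "y90dsygas98dygoidsahf8sa",
    "redirect_uri": "http://localhost/myAppTest",
}

# what `params=` produces: the same query string as the curl invocation
url = "https://sim.logonvalidation.net/authorize?" + urlencode(params)
print(url)
```
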
|
<python><curl><python-requests>
|
2023-06-08 15:52:15
| 2
| 443
|
apt45
|
76,433,627
| 16,319,191
|
For loop to select two cols at a time using index in python
|
<p>Select 1st and 2nd, 1st and 3rd, 1st and 4th cols so on in a pandas df... until the last one.
Essentially I need to loop over 2 in the following command</p>
<pre><code>newdf = df.iloc[:,[1,2]] # loop over the column index starting at 2 ending at the last col
</code></pre>
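<p>A sketch pairing a fixed column with each later one (assuming the fixed column sits at position 1, as in the snippet; use 0 if the very first column is meant):</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [1], "b": [2], "c": [3], "d": [4]})

# pair column 1 with columns 2, 3, ..., last, as in the snippet
pairs = [df.iloc[:, [1, j]] for j in range(2, df.shape[1])]
print([list(p.columns) for p in pairs])  # [['b', 'c'], ['b', 'd']]
```
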
|
<python><pandas><for-loop>
|
2023-06-08 15:48:29
| 1
| 392
|
AAA
|
76,433,097
| 11,329,736
|
Snakemake wrappers suddenly stopped working
|
<p>I have this wrapper in my <code>snakemake</code> file:</p>
<pre><code>rule fastqc:
input:
"reads/{sample}_trimmed.fq.gz"
output:
html="qc/fastqc/{sample}.html",
zip="qc/fastqc/{sample}_fastqc.zip" # the suffix _fastqc.zip is necessary for multiqc to find the file
params:
extra = "--quiet"
log:
"logs/fastqc/{sample}.log"
threads: config["resources"]["fastqc"]["cpu"]
conda:
"envs/qc.yaml"
wrapper:
"v1.31.1/bio/fastqc"
</code></pre>
<p>qc.yaml:</p>
<pre><code>name: qc
channels:
- bioconda
dependencies:
- python
- fastqc
- multiqc
</code></pre>
<p>This normally works, but suddenly it stopped working (I have not changed the code) and I get this error:</p>
<pre><code>[Thu Jun 8 15:34:53 2023]
rule fastqc:
input: reads/S15_trimmed.fq.gz
output: qc/fastqc/S15.html, qc/fastqc/S15_fastqc.zip
log: logs/fastqc/S15.log
jobid: 2
reason: Missing output files: qc/fastqc/S15_fastqc.zip
wildcards: sample=S15
threads: 4
resources: tmpdir=/tmp
[Thu Jun 8 15:34:53 2023]
rule fastqc:
input: reads/L8_trimmed.fq.gz
output: qc/fastqc/L8.html, qc/fastqc/L8_fastqc.zip
log: logs/fastqc/L8.log
jobid: 6
reason: Missing output files: qc/fastqc/L8_fastqc.zip
wildcards: sample=L8
threads: 4
resources: tmpdir=/tmp
[Thu Jun 8 15:34:53 2023]
rule fastqc:
input: reads/S8_trimmed.fq.gz
output: qc/fastqc/S8.html, qc/fastqc/S8_fastqc.zip
log: logs/fastqc/S8.log
jobid: 4
reason: Missing output files: qc/fastqc/S8_fastqc.zip
wildcards: sample=S8
threads: 4
resources: tmpdir=/tmp
python -c "from __future__ import print_function; import sys, json; print(json.dumps([sys.version_info.major, sys.version_info.minor]))"
Activating conda environment: .snakemake/conda/32ae7e363cfd65f035e232e794d5bc2b_
python -c "from __future__ import print_function; import sys, json; print(json.dumps([sys.version_info.major, sys.version_info.minor]))"
Activating conda environment: .snakemake/conda/32ae7e363cfd65f035e232e794d5bc2b_
python -c "from __future__ import print_function; import sys, json; print(json.dumps([sys.version_info.major, sys.version_info.minor]))"
Activating conda environment: .snakemake/conda/32ae7e363cfd65f035e232e794d5bc2b_
Environment defines Python version < 3.7. Using Python of the main process to execute script. Note that this cannot be avoided, because the script uses data structures from Snakemake which are Python >=3.7 only.
Environment defines Python version < 3.7. Using Python of the main process to execute script. Note that this cannot be avoided, because the script uses data structures from Snakemake which are Python >=3.7 only.
/home/user/mambaforge/envs/snakemake/bin/python3.11 /mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/scripts/tmpx1titff7.wrapper.py
/home/user/mambaforge/envs/snakemake/bin/python3.11 /mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/scripts/tmpw1yrfmqi.wrapper.py
Activating conda environment: .snakemake/conda/32ae7e363cfd65f035e232e794d5bc2b_
Activating conda environment: .snakemake/conda/32ae7e363cfd65f035e232e794d5bc2b_
Environment defines Python version < 3.7. Using Python of the main process to execute script. Note that this cannot be avoided, because the script uses data structures from Snakemake which are Python >=3.7 only.
/home/user/mambaforge/envs/snakemake/bin/python3.11 /mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/scripts/tmpnh80ovk0.wrapper.py
Activating conda environment: .snakemake/conda/32ae7e363cfd65f035e232e794d5bc2b_
Traceback (most recent call last):
File "/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/scripts/tmpx1titff7.wrapper.py", line 17, in <module>
from snakemake_wrapper_utils.snakemake import get_mem
ModuleNotFoundError: No module named 'snakemake_wrapper_utils'
Traceback (most recent call last):
File "/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/scripts/tmpw1yrfmqi.wrapper.py", line 17, in <module>
from snakemake_wrapper_utils.snakemake import get_mem
ModuleNotFoundError: No module named 'snakemake_wrapper_utils'
Traceback (most recent call last):
File "/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/scripts/tmpnh80ovk0.wrapper.py", line 17, in <module>
from snakemake_wrapper_utils.snakemake import get_mem
ModuleNotFoundError: No module named 'snakemake_wrapper_utils'
[Thu Jun 8 15:34:54 2023]
Error in rule fastqc:
jobid: 4
input: reads/S8_trimmed.fq.gz
output: qc/fastqc/S8.html, qc/fastqc/S8_fastqc.zip
log: logs/fastqc/S8.log (check log file(s) for error details)
conda-env: /mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/32ae7e363cfd65f035e232e794d5bc2b_
[Thu Jun 8 15:34:54 2023]
Error in rule fastqc:
jobid: 2
input: reads/S15_trimmed.fq.gz
output: qc/fastqc/S15.html, qc/fastqc/S15_fastqc.zip
log: logs/fastqc/S15.log (check log file(s) for error details)
conda-env: /mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/32ae7e363cfd65f035e232e794d5bc2b_
[Thu Jun 8 15:34:54 2023]
Error in rule fastqc:
jobid: 6
input: reads/L8_trimmed.fq.gz
output: qc/fastqc/L8.html, qc/fastqc/L8_fastqc.zip
log: logs/fastqc/L8.log (check log file(s) for error details)
conda-env: /mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/32ae7e363cfd65f035e232e794d5bc2b_
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
</code></pre>
<p>There suddenly seems to be an unavoidable Python version conflict. How can I solve this?</p>
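<p>One plausible fix, inferred from the traceback: the wrapper now imports <code>snakemake_wrapper_utils</code>, and the unpinned <code>python</code> in <code>qc.yaml</code> resolved to an old interpreter. A sketch of the adjusted environment, adding the missing package and pinning Python (the channel list is an assumption; recreate the environment after editing):</p>

```yaml
name: qc
channels:
  - conda-forge
  - bioconda
dependencies:
  - python >=3.7
  - fastqc
  - multiqc
  - snakemake-wrapper-utils
```
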
|
<python><snakemake>
|
2023-06-08 14:49:31
| 1
| 1,095
|
justinian482
|
76,433,042
| 8,261,345
|
SQLAlchemy: get values of column where the only value for another column is X without using DISTINCT ON?
|
<p>I have a table model:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Data(Base):
    __tablename__ = "data"  # required by the declarative mapping
    id = Column(Integer, primary_key=True)
    stock = Column(String)
    customer = Column(String)
</code></pre>
<p>I want to return the distinct values of <code>stock</code> for which the only value in <code>customer</code> is <code>"Alice"</code>. For example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>stock</th>
<th>customer</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>Alice</td>
</tr>
<tr>
<td>A</td>
<td>Alice</td>
</tr>
<tr>
<td>A</td>
<td>Bob</td>
</tr>
<tr>
<td>B</td>
<td>Alice</td>
</tr>
<tr>
<td>C</td>
<td>Bob</td>
</tr>
</tbody>
</table>
</div>
<p>In this case, I want the result to be only <code>["B"]</code>.</p>
<p>This is fairly straightforward with Postgres:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
engine = create_engine('connection_string')
Session = sessionmaker(bind=engine)
session = Session()
result = session.query(Data.stock).distinct(Data.stock).filter(
Data.customer == "Alice"
).group_by(Data.stock).having(
~Data.stock.in_(session.query(Data.stock).filter(Data.customer != "Alice"))
)
print([r for r in result])
</code></pre>
<p>This works, but makes use of Postgres' DISTINCT ON, and I want my query to be database-agnostic (MySQL, SQLite etc.). I'm not sure how to adapt it to be so. In fact, when running this with SQLite, I get the warning:</p>
<pre><code>SADeprecationWarning: DISTINCT ON is currently supported only by the PostgreSQL dialect. Use of DISTINCT ON for other backends is currently silently ignored, however this usage is deprecated, and will raise CompileError in a future release for all backends that do not support this syntax.
</code></pre>
<p>How can I perform this query in SQLAlchemy and be database-agnostic?</p>
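<p>For reference, a sketch of a dialect-agnostic formulation in plain SQL: a conditional aggregate in <code>HAVING</code> replaces <code>DISTINCT ON</code>. It is run here against an in-memory SQLite database with the sample rows above; the table and column names mirror the model:</p>

```python
import sqlite3

# Build the sample table from the question in an in-memory SQLite database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (stock TEXT, customer TEXT)")
conn.executemany(
    "INSERT INTO data VALUES (?, ?)",
    [("A", "Alice"), ("A", "Alice"), ("A", "Bob"),
     ("B", "Alice"), ("C", "Bob")],
)

# GROUP BY + HAVING with a conditional count: keep stocks where no row
# has a customer other than 'Alice'. This uses only standard SQL.
rows = conn.execute(
    "SELECT stock FROM data "
    "GROUP BY stock "
    "HAVING SUM(CASE WHEN customer != 'Alice' THEN 1 ELSE 0 END) = 0"
).fetchall()
print([r[0] for r in rows])  # ['B']
```

The same shape can be expressed in SQLAlchemy with <code>func.sum(case(...))</code> inside <code>.having()</code>, avoiding any dialect-specific clause.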
|
<python><postgresql><sqlite><sqlalchemy>
|
2023-06-08 14:44:26
| 1
| 694
|
Student
|
76,432,971
| 3,650,477
|
How can I use LangChain Callbacks to log the model calls and answers into a variable
|
<p>I'm using <a href="https://python.langchain.com/en/latest/index.html" rel="nofollow noreferrer">LangChain</a> to build an NL application. I want the interactions with the LLM to be recorded in a variable I can use for logging and debugging purposes. I have created a very simple chain:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Dict
from langchain import PromptTemplate
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")
handler = MyCustomHandler()
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)
</code></pre>
<p>To record what's going on, I have created a custom <a href="https://python.langchain.com/en/latest/modules/callbacks/getting_started.html#creating-a-custom-handler" rel="nofollow noreferrer">CallbackHandler</a>:</p>
<pre class="lang-py prettyprint-override"><code>class MyCustomHandler(BaseCallbackHandler):
def on_text(self, text: str, **kwargs: Any) -> Any:
print(f"Text: {text}")
self.log = text
def on_chain_start(
self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
) -> Any:
"""Run when chain starts running."""
print("Chain started running")
</code></pre>
<p>This works more or less as expected, but it has some side effects whose origin I cannot figure out. The output is:</p>
<p><a href="https://i.sstatic.net/EgrL2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EgrL2.png" alt="enter image description here" /></a></p>
<p>And the <code>handler.log</code> variable contains:</p>
<p><code>'Prompt after formatting:\n\x1b[32;1m\x1b[1;3m1 + 2 = \x1b[0m'</code></p>
<p>Where are the "Prompt after formatting" and the ANSI codes setting the text as green coming from? Can I get rid of them?</p>
<p>Overall, is there a better way I'm missing to use the callback system to log the application? This seems to be poorly documented.</p>
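<p>Whatever their origin, the escape sequences can be stripped from the captured string before logging; a minimal standard-library sketch (the sample string is the one captured above):</p>

```python
import re

# Regex matching ANSI CSI style codes such as \x1b[32;1m or \x1b[0m
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI color/style codes from a captured log string."""
    return ANSI_RE.sub("", text)

captured = "Prompt after formatting:\n\x1b[32;1m\x1b[1;3m1 + 2 = \x1b[0m"
print(strip_ansi(captured))  # Prompt after formatting:
                             # 1 + 2 =
```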
|
<python><nlp><openai-api><langchain>
|
2023-06-08 14:36:58
| 1
| 2,729
|
Pythonist
|
76,432,915
| 2,523,899
|
How can I install Kallithea on Windows Server 2022 with Python 3.11?
|
<p>I installed the current Python (3.11).
Then I ran <code>pip install pywin32 --upgrade</code>, created the folders, ran <code>python -m venv ...</code>, called <code>activate</code> in the Scripts folder, and ran <code>pip install --upgrade pip setuptools</code>.
Finally, <code>pip install kallithea</code>. Waiting full of anticipation...</p>
<p><strong>ImportError: Python version not supported</strong></p>
<p>Is it still possible to get Kallithea running on Windows (how?) or can you actually consider it dead now and have to use something else?</p>
<p>Adam Jenča is right - Kallithea uses an outdated FormEncode:</p>
<blockquote>
<p>Collecting FormEncode<1.4,>=1.3.1 (from kallithea)
Using cached FormEncode-1.3.1.tar.gz (197 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error</p>
</blockquote>
|
<python><kallithea>
|
2023-06-08 14:30:28
| 0
| 907
|
The incredible Jan
|
76,432,752
| 9,795,817
|
Tuning while loops in pyspark (persisting or caching dataframes in a loop)
|
<p>I am writing a PySpark implementation of an algorithm that is iterative in nature. Part of the algorithm involves iterating a strategy until no more improvements can be made (i.e., a local maximum has been greedily reached).</p>
<p>The function <code>optimize</code> returns a three-column dataframe that looks as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>current_value</th>
<th>best_value</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>This function is used in a while loop until <code>current_value</code> and <code>best_value</code> are identical (meaning that no more optimizations can be made).</p>
<pre class="lang-py prettyprint-override"><code># Init while loop
iterate = True
# Start iterating until optimization yields same result as before
while iterate:
# Create (or overwrite) `df`
df = optimizeAll(df2) # Uses `df2` as input
df.persist().count()
# Check stopping condition
iterate = df.where('current_value != best_value').count() > 0
# Update `df2` with latest results
if iterate:
df2 = df2.join(other=df, on='id', how='left') # <- Should I persist this?
</code></pre>
<p>This function runs very quickly when I pass it the inputs manually. However, I have noticed that the time it takes for the function to run increases exponentially as it iterates. That is, the first iteration runs in milliseconds, the second one in seconds and eventually it takes up to 10 minutes per pass.</p>
<p><a href="https://stackoverflow.com/questions/65236290/cacheing-and-loops-in-pyspark">This question</a> suggests that if <code>df</code> isn't cached, the while loop will start running from scratch on every iteration. Is this true?</p>
<p>If so, which objects should I persist? I know that persisting <code>df</code> will be triggered by the <code>count</code> when defining <code>iterate</code>. However, <code>df2</code> has no action, so even if I persist it, will it make the while loop start from scratch every time? Likewise, should I unpersist either table at some point in the loop?</p>
|
<python><apache-spark><pyspark><caching><while-loop>
|
2023-06-08 14:11:39
| 1
| 6,421
|
Arturo Sbr
|
76,432,633
| 14,777,704
|
How to plot bar graph with two column values parallely in a single figure using plotly?
|
<p>I have a pandas dataframe which shows in each shopping mall, how much the males spend money and how much the females spend money. Its like this -</p>
<pre><code> ShoppingMall MaleSpends FemaleSpends
0 XX 5600.20 4500.70
1 YY 9000.00 100000.00
2 zz 7809.45 5600.89
</code></pre>
<p>In one graph I have to plot the malespends and femalespends.</p>
<p>I can plot the graphs separately in plotly -</p>
<pre><code>import plotly.express as px
fig1=px.bar(data,x='ShoppingMall',y='MaleSpends',barmode='group')
fig2=px.bar(data,x='ShoppingMall',y='FemaleSpends',barmode='group')
</code></pre>
<p>But I need to plot the money spent by males and females in one figure.
A screenshot of the desired result is shown here.<a href="https://i.sstatic.net/giqYq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/giqYq.png" alt="enter image description here" /></a></p>
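<p>One common pattern (shown here as a sketch, not the only way) is to reshape the frame to long format with <code>melt</code>, so that a single <code>px.bar</code> call with <code>color</code> draws both series side by side; the plotly call is left commented so the reshaping alone illustrates the idea:</p>

```python
import pandas as pd

data = pd.DataFrame({
    'ShoppingMall': ['XX', 'YY', 'zz'],
    'MaleSpends': [5600.20, 9000.00, 7809.45],
    'FemaleSpends': [4500.70, 100000.00, 5600.89],
})

# Reshape to long format: one row per (mall, gender) pair
long = data.melt(id_vars='ShoppingMall',
                 value_vars=['MaleSpends', 'FemaleSpends'],
                 var_name='Gender', value_name='Spend')

# import plotly.express as px
# fig = px.bar(long, x='ShoppingMall', y='Spend', color='Gender', barmode='group')
print(long)
```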
|
<python><plotly><plotly-dash>
|
2023-06-08 13:58:04
| 2
| 375
|
MVKXXX
|
76,432,451
| 14,947,895
|
Changing compression parameter in python-blosc2
|
<p>I wanted to test out <a href="https://www.blosc.org/python-blosc2/python-blosc2.html" rel="nofollow noreferrer">python-blosc2</a>.</p>
<p>However, when trying to compress data with a user-defined Filter, I stumbled across an error I cannot explain.</p>
<pre class="lang-py prettyprint-override"><code>
import blosc2
import numpy as np
a = np.random.rand(1000, 1000)
blosc2.compress(a, codec='blosclz', clevel=5, filter=blosc2.Filter.SHUFFLE)
</code></pre>
<p>I receive an <strong><code>AttributeError: 'str' object has no attribute 'name'</code></strong>.</p>
<p>As the documentation says, one should pass the <code>enum blosc2.Filter</code> as the argument. However, I tried multiple ways (but received the same error), including:</p>
<pre><code>blosc2.compress(a, codec='blosclz', clevel=5, filter=blosc2.Filter(0))
</code></pre>
<hr />
<p>I missed using the enum objects instead of strings, as also pointed out in the <a href="https://www.blosc.org/python-blosc2/reference/autofiles/top_level/blosc2.compress.html" rel="nofollow noreferrer">documentation</a>.</p>
|
<python><compression><codec><blosc>
|
2023-06-08 13:38:59
| 1
| 496
|
Helmut
|
76,431,589
| 264,136
|
find value or NA if the key does not exist in dict of dicts
|
<pre><code> results = {
"IPSEC-IMIX": {
"throughput": 5974,
"kpps": 2114,
"rate": 31.56,
"qfp": 86,
"label": "BLD_V179_THROTTLE_LATEST_20230513_170843"
},
"IPSEC_QOS_DPI_FNF-IMIX": {
"throughput": 5913,
"kpps": 2065,
"rate": 31.223,
"qfp": 90,
"label": "BLD_V179_THROTTLE_LATEST_20230513_170843"
},
"IPSEC_QOS_DPI_FNF_SNAT_ZBFW-IMIX": {
"throughput": 2640,
"kpps": 922,
"rate": 13.942,
"qfp": 85,
"label": "BLD_V179_THROTTLE_LATEST_20230513_170843"
},
"IPSEC_QOS_DPI_FNF_TNAT-IMIX": {
"throughput": 5884,
"kpps": 2055,
"rate": 31.067,
"qfp": 95,
"label": "BLD_V179_THROTTLE_LATEST_20230513_170843"
},
"IPSEC_MCAST-IMIX": {
"throughput": 3330,
"kpps": 1163,
"rate": 35.164,
"qfp": 64,
"label": "BLD_V179_THROTTLE_LATEST_20230513_170843"
},
"IPSEC_QOS_DPI_FNF_MCAST-IMIX": {
"throughput": 3407,
"kpps": 1190,
"rate": 36.002,
"qfp": 87,
"label": "BLD_V179_THROTTLE_LATEST_20230513_170843"
},
"IPSECV6-IMIX": {
"throughput": 5673,
"kpps": 1919,
"rate": 29.904,
"qfp": 25,
"label": "BLD_V179_THROTTLE_LATEST_20230513_170843"
},
"IPSECV6_QOSV6_DPI_FNF-IMIX": {
"throughput": 5768,
"kpps": 1951,
"rate": 30.403,
"qfp": 38,
"label": "BLD_V179_THROTTLE_LATEST_20230513_170843"
}
}
</code></pre>
<p>I have a string <code>the_feature</code> which maps to the keys in the above dict (example <code>IPSEC-IMIX</code>) and I want to get either the <code>throughput</code> or "NA" if the feature does not exist.</p>
<p>I tried the below, but it does not work when a nonexistent <code>the_feature</code> is used.</p>
<pre><code>answer = results[the_feature]["throughput"] if results[the_feature]["throughput"] else "NA"
Traceback (most recent call last):
File "C:\code\api_server_imix.py", line 1113, in get_results
item["release_speed"] = release_obj.results[the_feature].get("throughput", "NA")
value = super().__getitem__(key)
^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'IPSEC-1400'
</code></pre>
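<p>For illustration, a lookup that tolerates a missing outer key can chain <code>.get()</code> calls with an empty-dict default, so neither level raises <code>KeyError</code> (shown with a trimmed-down version of the dict above):</p>

```python
results = {
    "IPSEC-IMIX": {"throughput": 5974, "kpps": 2114},
}

def throughput_or_na(results: dict, feature: str):
    # .get(feature, {}) yields an empty dict for unknown features,
    # so the second .get falls back to "NA" instead of raising KeyError
    return results.get(feature, {}).get("throughput", "NA")

print(throughput_or_na(results, "IPSEC-IMIX"))   # 5974
print(throughput_or_na(results, "IPSEC-1400"))   # NA
```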
|
<python>
|
2023-06-08 11:53:30
| 4
| 5,538
|
Akshay J
|
76,431,358
| 20,920,790
|
How to correct overlapped annotation for bar plots in for-loop
|
<p>I'm trying to make a bar plot with many bars.
I want to make the code easier to reuse with various data.
I made a 'for' loop to iterate over the columns in the dataframe; the bars look fine, but some annotations overlap.
I think I need to correct this loop:</p>
<blockquote>
<p>for (month, value), rect in zip(labels[l].items(), other_bars):</p>
</blockquote>
<p>Here's my code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.patheffects import Normal, Stroke
import numpy as np
data = {
'all_ses': [3, 3, 7, 19, 27, 37, 59, 71, 101, 119, 161, 192, 223, 335, 466, 593, 675, 935, 1356, 1142],
'canceled_ptc': [
0.0, 0.0, 28.57, 21.05, 14.81, 18.92, 10.17, 12.68, 10.89, 20.17, 18.63,
17.19, 15.25, 17.91, 16.09, 18.21, 13.33, 17.11, 15.93, 16.37
],
'month_dt': [
'2021-02-01', '2021-03-01', '2021-04-01', '2021-05-01', '2021-06-01',
'2021-07-01', '2021-08-01', '2021-09-01', '2021-10-01', '2021-11-01',
'2021-12-01', '2022-01-01', '2022-02-01', '2022-03-01', '2022-04-01',
'2022-05-01', '2022-06-01', '2022-07-01', '2022-08-01', '2022-09-01'
],
'ses_canceled': [0, 0, 2, 4, 4, 7, 6, 9, 11, 24, 30, 33, 34, 60, 75, 108, 90, 160, 216, 187],
'ses_finished': [3, 3, 5, 15, 23, 30, 53, 62, 90, 95, 131, 159, 189, 275, 391, 485, 585, 775, 1140, 955]
}
df7_2 = (
pd.DataFrame(data).astype({'month_dt': 'datetime64[ns]'})
.set_index('month_dt')
)
# quantiles
quantiles = df7_2.quantile([0, 0.25, 0.5, 0.75, 1], numeric_only=True, interpolation='nearest')
quantiles = pd.concat([quantiles, df7_2[-1:]])
# quantiles filter
to_label = df7_2.apply(lambda s: s.isin(quantiles[s.name]))
labels = df7_2.where(to_label)
# set axis
fig, bar_ax = plt.subplots(figsize=(14, 6))
bars_columns_list = list(df7_2[['ses_finished', 'ses_canceled', 'all_ses']].columns)
df7_2_bars = df7_2[bars_columns_list]
bars_arr = list(np.array(range(0, len(bars_columns_list))))
bars_colors = ['#3049BF', '#BF9530', 'grey']
for i, l, c in zip(list(np.array(bars_arr)), bars_columns_list, bars_colors):
if l == bars_columns_list[0]:
# first bar
first_bar = bar_ax.bar(
x=df7_2_bars.index,
height=df7_2_bars[bars_columns_list[i]],
label=f'{l}',
linewidth=0,
width=20,
color=c
)
bar_labels = {}
for (month, value), rect in zip(labels[l].items(), first_bar):
if pd.isna(value):
continue
center_x, _ = rect.get_center()
bar_labels[month] = bar_ax.annotate(
f'{value:.0f}',
(center_x, rect.get_y() + rect.get_height()),
textcoords='offset points',
xytext=(0, 1),
ha='center',
color=rect.get_facecolor(),
path_effects=[Stroke(linewidth=0, foreground='white'), Normal()]
)
fig.canvas.draw_idle()
else:
# another bars
other_bars = bar_ax.bar(
x=df7_2_bars.index,
height=df7_2_bars[bars_columns_list[i]],
bottom=df7_2_bars[bars_columns_list[0:i]].sum(axis=1),
label=f'{l}',
linewidth=0,
width=20,
color=c
)
# THIS LOOP MUST BE CORRECTED
for (month, value), rect in zip(labels[l].items(), other_bars):
if pd.isna(value):
continue
center_x, _ = rect.get_center()
existing_label = bar_labels.get(month)
if existing_label:
label_top = (
bar_ax.transData.inverted()
.transform(existing_label.get_window_extent())
.flat[-1]
)
else:
label_top = 0
bar_y = max(rect.get_y() + rect.get_height(), label_top)
bar_labels[month] = bar_ax.annotate(
f'{value:.0f}',
(center_x, bar_y),
textcoords='offset points',
xytext=(0, 2),
ha='center',
color=rect.get_facecolor(),
path_effects=[Stroke(linewidth=0, foreground='gainsboro'), Normal()]
)
bar_ax.grid(False)
bar_ax.legend()
plt.show()
</code></pre>
<p>I marked what I need to correct:
<a href="https://i.sstatic.net/f7Pzy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f7Pzy.png" alt="enter image description here" /></a></p>
<p>P.S. If you can point me to useful articles or guides, that would be great.</p>
|
<python><matplotlib><stacked-bar-chart><plot-annotations>
|
2023-06-08 11:24:57
| 1
| 402
|
John Doe
|
76,431,180
| 20,220,485
|
How do I conditionally group rows of a dataframe?
|
<p>In column <code>2</code> of <code>df</code>, there are three possible values: <code>X</code>, <code>Y</code>, <code>Z</code>. I want to group rows by the value <code>X</code> along with any trailing <code>Y</code> values in the rows directly following <code>X</code>. I am not interested in preserving the <code>Z</code> values in the groups.</p>
<p>I have tried using <code>groupby()</code> like this: <code>df.groupby(df[2] == 'X')</code>, however this obviously only grabs the <code>X</code> values.</p>
<p>How could I go about creating the groupings that I am after?</p>
<pre><code>df = pd.DataFrame({'1':['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p'],
'2':['Z','X','Y','Z','Z','X','X','Z','X','Y','Y','Z','X','Z','X','Y']})
</code></pre>
<p>Desired groupings:</p>
<pre><code>1 b X
2 c Y
---------
5 f X
---------
6 g X
---------
8 i X
9 j Y
10 k Y
---------
12 m X
---------
14 o X
15 p Y
</code></pre>
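<p>A sketch of one possible approach, assuming a <code>Y</code> belongs to a group only when the row immediately before it is already kept: number the groups with a cumulative count of <code>X</code> rows, then filter out everything else:</p>

```python
import pandas as pd

df = pd.DataFrame({'1': list('abcdefghijklmnop'),
                   '2': ['Z','X','Y','Z','Z','X','X','Z','X','Y','Y','Z','X','Z','X','Y']})

# Each X starts a new group
gid = (df['2'] == 'X').cumsum()

# Keep X rows, plus Y rows whose immediately preceding row is kept
keep = df['2'].eq('X')
for i in range(1, len(df)):
    if df['2'].iat[i] == 'Y' and keep.iat[i - 1]:
        keep.iat[i] = True

groups = {k: g['1'].tolist() for k, g in df[keep].groupby(gid[keep])}
print(groups)
# {1: ['b', 'c'], 2: ['f'], 3: ['g'], 4: ['i', 'j', 'k'], 5: ['m'], 6: ['o', 'p']}
```

This reproduces the desired groupings from the question; the explicit loop makes the "trailing Y" rule robust even if a stray <code>Y</code> ever appeared right after a <code>Z</code>.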
|
<python><pandas><group-by>
|
2023-06-08 11:04:27
| 3
| 344
|
doine
|
76,431,147
| 297,150
|
Connection Error While Synchronizing Permission for Keycloak Realm in Django
|
<p>When I try to synchronize permissions for a realm from Keycloak in Django, I get the following error. Does anyone know how to fix it?</p>
<p><a href="https://i.sstatic.net/dNAYX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dNAYX.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/jnpo1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jnpo1.png" alt="enter image description here" /></a></p>
|
<python><django><keycloak>
|
2023-06-08 11:00:21
| 1
| 361
|
Yasir Arefin
|
76,431,083
| 17,638,206
|
Can't read a ".jpg" image using cv2.imread()
|
<p>I have a ".jpg" image and I can't read it using cv2.imread():</p>
<pre><code>img=cv2.imread(file_path)
print(img.shape)
AttributeError: 'NoneType' object has no attribute 'shape'
</code></pre>
<p>Even if I try :</p>
<pre><code>img=cv2.imread(file_path,cv2.IMREAD_UNCHANGED)
print(img.shape)
AttributeError: 'NoneType' object has no attribute 'shape'
</code></pre>
<p>However, if I try :</p>
<pre><code>img=plt.imread(file_path)
print(img.shape)
</code></pre>
<p>It works successfully:</p>
<pre><code>(3300, 2550, 3)
</code></pre>
<p>However, I want to work with <code>cv2.imread()</code>, so why is it not working here?</p>
<p>Update: The file basename in the file path contains Arabic letters. When I change the base name to English letters, <code>cv2.imread()</code> works and reads the image, but I need to pass the file using its original name, so I have tried:</p>
<pre><code>img=cv2.imread(u'{}'.format(file_path))
</code></pre>
<p>But it still doesn't work.</p>
|
<python><opencv><unicode>
|
2023-06-08 10:51:18
| 0
| 375
|
AAA
|
76,430,970
| 2,545,680
|
Tensorflow package is not found when running python program
|
<p>I have a super simple Python setup generated through PyCharm:</p>
<pre><code>import numpy as np
from tensorflow.keras import layers, models, optimizers, utils, datasets
def print_hi(name):
print(f'Hi, {name}') # Press Ctrl+F8 to toggle the breakpoint.
# Press the green button in the gutter to run the script.
if __name__ == '__main__':
print_hi('PyCharm')
</code></pre>
<p>When I run it, it gives me the error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\max\PycharmProjects\pythonProject\main.py", line 2, in <module>
from tensorflow.keras import layers, models, optimizers, utils, datasets
ModuleNotFoundError: No module named 'tensorflow'
</code></pre>
<p>But the package is installed, I can see it here in the virtual env:</p>
<p><a href="https://i.sstatic.net/HKte0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HKte0.png" alt="enter image description here" /></a></p>
<p>I suspect it might be looking for the package in the global env, not virtual env. How to check that?</p>
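<p>One quick check is to print which interpreter is actually executing the script; for a project venv, both paths should point inside the venv directory, and if they don't, the run configuration is using a different interpreter than the one the package was installed into:</p>

```python
import sys

# The interpreter binary actually running this script,
# and the root of the environment it belongs to.
print(sys.executable)
print(sys.prefix)
```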
<p>I see the following interpreters. Tried selecting both, none worked.</p>
<p><a href="https://i.sstatic.net/tZ7R5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tZ7R5.png" alt="enter image description here" /></a></p>
|
<python><pycharm>
|
2023-06-08 10:34:19
| 1
| 106,269
|
Max Koretskyi
|
76,430,719
| 4,107,002
|
How do I store tiled images as tif efficiently using Python
|
<p>I am developing an application that produces very large (up to gigapixel) images consisting of individual 4-channel 16 bit tiles. I am currently creating a folder per image and then just writing a bunch of separate files into this folder, each representing a "tile". The tiles are of equal size (3300x2700) but their alignment isn't perfectly uniform; though they observe a grid-like arrangement, their specific locations have minute variation along both the X and Y axes. I would like to store them in a format that is aware of such offsets and allows me to add them to the metadata. I.e., I would like a file format comparable to layers in Photoshop or GIMP, each layer consisting of image data and its placement within the 'canvas' of the large image. Obviously, given the size of the image, I would also like to avoid having to store incredible amounts of black/transparent pixels bloating the file size. What I would like to end up with is a file that displays the image correctly, but also gives me the ability to recover the individual tiles that form it.</p>
<p>I vaguely recall that Hugin (more specifically its <code>PTStitcher</code>, i.e. the part of the program that does image registration and alignment) actually outputs a format consisting of multiple "layers" in a tif-file, and I have my eye on the python <code>tifffile</code> project. Despite repeated efforts of looking for a snippet of code that achieves this or even something similar, I haven't been able to find any leads, possibly for lack of the right terminology to research this problem with.</p>
<p>An ideal solution would be compatible with GIMP, so that I can open the file in GIMP and have individual, pre-aligned layers. But this is not a must...</p>
<p>I attached an image that illustrates what the tiles look like, arranged in a grid, though my use case would obviously see these tiles overlap in reality</p>
<p><a href="https://i.sstatic.net/KQd40.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KQd40.jpg" alt="Example image grid" /></a></p>
<p>Any suggestions are welcome</p>
<p>I have researched the matter and found formats such as MicroManager Indexmap or geospatial formats, but no code snippet that shows how to encode this information using the <code>tifffile</code> library, though I suspect it is possible.</p>
|
<python><image-processing><tiff>
|
2023-06-08 10:01:54
| 0
| 411
|
MTTI
|
76,430,698
| 1,314,503
|
Unable to move the cache: Access is denied and Unable to create cache
|
<p>when I run streamlit show this</p>
<blockquote>
<p>C:\Users...\src>streamlit run streamlit.py</p>
</blockquote>
<pre><code>C:\Users\...\src>
[4772:0604/211416.186:ERROR:cache_util_win.cc(20)] Unable to move the cache: Access is denied. (0x5)
[4772:0604/211416.186:ERROR:disk_cache.cc(205)] Unable to create cache
</code></pre>
<p>I really don’t know why this is happening! I ran the example from the tutorial earlier, which had @st.cache_data in it, several times; then the system crashed, and when I ran it again it showed the above. I deleted Streamlit and reinstalled it, and I tried a different version, but it didn’t work.</p>
|
<python><python-3.x><streamlit>
|
2023-06-08 09:59:37
| 1
| 5,746
|
KarSho
|
76,430,697
| 13,217,286
|
Make a new column with list of unique values grouped by, or over, another column in Polars
|
<p>Before Polars 0.18.0, I was able to create a column with a list of all unique pokemon by type. I see <a href="https://github.com/pola-rs/polars/pull/8165" rel="nofollow noreferrer">Expr.list() has been refactored to implode()</a>, but I'm having trouble replicating the following using the new syntax:</p>
<p><code>df.with_columns(lst_of_pokemon = pl.col('name').unique().list().over('Type 1'))</code></p>
|
<python><dataframe><unique><window-functions><python-polars>
|
2023-06-08 09:59:37
| 2
| 320
|
Thomas
|
76,430,578
| 230,866
|
creating a top level span in python opentelemetry
|
<p>I have a python service that I am trying to add opentelemetry to. I am manually tracking user requests and subtasks and this seems to work ok: I am able to look at the spans in jaeger.</p>
<p>However, my system may decide to start a background asyncio task in the middle of a request. The request will end immediately, but the background task will go on forever. If I track code called by this task as spans, they end up being children of the request's span, although the request already ended. I'd rather prefer to show these spans as top level, like not children of any parent.</p>
<p>The documentation seems not to address this specific case, so I am trying to find a way.</p>
<p>This is a simplified version of what I am attempting:</p>
<pre class="lang-py prettyprint-override"><code>from opentelemetry import trace
from opentelemetry import context
tracer = trace.get_tracer(__name__)
ROOT_CONTEXT = context.get_current()
tasks = set()
async def background_task(some_req_info):
with tracer.start_as_current_span("background task", context=ROOT_CONTEXT):
# do stuff
...
async def request(some_req_info):
with tracer.start_as_current_span("request"):
# more logic
task = asyncio.create_task(background_task(some_req_info))
tasks.add(task)
task.add_done_callback(tasks.discard)
</code></pre>
<p>How can I achieve parent-less spans in background tasks?</p>
|
<python><python-asyncio><open-telemetry>
|
2023-06-08 09:46:36
| 1
| 1,101
|
chaos.ct
|
76,430,575
| 14,820,295
|
split string into many columns with value in the same string Python
|
<p>Starting from the column "code", I need (in Python) to pivot out a column for each string suffix, containing the associated number (see example below).</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>code</th>
<th>C</th>
<th>HD</th>
<th>HT</th>
<th>S</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>74C + 24HD</td>
<td>74</td>
<td>24</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>23C + 14HT + 3S</td>
<td>23</td>
<td>0</td>
<td>14</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>Thank you!</p>
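<p>A sketch of one way this could be done with a regular expression: extract every number-plus-suffix pair, then pivot the pairs into columns (the column names come from the sample data, not a fixed list):</p>

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3],
                   'code': ['74C + 24HD', '23C + 14HT + 3S', '0']})

# Pull every "<digits><letters>" pair out of each code string
parts = df['code'].str.extractall(r'(?P<qty>\d+)(?P<unit>[A-Z]+)')

wide = (parts.reset_index()
             .pivot(index='level_0', columns='unit', values='qty')
             .reindex(df.index)   # restore rows with no matches (e.g. "0")
             .fillna(0)
             .astype(int))

out = df.join(wide)
print(out)
```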
|
<python><python-3.x><string><pivot><strip>
|
2023-06-08 09:45:57
| 2
| 347
|
Jresearcher
|
76,430,540
| 14,243,731
|
How to remove "Trusted_Connection=Yes" from pyodbc using SQLAlchemy Events and Azure Active Directory Token with classes?
|
<p>The <a href="https://docs.sqlalchemy.org/en/20/dialects/mssql.html#module-sqlalchemy.dialects.mssql.pyodbc" rel="nofollow noreferrer">SQLAlchemy documentation</a> shows how to remove the trusted-connection part of the connection string with an <code>event.listens_for</code> decorator on an engine that isn't created inside a function or a class - here is the example:</p>
<pre class="lang-py prettyprint-override"><code>import struct
from sqlalchemy import create_engine, event
from sqlalchemy.engine.url import URL
from azure import identity
SQL_COPT_SS_ACCESS_TOKEN = 1256 # Connection option for access tokens, as defined in msodbcsql.h
TOKEN_URL = "https://database.windows.net/" # The token URL for any Azure SQL database
connection_string = "mssql+pyodbc://@my-server.database.windows.net/myDb?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(connection_string)
azure_credentials = identity.DefaultAzureCredential()
@event.listens_for(engine, "do_connect")
def provide_token(dialect, conn_rec, cargs, cparams):
# remove the "Trusted_Connection" parameter that SQLAlchemy adds
cargs[0] = cargs[0].replace(";Trusted_Connection=Yes", "")
# create token credential
raw_token = azure_credentials.get_token(TOKEN_URL).token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(raw_token)}s", len(raw_token), raw_token)
# apply it to keyword arguments
cparams["attrs_before"] = {SQL_COPT_SS_ACCESS_TOKEN: token_struct}
</code></pre>
<p>The problem I am facing is that I connect to my database using a class called <code>Db</code> which has a <code>.engine</code> attribute created after instantiation (the engine that is created is dependent upon the environment). How can I use the decorator to remove the "Trusted_Connection=Yes" from my engine when it is an attribute of my class created after instantiation?</p>
<p>The code below results in the error saying: <code>AttributeError: type object 'Db' has no attribute 'engine'</code></p>
<pre class="lang-py prettyprint-override"><code>import os
import struct
from azure.identity import DefaultAzureCredential
from sqlalchemy.engine.url import URL
from sqlalchemy import create_engine, event
from sqlalchemy import inspect
class Db:
def __init__(self, config: object) -> None:
url = URL.create(
drivername="mssql+pyodbc",
port=1433,
query=dict(driver='ODBC Driver 18 for SQL Server'),
host=f"tcp:{os.environ.get(f'SERVER_{config.environment}')}",
database=os.environ.get(f'DATABASE_{config.environment}')
)
self.engine = create_engine(url=url, connect_args={"autocommit": True})
self.connection = self.engine.connect()
self.inspector = inspect(subject=self.engine)
def close(self) -> None:
self.connection.close()
@event.listens_for(target=Db.engine, identifier="do_connect")
def provide_token(dialect, conn_rec, cargs, cparams):
# remove the "Trusted_Connection" parameter that SQLAlchemy adds
cargs[0] = cargs[0].replace(";Trusted_Connection=Yes", "")
azure_credentials = identity.DefaultAzureCredential()
raw_token = azure_credentials.get_token(TOKEN_URL).token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(raw_token)}s", len(raw_token), raw_token)
# apply it to keyword arguments
cparams["attrs_before"] = {1256: token_struct}
</code></pre>
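<p>A minimal sketch of one workaround, assuming it is acceptable to register the listener inside <code>__init__</code>, where <code>self.engine</code> already exists (an in-memory SQLite engine stands in for the Azure one here, and the listener body only records that it fired instead of scrubbing the connection string):</p>

```python
from sqlalchemy import create_engine, event


class Db:
    def __init__(self) -> None:
        # In-memory SQLite stands in for the real Azure engine
        self.engine = create_engine("sqlite://")
        self.seen_cargs = []

        # Register per-instance: self.engine exists at this point,
        # unlike at class-definition time
        @event.listens_for(self.engine, "do_connect")
        def provide_token(dialect, conn_rec, cargs, cparams):
            # Real code would scrub ";Trusted_Connection=Yes" from cargs[0]
            # and put the token into cparams["attrs_before"] here
            self.seen_cargs.append(list(cargs))
            return None  # fall through to the default connect

        self.connection = self.engine.connect()


db = Db()
print(len(db.seen_cargs))  # the listener fired when the first connection was made
```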
|
<python><azure><sqlalchemy><azure-sql-database><pyodbc>
|
2023-06-08 09:41:33
| 1
| 328
|
Adventure-Knorrig
|
76,430,327
| 17,580,381
|
Server socket accepts connection without listen
|
<p>It has always been my understanding that it is <strong>essential</strong> for a server socket to <em>listen()</em> before it can accept connections.</p>
<p>However, this appears not to be the case.</p>
<pre><code>import socket
import threading
ADDRESS = 'localhost', 10001
EVENT = threading.Event()
def server():
with socket.create_server(ADDRESS) as _socket:
#_socket.listen()
EVENT.set()
conn, _ = _socket.accept()
with conn:
print('Got client connection')
(_server := threading.Thread(target=server)).start()
EVENT.wait()
with socket.socket() as _socket:
_socket.connect(ADDRESS)
_server.join()
</code></pre>
<p>Running this gives:</p>
<pre><code>Got client connection
</code></pre>
<p>i.e., it worked without a call to listen() on the server socket.</p>
<p>There is no difference in behaviour if I invoke <em>_socket.listen()</em></p>
<p>Is this something that's peculiar to Python or am I misunderstanding something fundamental?</p>
<p>Python 3.11.3 on MacOS 13.4</p>
|
<python><sockets>
|
2023-06-08 09:15:01
| 1
| 28,997
|
Ramrab
|
76,430,295
| 2,274,981
|
Extract JSON arrays from binary file
|
<p>I have a binary file that I have read into a text file. It contains two JSON arrays that both have data I wish to take out.</p>
<p>I have tried cleaning the file and attempted to decode it via <code>utf-8</code> and <code>ascii</code>, but I get errors several times.</p>
<p>The best I have so far is to read the binary file in as a <code>string</code>; I figured I could then regex the JSON arrays out.</p>
<p>This is all I have so far</p>
<pre><code>try:
with open('TestReplays/replay_2023-04-14_21-28-07.rpl3', 'rb') as f:
file = f.read()
text = str(file)[2:-1]
print(text)
except Exception as e:
print(e)
</code></pre>
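<p>A sketch of one standard-library approach: decode the bytes one-to-one with <code>latin-1</code>, then try <code>json.JSONDecoder.raw_decode</code> at each <code>{</code>; it parses a complete object and reports where it ended, so no regex for balanced braces is needed (extend the scan to <code>[</code> for top-level arrays; the sample blob below is a shortened, hypothetical stand-in for the real file):</p>

```python
import json

def extract_json_objects(blob: bytes):
    """Scan decoded text and return every complete top-level JSON object."""
    text = blob.decode("latin-1")  # 1:1 byte-to-char mapping, never raises
    decoder = json.JSONDecoder()
    objects = []
    i = 0
    while True:
        start = text.find("{", i)
        if start == -1:
            break
        try:
            obj, end = decoder.raw_decode(text, start)
        except json.JSONDecodeError:
            i = start + 1  # not a valid object here, keep scanning
            continue
        objects.append(obj)
        i = end  # jump past this object so nested dicts are not re-reported
    return objects

# Shortened, hypothetical stand-in for the real replay file contents
sample = (b'ESAV\x00\x03rply{"game": {"GameMode": "1"}, '
          b'"player_1": {"PlayerName": "Lynchie"}}star\x00')
found = extract_json_objects(sample)
print(found[0]["game"]["GameMode"])
```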
<p>I've cut the below down as the actual file itself is over 7million characters long.</p>
<pre><code>ESAV\x00\x00\x00\x03\x00\x1f\x8eB\x90\nQRmodd\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00rply\x00\x00\x00\x00\x00\x00\x089\x00\x00\x00\x00{"game":{"GameMode":"1","IsNetworkMode":"1","NbMaxPlayer":"2","NbPlayersAndIA":"2","AllowObservers":"1","ObserverDelay":"120","Seed":"1322943104","Private":"0","ServerName":"Lynchie vs Tiro Game 2","WithHost":"1","ServerProtocol":"1.0","TimeLeft":"0","Version":"94077","GameState":"0","NeedPassword":"0","NbIA":"0","TickRate":"10","UniqueSessionId":"1ae482a7:5597c6e6:45d98090:3ee8f755","ModList":"","ModTagList":"","ServerTag":"","GameType":"1","Map":"_2x3_BlackForestStorm","InitMoney":"1500","TimeLimit":"2400","ScoreLimit":"2000","DeploymentMode":"1","CombatRule":"2","IncomeRate":"3","WarmupCountdown":"10","DeploiementTimeMax":"180","DebriefingTimeMax":"180","LoadingTimeMax":"1200","NbMinPlayer":"10","DeltaMaxTeamSize":"10","MaxTeamSize":"10","PhaseADuration":"-1","PhaseBDuration":"-1","MapSelection":"0","MapRotationType":"0","CoopVsAI":"0","InverseSpawnPoints":"0","DivisionTagFilter":"DEFAULT","AutoFillAI":"0","DeltaTimeCheckAutoFillAI":"60"},"player_1":{"PlayerUserId":"684602","PlayerIALevel":"-1","PlayerObserver":"0","PlayerAlliance":"1","PlayerReady":"1","PlayerElo":"1600.75803065","PlayerLevel":"9","PlayerName":"Lynchie","PlayerTeamName":"","PlayerAvatar":"VirtualData/SteamGamerPicture/76561198043635455","PlayerDeckName":"","PlayerDeckContent":"FBE40YlpKTHojDCpywASZwLQGQAQwzHwLgAQLAALRTEQGwAQGwAIIgAIyAAIhwLIuwAJ8gAQtQdQvAARwwAQJgAQJgAQxTEQGwAWQQ9ovwAQ1w9wFAAIDQALlQAK/zHoiTEIDwAIEAAQLgAJLjEBAA==","PlayerSkinIndexUsed":"","PlayerIsAIAutoFilled":"0","PlayerScoreLimit":"2000","PlayerIncomeRate":"1"},"player_2":{"PlayerUserId":"323931","PlayerIALevel":"-1","PlayerObserver":"0","PlayerAlliance":"0","PlayerReady":"0","PlayerElo":"1559.63376877","PlayerLevel":"10","PlayerName":"Tiro","PlayerTeamName":"","PlayerAvatar":"VirtualData/SteamGamerPicture/76561198065998403","PlayerDeckName":"","PlayerDeckContent":"FBF6aIS1tYAFt
YAGLwAKKYAEeAAEfUVIfYAIfYAEgoAKKUVGLUVKL4AEwYAEwYAGMoAIfYAIqsVKK8RKK8REq0VqUkRIfYAKIoAKIoAKJwAKUQAGJMRKLA9UegAEeIAEfgAGKsVGMQAKUQAAgA==","PlayerSkinIndexUsed":"","PlayerIsAIAutoFilled":"0","PlayerScoreLimit":"2000","PlayerIncomeRate":"1"},"ingamePlayerId":1}star\x00\x00\x00\x00\x00\x00\x00"\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x05\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x003\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\xa4\x00\x00\x00\x00\x01\x00\x00\x00\x06\x00\x00\x00\x06\x00\x00\x00\x00\x00\xff\xff\xff\xff3\x00\x00\x00\xff\xff\xff\xff\xaf\x04\x00\x00\x01\x00\x00\x00\x06\x00\x00\x00\x00\x00\xff\xff\xff\xff3\x00\x00\x00\xff\xff\xff\xff\xae\x04\x00\x00\x01\x00\x00\x00\x06\x00\x00\x00\x00\x00\xff\xff\xff\xff3\x00\x00\x00\xff\xff\xff\xff\xad\x04\x00\x00\x01\x00\x00\x00\x06\x00\x00\x00\x00\x00\xff\xff\xff\xff3\x00\x00\x00\xff\xff\xff\xff\xac\x04\x00\x00\x01\x00\x00\x00\x06\x00\x00\x00\x00\x00\xff\xff\xff\xff3\x00\x00\x00\xff\xff\xff\xff\xab\x04\x00\x00\x01\x00\x00\x00\x05\x00\x00\x00\x00\x00\xff\xff\xff\xff3\x00\x00\x004\x00\x00\x00\xff\xff\xff\xff\x01\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00x\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00j\x00\x02\x1c\x00\x00\xc0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\xcd\xcc\xcc>\r\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x1c\x00\x00\xc0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\x00\x00\x00 
\x00\x00\x00\x00\x00\x00\x00\xcd\xcc\xcc>\r\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00x\x00\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00j\x00\x02\x1d\x00\x00\xc0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\x00\x00\x000\x00\x00\x00\x00\x00\x00\x00\xcd\xcc\xcc>\r\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x1d\x00\x00\xc0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\x00\x00\x000\x00\x00\x00\x00\x00\x00\x00\xcd\xcc\xcc>\r\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x03\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\
x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x06\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x07\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x08\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\t\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\t\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\t\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\t\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00pac
t\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\n\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x0b\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x0c\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\r\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\'\x00\x00\x00\x00\r\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x19\x00(\x00\x00\x00\x00\x1c\x00\x00\xc0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\r\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\r\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\'\x00\x00\x00\x00\x0e\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x19\x00(\x00\x00\x00\x00\x1d\x00\x00\xc0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\x00\x00\x00endt\x00\x00\x00\
x00\x00\x00\x00\x0c\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x0f\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x11\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x11\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x11\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x11\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x12\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x12\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x12\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x12\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x13\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x13\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x13\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\
x00\x00\x00\x00\x0c\x00\x00\x00\x00\x13\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x14\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x15\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x15\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x15\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x15\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x16\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x16\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x16\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x16\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x17\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x17\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x17\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x17\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x18\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\
x00\x00\x00\x00\x0c\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x19\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x19\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x19\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x19\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1a\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x1b\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1b\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x1b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x1c\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1c\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x1c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x1d\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1d\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\
x00\x00\x00\x00\x0c\x00\x00\x00\x00\x1d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x1e\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1e\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x1e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x1f\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x1f\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x1f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00 \x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00 
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00!\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00!\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00!\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00!\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00"\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00"\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00#\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00#\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00#\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00#\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00$\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00$\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00$\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00$\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00%\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00%\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00%\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00%\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00
\x00\x00\x00\x00\x08\x00\x00\x00\x00&\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00&\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00&\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00&\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\'\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\'\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00(\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00(\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00(\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00(\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00)\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00)\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00)\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00)\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00*\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00*\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00*\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00*\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00+\x00\x00\x00\x00\x0
0\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00+\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00+\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00+\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00,\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00,\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00,\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00,\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00-\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00-\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00.\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00.\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00.\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00.\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00/\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00/\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00/\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00/\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x000\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x0
00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x000\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x000\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x001\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x001\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x001\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x001\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x002\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x002\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x002\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x002\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x003\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x003\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x003\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x003\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x004\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x004\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x004\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x004\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x005\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x005\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x
00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x005\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x005\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x006\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x006\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x006\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x006\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x007\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x007\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x007\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x007\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x008\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x008\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x008\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x008\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x009\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x009\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x009\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x009\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00:\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00:\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00:\x00\x00\x00\
x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00:\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00;\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00;\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00;\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00;\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00<\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00?\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00?\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00?x0c\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00A\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00A\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00A\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00H\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00endt\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00H\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00star\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00I\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00I\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00pact\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\{"result":{"Duration":"1247","Victory":"6"}}
</code></pre>
<p>The only other thing I can think of is using an index to get "game" and "results" and then forward search until I get the next bracket that closes off the JSON, or is that too complex?</p>
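<p>For what it's worth, that forward-search idea can be sketched in plain Python: scan for an opening <code>{</code>, track brace depth (ignoring braces that occur inside JSON strings), and try to decode each balanced span. Everything below is a hypothetical helper written for this answer, not part of any replay-file library:</p>

```python
import json


def extract_json_objects(data: bytes):
    """Scan a binary blob and return every decodable JSON object found
    by matching balanced braces (string contents are skipped over)."""
    results = []
    i = 0
    while i < len(data):
        if data[i:i + 1] == b'{':
            depth = 0
            in_string = False
            escaped = False
            for j in range(i, len(data)):
                c = data[j:j + 1]
                if in_string:
                    # Inside a JSON string: braces don't count as nesting.
                    if escaped:
                        escaped = False
                    elif c == b'\\':
                        escaped = True
                    elif c == b'"':
                        in_string = False
                elif c == b'"':
                    in_string = True
                elif c == b'{':
                    depth += 1
                elif c == b'}':
                    depth -= 1
                    if depth == 0:
                        # Balanced span found; keep it only if it decodes.
                        try:
                            results.append(json.loads(data[i:j + 1]))
                            i = j  # resume scanning after this object
                        except (json.JSONDecodeError, UnicodeDecodeError):
                            pass
                        break
        i += 1
    return results
```

<p>Run against a replay blob, this should pick out both the lobby JSON near the start and the <code>{"result": ...}</code> object at the end, ignoring the binary records between them.</p>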
|
<python><json><binaryfiles>
|
2023-06-08 09:10:29
| 1
| 1,149
|
Lynchie
|
76,430,192
| 19,546,216
|
Getting TypeError: WebDriver.__init__() got an unexpected keyword argument ‘desired_capabilities’ when using Appium with Selenium 4.10
|
<p>Error:
<code>HOOK-ERROR in before_scenario: TypeError: WebDriver.__init__() got an unexpected keyword argument 'desired_capabilities' </code></p>
<p>Hello, we currently cannot run our script with the latest Selenium 4.10. Is this an Appium error or a Python error?</p>
<p>Here are the capabilities we used. We're currently trying to read the platform name via <code>targetOS = self.driver.capabilities['platformName']</code>, but we're hit with this error.</p>
<pre><code>capabilities = {
"platformName": "Android",
"appium:platformVersion": "11.0",
"appium:deviceName": "emulator-5554",
"appium:app": "/Users/faithberroya/Downloads/test.apk",
"appium:automationName": "UiAutomator2",
"appium:appPackage": "com.test.school.assignment.rc",
"appium:appActivity": "com.test.school.assignment.ui.SplashActivity"
}
# launch app
context.driver = webdriver.Remote("http://0.0.0.0:4723/wd/hub", capabilities)
# add wait time
context.driver.implicitly_wait(20)
# app
context.app = Application(context.driver)
</code></pre>
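<p>For context: Selenium 4.10 removed the <code>desired_capabilities</code> keyword from <code>webdriver.Remote</code>, so an Appium-Python-Client that still forwards it internally raises exactly this <code>TypeError</code>. A hedged sketch of the options-based call (the <code>UiAutomator2Options</code> class and <code>load_capabilities</code> method are from the <code>appium.options</code> API introduced in Appium-Python-Client 2.x; the helper name itself is made up for illustration):</p>

```python
def build_driver(capabilities, server_url="http://0.0.0.0:4723/wd/hub"):
    """Build an Appium driver the Selenium-4.10-compatible way:
    pass an options object instead of raw desired capabilities."""
    # Deferred imports so this sketch parses without Appium installed.
    from appium import webdriver
    from appium.options.android import UiAutomator2Options

    options = UiAutomator2Options().load_capabilities(capabilities)
    return webdriver.Remote(server_url, options=options)
```

<p>Upgrading Appium-Python-Client to a release that supports Selenium 4.10 (or pinning <code>selenium<4.10</code> as a stopgap) would likely also be needed alongside this change.</p>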
<p>Current pip list</p>
<pre><code>Appium-Python-Client 2.10.1
behave 1.2.6
certifi 2023.5.7
pip 23.1.1
requests 2.31.0
selenium 4.9.0
</code></pre>
|
<python><selenium-webdriver><appium><python-appium>
|
2023-06-08 08:56:44
| 3
| 321
|
Faith Berroya
|
76,430,157
| 4,442,337
|
Pandas: speed up dataframe column conversion?
|
<p>I am trying to make an utility function that allows me to convert columns of a generic Pandas DF based on some column specifications metadata.</p>
<pre class="lang-py prettyprint-override"><code>
def convert_pandas_columns(df: pd.DataFrame, metadata: dict | None) -> pd.DataFrame:
"""
Convert a pandas DataFrame `df` columns starting from a metadata dict `cols`:
```
{
"columns": {
"col1": {
"type": "str"
},
"col2": {
"type": "datetime"
},
"col3": {
"type": "float"
}
}
}
```
"""
if metadata is None or 'columns' not in metadata:
# Skip conversion if no metadata is available
return df
mappings = {
'str': lambda x: x.astype(str, errors='ignore'),
'int': lambda x: pd.to_numeric(x, downcast='integer', errors='coerce'),
'float': lambda x: pd.to_numeric(x, downcast='float', errors='coerce'),
'bool': lambda x: x.astype(bool, errors='ignore'),
'datetime': lambda x: pd.to_datetime(x, errors='coerce'),
'list': lambda x: json.loads(x) if x else [], # always expect a json object
}
dtypes = {}
for k, v in metadata['columns'].items():
pd_type = mappings.get(v['type'], None)
if pd_type is not None:
dtypes[k] = pd_type
return df.transform(
{
**{
column: lambda x: x
for column in df.columns
},
**dtypes,
},
)
</code></pre>
<p>This works but it's extremely slow. Is there anything I could change to improve its performance? Previously I was using the <code>astype</code> function but it's not as flexible as this approach.</p>
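<p>One likely culprit is the <code>df.transform</code> call: it routes every column (including the untouched ones, via identity lambdas) through the transform machinery, and converters that raise on a whole Series (like the <code>json.loads</code> one) get retried element by element. A hedged alternative sketch that only touches the columns named in the metadata and applies each converter to the Series directly (the <code>'list'</code> branch uses <code>Series.map</code>, since <code>json.loads</code> is inherently per-element):</p>

```python
import json

import pandas as pd


def convert_columns_fast(df, metadata):
    """Convert only the columns named in `metadata`, assigning the
    converted Series back directly instead of using DataFrame.transform."""
    if not metadata or 'columns' not in metadata:
        return df
    mappings = {
        'str': lambda s: s.astype(str),
        'int': lambda s: pd.to_numeric(s, downcast='integer', errors='coerce'),
        'float': lambda s: pd.to_numeric(s, downcast='float', errors='coerce'),
        'bool': lambda s: s.astype(bool),
        'datetime': lambda s: pd.to_datetime(s, errors='coerce'),
        # json.loads is per-element, so map it rather than calling it on the Series.
        'list': lambda s: s.map(lambda x: json.loads(x) if x else []),
    }
    out = df.copy()
    for col, spec in metadata['columns'].items():
        conv = mappings.get(spec.get('type'))
        if conv is not None and col in out.columns:
            out[col] = conv(out[col])
    return out
```

<p>Columns absent from the metadata are left untouched at zero cost, which is usually where most of the time went.</p>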
|
<python><pandas><dataframe>
|
2023-06-08 08:52:01
| 1
| 2,191
|
browser-bug
|
76,430,038
| 15,387,588
|
AWS Lambda Python aioboto3 exception AttributeError: 'ResourceCreatorContext' object has no attribute 'Table'
|
<p>I have done a project with AWS SAM CLI in Python and I'm trying to read DynamoDB items asynchronously from Lambda functions using <code>aioboto3</code> + <code>asyncio</code>.</p>
<p>My code seems fine but I am getting an exception every time I read from or write to the DynamoDB table.</p>
<p>Handler code:</p>
<pre><code>import os
import json
import asyncio
import datetime
import aioboto3
def get_hello(event, context):
loop = asyncio.get_event_loop()
return loop.run_until_complete(get_hello_async(event, context))
async def get_hello_async(event, context):
name = event['queryStringParameters']['name']
item = await get_item(name)
if item['date']:
date = datetime.datetime.fromtimestamp(item['date'] / 1000)
message = 'Was greeted on {}'.format(date.strftime('%m/%d/%Y, %H:%M:%S'))
return {
'statusCode': 200,
'headers': {
'Access-Control-Allow-Origin': '*',
},
'body': json.dumps(message)
}
async def get_item(name):
dynamodb = aioboto3.Session().resource('dynamodb')
table = await dynamodb.Table(os.environ['TABLE_NAME'])
record = await table.get_item(Key={'name': name})
return record['Item'] if 'Item' in record else None
</code></pre>
<p>The exception I'm getting via AWS Cloudwatch:</p>
<pre><code>[ERROR] AttributeError: 'ResourceCreatorContext' object has no attribute 'Table'
Traceback (most recent call last):
File "/var/task/app.py", line 11, in get_hello
return loop.run_until_complete(get_hello_async(event, context))
File "/var/lang/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/var/task/app.py", line 17, in get_hello_async
item = await get_item(name)
File "/var/task/app.py", line 34, in get_item
table = await dynamodb.Table(os.environ['TABLE_NAME'])
</code></pre>
<p>It is as if the <code>aioboto3</code> library does not have the <code>Table</code> attribute when the DynamoDB resource is called, however the official <code>boto3</code> library does.</p>
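In current aioboto3 releases, <code>Session().resource(...)</code> returns an async context manager rather than a ready resource, which matches the <code>ResourceCreatorContext</code> in the traceback; entering it with <code>async with</code> is the expected pattern. The sketch below reproduces the shape of that pattern with only the standard library (the <code>resource</code> helper is a hypothetical stand-in, not aioboto3 itself):

```python
import asyncio
from contextlib import asynccontextmanager

# Stand-in for aioboto3's ResourceCreatorContext: attribute access on the
# context object fails; the usable resource only exists inside `async with`.
@asynccontextmanager
async def resource(name):
    yield {"service": name}

async def get_item():
    async with resource("dynamodb") as dynamodb:
        # with real aioboto3 this is where you would do
        # table = await dynamodb.Table(...) and table.get_item(...)
        return dynamodb["service"]

print(asyncio.run(get_item()))  # dynamodb
```

Applied to the handler above, the assumption is that `dynamodb = aioboto3.Session().resource('dynamodb')` becomes `async with aioboto3.Session().resource('dynamodb') as dynamodb:` with the table lookup moved inside the block.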
|
<python><amazon-web-services><boto3>
|
2023-06-08 08:38:28
| 1
| 450
|
Eduardo G
|
76,429,921
| 2,550,114
|
QST: What is the canonical way to convert a column of type string[pyarrow] to boolean within a pandas dataframe?
|
<p>I want to convert string data that is really boolean (or null), e.g. values are y / n / NA, or true / false / NA, or even a mix of these.</p>
<p>When using pandas with <code>numpy</code> backend as default, conversion from string data to boolean works smoothly:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({"col1": ["true", None, "false"]})
assert df.dtypes["col1"] == "object", df.dtypes["col1"]
# convert to boolean
df["col1"] = df["col1"].replace({'true': True, 'false': False}).astype(bool)
assert df.dtypes["col1"] == bool, df.dtypes["col1"]
</code></pre>
<p>However, when using the pyarrow backend (in my use case, I was actually using <code>pd.read_parquet</code> with <code>dtype_backend</code> - but I set the type explicitly in the example below):</p>
<pre class="lang-py prettyprint-override"><code>df_pyarrow = pd.DataFrame(
    {"col1": ["true", None, "false"]}, dtype="string[pyarrow]"
)
assert df_pyarrow.dtypes["col1"] == "string", df_pyarrow.dtypes["col1"]

df_pyarrow["col1"] = (
    df_pyarrow["col1"]
    .replace({'true': True, 'false': False})  # fails at this step
    .astype(bool)
)
assert df_pyarrow.dtypes["col1"] == "bool[pyarrow]", df_pyarrow.dtypes["col1"]
</code></pre>
<p>but this fails at the .replace() because pyarrow complains, rightly!, that <code>True</code> and <code>False</code> are not valid values for a string[pyarrow]: <code>TypeError: Scalar must be NA or str</code>.</p>
<p>I have found that this method works:</p>
<pre class="lang-py prettyprint-override"><code>df_pyarrow["col1"] = df_pyarrow["col1"] == "true"
assert df_pyarrow.dtypes["col1"] == "bool[pyarrow]", df_pyarrow.dtypes["col1"]
</code></pre>
<p>However:</p>
<ul>
<li><code>df_pyarrow.info()</code> still says <code>col1</code> is <code>string[pyarrow]</code></li>
<li>this method isn't as flexible: what if there were multiple values for True/False</li>
</ul>
<p>What is the canonical way to convert a column of type <code>string[pyarrow]</code> to boolean within a pandas dataframe?</p>
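One hedged workaround: map the strings to Python booleans first (`Series.map` does not type-check replacement values against the column's dtype the way `replace` does), then cast. Shown here with the default nullable `boolean` dtype so it runs without pyarrow installed; with pyarrow available, finishing with `.astype("bool[pyarrow]")` should behave the same way, though that last step is an assumption:

```python
import pandas as pd

df = pd.DataFrame({"col1": ["true", None, "false"]})

# Map string values to real booleans; unmapped values (None) become NaN,
# which the nullable "boolean" dtype then represents as pd.NA.
bool_col = df["col1"].map({"true": True, "false": False}).astype("boolean")

print(bool_col.dtype)  # boolean
print(bool_col.tolist())
```

The dict passed to `.map` can hold as many truthy/falsy spellings as needed (`"y"`, `"n"`, `"yes"`, `"no"`, ...), which addresses the flexibility concern with the `== "true"` comparison.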
|
<python><pandas><pyarrow>
|
2023-06-08 08:20:28
| 2
| 8,415
|
James Owers
|
76,429,847
| 6,224,975
|
Explode/expand result of groupby and to same ordering/index as before groupby
|
<p>Say I have the following dataframe and function that I want to apply within each group</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({"id": [1, 2, 1, 2], "amount": [-1, 10, 20, -5]})

def is_positive(grp_row):
    return grp_row["amount"].values > 0

df_result = df.groupby("id").apply(is_positive)
#id
#1    False
#     True
#2    True
#     False
</code></pre>
<p>(note this is not my real problem but for illustration purpose only).</p>
<p>How do I explode/expand the resulting dataframe such that it have the same index/ordering as <code>df</code>, such that <code>df.iloc[i]</code> corresponds to <code>df_result.iloc[i]</code>?</p>
<p>I have purposely removed the series-operation from the <code>is_positive</code>, thus the <code>.values>0</code>, such that we don't get the original index in the result (since I don't get that in my real function).</p>
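If the per-group function returns a plain array aligned one-to-one with the group's rows (as `is_positive` does), `groupby(...).transform` re-aligns the result with the original index, so `df.iloc[i]` matches `result.iloc[i]`. A sketch under that alignment assumption:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 1, 2], "amount": [-1, 10, 20, -5]})

# transform() applies the function per group but re-aligns the per-group
# results with the original frame's index and ordering.
result = df.groupby("id")["amount"].transform(lambda s: s.values > 0)

print(result.tolist())  # [False, True, True, False]
```

If the real function must stay an `apply` (e.g. it needs multiple columns), an alternative is to have it return `pd.Series(..., index=grp_row.index)` so the group's original index travels with the result.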
|
<python><pandas><dataframe>
|
2023-06-08 08:08:23
| 1
| 5,544
|
CutePoison
|
76,429,754
| 2,534,479
|
Undefined variable standalone_static_library in binding.gyp while trying to load binding.gyp
|
<p><strong>MacOS Monterey v12.6</strong></p>
<p>Node Version <strong>v20.2.0</strong></p>
<p><strong>Installed Version:</strong></p>
<pre><code>pyenv --version
pyenv 2.3.18
python2 --version
Python 2.7.18
python3 --version
Python 3.9.6
python --version
Python 3.9.6
</code></pre>
<p><strong>~/.zshrc</strong></p>
<pre><code>alias python=/usr/bin/python3
# eval "$(pyenv init --path)"
# Python Version Manager
export PATH="$PATH:$HOME/.pyenv/bin"
eval "$(pyenv init -)"
</code></pre>
<p>Runnning <code>yarn install</code> in console, full error logs:</p>
<pre><code>yarn install v1.22.19
warning package.json: No license field
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
warning No license field
[1/4] 🔍 Resolving packages...
[2/4] 🚚 Fetching packages...
[3/4] 🔗 Linking dependencies...
warning " > stimulus-sortable@3.3.0" has unmet peer dependency "@hotwired/stimulus@^3.1.1".
warning " > stimulus-sortable@3.3.0" has incorrect peer dependency "@rails/request.js@^0.0.6".
warning " > stimulus-sortable@3.3.0" has unmet peer dependency "sortablejs@^1.15.0".
warning " > webpack-dev-server@3.11.3" has unmet peer dependency "webpack@^4.0.0 || ^5.0.0".
warning "webpack-dev-server > webpack-dev-middleware@3.7.3" has unmet peer dependency "webpack@^4.0.0 || ^5.0.0".
[4/4] 🔨 Building fresh packages...
[1/4] ⠐ fsevents
[-/4] ⠐ waiting...
[3/4] ⠐ node-sass
error /Users/testuser/apps/node_modules/node-sass: Command failed.
Exit code: 1
Command: node scripts/build.js
Arguments:
Directory: /Users/testuser/apps/node_modules/node-sass
Output:
Building: /opt/homebrew/Cellar/node/20.2.0/bin/node /Users/testuser/apps/node_modules/node-gyp/bin/node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library=
gyp info it worked if it ends with ok
gyp verb cli [
gyp verb cli '/opt/homebrew/Cellar/node/20.2.0/bin/node',
gyp verb cli '/Users/testuser/apps/node_modules/node-gyp/bin/node-gyp.js',
gyp verb cli 'rebuild',
gyp verb cli '--verbose',
gyp verb cli '--libsass_ext=',
gyp verb cli '--libsass_cflags=',
gyp verb cli '--libsass_ldflags=',
gyp verb cli '--libsass_library='
gyp verb cli ]
gyp info using node-gyp@3.8.0
gyp info using node@20.2.0 | darwin | arm64
gyp verb command rebuild []
gyp verb command clean []
gyp verb clean removing "build" directory
gyp verb command configure []
gyp verb check python checking for Python executable "python2" in the PATH
gyp verb `which` succeeded python2 /Users/testuser/.pyenv/shims/python2
gyp verb check python version `/Users/testuser/.pyenv/shims/python2 -c "import sys; print "2.7.18
gyp verb check python version .%s.%s" % sys.version_info[:3];"` returned: %j
gyp verb get node dir no --target version specified, falling back to host node version: 20.2.0
gyp verb command install [ '20.2.0' ]
gyp verb install input version string "20.2.0"
gyp verb install installing version: 20.2.0
gyp verb install --ensure was passed, so won't reinstall if already installed
gyp verb install version is already installed, need to check "installVersion"
gyp verb got "installVersion" 9
gyp verb needs "installVersion" 9
gyp verb install version is good
gyp verb get node dir target node version installed: 20.2.0
gyp verb build dir attempting to create "build" dir: /Users/testuser/apps/node_modules/node-sass/build
gyp verb build dir "build" dir needed to be created? /Users/testuser/apps/node_modules/node-sass/build
gyp verb build/config.gypi creating config file
gyp verb build/config.gypi writing out config file: /Users/testuser/apps/node_modules/node-sass/build/config.gypi
gyp verb config.gypi checking for gypi file: /Users/testuser/apps/node_modules/node-sass/config.gypi
gyp verb common.gypi checking for gypi file: /Users/testuser/apps/node_modules/node-sass/common.gypi
gyp verb gyp gyp format was not specified; forcing "make"
gyp info spawn /Users/testuser/.pyenv/shims/python2
gyp info spawn args [
gyp info spawn args '/Users/testuser/apps/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/Users/testuser/apps/node_modules/node-sass/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/testuser/apps/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/testuser/.node-gyp/20.2.0/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/Users/testuser/.node-gyp/20.2.0',
gyp info spawn args '-Dnode_gyp_dir=/Users/testuser/apps/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/Users/testuser/.node-gyp/20.2.0/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/Users/testuser/apps/node_modules/node-sass',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
gyp: Undefined variable standalone_static_library in binding.gyp while trying to load binding.gyp
gyp ERR! configure error
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/Users/testuser/apps/node_modules/node-gyp/lib/configure.js:345:16)
gyp ERR! stack at ChildProcess.emit (node:events:511:28)
gyp ERR! stack at ChildProcess._handle.onexit (node:internal/child_process:293:12)
gyp ERR! System Darwin 21.6.0
gyp ERR! command "/opt/homebrew/Cellar/node/20.2.0/bin/node" "/Users/testuser/apps/node_modules/node-gyp/bin/node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
gyp ERR! cwd /Users/testuser/apps/node_modules/node-sass
gyp ERR! node -v v20.2.0
gyp ERR! node-gyp -v v3.8.0
</code></pre>
|
<python><npm><node-modules><yarnpkg><gyp>
|
2023-06-08 07:56:31
| 1
| 3,723
|
aldrien.h
|
76,429,688
| 8,703,313
|
pandas: sequential merge adds new columns instead of replacing NaN values
|
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df_all = pd.DataFrame(columns=["uid", "a", "b"], data=[["uid1", 12, 15],
                                                       ["uid2", 13, 16],
                                                       ["uid3", 14, 17],
                                                       ["uid4", 15, 18]])
df_additional_info1 = pd.DataFrame(columns=["uid", "c", "d"], data=[["uid1", 12, 15],
                                                                    ["uid3", 14, 17]])
df_additional_info2 = pd.DataFrame(columns=["uid", "c", "d"], data=[["uid2", 12, 15]])
</code></pre>
<p>I need to merge <strong>df_all</strong> twice with additional information. First with df_additional_info1, then with df_additional_info2 and so on. <strong>They will always contain additional info for already existing rows and only for those rows that were not yet updated.</strong></p>
<p>When I do following:</p>
<pre class="lang-py prettyprint-override"><code>df_all = df_all.merge(df_additional_info1, how="left", on="uid")
df_all = df_all.merge(df_additional_info2, how="outer", on="uid")
</code></pre>
<p>I get duplicated columns (*_x, *_y):</p>
<p><a href="https://i.sstatic.net/vviX4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vviX4.png" alt="what I get" /></a></p>
<p>But I need this
<a href="https://i.sstatic.net/Memj0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Memj0.png" alt="what I need" /></a></p>
<p>Any suggestion?</p>
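Since each additional frame covers a disjoint set of uids, one option (a sketch, not the only approach) is to stack the add-on frames and merge once, which avoids the `*_x` / `*_y` suffixes entirely:

```python
import pandas as pd

df_all = pd.DataFrame(columns=["uid", "a", "b"],
                      data=[["uid1", 12, 15], ["uid2", 13, 16],
                            ["uid3", 14, 17], ["uid4", 15, 18]])
df_info1 = pd.DataFrame(columns=["uid", "c", "d"],
                        data=[["uid1", 12, 15], ["uid3", 14, 17]])
df_info2 = pd.DataFrame(columns=["uid", "c", "d"], data=[["uid2", 12, 15]])

# The add-on frames cover disjoint uids, so stack them and merge once:
# no duplicated columns, and uids without extra info keep NaN in c/d.
df_extra = pd.concat([df_info1, df_info2], ignore_index=True)
df_all = df_all.merge(df_extra, how="left", on="uid")

print(df_all)
```

If the frames truly must be merged one at a time, another route is to index everything by `uid` and use `combine_first`, which fills the NaN gaps instead of creating suffixed columns.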
|
<python><pandas><dataframe>
|
2023-06-08 07:49:40
| 2
| 310
|
Honza S.
|
76,429,590
| 1,473,517
|
How can I plot just the colorbar of a heatmap?
|
<p>I would like to plot just the colorbar for a heatmap. Here is a MWE:</p>
<pre><code>import seaborn as sns
import io
import matplotlib.pyplot as plt
import numpy as np
A = np.random.rand(100,100)
g = sns.heatmap(A)
plt.show()
</code></pre>
<p>How can I plot just its colorbar and not the heatmap itself?</p>
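One way (a sketch, not the only one): build a `ScalarMappable` carrying just the colormap and value range, and draw a colorbar into a dedicated figure. The colormap name and range below are assumptions; substitute whatever the heatmap actually used:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(1.2, 4))

# A ScalarMappable carries only the colormap + normalization of the heatmap;
# no heatmap axes are created at all.
sm = plt.cm.ScalarMappable(cmap="viridis", norm=plt.Normalize(vmin=0, vmax=1))
fig.colorbar(sm, cax=ax)
fig.savefig("colorbar_only.png", bbox_inches="tight")
```

With the seaborn heatmap from the question, the `vmin`/`vmax` passed to `Normalize` would be the data's actual limits (here `A.min()` / `A.max()`), and `cmap` the heatmap's colormap.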
|
<python><matplotlib><seaborn>
|
2023-06-08 07:35:28
| 2
| 21,513
|
Simd
|
76,429,520
| 1,473,517
|
Why isn't the full palette being used in an animated gif?
|
<p>I am making an animated gif of heatmaps using pillow. As there are a lot of different values and each frame can only have 256 colors I was hoping pillow would give me a different palette per frame. But it seems it isn't using most of the available colors. Here is a MWE:</p>
<pre><code>from PIL import Image
import seaborn as sns
import io
import matplotlib.pyplot as plt
import scipy
import math
import numpy as np
import imageio

vmin = 0
vmax = 0.4
images = []
for i in range(3):
    mu = 0
    variance = i + 0.1
    sigma = math.sqrt(variance)
    x = np.linspace(mu - 3*sigma, mu + 3*sigma, 400)
    row = scipy.stats.norm.pdf(x, mu, sigma)
    matrix = []
    for i in range(400):
        matrix.append(row)
    cmap = "viridis"
    hmap = sns.heatmap(matrix, vmin=vmin, vmax=vmax, cmap=cmap, cbar=False)
    hmap.set_xticks(range(0, 101, 4))
    buffer = io.BytesIO()
    plt.savefig(buffer, format='png')
    buffer.seek(0)
    images.append(Image.open(buffer))
    plt.clf()

images[0].save("out.gif", save_all=True, duration=1000, loop=1, append_images=images[1:])
</code></pre>
<p>The animated gif produced is:</p>
<p><a href="https://i.sstatic.net/pyU7G.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pyU7G.gif" alt="enter image description here" /></a></p>
<p>You can see that later frames use fewer than 256 colors. If I look at the palettes with</p>
<pre><code>identify -verbose out.gif|grep Colors
</code></pre>
<p>I get these:</p>
<ul>
<li>For the first frame: Colors: 46</li>
<li>For the second frame: Colors: 28</li>
<li>For the third frame: Colors: 19</li>
</ul>
<p>If I save png's instead we can see the third frame, for example, has many more than 19 colors.</p>
<p><a href="https://i.sstatic.net/o0QoV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o0QoV.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong?</p>
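One hedged workaround, assuming the color loss happens when GIF saving picks the palette: quantize each RGB frame yourself before appending, so every frame gets its own adaptive palette. The gradient image below is a hypothetical stand-in for the heatmap frames:

```python
from PIL import Image

# Build a 256-step gradient frame as a stand-in for one heatmap PNG.
frame = Image.new("RGB", (256, 32))
for x in range(256):
    for y in range(32):
        frame.putpixel((x, y), (x, 255 - x, (x * 3) % 256))

# Quantize the frame explicitly (median cut is, I believe, the default
# method for RGB input) instead of letting GIF saving choose the palette.
quantized = frame.quantize(colors=256)
palette_size = len(quantized.getcolors(maxcolors=256))
print(palette_size)
```

Saving the already-quantized `P`-mode frames with `save(..., save_all=True, append_images=...)` should then preserve each frame's own palette, though that behavior may vary by Pillow version.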
|
<python><matplotlib><python-imaging-library><seaborn><animated-gif>
|
2023-06-08 07:26:08
| 1
| 21,513
|
Simd
|
76,429,493
| 2,051,151
|
Cannot use AWS KMS key with AWS Encryption CLI but works with AWS Encryption SDK for Java
|
<p>We have implemented encryption/decryption on individual database fields using the AWS Encryption SDK for Java. We use an AWS KMS key for the task and this works as it should.</p>
<p>Now we also need to access the decrypted data from a Python utility script. Using the example code we wrote:</p>
<pre><code>def decrypt_string(key_arn, ciphertext, botocore_session=None):
    # Set up an encryption client with an explicit commitment policy. If you do not explicitly choose a
    # commitment policy, REQUIRE_ENCRYPT_REQUIRE_DECRYPT is used by default.
    client = aws_encryption_sdk.EncryptionSDKClient(commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT)

    # Create an AWS KMS master key provider
    kms_kwargs = dict(key_ids=[key_arn])
    if botocore_session is not None:
        kms_kwargs["botocore_session"] = botocore_session
    master_key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(**kms_kwargs)

    # Decrypt the ciphertext
    plaintext, decrypted_header = client.decrypt(source=ciphertext, key_provider=master_key_provider)
    print(plaintext)
</code></pre>
<p>The commitment policy is the same for the Java and Python implementation. When we invoke the function with the same ARN for the key and the raw cipher text we get the following error:</p>
<pre><code>aws_encryption_sdk.exceptions.NotSupportedError: Unsupported signing algorithm info
</code></pre>
<p>Thinking we might have done something wrong we also installed the AWS Encryption CLI and tried to decode the same cipher text:</p>
<pre><code>echo '<base64-encoded cipher text>' | \
  aws-encryption-cli \
    --decrypt \
    --decode \
    --wrapping-keys key=<KMS KEY ARN> \
    --commitment-policy require-encrypt-require-decrypt \
    -S \
    --input - \
    --output -
</code></pre>
<p>But this resulted in the same error.</p>
<p>Next we just tried to encode some sample text using the AWS Encryption CLI but again we got the same error.</p>
<p>The KMS key is described as:</p>
<pre><code>{
    "KeyMetadata": {
        "AWSAccountId": "<account-id>",
        "KeyId": "<key-id>",
        "Arn": "arn:aws:kms:eu-central-1:<account-id>:key/<key-id>",
        "CreationDate": "2023-06-07T10:35:18.466000+02:00",
        "Enabled": true,
        "Description": "",
        "KeyUsage": "ENCRYPT_DECRYPT",
        "KeyState": "Enabled",
        "Origin": "AWS_KMS",
        "KeyManager": "CUSTOMER",
        "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
        "KeySpec": "SYMMETRIC_DEFAULT",
        "EncryptionAlgorithms": [
            "SYMMETRIC_DEFAULT"
        ],
        "MultiRegion": false
    }
}
</code></pre>
<p>Is the key not created properly or am I missing something else?</p>
<p>Software installed with <code>pip</code>:</p>
<ul>
<li>aws-encryption-sdk: 3.1.1</li>
<li>aws-encryption-sdk-cli: 4.1.0</li>
<li>boto3: 1.26.148</li>
</ul>
<p>Software installed for the Java implementation:</p>
<ul>
<li>org.bouncycastle.bcprov-ext-jdk18on: 1.73</li>
<li>com.amazonaws.aws-encryption-sdk-java: 2.4.0</li>
</ul>
|
<python><java><amazon-web-services><encryption>
|
2023-06-08 07:22:21
| 1
| 581
|
Kees de Bruin
|
76,429,378
| 7,218,871
|
Pandas data manipulation, calculating a column value based on other rows of the same column
|
<p>I wish to do a data manipulation as follows in a pandas dataframe:</p>
<pre><code>a = {'idx': range(8),
'col': [47,33,23,33,32,31,22,5],
}
df = pd.DataFrame(a)
print(df)
idx col
0 47
1 33
2 23
3 33
4 32
5 31
6 22
7 5
</code></pre>
<p>My desired output is:</p>
<pre><code>idx col desired
0 47 14
1 33 10
2 23 -10
3 33 1
4 32 1
5 31 9
6 22 17
7 5 5
</code></pre>
<p>The calculation is as follows.</p>
<p><a href="https://i.sstatic.net/vO7Yp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vO7Yp.png" alt="enter image description here" /></a></p>
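The formula image above is not reproduced here, but the desired column matches each value minus the next row's value, with the last row keeping its own value. Assuming that reading, a short sketch:

```python
import pandas as pd

df = pd.DataFrame({"idx": range(8), "col": [47, 33, 23, 33, 32, 31, 22, 5]})

# col[i] - col[i+1]; the last row has no successor, so keep its own value
# (equivalent to treating the value after the end as 0).
df["desired"] = df["col"].diff(periods=-1).fillna(df["col"]).astype(int)

print(df["desired"].tolist())  # [14, 10, -10, 1, 1, 9, 17, 5]
```

`diff(periods=-1)` is just `df["col"] - df["col"].shift(-1)`, so the same idea extends to any row-relative calculation the image may actually specify.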
|
<python><python-3.x><pandas><dataframe>
|
2023-06-08 07:07:16
| 3
| 620
|
Abhishek Jain
|
76,429,328
| 10,396,491
|
State-action transformation of collected experience in stable baselines replay buffer
|
<p>I am working with Stable Baselines 3 applied to a very expensive problem. I have set everything up for maximum sample-efficiency already and would like to implement the method described in this article: <a href="https://arxiv.org/pdf/2111.03454.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2111.03454.pdf</a> Namely, for every step taken, I would like to apply geometric transformations in order to produce valid experiences and add them to the replay buffer together with the single experience that has actually been simulated. Does anyone know if there's an existing way to do this? I have reviewed the documentation and the replay buffer class, but I admit that the answer is still not obvious to me.</p>
|
<python><reinforcement-learning><stable-baselines>
|
2023-06-08 06:58:47
| 1
| 457
|
Artur
|
76,429,311
| 297,150
|
Django Keycloak Integration
|
<p>I am trying to integrate Keycloak (single sign-on) into the Django framework. There are a couple of packages I found online; however, none of them work as expected.</p>
<p>Is there any documentation or example source code from which I can get help? Thanks in advance.</p>
|
<python><django><keycloak>
|
2023-06-08 06:56:43
| 1
| 361
|
Yasir Arefin
|
76,429,292
| 9,907,733
|
Can we train Sentence transformer model for Sequence classification
|
<p>Can we use a fine-tuned Sentence Transformer model for fine-tuning <code>AutoModelForSequenceClassification</code>?</p>
<ol>
<li>I have a fine-tuned Sentence Transformer model that also has pooling layers.</li>
<li>Now I take this model and fine-tune it again with AutoModelForSequenceClassification() for binary classes.</li>
</ol>
<p>I only get this warning:</p>
<blockquote>
<p>Some weights of <code>BertForSequenceClassification</code> were not initialized from the model checkpoint at sent_tranf_model/ and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.</p>
</blockquote>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("sent_tranf_model/")
NUM_LABELS = len(idx_to_label)
model = AutoModelForSequenceClassification.from_pretrained("sent_tranf_model/", num_labels=NUM_LABELS)
</code></pre>
<p>Is it okay to do this?</p>
|
<python><nlp><huggingface-transformers><text-classification>
|
2023-06-08 06:53:15
| 1
| 1,546
|
MAC
|
76,429,277
| 956,424
|
In fuglu plugin, how can I send custom message for bounced email back to sender?
|
<p>When a mail gets REJECTED due to some policy, I would like to send a custom message from the plugin stating the reason for the bounce or rejection. How can this be achieved using fuglu version 0.10? E.g. I have created two lists of recipients as follows:</p>
<pre><code> if allow_group_list:
suspect.recipients = allow_group_list
self._logger().info(
'888 ==== allowed suspect.recipients: %s' %(suspect.recipients))
if deny_group_list:
suspect.recipients = deny_group_list
self._logger().info(
'9999 ====== denied suspect.recipients: %s' %(suspect.recipients))
self.send_bounce_message(suspect,deny_group_list,msubject)
self._mark_bounce_sent(message_id, deny_group_list,msubject)
return REJECT
return DUNNO
def send_bounce_message(self, suspect,mto,msubject):
blockinfo = 'Please add the specific user and resend'
queueid = suspect.get_tag('specificuser.bounce.queueid')
#suspect.set_tag(mto_address,suspect.recipient)
if queueid:
self._logger().info(
f'{suspect.id} already sent specific user block bounce '
f'to {suspect.from_address} with queueid {queueid}')
else: #tmm
self._logger().info(f"{suspect.id} sending specific user block "
f"bounce to {suspect.from_address}")
bounce = Bounce(self.config)
queueid = bounce.send_template_file(suspect.from_address,
self.template_bounce,
suspect,
{'blockinfo':blockinfo,
'mto':mto,
'msubject':msubject
})
suspect.set_tag('specificuser.bounce.queueid', queueid)
</code></pre>
<p>Is this possible in my custom plugin? As far as I know, a mail has only one return value. The above sends the actual mail to the recipients in both the allowed and the denied list, and sends a bounce to the denied list. Is anything wrong with this logic?</p>
<p><strong>Another option tried:</strong> I created two lists to allow and deny recipients. If mail is sent from a sender who exists in the denied list, the mail should bounce with a custom message. <strong>Used Fuglu 1.2.</strong> Settings in <strong>fuglu.conf</strong>:</p>
<pre><code>plugins=custom_msg
disablebounces=0

[PluginAlias]
custom_msg=custom_msg.CustomMessagePlugin

# custom_msg.py contains the plugin code:
from fuglu.shared import ScannerPlugin, DUNNO, DELETE, Suspect, REJECT

class CustomMessagePlugin(ScannerPlugin):
    def __init__(self, *args, **kwargs):
        ScannerPlugin.__init__(self, *args, **kwargs)
        self.allowed_recipients = ['test9@y.com']
        self.denied_recipients = ['test8@y.com', 'another@y.com']

    def examine(self, suspect):
        sender = suspect.from_address.lower()
        recipient = suspect.recipients[0].lower()
        if recipient in self.denied_recipients:
            # Send bounce message to the sender
            bounce_message = "Your email to {} has been rejected.".format(recipient)
            return REJECT, bounce_message
        else:
            return DUNNO

    def lint(self):
        return

    def __str__(self):
        return "Custom Plugin"
</code></pre>
<p>Mail sent from another@y.com to itself (i.e., to another@y.com) doesn't get a bounce mail. How should I go about this? Are there any syntax issues? Where can I get help with bounce customization in fuglu?</p>
|
<python>
|
2023-06-08 06:51:33
| 2
| 1,619
|
user956424
|
76,429,170
| 1,539,757
|
Reduce spawn process execution time from seconds to miliseconds in nodejs
|
<p>I want to execute Python code in a Node.js application, so I used the following:</p>
<pre><code>const { spawn } = require('node:child_process');
const process = spawn('python', ['-c', "python code content will come here"]);
process.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
</code></pre>
<p>It's working as expected but takes around a second to print the output to <strong>stdout</strong>. Some Python scripts return a large amount of data, so I used <strong>process.stdout</strong>.</p>
<p>Now my main concern is time: how can I reduce this time, which should be in milliseconds but is taking seconds to execute and print the output?</p>
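Most of the per-call latency is usually interpreter start-up rather than the script itself. A common mitigation, sketched here entirely in Python with the standard library so it is self-contained, is to spawn one long-lived worker and stream requests over stdin; the same idea applies with Node.js as the parent writing to `child.stdin` (the doubling worker below is purely illustrative):

```python
import subprocess
import sys

# A long-lived worker: reads one request per line, answers on stdout.
worker_src = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    print(int(line) * 2)\n"
    "    sys.stdout.flush()\n"
)

# Spawn once (the only time start-up cost is paid); afterwards each
# request is just a pipe round-trip, typically sub-millisecond.
proc = subprocess.Popen([sys.executable, "-c", worker_src],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        text=True)
proc.stdin.write("21\n")
proc.stdin.flush()
answer = proc.stdout.readline().strip()
print(answer)  # 42
proc.stdin.close()
proc.wait()
```

The Node.js side would do the equivalent with `spawn('python', ['worker.py'])` once at start-up and `child.stdin.write(...)` per request, rather than a fresh `spawn` per call.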
|
<python><node.js><spawn>
|
2023-06-08 06:33:57
| 2
| 2,936
|
pbhle
|
76,429,150
| 19,328,707
|
wFastCGI / Flask - Restarting webserver on IIS
|
<p>I'm building a web app that fetches data from an API and displays it. For that I'm using <code>Flask</code> and the <code>requests</code> library. Because the API is not well laid out, I need to make a bunch of API calls to get all the data I need.</p>
<p>Here is what the simplified folder structure looks like:</p>
<pre><code>app.py
api/
    api.py
</code></pre>
<p>To avoid overloading the API by sending hundreds of API requests on every <code>GET</code> request, I tried to implement a <code>function</code> that fetches the data on webserver start, stores it in a <code>variable</code>, and refreshes the data after a specific interval. Here is the simplified API <code>class</code> and refresh <code>function</code>:</p>
<pre><code>"""
The API class gets initialized on webserver start
"""
class API:
    def __init__(self):
        self.API_KEY = 'xxx-xxx'
        self.BASE_URL = 'https://xxxxxxxx.com/3'
        self.HEADER = {
            'X-Api-Key': f'{self.API_KEY}',
            'Accept': 'application/json'
        }
        self.session = requests.session()
        self.session.headers.update(self.HEADER)
        self.data = {}
        self.refresh_time = 900  # how long the function should wait until next refresh
        threading.Thread(target=self.refresh_data).start()

    def refresh_data(self):
        while True:
            self._refresh()  # fetches the data from the API and stores/refreshes it in self.data
            time.sleep(self.refresh_time)
</code></pre>
<p>I know it's probably not the best way to handle this, but in my <code>venv</code> it works without problems.</p>
<p>When I make this web app production-ready by deploying it to <code>Windows IIS</code> with <code>wFastCGI</code>, the webserver gets restarted randomly (I didn't notice any pattern), so the API class gets initialized multiple times, meaning the refresh <code>function</code> gets started multiple times.</p>
<p>Here is some logging of the webserver:</p>
<pre><code>2023-06-05 07:54:29,298 [MainThread ] [ <module>()] [INFO ] Setting up APIs... # Log from webserver
2023-06-05 07:54:29,299 [MainThread ] [ __init__()] [DEBUG] API Class init > debug log in API class
2023-06-05 07:54:29,377 [MainThread ] [ index()] [INFO ] GET from 192.168.18.125 # GET request
2023-06-05 07:54:30,001 [MainThread ] [ <module>()] [INFO ] Setting up APIs... # Log from webserver
2023-06-05 07:54:30,001 [MainThread ] [ <module>()] [INFO ] Setting up APIs... # Log from webserver
2023-06-05 07:54:30,001 [MainThread ] [ __init__()] [DEBUG] API Class init >
2023-06-05 07:54:30,001 [MainThread ] [ __init__()] [DEBUG] API Class init > debug log from the same API class
2023-06-05 07:54:30,002 [Thread-1 (_s] [ refresh_data()] [INFO ] Checking data...
2023-06-05 07:54:30,002 [Thread-1 (_s] [ refresh_data()] [INFO ] Checking data...
2023-06-05 07:54:30,006 [Thread-1 (_s] [ _refresh()] [INFO ] Refreshing data...
2023-06-05 07:54:30,007 [Thread-1 (_s] [ get_something()] [INFO ] Getting data...
</code></pre>
<p>I already did some research maybe this helps.</p>
<ol>
<li><a href="https://github.com/microsoft/PTVS/issues/4061" rel="nofollow noreferrer">wfastcgi GitHub question</a>: I thought the server was being restarted because I was writing the logs to a file in the webserver folder, so I wrote the logs outside the folder, but the server kept restarting (I also tried editing the web.config, but nothing worked for me)</li>
<li><a href="https://social.msdn.microsoft.com/Forums/azure/en-US/c34b9d41-ad67-40f0-88ea-8882d9f4b1f5/pythonweb-apps-web-app-and-fastcgi-restarts?forum=opensourcedevwithazure" rel="nofollow noreferrer">Microsoft dev network question</a>: a similar question I found</li>
</ol>
<p>Can anyone explain this behavior to me? I would appreciate any suggestions on how to handle a timed API call, or in other words a <code>queue</code>.</p>
<p><strong>EDIT:</strong></p>
<p>I found out that IIS has a feature which can load a website (or web app) on <code>demand</code> or keep the website <code>always running</code>.</p>
<p>Here is what i found <a href="https://forums.ivanti.com/s/article/IIS-Always-On-Application-Pool-Setting?language=en_US" rel="nofollow noreferrer">IIS - "Always On" Application Pool</a></p>
<p>But the feature has no impact on <code>wFastCGI</code>; the application is still restarting.</p>
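Separately from the IIS restart issue, a refresh-on-read cache avoids a background thread entirely, so a worker recycle only costs one re-fetch on the next request. A minimal sketch with the standard library (the `TTLCache` name and the fake fetch function are illustrative, not from any framework):

```python
import threading
import time

class TTLCache:
    """Refresh-on-read cache: the first request after `ttl` seconds re-fetches."""
    def __init__(self, fetch, ttl=900):
        self._fetch = fetch
        self._ttl = ttl
        self._lock = threading.Lock()
        self._data = None
        self._stamp = 0.0

    def get(self):
        with self._lock:
            if self._data is None or time.monotonic() - self._stamp > self._ttl:
                self._data = self._fetch()
                self._stamp = time.monotonic()
            return self._data

# Demo with a counting stand-in for the real API call:
calls = []
cache = TTLCache(lambda: calls.append(1) or {"n": len(calls)}, ttl=900)
first = cache.get()
second = cache.get()
print(first, second, "fetches:", len(calls))
```

In the Flask app, `cache.get()` would replace direct reads of `self.data` inside the request handlers; no thread needs to survive a process restart.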
|
<python><flask><wfastcgi>
|
2023-06-08 06:31:12
| 2
| 326
|
LiiVion
|
76,428,867
| 2,966,197
|
Llama-index how to execute search query against OpenSearch Elasticsearch index?
|
<p>I have this code where I am able to create an index in Opensearch Elasticsearch:</p>
<pre><code>def openes_initiate(file):
    endpoint = getenv("OPENSEARCH_ENDPOINT", "http://localhost:9200")
    # index to demonstrate the VectorStore impl
    idx = getenv("OPENSEARCH_INDEX", "llama-osindex-demo")
    UnstructuredReader = download_loader("UnstructuredReader")
    loader = UnstructuredReader()
    documents = loader.load_data(file=Path(file))
    # OpensearchVectorClient stores text in this field by default
    text_field = "content"
    # OpensearchVectorClient stores embeddings in this field by default
    embedding_field = "embedding"
    # OpensearchVectorClient encapsulates logic for a
    # single opensearch index with vector search enabled
    client = OpensearchVectorClient(endpoint, idx, 1536, embedding_field=embedding_field, text_field=text_field)
    # initialize vector store
    vector_store = OpensearchVectorStore(client)
    storage_context = StorageContext.from_defaults(vector_store=vector_store)
    # initialize an index using our sample data and the client we just created
    index = GPTVectorStoreIndex.from_documents(documents=documents, storage_context=storage_context)
</code></pre>
<p>The issue I am having is that once I have indexed the data, I am unable to reload it and serve a query against it. I tried to do this:</p>
<pre><code>def query(index, question):
    query_engine = index.as_query_engine()
    res = query_engine.query(question)
    print(res.response)
</code></pre>
<p>Here <code>index</code> is the one I created in the first piece of code, but it returns <code>None</code>.</p>
|
<python><elasticsearch><amazon-opensearch><llama-index>
|
2023-06-08 05:33:07
| 3
| 3,003
|
user2966197
|
76,428,783
| 6,724,526
|
Writing a tiff with colormap from rasterio
|
<p>I'm writing a single band tif using the code below. What I'm most interested in doing is writing a colormap for band 0 along with the tiff.</p>
<p>The code below is iterating through some other files and is successfully writing out each band of each input. I would like to be able to apply a colormap to each output. Below I'm trying to ask for viridis to be limited to 5 categories.</p>
<p>I would like to avoid explicitly importing GDAL, and this seems entirely possible.</p>
<p>So my question is:
<strong>How do I write out the file with the viridis colormap applied to it? If necessary, apply on band[0]</strong></p>
<pre><code>import xarray as xr
import rioxarray
import cftime
import os
import datetime
from rasterio.enums import Resampling
import matplotlib.pyplot as plt
ds_input = "inputfile.nc"
operatingdir = '\\'
filename = ds_input[:-3]
#determine the maximum number of bands in the raster using the band name in position 0 of the common_key list
bandcount = 7
#create a colormap for the output raster, with 5 colour bands
cmap = plt.cm.get_cmap('viridis', 5)
#use a for loop to output each day of the forecast
for i in range(0, bandcount):
    #Convert ds_netcdf_interp to raster using the max fbi values
    ds_input.var_name.rio.to_raster('{}\\Output\\{}_number{}.tif'.format(operatingdir, filename, i),
                                    windowed=True,
                                    compress='lzw',
                                    dtype='int16',
                                    nodata=-9999,
                                    overwrite=True,
                                    tiled=True,
                                    cloud_optimized=True,
                                    driver='COG',
                                    colormap=cmap
                                    )
</code></pre>
|
<python><matplotlib><tiff><python-xarray><colormap>
|
2023-06-08 05:14:07
| 1
| 1,258
|
anakaine
|
76,428,561
| 22,039,525
|
TypeError: WebDriver.__init__() got multiple values for argument 'options'
|
<p>Error is:</p>
<pre><code>TypeError: WebDriver.__init__() got multiple values for argument 'options'
</code></pre>
<p>The code is:</p>
<pre><code>chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
browser = webdriver.Chrome(r'/usr/bin/chromedriver', options=chrome_options)
</code></pre>
<p>This is the error:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-5-9a7e59e392ae> in <cell line: 6>()
4 chrome_options.add_argument('--headless')
5
----> 6 browser = webdriver.Chrome(r'/usr/bin/chromedriver', options=chrome_options)
TypeError: WebDriver.__init__() got multiple values for argument 'options'
</code></pre>
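This TypeError is what Selenium 4 raises when the driver path is passed positionally: in Selenium 4, the first positional parameter of <code>WebDriver.__init__</code> is <code>options</code>, so <code>webdriver.Chrome(path, options=...)</code> assigns <code>options</code> twice. The usual fix (assuming Selenium 4 is installed, which I cannot verify here) is <code>webdriver.Chrome(service=Service('/usr/bin/chromedriver'), options=chrome_options)</code> with <code>from selenium.webdriver.chrome.service import Service</code>. The mechanism in isolation, with a hypothetical stand-in function so the sketch runs without Selenium:

```python
# Stand-in for Selenium 4's signature: Chrome(options=None, service=None, ...)
def make_driver(options=None, service=None):
    return {"options": options, "service": service}

try:
    # Old Selenium 3 call style: driver path positionally + options by keyword.
    make_driver("/usr/bin/chromedriver", options={"headless": True})
except TypeError as exc:
    err = str(exc)
print(err)  # ...got multiple values for argument 'options'

# Selenium 4 style: the path goes into a Service object instead.
driver = make_driver(options={"headless": True}, service="/usr/bin/chromedriver")
print(driver)
```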
|
<python><python-3.x><selenium-webdriver><selenium-chromedriver><webdriver>
|
2023-06-08 04:11:08
| 3
| 311
|
Tor
|
76,428,259
| 1,631,414
|
Why do I get a ValueError in my Kafka Consumer if I seek to another position?
|
<p>I'm using Python 3.9.16 and kafka-python version 2.0.2. I'm running on my MacBook Pro, macOS 11.6.5.</p>
<p>I'm still getting my feet wet with Kafka so it's entirely possible I'm doing things the wrong way.</p>
<p>What I'm trying to do is test seeking to offsets with my consumer in case something doesn't get processed and I have to go back and re-read a message.</p>
<p>Anyway, I keep running into this error message. I'm not even sure why it's happening because sometimes I can process the offset and it works fine, other times, it gives me this message:</p>
<pre><code>ValueError: Error encountered when attempting to convert value: b'' to struct format: '<built-in method unpack of _struct.Struct object at 0x10bb669f0>', hit error: unpack requires a buffer of 4 bytes
</code></pre>
<p>When it's working, I can see this in pdb, which kinda proves that the values are present in the topic for me to consume:</p>
<pre><code>(Pdb)
> /Users/username/kafka/tkCons.py(41)<module>()
-> print ("{}, {}".format(blah.offset, blah.value))
(Pdb)
10, b'{"number": 10}'
> /Users/username/kafka/tkCons.py(40)<module>()
-> for blah in consumer:
(Pdb)
</code></pre>
<p>I wish I could narrow this down further, but I can't pin down which lines of code I added or commented out make it work versus give me the above error.
Since I'm not 100% sure what's happening under the hood, is my seeking around somehow affecting something in ZooKeeper? What do I need to do to keep whatever is happening under the hood happy? Here's my code in case it matters.</p>
<pre><code>from kafka import KafkaConsumer, TopicPartition
# To consume latest messages and auto-commit offsets
consumer = KafkaConsumer(#'my-topic333', 'my-topic222', 'my-topic',
group_id='my-group',
bootstrap_servers=['localhost:9092'])
myTP = TopicPartition('my-topic333', 0)
import pdb
pdb.set_trace()
consumer.assign([myTP])
print ("this is the consumer assignment: {}".format(consumer.assignment()))
print ("before this is my position: {} ".format(consumer.position(myTP)))
consumer.seek(myTP, 50)
#consumer.seek_to_beginning()
print ("after seeking this is my position: {} ".format(consumer.position(myTP)))
for blah in consumer:
    print ("{}, {}".format(blah.offset, blah.value))
</code></pre>
|
<python><apache-kafka><kafka-consumer-api><kafka-python>
|
2023-06-08 02:41:41
| 1
| 6,100
|
Classified
|
76,428,095
| 13,067,435
|
How to run a shell script file from python code in an ec2 instance in aws?
|
<p>I have some Python code for preprocessing/training. I have a shell script file (sample code below) that I would like to run from my Python script. How can I run this script from my Python main.py file? Also, what file permissions does my shell script need to have?</p>
<p>main.py</p>
<pre><code>import os
import subprocess
import venv
venv.create('a_venv', with_pip=True)
##invoke shell script here
</code></pre>
<p>somescript.sh</p>
<pre><code>#!/bin/bash
set -e
echo "how to run this"
....do something
</code></pre>
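A minimal sketch of invoking a script with the standard library's <code>subprocess</code> module. The execute bit (<code>chmod +x</code>) is only required if you run the script directly as <code>./somescript.sh</code>; invoking it through <code>bash</code> needs only read permission. The script is written to a temp directory here purely to make the example self-contained; in your case the path would point at your real <code>somescript.sh</code>:

```python
import os
import stat
import subprocess
import tempfile

script = '#!/bin/bash\nset -e\necho "how to run this"\n'

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "somescript.sh")
    with open(path, "w") as f:
        f.write(script)
    # Execute bit is only needed for direct execution (./somescript.sh);
    # `bash somescript.sh` works with read permission alone.
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

    result = subprocess.run(
        ["bash", path], capture_output=True, text=True, check=True
    )
    print(result.stdout.strip())
```

`check=True` raises `CalledProcessError` if the script exits non-zero, which usually matches what you want when the shell script uses `set -e`.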
|
<python><amazon-web-services><amazon-ec2><amazon-sagemaker>
|
2023-06-08 01:47:50
| 0
| 841
|
arve
|
76,428,082
| 284,932
|
How to create a Entity Ruler pattern that includes dot and hyphen?
|
<p>I am trying to include the Brazilian CPF as an entity in my NER app using spaCy. The current code is the following:</p>
<pre><code>import spacy
from spacy.pipeline import EntityRuler
nlp = spacy.load("pt_core_news_sm")
text = "João mora na Bahia, 22/11/1985, seu cpf é 111.222.333-11"
ruler = nlp.add_pipe("entity_ruler")
patterns = [
{"label": "CPF", "pattern": [{"SHAPE": "ddd.ddd.ddd-dd"}]},
]
ruler.add_patterns(patterns)
doc = nlp(text)
#extract entities
for ent in doc.ents:
    print (ent.text, ent.label_)
</code></pre>
<p>The result was only:</p>
<pre><code>João PER
Bahia LOC
</code></pre>
<p>I tried using regex too:</p>
<pre><code>{"label": "CPF", "pattern": [{"TEXT": {"REGEX": r"^\d{3}\.\d{3}\.\d{3}\-\d{2}$"}}]},
</code></pre>
<p>But that did not work either.</p>
<p>How can I fix this to retrieve the CPF?</p>
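As a sanity check outside spaCy, the regex itself does match the CPF format. If the plain-<code>re</code> test below passes but the spaCy pipeline still misses the entity, the usual suspect (an assumption about your pipeline that I cannot verify here) is that the <code>entity_ruler</code> was added after <code>ner</code>; trying <code>nlp.add_pipe("entity_ruler", before="ner")</code> may help:

```python
import re

# The same pattern used in the TEXT/REGEX token attribute, tested in isolation.
cpf_pattern = re.compile(r"^\d{3}\.\d{3}\.\d{3}\-\d{2}$")

tokens = ["João", "22/11/1985", "111.222.333-11", "111222333-11"]
matches = [t for t in tokens if cpf_pattern.match(t)]
print(matches)  # only the dotted-and-hyphenated form matches
```

If the regex passes here but not in spaCy, it is also worth printing `[t.text for t in nlp(text)]` to confirm the tokenizer keeps `111.222.333-11` as a single token.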
|
<python><named-entity-recognition><spacy-3>
|
2023-06-08 01:41:53
| 1
| 474
|
celsowm
|
76,428,049
| 15,239,717
|
How Can I perform Django Multiple Filter on a Model
|
<p>I am currently working on a Django project where I want users to perform a property search using different criteria like property type, state, minimum bedrooms, bathroom type, and minimum/maximum price. I want the search to be flexible so that the user chooses which fields to search on, rather than being forced to fill in every form field. I have tried my best: there is no error, but also no positive result; even when the search matches a Model record, it still does not display it.
Here is my Model code:</p>
<pre><code>class Property(models.Model):
    PROPERTY_TYPE_CHOICES = [
        ('Complete House', 'Complete House'),
        ('Apartment', 'Apartment'),
        ('Self-Contained', 'Self-Contained'),
    ]
    BEDROOM_CHOICES = [
        ('1', '1'),
        ('2', '2'),
        ('3', '3'),
        ('4', '4'),
        ('5', '5+'),
    ]
    BATHROOM_CHOICES = [
        ('Self-contained', 'Self-contained'),
        ('General', 'General'),
        ('Private Detached', 'Private Detached'),
    ]
    COUNTRY_CHOICES = [
        ('Nigeria', 'Nigeria'),
    ]
    STATE_CHOICES = [
        ('Abia', 'Abia'),
        ('Adamawa', 'Adamawa'),
        ('Akwa Ibom', 'Akwa Ibom'),
        ('Anambra ', 'Anambra '),
        ('Bauchi', 'Bauchi'),
        ('Bayelsa', 'Bayelsa'),
        ('Benue ', 'Benue '),
        ('Borno', 'Borno'),
        ('Cross River', 'Cross River'),
        ('Delta', 'Delta'),
        ('Ebonyi', 'Ebonyi'),
        ('Edo', 'Edo'),
        ('Ekiti', 'Ekiti'),
        ('Enugu', 'Enugu'),
        ('Gombe', 'Gombe'),
        ('Imo', 'Imo'),
        ('Jigawa', 'Jigawa'),
        ('Kaduna', 'Kaduna'),
        ('Kano', 'Kano'),
        ('Katsina', 'Katsina'),
        ('Kebbi', 'Kebbi'),
        ('Kogi', 'Kogi'),
        ('Kwara', 'Kwara'),
        ('Lagos', 'Lagos'),
    ]
    agent = models.ForeignKey(Agent, on_delete=models.SET_NULL, blank=True, null=True)
    landlord = models.ForeignKey(Landlord, on_delete=models.SET_NULL, blank=True, null=True)
    description = models.CharField(max_length=60, blank=True, null=True)
    property_type = models.CharField(max_length=20, choices=PROPERTY_TYPE_CHOICES)
    bedrooms = models.CharField(max_length=2, blank=True, null=True, choices=BEDROOM_CHOICES)
    bathroom_type = models.CharField(max_length=20, choices=BATHROOM_CHOICES)
    country = models.CharField(max_length=20, choices=COUNTRY_CHOICES)
    state = models.CharField(max_length=20, choices=STATE_CHOICES)
    state_lga = models.CharField(max_length=12, blank=True, null=True)
    address = models.CharField(max_length=60, null=True, blank=True)
    latitude = models.FloatField(null=True, blank=True)
    longitude = models.FloatField(null=True, blank=True)
    price = models.DecimalField(max_digits=10, decimal_places=2)
    is_available = models.BooleanField(default=True)
    image = models.ImageField(default='avatar.jpg', blank=False, null=False, upload_to='profile_images')
    last_updated = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.description
</code></pre>
<p>Form code:</p>
<pre><code>class SearchForm(forms.Form):
    property_type = forms.ChoiceField(choices=Property.PROPERTY_TYPE_CHOICES, required=False)
    state = forms.ChoiceField(choices=Property.STATE_CHOICES, required=False)
    min_beds = forms.CharField(max_length=10, required=False, widget=forms.TextInput(attrs={'placeholder': "Min Bedrooms"}))
    bathroom_type = forms.ChoiceField(choices=Property.BATHROOM_CHOICES, required=False)
    min_price = forms.DecimalField(max_digits=10, decimal_places=2, required=False, widget=forms.TextInput(attrs={'placeholder': "Min Price"}))
    max_price = forms.DecimalField(max_digits=10, decimal_places=2, required=False, widget=forms.TextInput(attrs={'placeholder': "Max Price"}))
</code></pre>
<p>View code:</p>
<pre><code># Home Page View.
def index(request):
    listings = Property.objects.filter(is_available=True).order_by('-last_updated')[:6]
    form = SearchForm(request.GET or None)

    if request.method == 'GET' and form.is_bound:
        if form.is_valid():
            cleaned_data = form.cleaned_data
            property = cleaned_data.get('property_type')
            state = cleaned_data.get('state')
            min_beds = cleaned_data.get('min_beds') or 0
            bathroom = cleaned_data.get('bathroom_type')
            min_price = cleaned_data.get('min_price')
            max_price = cleaned_data.get('max_price')

            # Filter properties based on search parameters
            properties = Property.objects.filter(is_available=True)
            if property:
                properties = properties.filter(property_type=property)
            if state:
                properties = properties.filter(state=state)
            if min_beds:
                properties = properties.filter(bedrooms__gte=min_beds)
            if bathroom:
                properties = properties.filter(bathroom_type=bathroom)
            if min_price:
                properties = properties.filter(price__gte=min_price)
            if max_price:
                properties = properties.filter(price__lte=max_price)

            # Apply sorting
            properties = properties.order_by('-last_updated')[:6]

            context = {
                'listings': properties,
                'page_title': 'Ease to Locate Ease to Rent... | UyaProp',
                'form': form,
            }
            return render(request, 'pages/index.html', context)

    context = {
        'listings': listings,
        'form': form,
    }
    return render(request, 'pages/index.html', context)
</code></pre>
<p>HTML Code:</p>
<pre><code><div class="search-property">
<form method="GET" action="{% url 'pages-index' %}">
{% csrf_token %}
<div class="row">
<div class="col-md-3">
<div class="single-search-property">
<div class="intro">
{{ form.property_type }}
</div>
</div>
</div>
<div class="col-md-3">
<div class="single-search-property">
<div class="intro">
{{ form.state }}
</div>
</div>
</div>
<div class="col-md-3">
<div class="single-search-property">
{{ form.min_beds }}
</div>
</div>
<div class="col-md-3">
<div class="single-search-property">
<div class="intro">
{{ form.bathroom_type }}
</div>
</div>
</div>
<div class="col-md-3">
<div class="single-search-property">
{{ form.min_price }}
</div>
</div>
<div class="col-md-3">
<div class="single-search-property">
{{ form.max_price }}
</div>
</div>
<div class="col-md-3">
<div class="single-search-property button">
<button type="submit">Search</button>
</div>
</div>
</div>
</form>
</div>
</code></pre>
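Separately from the form flow, one likely bug: <code>bedrooms</code> is a <code>CharField</code>, so <code>bedrooms__gte=min_beds</code> performs a string comparison in most databases, and string ordering diverges from numeric ordering once values reach two digits. A dependency-free sketch of the mismatch (storing bedrooms as an <code>IntegerField</code>, or casting in the query, would be the usual fix; which one fits your schema is your call):

```python
# String comparison orders character by character, so "5" > "10".
values = ["1", "2", "3", "10"]

as_strings = sorted(values)
as_numbers = sorted(values, key=int)
print(as_strings)   # ['1', '10', '2', '3']
print(as_numbers)   # ['1', '2', '3', '10']

# The same mismatch hits a CharField __gte lookup:
min_beds = "3"
string_filter = [v for v in values if v >= min_beds]            # drops "10"
numeric_filter = [v for v in values if int(v) >= int(min_beds)]  # keeps "10"
print(string_filter, numeric_filter)
```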
|
<python><django>
|
2023-06-08 01:30:09
| 1
| 323
|
apollos
|
76,428,038
| 11,922,765
|
ValueError: Data must be 1-dimensional, got ndarray of shape (6, 1) instead
|
<p>I wanted to create a new dataframe using index, X, y variable data.</p>
<pre><code>df_idx1 = [[3]
[4]
[5]
[6]
[7]
[8]]
X1 = [[[10]
[20]
[30]]
[[20]
[30]
[40]]
[[30]
[40]
[50]]
[[40]
[50]
[60]]
[[50]
[60]
[70]]
[[60]
[70]
[80]]]
y1 = [[[40]]
[[50]]
[[60]]
[[70]]
[[80]]
[[90]]]
print("Length of index, X, Y: ", len(df_idx1), len(X1), len(y1))
print("df_idx1",df_idx1)
print("X1",X1)
print("y1",y1)
exdf1 = pd.DataFrame(data={"X":np.array(X1),"y":np.array(y1)},index=df_idx1)
</code></pre>
<p>Present output:</p>
<pre><code>Length of index, X, Y: 6 6 6
ValueError: Data must be 1-dimensional, got ndarray of shape (6, 1) instead
</code></pre>
<p>Expected output:</p>
<pre><code>exdf1=
X y
3 [[10],[20],[30]] [[40]]
4 [[20],[30],[40]] [[50]]
5 ....
6
7
8 [[60],[70],[80]] [[90]]
</code></pre>
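The ValueError comes from handing pandas multi-dimensional ndarrays: <code>np.array(X1)</code> is 3-D, and a DataFrame column must be one-dimensional. A sketch (using the question's sample data) that keeps each row's nested list as a single Python object per cell and flattens the index to scalars, which matches the expected output layout:

```python
import numpy as np
import pandas as pd

df_idx1 = [[3], [4], [5], [6], [7], [8]]
X1 = [[[10], [20], [30]], [[20], [30], [40]], [[30], [40], [50]],
      [[40], [50], [60]], [[50], [60], [70]], [[60], [70], [80]]]
y1 = [[[40]], [[50]], [[60]], [[70]], [[80]], [[90]]]

# np.array(X1) is 3-D, which a DataFrame column cannot hold:
print(np.array(X1).shape, np.array(y1).shape)  # (6, 3, 1) (6, 1, 1)

# Keep each row's nested list as one object per cell instead,
# and flatten the index to plain scalars.
idx = [i[0] for i in df_idx1]
exdf1 = pd.DataFrame({"X": pd.Series(X1, index=idx, dtype=object),
                      "y": pd.Series(y1, index=idx, dtype=object)})
print(exdf1)
```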
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-06-08 01:23:48
| 2
| 4,702
|
Mainland
|
76,427,995
| 6,929,343
|
How do I clean up `sqlite_master` format in Python?
|
<p>My Python application is nearing completion (for three years) and I'm adding debugging information:</p>
<pre class="lang-py prettyprint-override"><code>print("\nSQL - Sqlite3 Information")
print("====================================\n")
print("Sqlite3 Version:", sql.sqlite3.sqlite_version, "\n")
rows = sql.con.execute("SELECT * FROM sqlite_master;").fetchall()
for row in rows:
    print(row, "\n")
</code></pre>
<p>The output is strange with tab (<code>\t</code>) and newline (<code>\n</code>) characters plus extraordinary amount of whitespace:</p>
<pre class="lang-bash prettyprint-override"><code>SQL - Sqlite3 Information
====================================
Sqlite3 Version: 3.11.0
(u'table', u'History', u'History', 7, u'CREATE TABLE History(Id INTEGER PRIMARY KEY, Time FLOAT, MusicId INTEGER, User TEXT, Type TEXT, Action TEXT, SourceMaster TEXT, SourceDetail TEXT, Target TEXT, Size INT, Count INT, Seconds FLOAT, Comments TEXT)')
(u'index', u'MusicIdIndex', u'History', 15, u'CREATE INDEX MusicIdIndex ON History(MusicId)')
(u'index', u'TimeIndex', u'History', 28, u'CREATE INDEX TimeIndex ON History(Time)')
(u'table', u'Music', u'Music', 1277, u'CREATE TABLE "Music" (\n\t`Id`\tINTEGER,\n\t`OsFileName`\tTEXT,\n\t`OsAccessTime`\tFLOAT,\n\t`OsModificationTime`\tFLOAT,\n\t`OsCreationTime`\tFLOAT,\n\t`OsFileSize`\tINT,\n\t`MetaArtistName`\tTEXT,\n\t`MetaAlbumName`\tTEXT,\n\t`MetaSongName`\tTEXT,\n\t`ReleaseDate`\tFLOAT,\n\t`OriginalDate`\tFLOAT,\n\t`Genre`\tTEXT,\n\t`Seconds`\tINT,\n\t`Duration`\tTEXT,\n\t`PlayCount`\tINT,\n\t`TrackNumber`\tTEXT,\n\t`Rating`\tTEXT,\n\t`UnsynchronizedLyrics`\tBLOB,\n\t`LyricsTimeIndex`\tTEXT,\n\tPRIMARY KEY(Id)\n)')
(u'index', u'OsFileNameIndex', u'Music', 2, u'CREATE UNIQUE INDEX OsFileNameIndex ON Music(OsFileName)\n\n')
(u'index', u'TypeActionIndex', u'History', 16, u'CREATE INDEX TypeActionIndex ON History(Type, Action)')
</code></pre>
<p>Am I making a rookie mistake when creating the Sqlite3 tables in Python?</p>
<pre class="lang-py prettyprint-override"><code> """ Open SQL Tables """
global con, cursor, hist_cursor
# con = sqlite3.connect(":memory:") # Initial tests, not needed anymore
con = sqlite3.connect(FNAME_LIBRARY)
# MUSIC TABLE - 'PlayCount' & 'Rating' not used
# Create the table (key must be INTEGER not just INT !
# See https://stackoverflow.com/a/7337945/6929343 for explanation
con.execute("create table IF NOT EXISTS Music(Id INTEGER PRIMARY KEY, \
OsFileName TEXT, OsAccessTime FLOAT, \
OsModificationTime FLOAT, OsCreationTime FLOAT, \
OsFileSize INT, MetaArtistName TEXT, MetaAlbumName TEXT, \
MetaSongName TEXT, ReleaseDate FLOAT, OriginalDate FLOAT, \
Genre TEXT, Seconds INT, Duration TEXT, PlayCount INT, \
TrackNumber TEXT, Rating TEXT, UnsynchronizedLyrics BLOB, \
LyricsTimeIndex TEXT)")
con.execute("CREATE UNIQUE INDEX IF NOT EXISTS OsFileNameIndex ON \
Music(OsFileName)")
# HISTORY TABLE
con.execute("create table IF NOT EXISTS History(Id INTEGER PRIMARY KEY, \
Time FLOAT, MusicId INTEGER, User TEXT, Type TEXT, \
Action TEXT, SourceMaster TEXT, SourceDetail TEXT, \
Target TEXT, Size INT, Count INT, Seconds FLOAT, \
Comments TEXT)")
con.execute("CREATE INDEX IF NOT EXISTS MusicIdIndex ON \
History(MusicId)")
con.execute("CREATE INDEX IF NOT EXISTS TimeIndex ON \
History(Time)")
con.execute("CREATE INDEX IF NOT EXISTS TypeActionIndex ON \
History(Type, Action)")
</code></pre>
<hr />
<h2>Success Using Variation of Accepted Answer:</h2>
<pre class="lang-bash prettyprint-override"><code>SQL - Sqlite3 Information
====================================
Sqlite3 Version: 3.11.0
(u'table', u'Music', u'Music', 2, u'CREATE TABLE Music(Id INTEGER PRIMARY KEY, OsFileName TEXT, OsAccessTime FLOAT, OsModificationTime FLOAT, OsCreationTime FLOAT, OsFileSize INT, MetaArtistName TEXT, MetaAlbumName TEXT, MetaSongName TEXT, ReleaseDate FLOAT, OriginalDate FLOAT, Genre TEXT, Seconds INT, Duration TEXT, PlayCount INT, TrackNumber TEXT, Rating TEXT, UnsynchronizedLyrics BLOB, LyricsTimeIndex TEXT)')
(u'index', u'OsFileNameIndex', u'Music', 3, u'CREATE UNIQUE INDEX OsFileNameIndex ON Music(OsFileName)')
(u'table', u'History', u'History', 4, u'CREATE TABLE History(Id INTEGER PRIMARY KEY, Time FLOAT, MusicId INTEGER, User TEXT, Type TEXT, Action TEXT, SourceMaster TEXT, SourceDetail TEXT, Target TEXT, Size INT, Count INT, Seconds FLOAT, Comments TEXT)')
(u'index', u'MusicIdIndex', u'History', 5, u'CREATE INDEX MusicIdIndex ON History(MusicId)')
(u'index', u'TimeIndex', u'History', 6, u'CREATE INDEX TimeIndex ON History(Time)')
(u'index', u'TypeActionIndex', u'History', 7, u'CREATE INDEX TypeActionIndex ON History(Type, Action)')
</code></pre>
<h3>Snippet of Code used:</h3>
<pre class="lang-py prettyprint-override"><code> # MUSIC TABLE - 'PlayCount' & 'Rating' not used
# Avoid \t & \n in sqlite_master. See: https://stackoverflow.com/questions/76427995/
# how-do-i-clean-up-sqlite-master-format-in-python
# Create the table (key must be INTEGER not just INT !
# See https://stackoverflow.com/a/7337945/6929343 for explanation
con.execute("create table IF NOT EXISTS Music(Id INTEGER PRIMARY KEY, " +
"OsFileName TEXT, OsAccessTime FLOAT, " +
"OsModificationTime FLOAT, OsCreationTime FLOAT, " +
"OsFileSize INT, MetaArtistName TEXT, MetaAlbumName TEXT, " +
"MetaSongName TEXT, ReleaseDate FLOAT, OriginalDate FLOAT, " +
"Genre TEXT, Seconds INT, Duration TEXT, PlayCount INT, " +
"TrackNumber TEXT, Rating TEXT, UnsynchronizedLyrics BLOB, " +
"LyricsTimeIndex TEXT)")
con.execute("CREATE UNIQUE INDEX IF NOT EXISTS OsFileNameIndex ON " +
"Music(OsFileName)")
</code></pre>
|
<python><sqlite>
|
2023-06-08 01:03:55
| 1
| 2,005
|
WinEunuuchs2Unix
|
76,427,940
| 130,288
|
How to get Python Notebook (Jupyter/Colab) to reliably display emoji-variants of unicode characters?
|
<p>The Unicode U+FE0F invisible variation-selector character will often – & dare I say is <em>supposed-to</em> – cause many specific preceding characters to adopt an 'emoji' presentation, with standard emoji-width.</p>
<p>But, trying to get such emoji-display in the output of a Jupyter Notebook – specifically in Google Colab's version – the display is inconsistent: <em>some</em>, but not <em>all</em>, characters with emoji-variants never show their emoji-version.</p>
<p>For example, a cell with code...</p>
<pre class="lang-py prettyprint-override"><code>copyright_plain = '©' # U+00A9
copyright_emoji = '©️' # U+00A9 U+FE0F
scissors_plain = '✂' # U+2702
scissors_var = '✂︎' # U+2702 U+FE0E
scissors_red = '✂️' # U+2702 U+FE0F
doubang_plain = '‼' # U+203C
doubang_var = '‼︎' # U+203C U+FE0E
doubang_emoji = '‼️' # U+203C U+FE0F
poo_emoji = '💩' # U+1F4A9
all = [copyright_plain, copyright_emoji, scissors_plain, scissors_var, scissors_red,
doubang_plain, doubang_var, doubang_emoji, poo_emoji]
print(f"list literal: {all}")
print(f"codes: {[('-'.join(hex(ord(c)) for c in chr)) for chr in all]}")
print(f"string: {''.join(all)}")
print("grid:")
for i in range(0, 11, 2):
    print(copyright_emoji*(i//2) + scissors_red*(10-i) + poo_emoji*(i//2))
</code></pre>
<p>…displays the following mixed & undesired output:</p>
<p><a href="https://i.sstatic.net/ieQcJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ieQcJ.png" alt="inconsistent emoji glyphs as seen in notebook output cell" /></a></p>
<p>Note that if I <em>paste</em> the cell output here to SO, even the <code>scissors_red</code> glyphs that appear properly there collapse back to plain variable-width black scissors here on SO:</p>
<pre>
list literal: ['©', '©️', '✂', '✂︎', '✂️', '‼', '‼︎', '‼️', '💩']
codes: ['0xa9', '0xa9-0xfe0f', '0x2702', '0x2702-0xfe0e', '0x2702-0xfe0f', '0x203c', '0x203c-0xfe0e', '0x203c-0xfe0f', '0x1f4a9']
string: ©©️✂✂︎✂️‼‼︎‼️💩
grid:
✂️✂️✂️✂️✂️✂️✂️✂️✂️✂️
©️✂️✂️✂️✂️✂️✂️✂️✂️💩
©️©️✂️✂️✂️✂️✂️✂️💩💩
©️©️©️✂️✂️✂️✂️💩💩💩
©️©️©️©️✂️✂️💩💩💩💩
©️©️©️©️©️💩💩💩💩💩
</pre>
<p>Ideally, the <code>©️</code> emoji-variant that's actually in the string would display as an emoji, making that 'grid' a perfect square, with each line exactly 10 emoji-widths across.</p>
<p>The in-notebook output is definitely retaining the necessary distinctions, without any visual impact, because if the same text is copied & pasted into a text view that works as desired – like say MacOS TextEdit – the desired display distinctions are seen:</p>
<p><a href="https://i.sstatic.net/dmKQQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dmKQQ.png" alt="proper emoji-output of exact same text as above in MacOS TextEdit" /></a></p>
<p>Is there any way to get Python notebook cells to fully-respect emoji-variants - at the very least in printed output cells, but also ideally in typed input (source code) cells, as well?</p>
|
<python><jupyter-notebook><google-colaboratory><emoji><python-unicode>
|
2023-06-08 00:45:43
| 0
| 54,493
|
gojomo
|
76,427,870
| 19,787,814
|
Jenkins - Create an agent with a python script
|
<p>As the title suggests, I want to automate the creation of a Jenkins agent using Python.</p>
<p>I don't have any experience with Jenkins or what it is even used for; I just got this task in my internship and I can't skip it, so all I've done so far is install Jenkins.</p>
<p>I'm sorry for not giving a code sample; I'm stuck at the very start of this task.
All I have is this Django view function:</p>
<pre><code>@login_required
def create(request):
    if request.method == 'POST':
        name = request.POST.get('form-submit')
        description = request.POST.get('employe-description')
        employee = Employee(user=request.user, name=name, description=description)
        employee.save()
        return redirect('create')
    return render(request, 'create.html')
</code></pre>
<p>And all I want is when the form is submitted, a jenkins agent will be created using the same name as the employee's from the request.</p>
<p>Thanks in advance.</p>
|
<python><django><jenkins>
|
2023-06-08 00:22:40
| 1
| 460
|
Nova
|
76,427,726
| 7,519,300
|
Pytorch Lightning - Display per class metrics (precision, recall, f1) in Train.Test(model, datamodule)
|
<p>How would I go about adding per-class metrics inside the <code>test_step</code> method of a PyTorch Lightning Module?</p>
<p>I used</p>
<pre><code> self.f1_each = F1Score(task="multiclass", num_classes=self.num_classes, average=None )
self.precision_each = Precision(task="multiclass", num_classes=self.num_classes, average=None )
self.recall_each = Recall(task="multiclass", num_classes=self.num_classes, average=None )
</code></pre>
<p>in order to compute them. Inside <code>test_step</code>, I've written:</p>
<pre><code>def test_step(...):
    class_precisions = self.precision_each(preds, labels)
    class_recalls = self.recall_each(preds, labels)
    class_f1 = self.f1_each(preds, labels)
    for i in range(self.num_classes):
        self.log(f'precision_{i}', class_precisions[i], on_step=False, on_epoch=True, prog_bar=True)
        self.log(f'recall_{i}', class_recalls[i], on_step=False, on_epoch=True, prog_bar=True)
        self.log(f'f1_{i}', class_f1[i], on_step=False, on_epoch=True, prog_bar=True)
</code></pre>
<p>However, this approach doesn't seem to return the correct values.</p>
<p>Am I making a mistake? Is there another recommended method?</p>
|
<python><machine-learning><deep-learning><pytorch><pytorch-lightning>
|
2023-06-07 23:42:01
| 0
| 315
|
Eduard6421
|
76,427,689
| 3,247,006
|
Cannot import the attribute from "abc.py" in Python
|
<p>I have <code>hello()</code> in <code>test/abc.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "test/abc.py"
def hello():
    return "Hello"
</code></pre>
<p>Then, I import and call <code>hello()</code> from <code>abc.py</code> in <code>test/main.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "test/main.py"
from abc import hello
print(hello())
</code></pre>
<p>But, I got the error below:</p>
<blockquote>
<p>ImportError: cannot import name 'hello' from 'abc'</p>
</blockquote>
<p>So, I import <code>abc.py</code> and call <code>abc.hello()</code> in <code>test/main.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "test/main.py"
import abc
print(abc.hello())
</code></pre>
<p>But, I got another error below:</p>
<blockquote>
<p>AttributeError: module 'abc' has no attribute 'hello'</p>
</blockquote>
<p>Actually, I can import <code>abc.py</code> without error as shown below:</p>
<pre class="lang-py prettyprint-override"><code>import abc
# print(abc.hello())
</code></pre>
<p>So, how can I import and call <code>hello()</code> from <code>abc.py</code>?</p>
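The name clash here is with the standard library's <code>abc</code> module (abstract base classes), which the interpreter itself imports at startup; by the time <code>main.py</code> runs, <code>sys.modules['abc']</code> already holds the stdlib module, so your <code>test/abc.py</code> is never loaded. Renaming the file (e.g. to <code>abc_utils.py</code>) is the clean fix; if the filename must stay, loading it explicitly by path under a non-clashing module name also works. A sketch against a temp file so the example is self-contained (the temp file stands in for <code>test/abc.py</code>):

```python
import importlib.util
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Stand-in for test/abc.py
    path = os.path.join(tmp, "abc.py")
    with open(path, "w") as f:
        f.write('def hello():\n    return "Hello"\n')

    # Load the file by path under a module name that does not shadow stdlib abc.
    spec = importlib.util.spec_from_file_location("local_abc", path)
    local_abc = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(local_abc)

print(local_abc.hello())
```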
|
<python><python-3.x><import><attributes><abc>
|
2023-06-07 23:29:27
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
76,427,496
| 9,067,589
|
Convert Binary File Data To ArrayBuffer Sent From Python As Byte-Array String
|
<p>I'm developing a Python App which uses PyQt5 and PyQtWebEngine to create a browser window. Using QWebChannel I'm able to send strings back and forth between the Web Application and the Python program. I want to use this feature to load in external files such as GLB Models.</p>
<p>I need to read the file in Python then send the byte data as a string to the Web Application where I convert it into an ArrayBuffer before it's usable. I'm loosely following the discussion <a href="https://discourse.threejs.org/t/looking-for-a-way-to-edit-a-glb-file-as-a-string-in-runtime/37745/25" rel="nofollow noreferrer">here</a> which does the exact same thing except that the file is read using FileReader in JavaScript instead of reading it with Python.</p>
<p>In my Python Code I'm doing this to read the file:</p>
<pre><code>@pyqtSlot(result=str)
def load_binary_data(self):
    file = open("model.glb", "rb")
    byte_array = str(file.read())
    return byte_array
</code></pre>
<p>With QWebChannel I can access the above Python function inside my Javascript code. To convert the string to an ArrayBuffer I'm using TextEncoder:</p>
<pre><code>bridge.load_binary_data((byteArrayString)=>{
var byteArray = new TextEncoder().encode(byteArrayString)
var arrayBuffer = byteArray.buffer;
})
</code></pre>
<p>With ThreeJS I'm supposed to be able to load a model as ArrayBuffer using <code>loader.parse()</code> like this:</p>
<pre><code>const loader = new GLTFLoader()
loader.parse( arrayBuffer, '', ( gltf ) => {
// do stuff
})
</code></pre>
<p>However when I try to load the Model I get this error:</p>
<pre><code>Unexpected token b in JSON at position 0
</code></pre>
<p>I'm guessing there is something wrong with how I send the byte array as a string from Python to my Web Application, as there is this <code>b''</code> wrapping around the entire string which <code>GLTFLoader().parse()</code> does not seem to like very much. I cannot figure out how to solve this issue; what would be the correct approach here?</p>
<p>How do I correctly convert the byte-array string sent from Python to an ArrayBuffer in my JavaScript?</p>
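The <code>b''</code> wrapper comes from calling <code>str()</code> on a bytes object: that produces Python's repr (<code>b'...'</code>), not the raw data, and TextEncoder on the JS side cannot undo it. The usual approach is to base64-encode on the Python side and decode in JavaScript, roughly <code>Uint8Array.from(atob(s), c => c.charCodeAt(0)).buffer</code> (that JS snippet is an assumption about your setup; only the Python half is shown runnable below, with the round-trip checked in Python):

```python
import base64

# Stand-in for open("model.glb", "rb").read()
raw = bytes([0x67, 0x6C, 0x54, 0x46, 0x02, 0x00])  # "glTF" magic + version bytes

# What the question's code does: repr of bytes, with the b'' wrapper.
print(str(raw))  # b'glTF\x02\x00' -- not usable on the JS side

# Send this string over QWebChannel instead:
b64 = base64.b64encode(raw).decode("ascii")
print(b64)

# Round-trip check (the JS side would do atob + Uint8Array instead):
assert base64.b64decode(b64) == raw
```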
|
<javascript><python><arraybuffer>
|
2023-06-07 22:34:38
| 1
| 1,247
|
Miger
|
76,427,444
| 15,542,245
|
Using class set value to access a list subset
|
<p>The <code>testList</code> example represents a list of strings from a text file. The <code>compareList</code> represents a list of values from a dictionary. I want to 'pull out' the subset of the list AFTER the match.</p>
<pre><code># Use a set result to print "fox here!"
testList = ['quick', 'brown', 'fox', 'here!']
compareList = ['beige', 'black', 'brown']
foundList = []
result = set(testList) & set(compareList)
# <class 'set'> {'brown'}
</code></pre>
<p>How can I use the <code>compareList</code> value 'brown' to print the remainder of <code>testList</code>, e.g. <code>['fox', 'here!']</code>, as the resulting <code>foundList</code>?</p>
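The set intersection tells you which word matched, but sets discard order, so the position has to come from the original list; <code>list.index</code> on the matched word gives it. A minimal sketch:

```python
testList = ['quick', 'brown', 'fox', 'here!']
compareList = ['beige', 'black', 'brown']

foundList = []
result = set(testList) & set(compareList)  # {'brown'}

for word in result:
    # Everything in testList after the first occurrence of the matched word.
    foundList = testList[testList.index(word) + 1:]

print(foundList)  # ['fox', 'here!']
```

Note that `index` finds only the first occurrence; if the same word can appear more than once, or if several words can match, the slicing rule would need to be pinned down first.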
|
<python><list>
|
2023-06-07 22:20:32
| 1
| 903
|
Dave
|
76,427,427
| 14,820,295
|
split column with values inside and outside brackets python
|
<p>I need to split (in python code) my column "code" into 2 columns:</p>
<ul>
<li>"outside" with value outside the brackets</li>
<li>"inside" with value inside the brackets</li>
</ul>
<p>I'd also create a "prepared" column by adding a " + " separator after each letter that precedes a number.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>code</th>
<th>outside</th>
<th>inside</th>
<th>prepared</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>-(83C24H)</td>
<td>-</td>
<td>83C24H</td>
<td>83C + 24H</td>
</tr>
<tr>
<td>2</td>
<td>30(30C14H)</td>
<td>30</td>
<td>30C14H</td>
<td>30C + 14H</td>
</tr>
<tr>
<td>3</td>
<td>25</td>
<td>25</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>Thank you!</p>
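A sketch with pandas <code>str.extract</code> on the three sample rows from the table. It assumes codes always look like an optional prefix followed by an optional bracketed part, with rows lacking brackets mapped to "0" as shown, and that "prepared" means inserting " + " after each letter that is immediately followed by a digit:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3],
                   "code": ["-(83C24H)", "30(30C14H)", "25"]})

# Text before the (optional) "(": everything up to the first "(".
df["outside"] = df["code"].str.extract(r"^([^(]*)", expand=False)
# Text inside the brackets; rows without brackets get "0".
df["inside"] = df["code"].str.extract(r"\(([^)]*)\)", expand=False).fillna("0")
# Insert " + " after each letter that is followed by a digit.
df["prepared"] = df["inside"].str.replace(r"([A-Za-z])(?=\d)", r"\1 + ", regex=True)
print(df)
```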
|
<python><string><split>
|
2023-06-07 22:17:38
| 1
| 347
|
Jresearcher
|
76,427,425
| 1,107,474
|
Python make a POST request after the connection is already established
|
<p>Earlier I asked a question about how to create a connection before calling POST, GET, etc. using the Python Requests library:</p>
<p><a href="https://stackoverflow.com/questions/76426442/create-connection-using-python-request-library-before-calling-post-get-etc?noredirect=1#comment134763838_76426442">Create connection using Python Request library before calling post/get etc</a></p>
<p>It turns out I couldn't.</p>
<p>I'm now using the httpclient library and have this:</p>
<pre><code>import http.client as httplib
params = my_params()
headers = {"Connection": "Keep-Alive"}
conn = httplib.HTTPSConnection("api.domain.com:443")
conn.request("POST", "/api/service", params, headers)
response = conn.getresponse()
</code></pre>
<p>I was hoping it would establish the connection first. However, the line:</p>
<pre><code>conn = httplib.HTTPSConnection("api.domain.com:443")
</code></pre>
<p>is only taking 2ms to complete and I know the latency to the recipient is approximately 95ms. So, I presume this line isn't creating the connection and it's still being created along with the POST.</p>
<p>How can I make a POST request in Python, where the connection is already established before making the POST request?</p>
<p>Something like:</p>
<pre><code>conn = https.connect("api.domain.com:443")
# Connection established before making POST
conn.request("POST", "/api/service", params, headers)
</code></pre>
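With <code>http.client</code>, the constructor only stores the host; the TCP (and, for <code>HTTPSConnection</code>, TLS) handshake is deferred until the first request unless you call <code>conn.connect()</code> explicitly, which is why the constructor returns in 2 ms. A self-contained sketch against a throwaway local server; your real code would use <code>HTTPSConnection("api.domain.com", 443)</code> with the same <code>connect()</code> call, which is the part to verify against your endpoint:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"echo:" + body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.connect()  # TCP handshake happens here, before any request is sent
conn.request("POST", "/api/service", body=b"params",
             headers={"Connection": "keep-alive"})
resp = conn.getresponse()
payload = resp.read()
print(resp.status, payload)

conn.close()
server.shutdown()
```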
|
<python><rest><http><post><get>
|
2023-06-07 22:16:28
| 1
| 17,534
|
intrigued_66
|
76,427,406
| 10,146,441
|
Pydantic is not reducing set to its unique items
|
<p>I have two pydantic models such that the <code>Child</code> model is part of the <code>Parent</code> model.
For both models the unique field is the <code>name</code> field. However, when I create two <code>Child</code> instances with the same <code>name</code> (<code>"Child1"</code>), the <code>Parent.children</code> set fails to deduplicate the children with the same name.</p>
<p>I am using Python 3.11.</p>
<pre><code>import pydantic


class Child(pydantic.BaseModel):
    name: str

    __hash__ = object.__hash__

    def __str__(self):
        cls_name = self.__class__.__name__
        return f"{cls_name}({self.name})"

    # def __cmp__(self, other):
    #     return self.name == other.name


class Parent(pydantic.BaseModel):
    name: str
    children: set[Child] = set()

    def __str__(self):
        cls_name = self.__class__.__name__
        return f"{cls_name}({self.name}, children={self.children})"


data = dict(
    name="test",
    children=set([
        Child(name="Child1"),
        Child(name="Child1"),
    ])
)

print(Parent(**data))
# Result: Parent(test, children={Child(name='Child1'), Child(name='Child1')})
# desired result is to have a single `Child` because both children have the same `name`.
</code></pre>
<p><code>Result: Parent(test, children={Child(name='Child1'), Child(name='Child1')})</code>
The desired result is to have a single <code>Child</code> because both children have the same <code>name</code>.</p>
<p>Where am I going wrong?</p>
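A set deduplicates through both <code>__hash__</code> and <code>__eq__</code>, and it checks the hash first. <code>__hash__ = object.__hash__</code> is identity-based, so two <code>Child(name="Child1")</code> instances hash differently, land in different buckets, and both survive even if they compare equal. Defining both methods from <code>name</code> should fix it; the contract is shown below with a plain class to keep the sketch dependency-free, and my assumption is that the same two methods on the pydantic <code>Child</code> model behave identically:

```python
class Child:
    def __init__(self, name):
        self.name = name

    # Hash and equality must agree: equal objects must have equal hashes.
    def __eq__(self, other):
        return isinstance(other, Child) and self.name == other.name

    def __hash__(self):
        return hash(self.name)

    def __repr__(self):
        return f"Child({self.name!r})"

children = {Child("Child1"), Child("Child1"), Child("Child2")}
print(children)  # two entries, not three
```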
|
<python><python-3.x><set><pydantic>
|
2023-06-07 22:09:57
| 1
| 684
|
DDStackoverflow
|
76,427,399
| 10,500,957
|
Hierarchy in TKinter Treeview
|
<p>I have been trying to create a Hierarchical TreeView with data from a database in Gtk but have been unsuccessful (<a href="https://stackoverflow.com/questions/76410276/creating-a-hierarchical-gtk-treestore-with-unkown-dataset-length">Creating a Hierarchical Gtk.TreeStore with Unkown Dataset Length</a>). I discovered that in TKinter I can almost do what I want, but there are a couple of problems.</p>
<ol>
<li>If a node has an identical value to another, Python throws up the following error:</li>
</ol>
<pre><code> add_node(k, v, x)
File "/home/bigbird/tvTk3.py", line 10, in add_node
tree.insert(k, 1, i, text=i,values=[myList2[x][1]])
File "/usr/lib/python3.10/tkinter/ttk.py", line 1361, in insert
res = self.tk.call(self._w, "insert", parent, index,
_tkinter.TclError: Item Electric already exists
</code></pre>
<p>Is it possible to have nodes with the same name? And what do I change to avoid this?</p>
<ol start="2">
<li>I want the number associated with each value to be inserted in column 1 of the treeview. But the line <code>values=[myList[x][1]]</code> does not change with each new node. How do I get the associated number in the row?</li>
</ol>
<p>Could somebody help me get this going? Thank you very much.</p>
<pre><code>import tkinter as tk
from tkinter import ttk as ttk
myList = [['Board', 71], ['Book', 8], ['Breadboard', 6], ['Cables', 48], ['Capacitor', 9], ['Capacitor | Ceramic', 10], ['Capacitor | Electrolytic', 11], ['Circuits', 73], ['Circuits | 555 Timer', 77], ['Circuits | Audio', 76], ['Connector', 12], ['Connectors', 49], ['Drill', 54], ['Drill | Electric', 56], ['Drill | Manual', 55], ['Screwdriver', 32], ['Screwdriver | Electric', 58], ['Screwdriver | Manual', 57], ['Veraboard', 7], ['Wire', 35], ['Wire | Jumper', 36], ['Wire | Solid Core', 37], ['Wire | Stranded', 38]]
# MyList2 replaces ['Screwdriver | Electric', 58], ['Screwdriver | Manual', 57], of MyList with ['Screwdriver | Electrical', 58], ['Screwdriver | Hand', 57],
myList2 = [['Board', 71], ['Book', 8], ['Breadboard', 6], ['Cables', 48], ['Capacitor', 9], ['Capacitor | Ceramic', 10], ['Capacitor | Electrolytic', 11], ['Circuits', 73], ['Circuits | 555 Timer', 77], ['Circuits | Audio', 76], ['Connector', 12], ['Connectors', 49], ['Drill', 54], ['Drill | Electric', 56], ['Drill | Manual', 55], ['Screwdriver', 32], ['Screwdriver | Electrical', 58], ['Screwdriver | Hand', 57], ['Veraboard', 7], ['Wire', 35], ['Wire | Jumper', 36], ['Wire | Solid Core', 37], ['Wire | Stranded', 38]]
# /57036493/
def add_node(k, v,x):
for i, j in v.items():
tree.insert(k, 1, i, text=i,values=[myList[x][1]]) # MyList2 will work, MyList does not
if isinstance(j, dict):
add_node(i, j,x)
# /59767830/, /52971687/
tree = {}
for path in myList: # MyList2 will work, MyList does not
node = tree
for level in path[0].split(' | '):
if level:
node = node.setdefault(level, dict())
# /57036493/
hierarchy = tree
root = tk.Tk()
root.geometry("900x900")
tree = ttk.Treeview(root)
ttk.Style().configure('Treeview', rowheight=30)
tree["columns"] = ("one")
tree.column("one")
x=0
for k, v in hierarchy.items():
tree.insert("", 1, k, text=k, values=[myList[x][1]]) # MyList2 will work, MyList does not
add_node(k, v, x)
x+=1
tree.pack(expand=True, fill='both')
root.mainloop()
</code></pre>
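<p>As a pure-data sketch (no GUI required), one way around the collision is to use the full path as the Treeview <code>iid</code> instead of the leaf text, and to keep each row's number next to its iid. The list here is a trimmed subset of <code>myList</code>:</p>

```python
# sketch: derive unique tree item ids from the full path rather than the leaf
# text, so two leaves named "Electric" under different parents never collide
myList = [['Drill', 54], ['Drill | Electric', 56],
          ['Screwdriver', 32], ['Screwdriver | Electric', 58]]

nodes = {}          # iid -> [parent iid, display text, associated number]
for path, number in myList:
    parts = path.split(' | ')
    for depth in range(len(parts)):
        iid = ' | '.join(parts[:depth + 1])       # unique: the full path
        parent = ' | '.join(parts[:depth])        # '' means top level
        nodes.setdefault(iid, [parent, parts[depth], None])
    nodes[path][2] = number       # the number belongs to this exact row

# in Tk this becomes, per iid (parents first, which the order above guarantees):
# tree.insert(parent, 'end', iid=iid, text=text, values=[number])
```

<p>This also fixes the second problem: the number is looked up per row rather than from a stale <code>x</code> index.</p>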
|
<python><tkinter><treeview>
|
2023-06-07 22:09:27
| 2
| 322
|
John
|
76,427,340
| 16,674,436
|
Dockerize a python application for distribution
|
<p>I created a small python application that automatically sends emails with some customization.</p>
<p>I want to distribute it in a way that is fairly easy for end users (i.e., general Windows and macOS users) to download and use.</p>
<p>I thought of bundling the app as a Docker image, but I am running into technical problems.</p>
<p>The main one is that I rely on the <code>yagmail</code> library, which itself relies on the <a href="https://pypi.org/project/keyring/" rel="nofollow noreferrer"><code>keyring</code></a> library. It works great outside of docker, but within docker it’s a different story.</p>
<p>It requires a keyring backend that is usually installed by default on platforms. However, within the docker container, there is no keyring backend.</p>
<ol>
<li><p>I struggle to install a keyring backend in the Docker image without making it multiple GB in size.</p>
</li>
<li><p>Even once installed, it seems quite a hassle if the end users have to re-enter their app-specific password (as Google requires it) again and again, every time they want to use the app. (I mean when they close/remove the Docker container because they have no use for it during a week or so, and then relaunch it, having to re-enter an app-specific password.)</p>
</li>
</ol>
<p>More broadly, I know docker would probably make it simpler for me to distribute the app to different platforms, but I wonder if for the end user it is the easiest to download/use the application. I’d like your take on it.</p>
<p>Here is my <code>dockerfile</code> for reference:</p>
<pre><code>FROM python:3.10
WORKDIR /app
# RUN apt update -y && apt-get install -y kwalletmanager # This did not work
RUN pip3 install --no-cache-dir yagmail[all]
COPY iteramail.py .
CMD ["python3", "iteramail.py", "--help", "run", "--host=0.0.0.0"]
</code></pre>
|
<python><docker><email><yagmail><keyring>
|
2023-06-07 21:54:39
| 0
| 341
|
Louis
|
76,427,133
| 13,538,030
|
Aggregate rows per its text pattern using Python
|
<p>I am working on an interesting text mining (maybe text pattern recognition) problem. The dataset has two columns as follows:</p>
<pre class="lang-none prettyprint-override"><code>column1 (string) column2 (integer)
'ABC' 3
'DEF' 4
'abc' 1
'abc:very specific message' 1
...
</code></pre>
<p>The first column is input message, while the second one is its count. For many of the input messages, they are in fact the same, but different with regard to the details, e.g., the third row and the fourth row deliver the same message, but the fourth row has more specific information compared to the third one.</p>
<p>How can I create an algorithm to aggregate those rows with high similarity? By the end of the day, the aggregated data would look something like this:</p>
<pre class="lang-none prettyprint-override"><code>column1 (string) column2 (integer)
'ABC' 3
'DEF' 4
'abc' 2
</code></pre>
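<p>A minimal sketch of one possible approach using the stdlib's <code>difflib</code>: treat a row as a duplicate of an earlier key when one message is a prefix of the other, or when the similarity ratio clears a threshold (the 0.8 here is an arbitrary assumption to tune):</p>

```python
import difflib

rows = [('ABC', 3), ('DEF', 4), ('abc', 1), ('abc:very specific message', 1)]

def base_key(text, known):
    # map a message onto an already-seen key when one is a prefix of the
    # other, or when the two are highly similar (threshold is a guess to tune)
    for k in known:
        if text.startswith(k) or k.startswith(text):
            return k
        if difflib.SequenceMatcher(None, text, k).ratio() >= 0.8:
            return k
    return text                      # nothing similar seen yet: new key

agg = {}
for text, count in rows:
    key = base_key(text, agg)
    agg[key] = agg.get(key, 0) + count
```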
|
<python><python-3.x><text-mining>
|
2023-06-07 21:11:34
| 1
| 384
|
Sophia
|
76,427,131
| 6,087,667
|
Element-wise multiply of two numpy array of different shapes
|
<p>I have two numpy arrays <code>F</code> and <code>C</code> of dimensions <code>NxM</code> and <code>MxB</code>. How can I get a matrix with elements <code>Result[n,m,b] = F[n,m]*C[m,b]</code> with dimensions <code>NxMxB</code> in an efficient manner?</p>
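<p>A sketch of what I believe does this via NumPy broadcasting: insert singleton axes so the shared <code>M</code> axis lines up, and no explicit loops are needed:</p>

```python
import numpy as np

N, M, B = 4, 5, 3
F = np.random.rand(N, M)
C = np.random.rand(M, B)

# align the shared M axis via singleton dimensions:
# (N, M, 1) * (1, M, B) broadcasts to (N, M, B)
result = F[:, :, None] * C[None, :, :]
```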
|
<python><numpy><matrix-multiplication>
|
2023-06-07 21:10:58
| 1
| 571
|
guyguyguy12345
|
76,427,084
| 1,068,223
|
Streaming high frequency data with Python requests API - latency issues
|
<p>I am using requests to subscribe to a high frequency data stream. During busy times, there is a significant latency between the timestamp of the messages being sent to me and the timestamp of when I am able to process them in my script. During <strong>quiet times</strong>, this latency is consistently around <strong>500 ms</strong>. During <strong>busy times</strong>, it rises to over <strong>50 seconds</strong>. Here are some more observations:</p>
<ol>
<li><p>I looked at the resource usage of my machine during busy times and CPU load and memory hardly rise.</p>
</li>
<li><p>The latency during busy times begins (when I started my script) at around <1s but as the script runs, the latency increases to 50s. Therefore, this latency is not inherent to the sender of the data but to some processing going on in my script. <strong>As my script runs, the latency gets higher and higher.</strong></p>
</li>
</ol>
<p>Therefore, I am concluding that the problem is with my processing of the data. Here is what I am doing to process the data.</p>
<p>The function essentially sends dict objects back to a callback for further processing. Each dict is a JSON dict being sent by the streaming API.</p>
<pre><code>def receive_stream(callback):
s = requests.Session()
with s.get(...stream=True) as resp:
buffer = ''
for line in resp.iter_lines():
line = line.decode('utf-8')
json_dict = None
if line == '}':
json_dict = buffer + line
buffer = ''
else:
buffer = buffer + line
if json_dict is not None:
parsed_json = json.loads(json_dict)
if parsed_json['type'] != 'hb':
                    t = Thread(target=callback, args=(parsed_json,))  # args must be a tuple
t.start()
</code></pre>
<p><strong>Note:</strong> The callback function measures the latency over every 50 messages or so (takes a mean), and calculates it as <code>datetime.datetime.now()</code> minus the timestamp in the JSON dict being sent to it.</p>
<p>If I measure the latency in this function above AND remove the callback, it makes little difference -- same observations apply. Therefore, the downstream processing is not the issue (plus I am sending it off to another thread, so it shouldn't be)</p>
<p><strong>My questions:</strong></p>
<ol>
<li><p>Is the way I am processing the incoming lines of data inherently inefficient, so that during busy times, there is a big backlog of lines that are unprocessed? Could it be the json.loads() or line.decode() <--- I have to call the latter?</p>
</li>
<li><p>Is the way I am using threads, the problem? I don't think the downstream processing is particularly costly, it just sends a message using zmq and measures latency and removing the callback altogether, makes little difference to this problem. Should I be using a queue?</p>
</li>
</ol>
<p>Thanks</p>
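<p>One direction worth testing (an assumption, not a diagnosis): spawning a <code>Thread</code> per message is costly at high rates; a single worker draining a <code>queue.Queue</code> keeps the reader loop cheap. A self-contained sketch with fake messages:</p>

```python
import json
import queue
import threading

q = queue.Queue()
processed = []             # stand-in for whatever callback() does

def worker():
    while True:
        parsed = q.get()
        if parsed is None:             # sentinel: shut the worker down
            break
        processed.append(parsed)       # callback(parsed) would go here
        q.task_done()

threading.Thread(target=worker, daemon=True).start()

# the reader loop now only parses and enqueues; it never waits on downstream work
for raw in ('{"type": "tick", "n": 1}', '{"type": "hb"}', '{"type": "tick", "n": 2}'):
    parsed = json.loads(raw)
    if parsed["type"] != "hb":
        q.put(parsed)

q.join()                   # block until the worker has drained the queue
q.put(None)
```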
|
<python><json><http><python-requests><streaming>
|
2023-06-07 20:59:48
| 1
| 1,478
|
BYZZav
|
76,427,082
| 1,473,517
|
How to make an animated set of images with a working duration per image
|
<p>I want to make an animated set of images in python. I can't use an animated gif as I need more than 256 colors. I have tried an animated png but it seems the duration argument doesn't work. Here is a MWE:</p>
<pre><code>import seaborn as sns
import io
import matplotlib.pyplot as plt
import scipy
import math
import numpy as np
import imageio
vmin = 0
vmax = 0.4
images = []
for i in range(3):
mu = 0
variance = i+0.1
sigma = math.sqrt(variance)
x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)
row = scipy.stats.norm.pdf(x, mu, sigma)
matrix = []
for _ in range(100):
matrix.append(row)
cmap = "viridis"
hmap = sns.heatmap(matrix, vmin=vmin, vmax=vmax, cmap=cmap)
hmap.set_xticklabels([*range(0, 100, 4)])
cbar = hmap.collections[0].colorbar
cbar.set_label("Colorbar Label", labelpad=10) # Set the label and adjust the spacing using labelpad
plt.savefig(f"image_{i}.png", format='png')
plt.close()
images.append(imageio.v3.imread(f"image_{i}.png"))
imageio.mimwrite("out.png", images, duration=4)
</code></pre>
<p>Is there way to set the duration, or alternatively, is there another way to make an animated set of images that doesn't require any external tools other than python?</p>
|
<python><python-imageio><apng>
|
2023-06-07 20:59:00
| 1
| 21,513
|
Simd
|
76,427,014
| 3,375,695
|
Python asyncio collect info from multiple tasks at timeout, then continue in parallel to main until all tasks finished
|
<p>My use case:</p>
<ol>
<li>I want to run N tasks in parallel</li>
<li>After a specific period (timeout) I need to know the results from the tasks already finished by then. This is my "main" thread. This is already working in the snippet below.</li>
<li>In a task parallel and independent to main, I want to continue waiting for the remaining tasks to finish. Once all tasks have finished successfully, I need to do some calculations based on all N results, and store the computation in a persistent cache. "main" does not need to wait for it.</li>
</ol>
<p>Any hints on how to achieve no. 3?</p>
<pre><code> tasks = [
asyncio.create_task(self.call_111(harmonized_address)),
asyncio.create_task(self.call_222(harmonized_address)),
asyncio.create_task(self.call_333(harmonized_address)),
asyncio.create_task(self.call_444(harmonized_address))
]
await asyncio.wait(tasks, timeout=timeout.total_seconds())
res = []
for t in tasks:
try:
r = t.result()
except asyncio.InvalidStateError as e:
res.append(None)
except Exception as e:
res.append(e)
else:
res.append(r)
</code></pre>
<p><strong>Update:</strong> (3) probably needs a bit more context. We are using python to develop AWS lambda functions. Our lambda function <strong>must</strong> respond within a given timebox, and it is acceptable for it to be a partial one. But we also want to wait for the remaining tasks to finish to update a cache with the full response.</p>
<p>Python-based AWS Lambdas are not async. Maybe I need to start a separate thread, and execute the loop and tasks in it (similar to how larks outlined it in his answer below). And combine it with what <a href="https://gist.github.com/dmfigol/3e7d5b84a16d076df02baa9f53271058" rel="nofollow noreferrer">dmfigol</a> has posted in a gist about "Python asyncio event loop in a separate thread".</p>
<p>Obviously the new thread will finish later than the "lambda". I hope that is supported by AWS.</p>
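<p>For point 3, a sketch of the pattern with toy coroutines: take the partial results at the timeout, then hand the original tasks to a background <code>asyncio.create_task</code> that gathers everything and updates the cache. In a Lambda this only helps if something keeps the loop alive, so the final <code>await bg</code> here exists only so the demo exits cleanly:</p>

```python
import asyncio

async def slow(n, delay):
    await asyncio.sleep(delay)
    return n

full_results = {}          # stand-in for the persistent cache

async def finish_and_cache(tasks):
    # runs independently of "main": await everything, then update the cache
    full_results["all"] = await asyncio.gather(*tasks, return_exceptions=True)

async def main():
    tasks = [asyncio.create_task(slow(n, d)) for n, d in [(0, 0.01), (1, 0.2)]]
    done, pending = await asyncio.wait(tasks, timeout=0.05)
    partial = [t.result() for t in done]               # only the fast task made it
    bg = asyncio.create_task(finish_and_cache(tasks))  # point 3: fire and forget
    # ... "main" would return `partial` to its caller here ...
    await bg            # awaited only so this demo script exits cleanly
    return partial

partial = asyncio.run(main())
```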
|
<python><python-asyncio>
|
2023-06-07 20:46:51
| 1
| 891
|
Juergen
|
76,426,960
| 8,554,833
|
Connect to a VPN with Python Automatically
|
<p>I have databases I want to automate some data pulls.</p>
<p>To access the databases I have to connect to separate VPNs.</p>
<p>I have the remote gateway, port, and credentials.</p>
<p>How can I use python to automate the connecting and disconnecting from different VPNs?</p>
|
<python><vpn>
|
2023-06-07 20:37:58
| 0
| 728
|
David 54321
|
76,426,946
| 1,030,542
|
python3.8 delete file unittests
|
<p>I have a utility function for removing a file as below.
And I intend to write unittest cases for it.</p>
<pre><code>def delete_file_name(file_name, logger):
"""
Deletes given file name
"""
try:
os.remove(file_name)
logger.info("Deleted file: %s", file_name)
except FileNotFoundError:
logger.error("Error: File not found %s", file_name)
except PermissionError:
logger.error("Invalid permissions while deleting: %s", file_name)
except OSError as exc:
logger.error("Error while deleting %s: %s", exc.filename, exc.strerror)
</code></pre>
<p>How do I go about writing test cases,
considering that these exceptions are not actually raised but handled with a log message?
Any pointers on what the assertions should look like?</p>
<p>I tried the below, but it fails with an error saying the exception is not raised.</p>
<pre><code> @patch('os.remove')
def test_file_delete_with_os_error(self, mock_remove):
mock_remove.side_effect = OSError
with self.assertRaises(OSError):
delete_file_name('abc.txt', logging.getLogger('test'))
</code></pre>
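<p>Since the function swallows the exception, <code>assertRaises</code> is the wrong tool; asserting on the log record with <code>assertLogs</code> is one way to do it. A sketch with a condensed copy of the function (the <code>PermissionError</code> branch is omitted for brevity):</p>

```python
import logging
import os
import unittest
from unittest.mock import patch

# condensed copy of the function under test
def delete_file_name(file_name, logger):
    try:
        os.remove(file_name)
        logger.info("Deleted file: %s", file_name)
    except FileNotFoundError:
        logger.error("Error: File not found %s", file_name)
    except OSError as exc:
        logger.error("Error while deleting %s: %s", exc.filename, exc.strerror)

class TestDeleteFileName(unittest.TestCase):
    @patch("os.remove")
    def test_os_error_is_logged_not_raised(self, mock_remove):
        mock_remove.side_effect = OSError
        # the function swallows the exception, so assert on the log record
        with self.assertLogs("test", level="ERROR") as cm:
            delete_file_name("abc.txt", logging.getLogger("test"))
        self.assertIn("Error while deleting", cm.output[0])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDeleteFileName)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```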
|
<python><python-unittest><python-3.8>
|
2023-06-07 20:36:23
| 1
| 2,453
|
iDev
|
76,426,918
| 14,374,599
|
Embed a plotly.html plot into a quarto notebook?
|
<p>I have two local plotly graphs saved as html files - <code>volcano_plot_1.html</code> and <code>volcano_plot_2.html</code> - in the same folder as a <code>notebook.qmd</code> file.</p>
<p>currently, I'm importing them, and rendering them in the notebook like so:</p>
<pre><code>from IPython.display import IFrame
IFrame(src='r_vs_nr_volcano.html', width=1200, height=600)
</code></pre>
<p>This generally works fine if I run the command:</p>
<p><code>quarto render path/to/notebook/.qmd --to-html</code></p>
<p>However, if I try and export this notebook from outside the folder, the plotly graphs no longer render. I've tried using: <code>embed-resources: True</code> and <code>self-contained: true</code> in the header section of the notebook, However, these also seem to yield no result.</p>
<p>Is there any way I can embed and display these html files into the quarto html output, so that it is self-contained and portable?</p>
<p>As always, any advice is greatly appreciated :)</p>
|
<python><html><jupyter-notebook><plotly><quarto>
|
2023-06-07 20:31:09
| 0
| 497
|
KLM117
|
76,426,629
| 1,363,127
|
Function of colon in Numpy array
|
<p>I'm trying to understand this line of Python.</p>
<pre><code> dP = -(dt / tau_stdp) * P[:, it] + A_plus * pre_spike_train_ex[:, it + 1]
</code></pre>
<p>but I can't find what the <code>:</code> does in these NumPy arrays.</p>
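<p>The colon is NumPy's slice notation: a bare <code>:</code> selects every index along that axis, so <code>P[:, it]</code> is the whole column at time step <code>it</code>. A small sketch:</p>

```python
import numpy as np

P = np.arange(12).reshape(3, 4)   # 3 rows ("neurons"), 4 columns (time steps)
it = 1

col = P[:, it]    # ":" = every index along that axis -> the whole column it
row = P[0, :]     # the whole row 0; same as P[0]
sub = P[:, 1:3]   # a colon with bounds is a slice: columns 1 and 2
```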
|
<python><numpy>
|
2023-06-07 19:41:32
| 0
| 743
|
Seti Net
|
76,426,442
| 1,107,474
|
Create connection using Python Request library before calling post/get etc
|
<p>I would like to use Python's requests library to establish a REST connection and make a POST request separately (using the first connection).</p>
<p>Originally I started with this:</p>
<pre><code>import requests
session = requests.Session()
# I start timing here
response = session.post("full url", headers={"header" : "header value"})
# Response includes a timestamp
</code></pre>
<p>but I soon realised <code>post()</code> must be creating the connection because <code>Session()</code> doesn't accept the URL.</p>
<p>Once a connection is already established, I know the network latency to the recipient is about 95 ms. The request above takes ~200 ms to reach the recipient.</p>
<p>To create the connection before calling <code>post()</code> I found this answer:</p>
<p><a href="https://stackoverflow.com/a/51026159/1107474">https://stackoverflow.com/a/51026159/1107474</a></p>
<p>My second attempt:</p>
<pre><code>import requests
import time
import urllib.request
from requests import Session
from urllib.parse import urljoin
class LiveServerSession(Session):
def __init__(self, base_url=None):
super().__init__()
self.base_url = base_url
def request(self, method, url, *args, **kwargs):
joined_url = urljoin(self.base_url, url)
return super().request(method, joined_url, *args, **kwargs)
if __name__ == "__main__":
baseUrl = "https://api.domain.com"
with LiveServerSession(baseUrl) as session:
# Connection should already be created?
# I start timing here
response = session.post(baseUrl + "my sub url", headers={"header" : "header value"})
</code></pre>
<p>However, it still takes ~196ms.</p>
<p>Am I using this wrong? I would just like to have the connection established before calling <code>post()</code>.</p>
|
<python><rest><http><post><python-requests>
|
2023-06-07 19:14:19
| 0
| 17,534
|
intrigued_66
|
76,426,262
| 396,014
|
How to replace x ticks with labels below axvlines
|
<p>I have a working script to display information from a gyroscope. I represent Euler angles with a line of varying thickness representing angular velocity like the following. This is for the x axis; data is stored in a pandas dataframe df, idx is a list of time steps:</p>
<pre><code>scaling = 0.1
# x
ax1 = plt.subplot(gs[0,0:2]) # row span 2 columns
widths = np.absolute(df['avelo']['x'].iloc[start:end])
widths *= scaling
ax1.scatter(idx,df['angle']['x'].iloc[start:end],s=widths,c = 'blue')
for i in steplist:
ax1.axvline(steps[i], linestyle = 'dashed', c = '0.8' )
ax1.axhline(0, linestyle = 'dashed', c = '0.8' )
</code></pre>
<p>The axvlines indicate events. Currently the x axis displays time steps. I would like to hide those and replace them with axvline labels step1, step2, etc. I know how to hide the x ticks, but how do I replace them with the axvline labels in the right spots?</p>
<p>Edit: added a plot to clarify the question.
<a href="https://i.sstatic.net/RnKxk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RnKxk.png" alt="plot described in th post" /></a></p>
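<p>One way that seems to work: pass the event positions to <code>set_xticks</code> and the labels to <code>set_xticklabels</code>, which replaces the numeric time-step ticks with a label under each axvline. A self-contained sketch with made-up positions (the real ones would come from <code>steps</code>):</p>

```python
import matplotlib
matplotlib.use("Agg")                 # headless backend for this sketch
import matplotlib.pyplot as plt

event_positions = [120, 340, 560]     # assumed; really [steps[i] for i in steplist]
event_labels = [f"step{n + 1}" for n in range(len(event_positions))]

fig, ax1 = plt.subplots()
ax1.plot(range(700), [0] * 700)
for x in event_positions:
    ax1.axvline(x, linestyle="dashed", c="0.8")

ax1.set_xticks(event_positions)       # ticks only at the events...
ax1.set_xticklabels(event_labels)     # ...labelled step1, step2, ...
```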
|
<python><matplotlib><axis-labels>
|
2023-06-07 18:50:53
| 1
| 1,001
|
Steve
|
76,426,255
| 1,034,797
|
sklearn transformer for outlier removal - returning xy?
|
<p>I am trying to remove rows that are labeled outliers. I have this partially working, but not in the context of a pipeline and I am not sure why.</p>
<pre><code>from sklearn.datasets import make_classification
X1, y1 = make_classification(n_samples=100, n_features=10, n_informative=5, n_classes=3)
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import IsolationForest
import numpy as np
class IsolationForestOutlierRemover(BaseEstimator, TransformerMixin):
def __init__(self, contamination=0.05):
self.contamination = contamination
self.isolation_forest = IsolationForest(contamination=self.contamination)
def fit(self, X, y=None):
self.isolation_forest.fit(X)
mask = self.isolation_forest.predict(X) == 1
self.mask = mask
return self
def transform(self, X, y=None):
if y is not None:
return X[self.mask], y[self.mask]
else:
return X[self.mask]
def fit_transform(self, X, y=None):
self.fit(X, y)
return self.transform(X, y)
working = IsolationForestOutlierRemover().fit_transform(X1, y1)
working[0].shape
# 95
working
# %%
pipelinet = Pipeline(
[
("outlier_removal", IsolationForestOutlierRemover(contamination=0.05)),
("random_forest", RandomForestClassifier()),
]
)
notworking = pipelinet.fit(X1, y1)
notworking
</code></pre>
<p>Getting the following error:</p>
<pre><code>ValueError Traceback (most recent call last)
/home/mmann1123/Documents/github/YM_TZ_crop_classifier/4_model.py in line 10
349 # %%
351 pipelinet = Pipeline(
352 [
353 ("outlier_removal", IsolationForestOutlierRemover(contamination=0.05)),
354 ("random_forest", RandomForestClassifier()),
355 ]
356 )
---> 358 notworking = pipelinet.fit(X1, y1)
359 notworking
File ~/miniconda3/envs/crop_class/lib/python3.8/site-packages/sklearn/pipeline.py:406, in Pipeline.fit(self, X, y, **fit_params)
404 if self._final_estimator != "passthrough":
405 fit_params_last_step = fit_params_steps[self.steps[-1][0]]
--> 406 self._final_estimator.fit(Xt, y, **fit_params_last_step)
408 return self
File ~/miniconda3/envs/crop_class/lib/python3.8/site-packages/sklearn/ensemble/_forest.py:346, in BaseForest.fit(self, X, y, sample_weight)
344 if issparse(y):
345 raise ValueError("sparse multilabel-indicator for y is not supported.")
--> 346 X, y = self._validate_data(
347 X, y, multi_output=True, accept_sparse="csc", dtype=DTYPE
348 )
...
--> 185 array = numpy.asarray(array, order=order, dtype=dtype)
186 return xp.asarray(array, copy=copy)
187 else:
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (2, 95) + inhomogeneous part.
</code></pre>
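<p>A pure-NumPy sketch of what I believe is going wrong (an assumption based on the traceback): a <code>Pipeline</code> passes only the transformer's return value on as <code>X</code>, so the forest receives the whole <code>(X, y)</code> tuple, i.e. two arrays of shapes <code>(95, 10)</code> and <code>(95,)</code>, which <code>np.asarray</code> cannot stack:</p>

```python
import numpy as np

X = np.ones((100, 10))
y = np.zeros(100)
mask = np.ones(100, dtype=bool)
mask[:5] = False                  # pretend IsolationForest flagged 5 outliers

Xt = (X[mask], y[mask])           # what transform(X, y) returns inside fit()
# Pipeline feeds Xt - the whole tuple - to RandomForestClassifier.fit as X,
# and np.asarray over shapes (95, 10) and (95,) raises the
# "inhomogeneous shape after 2 dimensions ... (2, 95)" error
```

<p>Standard sklearn transformers cannot drop rows from <code>y</code> at all; imbalanced-learn's <code>Pipeline</code> with a <code>FunctionSampler</code> is, as far as I know, the usual workaround (untested here).</p>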
|
<python><scikit-learn><pipeline>
|
2023-06-07 18:49:41
| 2
| 5,392
|
mmann1123
|
76,426,243
| 12,596,824
|
df.str.get_dummies() vs pd.get_dummies() (Python)
|
<p>I have a series like so:</p>
<pre><code>0 mcdonalds, popeyes
1 wendys
2 popeyes
3 mcdonalds
4 mcdonalds
</code></pre>
<p>I use the following code:</p>
<pre><code>df.str.get_dummies(sep = ', ')
</code></pre>
<p>to get the following data frame:</p>
<pre><code>popeyes wendys mcdonalds
1 0 1
0 1 0
1 0 0
0 0 1
0 0 1
</code></pre>
<p>I want to remove a column, though, to account for the dummy variable trap. How do I do this, like with the <code>drop_first</code> argument in <a href="https://pandas.pydata.org/docs/reference/api/pandas.get_dummies.html" rel="nofollow noreferrer">pd.get_dummies()</a>?</p>
<p>The expected output might look something like this, but I don't want to hard-code dropping a random column:</p>
<pre><code>popeyes wendys
1 0
0 1
1 0
0 0
0 0
</code></pre>
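<p><code>str.get_dummies</code> has no <code>drop_first</code>, but its columns come back sorted alphabetically, so dropping the first column positionally with <code>iloc</code> mirrors <code>pd.get_dummies(drop_first=True)</code> without hard-coding a name. A sketch:</p>

```python
import pandas as pd

s = pd.Series(["mcdonalds, popeyes", "wendys", "popeyes", "mcdonalds", "mcdonalds"])
dummies = s.str.get_dummies(sep=", ")

# columns come back alphabetically sorted, so position 0 is deterministic:
# dropping it positionally is the drop_first equivalent
reduced = dummies.iloc[:, 1:]
```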
|
<python><pandas>
|
2023-06-07 18:47:50
| 2
| 1,937
|
Eisen
|
76,426,007
| 6,099,211
|
Popen with PIPE blocks called process, when it's output large enough, how to unblock it?
|
<p>I need to start a subprocess, which may just exit after printing something to the console, or may never exit (becoming a web server, e.g.).</p>
<p>So in my runner I'd like to check whether the process is still running or has already exited, and do it in a completely non-blocking manner, so that if it exits, I can capture the output.</p>
<p>So it looks like this</p>
<pre class="lang-py prettyprint-override"><code>
import subprocess
import sys
from subprocess import Popen
from time import sleep
with Popen(
[
sys.executable,
r"...\stub1.py"
],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
encoding="utf-8",
) as proc:
while True:
ret = proc.poll()
print(ret)
if ret is not None: break
sleep(1)
</code></pre>
<p>where <code>stub1.py</code> is just</p>
<pre><code>print("foo"*100)
</code></pre>
<p>So it works fine while output of subprocess is relatively small.</p>
<pre><code>None
0
</code></pre>
<p>But with larger output the called process is blocked (on writing to the filled pipe, as I understand it) and never exits, so <code>poll()</code> always returns <code>None</code>.</p>
<p>So how do I unblock the called process, flushing the pipe in a non-blocking manner?</p>
<p>I see I can pass real file descriptors for stdout/stderr, but is it possible with PIPEs?</p>
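<p>One workable pattern (a sketch, not the only option): keep the PIPEs but drain them on background threads, so the child can never block on a full pipe buffer while the main loop polls freely:</p>

```python
import subprocess
import sys
import threading
import time

def drain(stream, sink):
    # read until EOF so the OS pipe buffer can never fill up
    for line in stream:
        sink.append(line)
    stream.close()

proc = subprocess.Popen(
    [sys.executable, "-c", 'print("foo" * 100000)'],   # well past the pipe buffer
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    encoding="utf-8",
)
out, err = [], []
readers = [
    threading.Thread(target=drain, args=(proc.stdout, out)),
    threading.Thread(target=drain, args=(proc.stderr, err)),
]
for r in readers:
    r.start()

while proc.poll() is None:     # completely non-blocking status check
    time.sleep(0.05)
for r in readers:
    r.join()                   # readers finish once the pipes hit EOF
```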
|
<python><subprocess>
|
2023-06-07 18:10:29
| 1
| 1,200
|
Anton Ovsyannikov
|
76,425,808
| 4,857,606
|
Regex replace: return empty none/empty string if no match
|
<p>So I thought I knew a bit of regex, but it seems I have found a case where my knowledge is at its end.
Anyway, I tried the following: <a href="https://stackoverflow.com/questions/53119343/regex-replace-function-in-cases-of-no-match-1-returns-full-line-instead-of-nu">Regex Replace function: in cases of no match, $1 returns full line instead of null</a>.
But the main difference is I want to not only replace the input with the match but also insert some characters in between the matches. Simply put, I want to standardize the input to a certain pattern.
The regex should match and capture specific parts of the input, but not everything:</p>
<pre><code>^[\D]*(?P<from_day>(0?[1-9])|([12][0-9])|3[01])[\.\-\s,■]+(?P<from_month>(0?[1-9])|(1[0-2]))[\.\-\s,■]*(?P<until_day>(0?[1-9])|[12][0-9]|3[01])[\.\-\s,■]+(?P<until_month>(0?[1-9])|1[012])[\D]*$
</code></pre>
<p>the replacement string:</p>
<pre><code>\g<from_day>.\g<from_month>-\g<until_day>.\g<until_month>
</code></pre>
<p>Input:</p>
<pre><code>28.11 16.12
"13.01 23,09"
01.08.-31.12
"01.01,-51.12"
"01,01.-31,12."
01083112
1.02 - 4.3
</code></pre>
<p>Current output:</p>
<pre><code>28.11-16.12.-.
13.01-23.09.-.
01.08-31.12.-.
.-..-.
01.01-31.12.-.
.-..-.
1.02-4.3.-.
</code></pre>
<p>Expected/desired:</p>
<pre><code>28.11-16.12
13.01-23.09
01.08-31.12
01.01-31.12
1.02-4.3
</code></pre>
<p><a href="https://regex101.com/r/M3arvW/1" rel="nofollow noreferrer">https://regex101.com/r/M3arvW/1</a></p>
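<p>One way to get the desired behaviour is to stop relying on <code>re.sub</code> (which leaves non-matching input untouched) and instead match explicitly, returning an empty string when the line doesn't fit. This sketch uses a trimmed-down version of the question's pattern (same group names, simpler day/month alternations) just to show the technique:</p>

```python
import re

# trimmed-down stand-in for the question's pattern, kept for the technique only
pattern = re.compile(
    r"^\D*(?P<from_day>\d{1,2})[.\-\s,]+(?P<from_month>\d{1,2})"
    r"[.\-\s,]*(?P<until_day>\d{1,2})[.\-\s,]+(?P<until_month>\d{1,2})\D*$"
)

def standardize(line):
    # matching explicitly lets us return "" (or None) for non-conforming lines
    m = pattern.match(line)
    if m is None:
        return ""
    return "{from_day}.{from_month}-{until_day}.{until_month}".format(**m.groupdict())

lines = ["28.11 16.12", "01083112", "1.02 - 4.3"]
results = [standardize(l) for l in lines]
```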
|
<python><regex><substitution>
|
2023-06-07 17:35:41
| 1
| 336
|
Tollpatsch
|
76,425,762
| 19,600,130
|
docker executor failed running
|
<p>I wrote a Python script, and now I want to dockerize it. I made a 'Dockerfile' and then ran <code>docker build -t price_updater_service.py .</code>, but in the final step, after copying my 'requirements.txt', I got this error: 'executor failed running [/bin/sh -c pip install --no-cache-dir -r requirements.txt]: exit code: 2'.
Actually, the problem is that the image can't download the packages listed in my 'requirements.txt'.
What else can I do? Is there any other way? I'll also put my 'requirements.txt' and my 'Dockerfile' here:</p>
<p>'requirements.txt'</p>
<pre><code>pandas == 2.0
redis
</code></pre>
<p>Dockerfile</p>
<pre><code># Use the official Python base image
FROM python:3.9-slim
# Set the working directory in the container
WORKDIR /app
# Copy the Python requirements file to the container
COPY requirements.txt .
# Install the Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy the Python code to the container
COPY main.py .
# Run the Python script when the container starts
CMD [ "python", "main.py" ]
</code></pre>
<p>full error trace:</p>
<pre><code> => ERROR [4/5] RUN pip install --no-cache-dir -r requirements.txt 169.8s
------
> [4/5] RUN pip install --no-cache-dir -r requirements.txt:
#6 11.00 Collecting pandas==2.0
#6 12.16 Downloading pandas-2.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.4 MB)
#6 164.0 ━━━━ 1.3/12.4 MB 4.3 kB/s eta 0:42:42
#6 164.0 ERROR: Exception:
#6 164.0 Traceback (most recent call last):
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/urllib3/response.py", line 438, in _error_catcher
#6 164.0 yield
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/urllib3/response.py", line 561, in read
#6 164.0 data = self._fp_read(amt) if not fp_closed else b""
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/urllib3/response.py", line 527, in _fp_read
#6 164.0 return self._fp.read(amt) if amt is not None else self._fp.read()
#6 164.0 File "/usr/local/lib/python3.9/http/client.py", line 463, in read
#6 164.0 n = self.readinto(b)
#6 164.0 File "/usr/local/lib/python3.9/http/client.py", line 507, in readinto
#6 164.0 n = self.fp.readinto(b)
#6 164.0 File "/usr/local/lib/python3.9/socket.py", line 704, in readinto
#6 164.0 return self._sock.recv_into(b)
#6 164.0 File "/usr/local/lib/python3.9/ssl.py", line 1242, in recv_into
#6 164.0 return self.read(nbytes, buffer)
#6 164.0 File "/usr/local/lib/python3.9/ssl.py", line 1100, in read
#6 164.0 return self._sslobj.read(len, buffer)
#6 164.0 socket.timeout: The read operation timed out
#6 164.0
#6 164.0 During handling of the above exception, another exception occurred:
#6 164.0
#6 164.0 Traceback (most recent call last):
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 160, in exc_logging_wrapper
#6 164.0 status = run_func(*args)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 247, in wrapper
#6 164.0 return func(self, options, args)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/commands/install.py", line 419, in run
#6 164.0 requirement_set = resolver.resolve(
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 92, in resolve
#6 164.0 result = self._result = resolver.resolve(
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 481, in resolve
#6 164.0 state = resolution.resolve(requirements, max_rounds=max_rounds)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 348, in resolve
#6 164.0 self._add_to_criteria(self.state.criteria, r, parent=None)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _add_to_criteria
#6 164.0 if not criterion.candidates:
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__
#6 164.0 return bool(self._sequence)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in __bool__
#6 164.0 return any(self)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in <genexpr>
#6 164.0 return (c for c in iterator if id(c) not in self._incompatible_ids)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built
#6 164.0 candidate = func()
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 206, in _make_candidate_from_link
#6 164.0 self._link_candidate_cache[link] = LinkCandidate(
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 297, in __init__
#6 164.0 super().__init__(
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 162, in __init__
#6 164.0 self.dist = self._prepare()
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 231, in _prepare
#6 164.0 dist = self._prepare_distribution()
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 308, in _prepare_distribution
#6 164.0 return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 491, in prepare_linked_requirement
#6 164.0 return self._prepare_linked_requirement(req, parallel_builds)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 536, in _prepare_linked_requirement
#6 164.0 local_file = unpack_url(
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 166, in unpack_url
#6 164.0 file = get_http_url(
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 107, in get_http_url
#6 164.0 from_path, content_type = download(link, temp_dir.path)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/network/download.py", line 147, in __call__
#6 164.0 for chunk in chunks:
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/progress_bars.py", line 53, in _rich_progress_bar
#6 164.0 for chunk in iterable:
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_internal/network/utils.py", line 63, in response_chunks
#6 164.0 for chunk in response.raw.stream(
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/urllib3/response.py", line 622, in stream
#6 164.0 data = self.read(amt=amt, decode_content=decode_content)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/urllib3/response.py", line 587, in read
#6 164.0 File "/usr/local/lib/python3.9/contextlib.py", line 137, in __exit__
#6 164.0 self.gen.throw(typ, value, traceback)
#6 164.0 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/urllib3/response.py", line 443, in _error_catcher
#6 164.0 raise ReadTimeoutError(self._pool, None, "Read timed out.")
#6 164.0 pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
#6 169.6
#6 169.6 [notice] A new release of pip is available: 23.0.1 -> 23.1.2
#6 169.6 [notice] To update, run: pip install --upgrade pip
------
executor failed running [/bin/sh -c pip install --no-cache-dir -r requirements.txt]: exit code: 2
PS C:\Users\hesam\OneDrive\Desktop\sam\python_django_test\price_update> docker build -t price_updater_service.py .
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 32B 0.0s
[+] Building 103.0s (8/9)
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 32B 0.1s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/python:3.9-slim 3.8s
=> [1/5] FROM docker.io/library/python:3.9-slim@sha256:5f0192a4f58a6ce99f732fe05e3b3d00f12ae62e183886bca3ebe3d202686c7f 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 81B 0.0s
=> CACHED [2/5] WORKDIR /app 0.0s
=> CACHED [3/5] COPY requirements.txt . 0.0s
=> ERROR [4/5] RUN pip install --no-cache-dir -r requirements.txt 99.0s
------
> [4/5] RUN pip install --no-cache-dir -r requirements.txt:
#7 53.23 Collecting pandas==2.0
#7 55.65 Downloading pandas-2.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.4 MB)
#7 94.83 0.0/12.4 MB 2.5 kB/s eta 1:21:34
#7 94.84 ERROR: Exception:
#7 94.84 Traceback (most recent call last):
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/urllib3/response.py", line 438, in _error_catcher
#7 94.84 yield
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/urllib3/response.py", line 561, in read
#7 94.84 data = self._fp_read(amt) if not fp_closed else b""
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/urllib3/response.py", line 527, in _fp_read
#7 94.84 return self._fp.read(amt) if amt is not None else self._fp.read()
#7 94.84 File "/usr/local/lib/python3.9/http/client.py", line 463, in read
#7 94.84 n = self.readinto(b)
#7 94.84 File "/usr/local/lib/python3.9/http/client.py", line 507, in readinto
#7 94.84 n = self.fp.readinto(b)
#7 94.84 File "/usr/local/lib/python3.9/socket.py", line 704, in readinto
#7 94.84 return self._sock.recv_into(b)
#7 94.84 File "/usr/local/lib/python3.9/ssl.py", line 1242, in recv_into
#7 94.84 return self.read(nbytes, buffer)
#7 94.84 File "/usr/local/lib/python3.9/ssl.py", line 1100, in read
#7 94.84 return self._sslobj.read(len, buffer)
#7 94.84 socket.timeout: The read operation timed out
#7 94.84
#7 94.84 During handling of the above exception, another exception occurred:
#7 94.84
#7 94.84 Traceback (most recent call last):
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 160, in exc_logging_wrapper
#7 94.84 status = run_func(*args)
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 247, in wrapper
#7 94.84 return func(self, options, args)
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/commands/install.py", line 419, in run
#7 94.84 requirement_set = resolver.resolve(
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 92, in resolve
#7 94.84 result = self._result = resolver.resolve(
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 481, in resolve
#7 94.84 state = resolution.resolve(requirements, max_rounds=max_rounds)
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 348, in resolve
#7 94.84 self._add_to_criteria(self.state.criteria, r, parent=None)
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _add_to_criteria
#7 94.84 if not criterion.candidates:
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__
#7 94.84 return bool(self._sequence)
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in __bool__
#7 94.84 return any(self)
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in <genexpr>
#7 94.84 return (c for c in iterator if id(c) not in self._incompatible_ids)
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built
#7 94.84 candidate = func()
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 206, in _make_candidate_from_link
#7 94.84 self._link_candidate_cache[link] = LinkCandidate(
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 297, in __init__
#7 94.84 super().__init__(
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 162, in __init__
#7 94.84 self.dist = self._prepare()
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 231, in _prepare
#7 94.84 dist = self._prepare_distribution()
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 308, in _prepare_distribution
#7 94.84 return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 491, in prepare_linked_requirement
#7 94.84 return self._prepare_linked_requirement(req, parallel_builds)
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 536, in _prepare_linked_requirement
#7 94.84 local_file = unpack_url(
#7 94.84 File "/usr/local/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line
</code></pre>
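<p>The <code>ReadTimeoutError</code> above means pip's download from PyPI timed out during the image build. A hedged sketch of common mitigations (<code>--default-timeout</code> and <code>--retries</code> are standard pip options; the values 120 and 5 are guesses, not recommendations):</p>

```dockerfile
# Give pip more time per read and a few retries before the build fails.
RUN pip install --no-cache-dir --default-timeout=120 --retries 5 -r requirements.txt
```

<p>If the timeouts persist, the network path from the build machine to <code>files.pythonhosted.org</code> (proxy, VPN, DNS) is the thing to investigate rather than the Dockerfile.</p>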
|
<python><docker>
|
2023-06-07 17:28:33
| 1
| 983
|
HesamHashemi
|
76,425,647
| 15,140,144
|
Python: real order of execution for equalities/inequalities in expressions?
|
<p>Imagine this "sneaky" python code:</p>
<pre class="lang-py prettyprint-override"><code>>>> 1 == 2 < 3
False
</code></pre>
<p>According to <a href="https://docs.python.org/3/reference/expressions.html#operator-precedence" rel="nofollow noreferrer">Python documentation</a> all of the operators <code>in, not in, is, is not, <, <=, >, >=, !=, ==</code> have the same priority, but what happens here seems contradictory.</p>
<p>I get even weirder results after experimenting:</p>
<pre class="lang-py prettyprint-override"><code>>>> (1 == 2) < 3
True
>>> 1 == (2 < 3)
True
</code></pre>
<p>What is going on?</p>
<p>(Note)</p>
<pre class="lang-py prettyprint-override"><code>>>> True == 1
True
>>> True == 2
False
>>> False == 0
True
>>> False == -1
False
</code></pre>
<p>Boolean type is a subclass of <code>int</code> and True represents <code>1</code> and False represents <code>0</code>.</p>
<p>This is likely an implementation detail and may differ from version to version, so I'm mostly interested in python 3.10.</p>
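<p>For reference, the chaining rule can be checked directly: <code>a == b &lt; c</code> is evaluated as <code>(a == b) and (b &lt; c)</code>, with <code>b</code> evaluated only once; it is not a left-to-right application of binary operators.</p>

```python
# Chained comparison: 1 == 2 < 3 means (1 == 2) and (2 < 3), short-circuiting.
assert (1 == 2 < 3) == ((1 == 2) and (2 < 3))   # both sides are False

# Parenthesising changes the meaning, because booleans are ints:
assert ((1 == 2) < 3) is True    # False < 3  ->  0 < 3
assert (1 == (2 < 3)) is True    # 1 == True  ->  1 == 1
```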
|
<python><python-3.x><equality><operator-precedence><boolean-expression>
|
2023-06-07 17:11:18
| 1
| 316
|
oBrstisf8o
|
76,425,637
| 1,484,601
|
htcondor: can a python executable started via condor_submit access all the values of its condor descriptors?
|
<p>Here is a trivial submission file:</p>
<pre><code>executable = /path/to/myexecutable
error = test.err
output = test.out
log = test.log
request_memory = 1024
request_cpus = 1
queue
</code></pre>
<p><code>myexecutable</code> is a python executable</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
import htcondor
# here some code that retrieves the
# value of a descriptor, e.g. 1024 for 'request_memory'
</code></pre>
<p>Once the executable has been started by condor (e.g. via condor_submit), is there some way to retrieve the values of the descriptors provided in the submission file? (Including the values of the expanded macros, so directly parsing the submission file would not do it.)</p>
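<p>A sketch of one possible route. Hedged assumptions: <code>_CONDOR_JOB_AD</code> is the environment variable HTCondor sets in a running job to point at the job's classad file (which contains the expanded descriptor values), and the minimal parser below only handles simple <code>Key = Value</code> lines, not classad expressions:</p>

```python
import os

def read_job_ad(path):
    """Minimal sketch: parse simple 'Key = Value' classad lines into a dict."""
    ad = {}
    with open(path) as fh:
        for line in fh:
            key, sep, value = line.partition("=")
            if sep:
                ad[key.strip()] = value.strip().strip('"')
    return ad

# Inside a condor-started job, this file holds the expanded job descriptors:
# job_ad = read_job_ad(os.environ["_CONDOR_JOB_AD"])
# print(job_ad.get("RequestMemory"))
```

<p>If the <code>htcondor</code>/<code>classad</code> Python bindings are available inside the job, parsing the same file with <code>classad.parseOne()</code> should be far more robust than this hand-rolled sketch.</p>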
|
<python><htcondor>
|
2023-06-07 17:09:25
| 1
| 4,521
|
Vince
|
76,425,545
| 15,524,510
|
Scipy optimize: limiting number of non-zero variables
|
<p>My optimization problem has ~200 variables but I would like to find a solution that uses only 5 of them: 195 should be zero and the other 5 can be non-zero.</p>
<p>I have tried the following constraint, but it seems that the optimization algorithm completely ignores it, as it continues to use all 200 variables whether I include the constraint or not. Is there something I am missing or can SLSQP just not handle this?</p>
<pre><code>import pandas as pd
import numpy as np
from scipy.optimize import minimize, Bounds
tmp = pd.DataFrame()
tmp.insert(loc=0,column='pred', value = np.random.random(200))
tmp['x0'] = 0
def obj(x, df = tmp):
return np.sum(-x * df['pred'].values)
def c2(x):
return -(len(x[x!=0])-5)
sol = minimize(fun=obj,x0=tmp['x0'],method='SLSQP',bounds=Bounds(-10,10),jac=False,
constraints=({'type': 'ineq', 'fun': c2}),
options={'maxiter': 1000})
</code></pre>
<p>When I run this it just sets everything to 10 and ignores c2.</p>
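<p>One likely reason <code>c2</code> is ignored: SLSQP is gradient-based, and the count of non-zero entries is piecewise-constant, so its gradient is zero almost everywhere and the constraint contributes nothing to the search direction. Cardinality ("at most k non-zero") constraints generally need a combinatorial or mixed-integer formulation. For this particular objective, which is linear and separable, the 5-variable optimum can be built directly; a hedged sketch under that assumption:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
pred = rng.random(200)          # stand-in for df['pred']
k, lo, hi = 5, -10.0, 10.0

# Objective is sum(-x_i * pred_i): each coordinate contributes independently,
# so the best k-sparse solution uses the k largest |pred_i| and pushes each
# x_i to whichever bound minimises its own term.
best = np.argsort(-np.abs(pred))[:k]
x = np.zeros_like(pred)
x[best] = np.where(pred[best] > 0, hi, lo)

assert np.count_nonzero(x) == k
```

<p>For non-separable objectives, common workarounds are brute force over candidate support sets or a mixed-integer solver (e.g. <code>scipy.optimize.milp</code> with big-M indicator variables); SLSQP alone cannot express "at most 5 non-zero".</p>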
|
<python><pandas><scipy><scipy-optimize>
|
2023-06-07 16:54:10
| 1
| 363
|
helloimgeorgia
|
76,425,337
| 11,724,014
|
Write CSV file decrease it/s during processing
|
<p>I have a CSV file whose values I want to modify. This file has more than 1 million lines.</p>
<p>At the beginning of the file, the script runs at <code>5000 lines per second</code>, but then it slows down to <code>400 lines per second</code>.</p>
<p><strong>I would like to know why it is slowing down and how to improve this script performances</strong> ?</p>
<pre><code>txt = ""
data = open(path + "_mag.csv").readlines()
i = 0
for line in tqdm.tqdm(data):
if i == 0:
txt += line + "\n"
i += 1
continue
values = line.split(";")
for j, value in enumerate(mag_rotated[i-1]): # 15 values in float type
index = j + 5
values[index] = str(value)
new_line = ";".join(values)
txt += new_line + "\n"
i += 1
</code></pre>
<p>Here is a small lines examples of data:</p>
<pre><code>Timestamp [µs];Temp ADC1 [°C];Temp ADC2 [°C];Temp ADC3 [°C];Temp ADC4 [°C];X0 [µT];Y0 [µT];Z0 [µT];X1 [µT];Y1 [µT];Z1 [µT];X2 [µT];Y2 [µT];Z2 [µT];X3 [µT];Y3 [µT];Z3 [µT];X4 [µT];Y4 [µT];Z4 [µT];unused;X5 [µT];Y5 [µT];Z5 [µT];X6 [µT];Y6 [µT];Z6 [µT];X7 [µT];Y7 [µT];Z7 [µT];X8 [µT];Y8 [µT];Z8 [µT];X9 [µT];Y9 [µT];Z9 [µT];unused
911532093;30.563;30.313;;;-1.62919885e+01;1.86305991e+01;3.93283914e+01;-1.59370440e+01;1.88661384e+01;3.93722822e+01;-1.60104993e+01;1.89392587e+01;3.93954596e+01;-1.63657738e+01;1.86798405e+01;3.93386700e+01;-1.65630085e+01;1.78254986e+01;3.95608333e+01;;;;;;;;;;;;;;;;;
911533093;30.563;30.313;;;-1.62927678e+01;1.86261966e+01;3.93056706e+01;-1.59385424e+01;1.88634411e+01;3.93504949e+01;-1.60110385e+01;1.89362330e+01;3.93737558e+01;-1.63657738e+01;1.86764223e+01;3.93170219e+01;-1.65643876e+01;1.78232520e+01;3.95398268e+01;;;;;;;;;;;;;;;;;
911534093;30.563;30.313;;;-1.62899203e+01;1.86278438e+01;3.93394221e+01;-1.59340472e+01;1.88623921e+01;3.93840899e+01;-1.60095108e+01;1.89366524e+01;3.94075406e+01;-1.63627492e+01;1.86763323e+01;3.93510360e+01;-1.65617493e+01;1.78224132e+01;3.95712317e+01;;;;;;;;;;;;;;;;;
</code></pre>
<p>For context: my script aims to replace these 15 values:
<code>X0 [µT] Y0 [µT] Z0 [µT] X1 [µT] Y1 [µT] Z1 [µT] X2 [µT] Y2 [µT] Z2 [µT] X3 [µT] Y3 [µT] Z3 [µT] X4 [µT] Y4 [µT] Z4 [µT]</code></p>
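<p>The slowdown is almost certainly the repeated <code>txt += ...</code>: each concatenation copies the whole accumulated string, so total work grows quadratically with output size, which matches the progressive slowdown. Collecting lines in a list and joining once keeps it linear. A sketch (the column offset <code>5</code> is taken from the original code):</p>

```python
def rewrite_lines(lines, replacements, start_col=5):
    """Replace columns start_col.. in each data line; O(total size), not O(n^2)."""
    out = [lines[0].rstrip("\n")]                 # keep the header as-is
    for line, row in zip(lines[1:], replacements):
        values = line.rstrip("\n").split(";")
        for j, value in enumerate(row):
            values[start_col + j] = str(value)
        out.append(";".join(values))
    return "\n".join(out) + "\n"
```

<p>Writing each joined line straight to the output file (or streaming through the <code>csv</code> module) avoids holding the whole result in memory at all.</p>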
|
<python><performance><csv><optimization><export>
|
2023-06-07 16:21:08
| 2
| 1,314
|
Vincent Bénet
|
76,425,142
| 3,513,267
|
wxPython, missing taskbar (not tray) icon, and top level windows
|
<p>I have a wxPython app. In the init function for the main application frame I create two dialogs: the first is a splash screen and the second is a login dialog. During startup the splash screen is displayed, app init stuff happens, and the splash screen disappears before the login dialog is shown. After logging in, the login dialog is hidden and the main app is shown. The problem is that the taskbar entry and icon disappear while the login dialog is visible; they show normally when the splash screen or the main app is visible. I am talking about the main taskbar entry, not a system tray icon. What am I missing?</p>
|
<python><windows><wxpython>
|
2023-06-07 15:56:39
| 0
| 512
|
Ryan Hope
|
76,425,137
| 6,251,742
|
How to make precise function annotation after Partial applied
|
<p>Given a function:</p>
<pre class="lang-py prettyprint-override"><code>def foobar(foo: int, bar: str, spam: SpamService) -> str:
return spam.serve(foo, bar)
</code></pre>
<p>This function, similar in shape to FastAPI endpoints, defines two "normal" parameters and one "Service", an abstract class. I want to "reuse" the <code>foobar</code> function like I reuse a FastAPI endpoint in a router, and register <code>n</code> "versions" of the function given <code>n</code> dependencies.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>foobar_rabbit = inject(foobar, RabbitService)
foobar_snake = inject(foobar, SnakeService)
foobar_rabbit(1, "rabot")
foobar_snake(2, "sniky")
</code></pre>
<p>I can use <code>functools.partial</code> to do that, but I want the dependency to be injected as a proper parameter without relying on position or keyword args.</p>
<p>This mean that a function that require two dependencies like:</p>
<pre class="lang-py prettyprint-override"><code>def foobar(foo: int, egg: EggService, spam: SpamService) -> str:
return spam.serve(foo, egg.do_stuff())
</code></pre>
<p>Can be registered like this:</p>
<pre class="lang-py prettyprint-override"><code>foobar_1 = inject(foobar, SpamService1, EggService2)
foobar_1_ = inject(foobar, EggService2, SpamService1) # result in the same Partial
</code></pre>
<p>To do that, I did this code (should run as is on python 3.11, no external dep):</p>
<pre class="lang-py prettyprint-override"><code>import abc
import functools
import inspect
import typing
class Service(abc.ABC):
...
class ServiceA(Service):
@staticmethod
@abc.abstractmethod
def method_a(a: int) -> str:
"""
This method do something.
"""
class ServiceA1(ServiceA):
@staticmethod
def method_a(a: int) -> str:
return f"A1: {a}"
def inject(
func: typing.Callable,
*services: typing.Type[Service]
) -> functools.partial:
annotations = inspect.get_annotations(func)
del annotations["return"]
bind_services = {
key: service
for key, value in annotations.items()
if issubclass(value, Service)
for service in services
if issubclass(service, value)
}
return functools.partial(func, **bind_services)
def foobar(foo: int, spam: ServiceA) -> str:
return spam.method_a(foo)
foobar_A1 = inject(foobar, ServiceA1)
if __name__ == '__main__':
print(foobar_A1(1)) # A1: 1
</code></pre>
<p>The issue is the signature of <code>foobar_A1</code>. If I don't send any arguments, Pycharm won't raise a warning, and mypy won't find any error.</p>
<p>I tried many alternative using <code>typing.TypeVar</code> for example but nothing works.</p>
<p>Here an example of a non working solution:</p>
<pre class="lang-py prettyprint-override"><code>_SERVICE = typing.TypeVar("_SERVICE", bound=Service)
_RETURN = typing.TypeVar("_RETURN")
def inject(
func: typing.Callable[[..., _SERVICE], _RETURN],
*services: typing.Type[Service]
) -> functools.partial[typing.Callable[[_SERVICE, ...], _RETURN]]:
</code></pre>
<p>But mypy complains and it's not creating the expected signature (I'm not used to this kind of annotation wizardry yet).</p>
<p>Expected signature: <code>(foo: int) -> str</code></p>
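<p>Standard typing cannot subtract arbitrary keyword parameters from a signature (<code>Concatenate</code> only prepends positional ones). A pragmatic, hedged sketch: spell out the post-injection signature as a <code>Protocol</code> and <code>cast</code> the partial to it. This gives checkers the expected <code>(foo: int) -> str</code> while leaving runtime behaviour untouched; the cost is writing one protocol per injected signature.</p>

```python
import functools
import typing

class Service: ...

class ServiceA(Service):
    @staticmethod
    def method_a(a: int) -> str:
        raise NotImplementedError

class ServiceA1(ServiceA):
    @staticmethod
    def method_a(a: int) -> str:
        return f"A1: {a}"

def foobar(foo: int, spam: ServiceA) -> str:
    return spam.method_a(foo)

# Protocol naming the post-injection signature; cast() is a no-op at runtime
# but tells the checker the Service parameter has already been bound.
class FooBarFn(typing.Protocol):
    def __call__(self, foo: int) -> str: ...

foobar_A1 = typing.cast(FooBarFn, functools.partial(foobar, spam=ServiceA1))
assert foobar_A1(1) == "A1: 1"
```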
|
<python><python-typing><mypy><functools>
|
2023-06-07 15:55:56
| 3
| 4,033
|
Dorian Turba
|
76,425,128
| 11,252,662
|
Getting scala.MatchError while trying to execute join on Spark SQL query
|
<p>I have a query that I am trying to execute via Spark SQL. When I run the two CTE subqueries separately, I see results, but when I join them I get a scala.MatchError.</p>
<p><strong>Query that I used:</strong></p>
<pre class="lang-py prettyprint-override"><code>query = '''
WITH
a1 AS (
SELECT val_in_min, val_date
FROM table1
),
a2 AS (
SELECT val_in_sec, val_date
FROM table2
)
SELECT a.val_in_min, a.val_date, b.val_in_sec
FROM a1 a
JOIN a2 b
ON a.val_date=b.val_date
'''
spark.sql(query).show()
</code></pre>
<ul>
<li><strong>Sample Table1 values</strong>:</li>
</ul>
<pre><code>|val_date |val_in_min |
|----------|-----------|
|2023-05-01| 11.4833 |
|2023-05-02| 9.90000 |
</code></pre>
<ul>
<li><strong>Sample Table2 values</strong>:</li>
</ul>
<pre><code>|val_date |val_in_sec |
|----------|-----------|
|2023-05-01| 23 |
|2023-05-02| 26 |
</code></pre>
<ul>
<li><strong>Error message</strong>:</li>
</ul>
<pre><code>An error was encountered:
An error occurred while calling o120.showString.
: scala.MatchError: list#77341 [] (of class org.apache.spark.sql.catalyst.expressions.ListQuery)
at org.apache.spark.sql.catalyst.optimizer.SizeBasedJoinReorder$.hasFilterPredicate(SizeBasedJoinReorder.scala:285)
at scala.collection.LinearSeqOptimized.exists(LinearSeqOptimized.scala:95)
at scala.collection.LinearSeqOptimized.exists$(LinearSeqOptimized.scala:92)
at scala.collection.immutable.List.exists(List.scala:89)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:750)
Traceback (most recent call last):
File "/mnt/yarn/usercache/appcache/application_1686133156822_0013/container_1686133156822_0013_01_000001/pyspark.zip/pyspark/sql/dataframe.py", line 441, in show
print(self._jdf.showString(n, 20, vertical))
File "/mnt/yarn/usercache/appcache/application_1686133156822_0013/container_1686133156822_0013_01_000001/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/mnt/yarn/usercache/appcache/application_1686133156822_0013/container_1686133156822_0013_01_000001/pyspark.zip/pyspark/sql/utils.py", line 128, in deco
return f(*a, **kw)
File "/mnt/yarn/usercache/appcache/application_1686133156822_0013/container_1686133156822_0013_01_000001/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o120.showString.
: scala.MatchError: list#77341 [] (of class org.apache.spark.sql.catalyst.expressions.ListQuery)
at org.apache.spark.sql.catalyst.optimizer.SizeBasedJoinReorder$.hasFilterPredicate(SizeBasedJoinReorder.scala:285)
at org.apache.spark.sql.catalyst.optimizer.SizeBasedJoinReorder$.$anonfun$hasFilterPredicate$1(SizeBasedJoinReorder.scala:293)
at org.apache.spark.sql.catalyst.optimizer.SizeBasedJoinReorder$.$anonfun$hasFilterPredicate$1$adapted(SizeBasedJoinReorder.scala:293)
at scala.collection.LinearSeqOptimized.exists(LinearSeqOptimized.scala:95)
at scala.collection.LinearSeqOptimized.exists$(LinearSeqOptimized.scala:92)
at scala.collection.immutable.List.exists(List.scala:89)
</code></pre>
<p>I tried swapping the join order of the tables, but I still receive this error. Please advise what I can change in the Spark SQL query to fix it.</p>
<p>Spark version is 3.0.1 running <strong>on Amazon AWS</strong>.</p>
|
<python><scala><apache-spark><apache-spark-sql><hive>
|
2023-06-07 15:55:35
| 1
| 397
|
vvazza
|
76,424,989
| 12,596,824
|
Replacing values in pandas dataframe with multiple conditions
|
<p>I want to create a new column that applies a mapping. For example, if col1 only contains Ted values then I want the mapping to be Ted; if it contains both Ted and Not Ted, Both; and if it only contains Not Ted values, Not Ted.</p>
<p>I know how to do it for two conditions because I can just use np.where, but it's trickier with three conditions. How can I do this in Python or pandas?</p>
<p><strong>input:</strong></p>
<pre><code>col1
Ted, Ted
Ted, Not Ted
Not Ted, Not Ted
Not Ted, Ted
Ted, Ted
</code></pre>
<p><strong>expected output:</strong></p>
<pre><code>col1 new_col
Ted, Ted Ted
Ted, Not Ted Both
Not Ted, Not Ted Not Ted
Not Ted, Ted Both
Ted, Ted Ted
</code></pre>
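<p>A sketch using <code>numpy.select</code>, which generalises <code>np.where</code> to any number of conditions. Splitting on <code>", "</code> first means membership is tested against exact tokens, so "Not Ted" is not mistaken for containing a separate "Ted":</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": ["Ted, Ted", "Ted, Not Ted", "Not Ted, Not Ted",
                            "Not Ted, Ted", "Ted, Ted"]})

parts = df["col1"].str.split(", ")
has_ted = parts.apply(lambda p: "Ted" in p)            # exact token match
has_not_ted = parts.apply(lambda p: "Not Ted" in p)

# Conditions are checked in order; the first True wins, else the default.
df["new_col"] = np.select(
    [has_ted & has_not_ted, has_ted],
    ["Both", "Ted"],
    default="Not Ted",
)
```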
|
<python><pandas><replace>
|
2023-06-07 15:40:07
| 6
| 1,937
|
Eisen
|
76,424,929
| 504,717
|
Images are not building on M1 Macbook
|
<p>I was previously using an Intel MacBook and the project (which relies on multiple internal and external images) was working fine. But now when I try to build the images, I get stuck at this line.</p>
<pre><code>=> CACHED [api 1/18] FROM gcr.io/<gcp-project-name>/python-base:3.2-259@sha256:<sha256> 0.0s
=> => resolve gcr.io/<gcp-project-name>/python-base:3.2-259@sha256:<sha256> 0.0s
=> [api internal] load build context 0.6s
=> => transferring context: 36.30MB 0.6s
=> [api 2/18] RUN apt-get update && apt-get upgrade -y --no-install-recommends && apt-get install -y graphicsmagick imagemagick pngnq pngcrush build-essential xmlse 131.9s
</code></pre>
<p>I let it run for hours and it didn't move.</p>
<p>Before this, I was stuck on an error saying the requested image's platform doesn't match the machine, so I added the following line to my docker-compose file (for all images):</p>
<pre class="lang-yaml prettyprint-override"><code> api:
container_name: <image_name>
platform: linux/amd64 <-- I added this line
image: <appname>:latest
build:
</code></pre>
<p>How can I debug what's going on and move forward?</p>
|
<python><python-3.x><docker><apple-m1><python-3.8>
|
2023-06-07 15:32:25
| 0
| 8,834
|
Em Ae
|
76,424,920
| 9,144,990
|
Turn protobuf-like string into json in python without scheme
|
<p>I have a log of gRPC messages for a TensorFlow Serving system that looks like this:</p>
<pre><code>model_spec:{name:"model_name"} inputs:{key:"args_0" value:{dtype:DT_STRING tensor_shape:{dim:{size:1}} string_val:"example input"}} inputs:{key:"args_1" value:{dtype:DT_STRING tensor_shape:{dim:{size:1}} string_val:"another example input"}} inputs:{key:"args_3" value:{dtype:DT_FLOAT tensor_shape:{dim:{size:1}} float_val:374969}}
</code></pre>
<p>I suspect this is a protobuf format string (but I might be wrong) and I would like to turn it into a python dictionary (something like this but I am flexible):</p>
<pre class="lang-py prettyprint-override"><code>{
"model_spec":{"name":"model_name"},
"inputs":{
"args_0": "example input",
"args_1": "another example input",
"args_3": 374969
}
}
</code></pre>
<p>Are there any libraries that can help me here? My problem is that I don't have any schema for this message (as required in this <a href="https://stackoverflow.com/a/39685021/9144990">stackoverflow question</a>). I tried parsing this with regular expressions but I hope there might be a better way.</p>
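<p>A rough regex-based sketch for exactly this schema-less log-scraping case. Hedged assumptions: each <code>inputs:{...}</code> block carries exactly one <code>*_val</code> field, and keys and values appear in the same order, so pairing them positionally is good enough for eyeballing logs but is not a general protobuf text parser:</p>

```python
import re

def parse_inputs(text):
    """Pair each key:"..." with the next *_val field, in order of appearance."""
    name = re.search(r'model_spec:\{name:"([^"]+)"\}', text)
    keys = re.findall(r'key:"([^"]+)"', text)
    vals = re.findall(r'(string_val|float_val|int_val):("[^"]*"|[-+.\deE]+)', text)
    inputs = {}
    for key, (kind, raw) in zip(keys, vals):
        inputs[key] = raw.strip('"') if kind == "string_val" else float(raw)
    return {"model_spec": {"name": name.group(1) if name else None},
            "inputs": inputs}
```

<p>If the tensorflow-serving proto stubs happen to be installed, parsing the text with <code>google.protobuf.text_format</code> into a <code>PredictRequest</code> may work on these messages and would be far more robust; the regex route is only for when no schema is available.</p>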
|
<python><protocol-buffers><grpc><tensorflow-serving>
|
2023-06-07 15:31:29
| 0
| 2,145
|
mrzo
|