| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,176,124
| 791,025
|
Python Enum with auto generated string names for typer
|
<p>I'm using <code>python 3.8.10</code> with <code>typer 0.7.0</code> and wanting to use an enum I have defined in my project as an input argument.</p>
<p>I've previously used something like the following with argparse and, alongside setting <code>choices=list(ModelTypeEnum)</code>, it's worked fine. Note that this is a short example; I have many more values, some of which require the numerical values assigned.</p>
<pre class="lang-py prettyprint-override"><code>class ModelTypeEnum(Enum):
    ALEXNET = 0
    RESNET50 = 1
    VGG19 = 2

    def __str__(self):
        return self.name
</code></pre>
<p>However, when I use something like this with typer, it expects the argument value to be an integer:</p>
<pre class="lang-py prettyprint-override"><code>app = typer.Typer()

@app.command()
def main(model_type: ModelTypeEnum = typer.Argument(..., help="Model to choose")):
    ...
    return 0
</code></pre>
<pre><code>click.exceptions.BadParameter: 'RESNET50' is not one of 0, 1, 2.
</code></pre>
<p>While this example is short and could be converted to a <code>(str, Enum)</code>, I would like a solution where I don't need to specify the string names manually while still having integers associated with each item - is this possible?</p>
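One possible direction (my sketch, not part of the original post): accept the value as its name string and convert it back with Enum item lookup, which keeps the integer values without declaring string names by hand. Wiring this into typer (e.g. via a parser callback) is left out here; only the lookup itself is shown.

```python
from enum import Enum

class ModelTypeEnum(Enum):
    ALEXNET = 0
    RESNET50 = 1
    VGG19 = 2

    def __str__(self):
        return self.name

# Enum item lookup converts a name string back to a member,
# so the integer value stays attached to each item.
member = ModelTypeEnum["RESNET50"]
print(member.name, member.value)  # RESNET50 1
```
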
|
<python><python-3.x><string><enums>
|
2023-01-19 17:31:33
| 1
| 2,132
|
AdmiralJonB
|
75,176,106
| 8,884,239
|
Oracle OJDBC Class not found exception using PySpark with jupyter notebook
|
<p>I am trying to read data from Oracle DB by using JDBC connection and I am getting following error:</p>
<p>"name": "Py4JJavaError",
"message": "An error occurred while calling o36.load.\n: java.lang.ClassNotFoundException: \nFailed to find data source: ojdbc. Please find packages at\nhttps://spark.apache.org/third-party-projects.html\n \r\n\tat org.apache.spark.sql.errors.QueryExecutionErrors$.failedToFindDataSourceError(QueryExecutionErrors.scala:587)\r\n\tat org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:675)\r\n\tat org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:725)\r\n\tat org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:207)\r\n\tat org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:171)\r\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)\r\n\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n\tat java.base/java.lang.reflect.Method.invoke(Method.java:568)\r\n\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)\r\n\tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)\r\n\tat py4j.Gateway.invoke(Gateway.java:282)\r\n\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)\r\n\tat py4j.commands.CallCommand.execute(CallCommand.java:79)\r\n\tat py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)\r\n\tat py4j.ClientServerConnection.run(ClientServerConnection.java:106)\r\n\tat java.base/java.lang.Thread.run(Thread.java:833)\r\nCaused by: java.lang.ClassNotFoundException: ojdbc.DefaultSource\r\n\tat java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445)\r\n\tat java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:587)\r\n\tat java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)\r\n\tat 
org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$5(DataSource.scala:661)\r\n\tat scala.util.Try$.apply(Try.scala:213)\r\n\tat org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$4(DataSource.scala:661)\r\n\tat scala.util.Failure.orElse(Try.scala:224)\r\n\tat org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:661)\r\n\t... 15 more\r\n"</p>
<p><strong>My Oracle version is 11g and I am using ojdbc6-11.2.0.4.jar</strong></p>
<h1>Here is my code:</h1>
<pre><code>from pyspark.sql import SparkSession, SQLContext
from pyspark import SparkContext

spark = SparkSession.builder \
    .config('spark.driver.extraClassPath', 'path\\to\\ojdbc6-11.2.0.4.jar') \
    .config('spark.executor.extraClassPath', 'path\\to\\ojdbc6-11.2.0.4.jar') \
    .master("local[*]") \
    .appName("app") \
    .getOrCreate()

df = spark.read.format("jdbc")\
    .option("url", "jdbc:oracle:thin:@xx.xx.xxx.xx:1521:schemaname")\
    .option("dbtable", "table")\
    .option("user", "user")\
    .option("password", "password")\
    .option("driver", "oracle.jdbc.driver.OracleDriver") \
    .load()
</code></pre>
<p>Can someone explain why I am getting this ClassNotFoundException and what I am missing?</p>
<p>Thank you in advance; any suggestions are appreciated.</p>
|
<python><apache-spark><pyspark><jupyter-notebook><ojdbc>
|
2023-01-19 17:30:30
| 0
| 301
|
Bab
|
75,176,073
| 1,142,881
|
How to broadcast-map by columns from one dataframe to another?
|
<p>I'd like to broadcast or expand a dataframe column-wise, from a smaller column index to a larger one, based on a mapping specification. I have the following example; please forgive small mistakes, as this is untested.</p>
<pre><code>import pandas as pd
# my broadcasting mapper spec
mapper = pd.Series(data=['a', 'b', 'c'], index=[1, 2, 2])
# my data
df = pd.DataFrame(data={1: [3, 4], 2: [5, 6]})
print(df)
# 1 2
# --------
# 0 3 5
# 1 4 6
# and I would like to get
df2 = ...
print(df2)
# a b c
# -----------
# 0 3 5 5
# 1 4 6 6
</code></pre>
<p>Simply mapping the columns will not work as there are duplicates, I would like to instead expand to the new values as defined in mapper:</p>
<pre><code># this will of course not work => raises InvalidIndexError
df.columns = df.columns.as_series().map(mapper)
</code></pre>
<p>A naive approach would just iterate the spec ...</p>
<pre><code>df2 = pd.DataFrame(index=df.index)
for old_col, new_col in mapper.items():
    df2[new_col] = df[old_col]
</code></pre>
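For comparison, the expansion can also be sketched without an explicit loop (untested beyond the example data above): select columns by the mapper's index, duplicates included, then relabel them with the mapper's values.

```python
import pandas as pd

# The broadcasting mapper spec and data from the question.
mapper = pd.Series(data=['a', 'b', 'c'], index=[1, 2, 2])
df = pd.DataFrame(data={1: [3, 4], 2: [5, 6]})

# Selecting by mapper.index repeats column 2; set_axis then renames
# the (now duplicated) columns to the mapper's values.
df2 = df[mapper.index].set_axis(mapper.to_numpy(), axis=1)
print(df2)
```
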
|
<python><pandas>
|
2023-01-19 17:27:11
| 2
| 14,469
|
SkyWalker
|
75,176,014
| 6,290,211
|
Inclusive argument is not working as expected in pd.date_range()
|
<p>With <code>pandas 1.4.0</code>, the argument <code>inclusive</code> was introduced in the function <code>pd.date_range()</code>.
Reading the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>inclusive{“both”, “neither”, “left”, “right”}, default “both”
Include boundaries; Whether to set each bound as closed or open</p>
</blockquote>
<p>I wanted to use it to create a <code>df</code> with monthly frequency, starting on the first day of January 2020 and finishing on the first day of December 2022.
I wrote this code:</p>
<pre><code>df = pd.date_range(start='01/01/2020', end='31/12/2022', freq = "M", inclusive = "left")
</code></pre>
<p>This is not giving my expected output:</p>
<pre><code>DatetimeIndex(['2020-01-31', '2020-02-29', '2020-03-31', '2020-04-30',
'2020-05-31', '2020-06-30', '2020-07-31', '2020-08-31',
'2020-09-30', '2020-10-31', '2020-11-30', '2020-12-31',
'2021-01-31', '2021-02-28', '2021-03-31', '2021-04-30',
'2021-05-31', '2021-06-30', '2021-07-31', '2021-08-31',
'2021-09-30', '2021-10-31', '2021-11-30', '2021-12-31',
'2022-01-31', '2022-02-28', '2022-03-31', '2022-04-30',
'2022-05-31', '2022-06-30', '2022-07-31', '2022-08-31',
'2022-09-30', '2022-10-31', '2022-11-30'],
dtype='datetime64[ns]', freq='M')
</code></pre>
<p><strong>Expected Output</strong></p>
<pre><code>DatetimeIndex(['2020-01-01', '2020-02-01', '2020-03-01', '2020-04-01',
'2020-05-01', '2020-06-01', '2020-07-01', '2020-08-01',
'2020-09-01', '2020-10-01', '2020-11-01', '2020-12-01',
'2021-01-01', '2021-02-01', '2021-03-01', '2021-04-01',
'2021-05-01', '2021-06-01', '2021-07-01', '2021-08-01',
'2021-09-01', '2021-10-01', '2021-11-01', '2021-12-01',
'2022-01-01', '2022-02-01', '2022-03-01', '2022-04-01',
'2022-05-01', '2022-06-01', '2022-07-01', '2022-08-01',
'2022-09-01', '2022-10-01', '2022-11-01', '2022-12-01' ],
dtype='datetime64[ns]', freq='M')
</code></pre>
<ol>
<li>How do I get my expected output with <code>pd.date_range()</code>?</li>
<li>What did I misunderstand about how to correctly use the argument <code>inclusive</code> in <code>pd.date_range()</code>?</li>
</ol>
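As a side note, a sketch of one way to get month-start dates (this changes the frequency rather than using <code>inclusive</code>, which only controls whether the endpoints are kept):

```python
import pandas as pd

# freq="MS" anchors each period on the first day of the month;
# freq="M" anchors on the last day, which is what the question observed.
idx = pd.date_range(start='2020-01-01', end='2022-12-01', freq='MS')
print(idx[0], idx[-1], len(idx))
```
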
|
<python><pandas><date-range>
|
2023-01-19 17:21:16
| 2
| 389
|
Andrea Ciufo
|
75,175,999
| 3,029,274
|
Dict Update method only adds last value
|
<p>I have the following code (link <a href="https://www.online-python.com/l8qyPLY0iZ" rel="nofollow noreferrer">here</a>)</p>
<p>I have looked at similar posts and examples online, but I have not been able to understand or resolve why <code>.update</code> only keeps the last value added to the dictionary.</p>
<pre><code>import json


def main():
    jsonData = {'CompanyId': '320193',
                'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents': [
                    {'decimals': '-6', 'unitRef': 'usd', 'period': {'instant': '2020-09-26'}, 'value': '39789000000'},
                    {'decimals': '-6', 'unitRef': 'usd', 'period': {'instant': '2019-09-28'}, 'value': '50224000000'},
                    {'decimals': '-6', 'unitRef': 'usd', 'period': {'instant': '2018-09-29'}, 'value': '25913000000'},
                    {'decimals': '-6', 'unitRef': 'usd', 'period': {'instant': '2021-09-25'}, 'value': '35929000000'}],
                'NetIncomeLoss': [
                    {'decimals': '-6', 'unitRef': 'usd', 'period': {'startDate': '2020-09-27', 'endDate': '2021-09-25'}, 'value': '94680000000'},
                    {'decimals': '-6', 'unitRef': 'usd', 'period': {'startDate': '2019-09-29', 'endDate': '2020-09-26'}, 'value': '57411000000'},
                    {'decimals': '-6', 'unitRef': 'usd', 'period': {'startDate': '2018-09-30', 'endDate': '2019-09-28'}, 'value': '55256000000'},
                    {'decimals': '-6', 'unitRef': 'usd', 'period': {'startDate': '2020-09-27', 'endDate': '2021-09-25'},
                     'segment': {'dimension': 'us-gaap:StatementEquityComponentsAxis', 'value': 'us-gaap:RetainedEarningsMember'}, 'value': '94680000000'},
                    {'decimals': '-6', 'unitRef': 'usd', 'period': {'startDate': '2019-09-29', 'endDate': '2020-09-26'},
                     'segment': {'dimension': 'us-gaap:StatementEquityComponentsAxis', 'value': 'us-gaap:RetainedEarningsMember'}, 'value': '57411000000'},
                    {'decimals': '-6', 'unitRef': 'usd', 'period': {'startDate': '2018-09-30', 'endDate': '2019-09-28'},
                     'segment': {'dimension': 'us-gaap:StatementEquityComponentsAxis', 'value': 'us-gaap:RetainedEarningsMember'},
                     'value': '55256000000'}]}
    jsonDump = json.dumps(jsonData)
    actualJson = json.loads(jsonDump)
    finalJson = fixData(actualJson)
    print(finalJson)


def fixData(jsonData):
    jsonDump = json.dumps(jsonData)
    actualJson = json.loads(jsonDump)
    finalObject = {}
    finalObject['CompanyId'] = actualJson.get("CompanyId")
    CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents = actualJson.get(
        "CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents")
    one = dataRepeat(CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents,
                     "CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents")
    finalObject.update(one)
    NetIncomeLoss = actualJson.get("NetIncomeLoss")
    two = dataRepeat(NetIncomeLoss, "NetIncomeLoss")
    finalObject.update(two)
    return finalObject


def dataRepeat(item, property):
    final = {}
    test = {}
    mList = []
    super_dict = {}
    for i in item:
        decimals = i.get("decimals")
        unitRef = i.get("unitRef")
        if (i.get("period").get("startDate")):
            startDate = i.get("period").get("startDate")
        else:
            startDate = None
        if (i.get("period").get("endDate")):
            endDate = i.get("period").get("endDate")
        else:
            endDate = None
        if (i.get("period").get("instant")):
            instant = i.get("period").get("instant")
        else:
            instant = None
        propertyValue = i.get("value")
        final['Decimals'] = decimals
        final['UnitRef'] = unitRef
        final['StartDate'] = startDate
        final['EndDate'] = endDate
        final['Instant'] = instant
        final[f"{property}"] = propertyValue
        mList.append({"Decimals": final['Decimals']})
        mList.append({"UnitRef": final['UnitRef']})
        mList.append({"StartDate": final['StartDate']})
        mList.append({"EndDate": final['EndDate']})
        mList.append({"Instant": final['Instant']})
        mList.append({f"{property}": final[f"{property}"]})
    # ou = {}
    # for d in mList:
    #     for key, value in d.items():
    #         ou.setdefault(key, []).append(value)
    # return ou
    ou = {}
    for d in mList:
        ou.update(d)
    return ou


main()
</code></pre>
<p>What I get :</p>
<pre><code>{'CompanyId': '320193', 'Decimals': '-6', 'UnitRef': 'usd', 'StartDate': '2018-09-30', 'EndDate': '2019-09-28', 'Instant': None, 'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents': '35929000000', 'NetIncomeLoss': '55256000000'}
</code></pre>
<p>vs Entire Data:</p>
<pre><code>{'Decimals': '-6'}
{'UnitRef': 'usd'}
{'StartDate': None}
{'EndDate': None}
{'Instant': '2020-09-26'}
{'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents': '39789000000'}
{'Decimals': '-6'}
{'UnitRef': 'usd'}
{'StartDate': None}
{'EndDate': None}
{'Instant': '2019-09-28'}
{'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents': '50224000000'}
{'Decimals': '-6'}
{'UnitRef': 'usd'}
{'StartDate': None}
{'EndDate': None}
{'Instant': '2018-09-29'}
{'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents': '25913000000'}
{'Decimals': '-6'}
{'UnitRef': 'usd'}
{'StartDate': None}
{'EndDate': None}
{'Instant': '2021-09-25'}
{'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents': '35929000000'}
{'Decimals': '-6'}
{'UnitRef': 'usd'}
{'StartDate': '2020-09-27'}
{'EndDate': '2021-09-25'}
{'Instant': None}
{'NetIncomeLoss': '94680000000'}
{'Decimals': '-6'}
{'UnitRef': 'usd'}
{'StartDate': '2019-09-29'}
{'EndDate': '2020-09-26'}
{'Instant': None}
{'NetIncomeLoss': '57411000000'}
{'Decimals': '-6'}
{'UnitRef': 'usd'}
{'StartDate': '2018-09-30'}
{'EndDate': '2019-09-28'}
{'Instant': None}
{'NetIncomeLoss': '55256000000'}
{'Decimals': '-6'}
{'UnitRef': 'usd'}
{'StartDate': '2020-09-27'}
{'EndDate': '2021-09-25'}
{'Instant': None}
{'NetIncomeLoss': '94680000000'}
{'Decimals': '-6'}
{'UnitRef': 'usd'}
{'StartDate': '2019-09-29'}
{'EndDate': '2020-09-26'}
{'Instant': None}
{'NetIncomeLoss': '57411000000'}
{'Decimals': '-6'}
{'UnitRef': 'usd'}
{'StartDate': '2018-09-30'}
{'EndDate': '2019-09-28'}
{'Instant': None}
{'NetIncomeLoss': '55256000000'}
</code></pre>
<p>The expected output would be for <code>finalJson</code> to contain all of the data.</p>
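For context, `dict.update` overwrites a key each time it reappears, which is why only the last record's values survive. A sketch (with shortened, made-up records) of the `setdefault` idea from the commented-out block, which keeps every value by collecting lists:

```python
# Hypothetical, shortened records standing in for the question's data.
records = [
    {'Decimals': '-6', 'Instant': '2020-09-26', 'Value': '39789000000'},
    {'Decimals': '-6', 'Instant': '2019-09-28', 'Value': '50224000000'},
]

merged = {}
for d in records:
    for key, value in d.items():
        # setdefault appends to a per-key list instead of overwriting.
        merged.setdefault(key, []).append(value)

print(merged['Instant'])  # ['2020-09-26', '2019-09-28']
```
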
|
<python>
|
2023-01-19 17:20:23
| 1
| 2,090
|
Maddy
|
75,175,995
| 3,795,219
|
Show Python method's API documentation from a URL on hover in VSCode
|
<p><strong>Is it possible to have VSCode show a Python method's online documentation upon mouse-over/hover?</strong></p>
<p>For example, <a href="https://pytorch.org/docs/stable/index.html" rel="nofollow noreferrer">PyTorch API Docs</a> are hosted online and the method hierarchy is simple to navigate. When I hover over a method from the PyTorch library, I'd like to see the API documentation from that URL, <strong>not</strong> just the various method signatures.</p>
<p>This feature is available to Java Developer using Eclipse: The user can configure Eclipse to pull API documentation from a URL (e.g., the <a href="https://docs.oracle.com/javase/8/docs/api/java/util/List.html" rel="nofollow noreferrer">List API JavaDocs</a>) on a project-by-project basis. Here's an example of the popup that shows up in Eclipse for the <code>List.of()</code> method:</p>
<hr />
<p><a href="https://i.sstatic.net/rOSny.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rOSny.png" alt="enter image description here" /></a></p>
<hr />
<p>Notice that along with the method signature is a short description of the method ("returns an unmodifiable list...") along with additional helpful information (e.g., Type parameters, Parameters, and Returns)</p>
<p>To clarify: I'm not only looking for a popup showing the method signature(s), but also the associated documentation. For example: VSCode only displays the method signatures for <code>torch.rand()</code>, but the PyTorch API documentation includes the <a href="https://pytorch.org/docs/stable/generated/torch.rand.html" rel="nofollow noreferrer">full description</a>.</p>
|
<python><visual-studio-code><code-documentation>
|
2023-01-19 17:20:06
| 0
| 8,645
|
Austin
|
75,175,835
| 1,157,639
|
Creating a multilevel dataframe by clubbing columns with the same name under top level
|
<p>Consider the following input dataframe:</p>
<pre><code> index | col_1 | col_2 |
1 | 1234 | 4567 |
2 | 3456 | 9453 |
</code></pre>
<p>Each column of the dataframe is a series (timeseries), and we want to do some computations that create series of length equal to the input (for example, computing the running mean of the last 5 samples (op_1) and of the last 10 samples (op_2)).</p>
<p>Finally, the output should be grouped under the name of the column like shown below:</p>
<pre><code> Output:
| col_1 | col_2 |
index | value opr_1 opr_2 | value opr_1 opr_2 |
1 | 1234 10 1 | 4567 22 13 |
2 | 3456 18 6 | 9453 21 4 |
</code></pre>
<p>This should allow me to access each original column's related computation under a single head <code>col_1</code>.</p>
<p>Initially, I thought of increasing the level of the input dataframe manually as:</p>
<pre><code>df.columns = pd.MultiIndex.from_product([df.columns, ['value']])
</code></pre>
<p>But I cannot figure out how to run <code>apply</code> on its second level alone (considering that I want to address the column as <code>df['col_1']['value']</code> and then put those values into the dataframe at the same level, inside <code>df['col_1']['op_1']</code>).</p>
<p>So, the second approach I tried was to create a dataframe for each operation as</p>
<pre><code>op_1 = df.apply(lambda x: op_1_func(x, **params))
op_2 = df.apply(lambda x: op_2_func(x, **params))
</code></pre>
<p>And then merge the three dataframes to create the desired multilevel view.
However, I cannot figure out a way to concat the dataframes to produce the desired output. Please help!</p>
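One way to build that column layout (a sketch; the rolling means below are placeholders for op_1_func/op_2_func, which are not shown in the question) is `pd.concat` with a dict of frames, then swapping the column levels so everything groups under each original column:

```python
import pandas as pd

df = pd.DataFrame({'col_1': [1234, 3456], 'col_2': [4567, 9453]})

# Placeholder computations standing in for op_1_func / op_2_func.
op_1 = df.rolling(5, min_periods=1).mean()
op_2 = df.rolling(10, min_periods=1).mean()

# The dict keys become the top column level; swaplevel puts the original
# column names on top, so everything groups under col_1 / col_2.
out = (pd.concat({'value': df, 'op_1': op_1, 'op_2': op_2}, axis=1)
         .swaplevel(axis=1)
         .sort_index(axis=1))
print(out['col_1'].columns.tolist())
```
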
|
<python><pandas><dataframe><multi-level>
|
2023-01-19 17:06:18
| 1
| 776
|
Anshul
|
75,175,797
| 338,479
|
Apply function to each element of a list in place
|
<p>Similar question to <a href="https://stackoverflow.com/q/25082410/338479">Apply function to each element of a list</a>, however the answers to that question (list comprehension or map()) really return a new list to replace the old one.</p>
<p>I want to do something like this:</p>
<pre><code>for obj in myList:
obj.count = 0
</code></pre>
<p>Obviously I could (and currently do) process the list exactly this way, but I was wondering if there were a construct such as</p>
<pre><code>foreach(lambda obj: obj.count=0, myList)
</code></pre>
<p>Obviously I only care about this if the more "pythonic" way is more efficient. My list can have over 100,000 elements, so I want this to be quick.</p>
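For what it's worth, a `foreach` along those lines can be sketched with `map` consumed by a zero-length `deque` (`setattr` stands in for assignment, which a lambda cannot contain). In CPython the plain `for` loop is usually at least as fast, so this is mostly a stylistic choice:

```python
from collections import deque

class Item:
    def __init__(self):
        self.count = 99

def foreach(fn, iterable):
    # maxlen=0 discards every element, so map is consumed in C
    # with O(1) memory and no result list is built.
    deque(map(fn, iterable), maxlen=0)

my_list = [Item() for _ in range(1000)]
foreach(lambda obj: setattr(obj, 'count', 0), my_list)
print(all(obj.count == 0 for obj in my_list))  # True
```
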
|
<python><list>
|
2023-01-19 17:02:18
| 1
| 10,195
|
Edward Falk
|
75,175,720
| 823,633
|
Filter a dataframe by column index in a chain, without using the column name or table name
|
<p>Generate an example dataframe</p>
<pre><code>import random
import string

import numpy as np
import pandas as pd

df = pd.DataFrame(
    columns=[random.choice(string.ascii_uppercase) for i in range(5)],
    data=np.random.rand(10, 5))
df
V O C X E
0 0.060255 0.341051 0.288854 0.740567 0.236282
1 0.933778 0.393021 0.547383 0.469255 0.053089
2 0.994518 0.156547 0.917894 0.070152 0.201373
3 0.077694 0.685540 0.865004 0.830740 0.605135
4 0.760294 0.838441 0.905885 0.146982 0.157439
5 0.116676 0.340967 0.400340 0.293894 0.220995
6 0.632182 0.663218 0.479900 0.931314 0.003180
7 0.726736 0.276703 0.057806 0.624106 0.719631
8 0.677492 0.200079 0.374410 0.962232 0.915361
9 0.061653 0.984166 0.959516 0.261374 0.361677
</code></pre>
<p>Now I want to filter a dataframe using the values in the first column, but since I make heavy use of chaining (e.g. <code>df.T.replace(0, np.nan).pipe(np.log2).mean(axis=1).fillna(0).pipe(func)</code>) I need a much more compact notation for the operation. Normally you'd do something like</p>
<pre><code>df[df.iloc[:, 0] < 0.5]
V O C X E
0 0.060255 0.341051 0.288854 0.740567 0.236282
3 0.077694 0.685540 0.865004 0.830740 0.605135
5 0.116676 0.340967 0.400340 0.293894 0.220995
9 0.061653 0.984166 0.959516 0.261374 0.361677
</code></pre>
<p>but the awkwardly redundant syntax is horrible for chaining. I want to replace it with a <code>.query()</code>, and normally you'd use the column name like <code>df.query('V < 0.5')</code>, but here I want to be able to query the table by column index number instead of by name. So in the example, I've deliberately randomized the column names. I also can not use the table name in the query like <code>df.query('@df[0] < 0.5')</code> since in a long chain, the intermediate result has no name.</p>
<p>I'm hoping there is some syntax such as <code>df.query('_[0] < 0.05')</code> where I can refer to the source table as some symbol <code>_</code>.</p>
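One chain-friendly pattern (a sketch, not using `query`) is `.loc` with a callable: the lambda receives the frame it is called on, so the intermediate result needs no name.

```python
import numpy as np
import pandas as pd

# Deterministic example data; column names are arbitrary, as in the question.
df = pd.DataFrame(np.arange(10).reshape(5, 2) / 10.0, columns=['V', 'O'])

# .loc passes the calling frame to the lambda, so this works anywhere
# in a method chain without naming the intermediate result.
out = df.loc[lambda d: d.iloc[:, 0] < 0.5]
print(out['V'].tolist())  # [0.0, 0.2, 0.4]
```
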
|
<python><pandas><dataframe>
|
2023-01-19 16:55:40
| 3
| 1,410
|
goweon
|
75,175,711
| 1,052,870
|
Unable to parallelise workloads on the KubeCluster operator for Dask
|
<p>I want to be able to run multiple "workflows" in parallel, where each workflow submits Dask tasks and waits for its own Dask tasks to complete. Some of these workflows will then need to use the results of their first set of tasks to run more tasks in Dask. I want the workflows to share a single Dask cluster running in Kubernetes.</p>
<p>I've implemented a basic proof of concept, which works with a local cluster but fails on <code>KubeCluster</code> with the error <code>AttributeError: 'NoneType' object has no attribute '__await__'</code>. This is because the operator version of <code>KubeCluster</code> doesn't seem to accept <code>asynchronous=True</code> as an argument like the old version did.</p>
<p>I'm fairly new to Python and very new to Dask, so I might be doing something very daft. I got a little further than this with the legacy <code>KubeCluster</code> approach, but couldn't get <code>Client.upload_file()</code> to work asynchronously, which I'm definitely also going to need.</p>
<p>Very grateful for any direction in getting this to work - I'm not attached to any particular implementation strategy, as long as I can wait on a subset of Dask tasks to complete whilst others run in parallel on the same cluster, which seems like a fairly basic requirement for a distributed computing platform.</p>
<pre class="lang-py prettyprint-override"><code>import logging
import time

import dask
from dask.distributed import Client
import asyncio
from dask_kubernetes.operator import KubeCluster

logging.basicConfig(format='%(asctime)s %(levelname)-8s %(name)-20s %(message)s', level=logging.DEBUG, datefmt='%Y-%m-%d %H:%M:%S')
logger = logging.getLogger('AsyncTest')


def work(workflow, instance):
    time.sleep(5)
    logger.info('work done for workflow ' + str(workflow) + ', instance ' + str(instance))


async def workflow(id, client):
    tasks = []
    for x in range(5): tasks.append(client.submit(work, id, x))
    for task in tasks: await task
    return 'workflow 1 done'


async def all_workflows():
    cluster = KubeCluster(custom_cluster_spec='cluster-spec.yml', namespace='dask')
    cluster.adapt(minimum=5, maximum=50)
    client = Client(cluster, asynchronous=True)
    dask.config.set({"distributed.admin.tick.limit": "60s"})
    cluster.scale(15)
    logger.info('is the client async? ' + str(client.asynchronous))  # Returns False

    # work is parallelised as expected if I use this local client instead
    # client = await Client(processes=False, asynchronous=True)

    tasks = [asyncio.create_task(workflow(1, client)),
             asyncio.create_task(workflow(2, client)),
             asyncio.create_task(workflow(3, client))]
    await asyncio.gather(*tasks)
    await client.close()
    logger.info('all workflows done')


if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(all_workflows())
</code></pre>
|
<python><asynchronous><async-await><dask>
|
2023-01-19 16:55:00
| 0
| 2,081
|
wwarby
|
75,175,659
| 8,040,369
|
Replace special character "-" with 0 integer value
|
<p>I am traversing an excel sheet using openpyxl and copying over the contents to another excel file.</p>
<pre><code>import openpyxl

NEW_EXCEL_FILE = openpyxl.load_workbook('workbook1.xlsx')
NEW_EXCEL_FILE_WS = NEW_EXCEL_FILE.active
SHEET = NEW_EXCEL_FILE.get_sheet_by_name(sheet_name)
for i, row in enumerate(SHEET.iter_rows()):
    for j, col in enumerate(row):
        col.value = col.value.replace("-", 0)
        NEW_FILE_SHEET.cell(row=i+1, column=j+1).value = col.value
NEW_EXCEL_FILE.save('workbook1.xlsx')
</code></pre>
<p>I need to replace the contents of cells containing "-" with 0.
When I tried using <strong>col.value.replace("-", 0)</strong>, it would not accept an int value.</p>
<p>I am getting the exception below:
<strong>TypeError: replace() argument 2 must be str, not int</strong></p>
<p>Please help.</p>
<p>Thanks,</p>
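A plain-Python sketch of the type issue (no openpyxl involved): `str.replace` only accepts string arguments, so swapping the whole cell value conditionally sidesteps the TypeError.

```python
# Hypothetical cell values, including the "-" placeholder and a None cell.
values = ['-', 5, 'total', '-', None]

# Replace the literal "-" value with integer 0 and keep everything else.
cleaned = [0 if v == '-' else v for v in values]
print(cleaned)  # [0, 5, 'total', 0, None]
```

Inside the loop from the question, the same idea would read `col.value = 0 if col.value == '-' else col.value`.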
|
<python><python-3.x><replace>
|
2023-01-19 16:50:53
| 1
| 787
|
SM079
|
75,175,635
| 1,273,751
|
Run python script then enter interactive session in ipython
|
<p>I have found ways to run a python script and then enter interactive mode:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/13432717/enter-interactive-mode-in-python">Enter Interactive Mode In Python</a></li>
<li><a href="https://stackoverflow.com/questions/50917938/enabling-console-features-with-code-interact">Enabling console features with code.interact</a></li>
</ul>
<p>But none of them enters IPython; they just enter the conventional Python shell.</p>
<p>Is there a way to do the same and end up in ipython? That would give auto-complete features and color highlighting.</p>
|
<python><ipython><interactive>
|
2023-01-19 16:49:00
| 0
| 2,645
|
Homero Esmeraldo
|
75,175,617
| 10,007,302
|
Using regular expression to remove commonly used company suffixes from a list of companies
|
<p>I have the following code that I use to generate a list of common company suffixes below:</p>
<pre><code>import re
import string

from cleanco import typesources


def generate_common_suffixes():
    unique_items = []
    company_suffixes_raw = typesources()
    for item in company_suffixes_raw:
        for i in item:
            if i.lower() not in unique_items:
                unique_items.append(i.lower())
    unique_items.extend(['holding'])
    return unique_items
</code></pre>
<p>I'm then trying to use the following code to remove those suffixes from a list of company names</p>
<pre><code>company_name = ['SAMSUNG ÊLECTRONICS Holding, LTD', 'Apple inc',
                'FIIG Securities Limited Asset Management Arm',
                'First Eagle Alternative Credit, LLC',
                'Global Credit Investments', 'Seatown',
                'Sona Asset Management']

suffixes = generate_common_suffixes()
cleaned_names = []

for company in company_name:
    for suffix in suffixes:
        new = re.sub(r'\b{}\b'.format(re.escape(suffix)), '', company)
        cleaned_names.append(new)
</code></pre>
<p>I keep getting a list of unchanged company names despite knowing that the suffixes are there.</p>
<p><strong>Alternate Attempt</strong></p>
<p>I've also tried an alternate method where I'd look for the word and replace it without <code>regex</code>, but I couldn't figure out why it was removing parts of the company name itself - for example, it would remove the first 3 letters of Samsung.</p>
<pre><code>for word in common_words:
    name = name.replace(word, "")
</code></pre>
<p>Any help is greatly appreciated!</p>
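A sketch of the loop shape (with a short, hypothetical suffix list standing in for cleanco's output): substitute into an accumulated string rather than the original `company` on every pass, and append once per company rather than once per suffix. The `\b` boundaries are also what keep a plain substring replace from eating parts of a name like Samsung.

```python
import re

# Hypothetical suffix list for illustration; the question builds its
# list from cleanco's typesources() instead.
suffixes = ['ltd', 'inc', 'llc', 'holding']
companies = ['Apple inc', 'First Eagle Alternative Credit, LLC']

cleaned = []
for name in companies:
    out = name
    for s in suffixes:
        # Substitute into the accumulated string, not the original name.
        out = re.sub(r'\b{}\b'.format(re.escape(s)), '', out,
                     flags=re.IGNORECASE)
    cleaned.append(out.strip(' ,'))
print(cleaned)  # ['Apple', 'First Eagle Alternative Credit']
```
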
|
<python><python-3.x><regex>
|
2023-01-19 16:46:54
| 1
| 1,281
|
novawaly
|
75,175,306
| 1,773,592
|
How do I structure a repo with Cloud Run needing higher level code?
|
<p>I have added code to a repo to build a Cloud Run service. The structure is like this:
<a href="https://i.sstatic.net/77dxk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/77dxk.png" alt="enter image description here" /></a></p>
<p>I want to run <code>b.py</code> in <code>cr</code>.</p>
<p>Is there any way I can deploy <code>cr</code> without just copying <code>b.py</code> into the <code>cr</code> directory? (I don't want to do that, since there are lots of other folders and files that <code>b.py</code> represents.)
The problem is that the Dockerfile cannot see folders above its build context.
Also, how would e.g. <code>api.py</code> import from <code>b.py</code>?</p>
<p>TIA you lovely people.</p>
|
<python><docker><google-cloud-run>
|
2023-01-19 16:24:05
| 1
| 3,391
|
schoon
|
75,175,220
| 12,082,289
|
Sqlalchemy in-memory database for MSSQL
|
<p>I'm trying to setup tests for my project and want to use an in memory database for those tests. Based on some examples I found online I've been able to get an in memory sqlite database working...</p>
<pre><code>class TestService:
    def setup_method(self) -> None:
        sqlite_shared_name = "test_db_{}".format(
            "".join(random.sample(string.ascii_letters, k=4))
        )
        engine = create_engine(
            "sqlite:///file:{}?mode=memory&cache=shared&uri=true".format(
                sqlite_shared_name
            ),
            echo=True,
        )
        self.session = Session(engine)
        Base.metadata.create_all(engine)
</code></pre>
<p>The problem I have is my models are all based around MSSQL (that's what the 'real' code is using) so I get</p>
<blockquote>
<pre><code>def do_execute(self, cursor, statement, parameters, context=None):
    cursor.execute(statement, parameters)
</code></pre>
<p>E sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unknown database "dbo"</p>
</blockquote>
<p>I also have some specific dialect things brought in for my models</p>
<pre><code>from sqlalchemy.dialects.mssql import UNIQUEIDENTIFIER, DECIMAL, BIGINT
</code></pre>
<p>Is there any way to get an in memory sqlalchemy engine that works with all these MSSQL functions?</p>
<p>Thanks!</p>
|
<python><sql-server><sqlalchemy>
|
2023-01-19 16:17:34
| 1
| 565
|
Jeremy Farmer
|
75,175,081
| 667,440
|
Why isn't Anaconda working inside of one of my Conda environments while using Ubuntu LTS?
|
<p>I'm not sure why I can't get Anaconda to launch while I'm inside of one of my Conda environments; I'm using Ubuntu LTS. While I'm in the base environment, I can run the following command in my terminal and Anaconda will start up just fine.</p>
<pre><code>anaconda-navigator
</code></pre>
<p>I also set up a Conda environment with specifications for a specific version of python and specific versions of libraries in a .yml file; conda was able to download and install everything successfully. When I run the below command in my terminal, I can see that the environment changes.</p>
<pre><code>conda activate {environment name}
</code></pre>
<p>My problem is that when I try to start up Anaconda Navigator while inside of this new environment, I get a <em>command not found</em> message.</p>
<p>How do I fix this error?</p>
|
<python><ubuntu><anaconda><conda>
|
2023-01-19 16:05:32
| 1
| 1,140
|
j.jerrod.taylor
|
75,174,976
| 17,696,880
|
Capture all capitalized words in a row in a capture group only if they are before the end of the string or if they are before a punctuation mark or \n
|
<pre class="lang-py prettyprint-override"><code>import re


def test_extraction_func(input_text):
    word = ""
    try_save_name = False  # Not stopped yet

    # Here I have tried to concatenate a substring at the end to try to establish it as a delimiter
    # that allows me to identify whether or not it is the end of the sentence
    input_text_aux = input_text + "3hsdfhdg11sdf"

    name_capture_pattern_01 = r"([A-ZÁÉÍÓÚÜÑ][a-záéíóúüñ]+(?:\s*[A-ZÁÉÍÓÚÜÑ][a-záéíóúüñ]+)*)"
    regex_pattern_01 = r"(?i:no)\s*(?i:identifiques\s*como\s*(?:a\s*un|un|)\s*nombre\s*(?:a\s*el\s*nombre\s*(?:de|)|al\s*nombre|a)\s+)" + name_capture_pattern_01 + r"\s*(?:\.\s*\n|;|,|)" + r"\s*<3hsdfhdg11sdf"

    n1 = re.search(regex_pattern_01, input_text_aux)
    if n1 and word == "" and try_save_name == False:
        word, = n1.groups()
        if (word == None or word == "" or word == " "):
            print("ERROR!!")
        else:
            try_save_name = True
            word = word.strip()
            print(repr(word))  # --> print the substring that I captured with the capturing group
    else:
        print(repr(word))  # --> NOT CAPTURED ""


input_text = "SAFJHDFH no identifiques como nombre a María del Carmen asjdjhs"  # example 1 (NOT CAPTURE)
input_text = "no identifiques como nombre a María Carmen"  # example 2 (CAPTURE)
input_text = "sagasghas no identifiques como a un nombre a María Carmen Azul, k9kfjfjfd"  # example 3 (CAPTURE)
input_text = "sagasghas no identifiques como a un nombre a María Carmen Azul; Aun que no estoy realmente segura de ello"  # example 4 (CAPTURE)
input_text = "no identifiques como nombre a María hghdshgsd"  # example 5 (NOT CAPTURE)

test_extraction_func(input_text)
</code></pre>
<p>I want to extract one or more consecutive capitalized words with <code>([A-ZÁÉÍÓÚÜÑ][a-záéíóúüñ]+(?:\s*[A-ZÁÉÍÓÚÜÑ][a-záéíóúüñ]+)*)</code>, but only when the match (<code>n1</code>) succeeds with the capturing group at the end of the sentence, or followed by sentence-boundary punctuation such as <code>. \n</code>, <code>.\n</code>, <code>.</code>, <code>,</code> or <code>;</code>. (I have made this punctuation optional, since it is often omitted even when the sentence does end.)</p>
<p>I've tried setting the end of the string by concatenating a generic delimiter <code>"3hsdfhdg11sdf"</code> and storing this inside a helper aux string <code>input_text_aux</code>, ie. concatenating some content that a user is unlikely to input in the <code>input_text</code>. However this did not work correctly, as it prevents any of the examples from being detected.</p>
<p>Note that not in all examples, the capture pattern will be valid, so these should be the correct console prints:</p>
<pre><code>"" #for the example 1
"María Carmen" #for the example 2
"María Carmen Azul" #for the example 3
"María Carmen Azul" #for the example 4
"" #for the example 5
</code></pre>
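<p>A hedged sketch of an alternative approach: instead of concatenating an artificial delimiter, anchor the capture with an alternation that requires punctuation, a newline, or end-of-string right after the name. The prefix below is simplified from the question's full pattern, so it covers only the wordings used in the five examples:</p>

```python
import re

# Capturing group: one or more consecutive capitalized words
NAME = r"([A-ZÁÉÍÓÚÜÑ][a-záéíóúüñ]+(?:\s+[A-ZÁÉÍÓÚÜÑ][a-záéíóúüñ]+)*)"

# Simplified prefix + the name + a required boundary (punctuation, newline, or end)
PATTERN = (r"(?i:no\s+identifiques\s+como\s+(?:a\s+un\s+)?nombre\s+a\s+)"
           + NAME
           + r"\s*(?:[.;,]|\n|$)")

def extract_name(text):
    m = re.search(PATTERN, text)
    return m.group(1).strip() if m else ""
```

<p>Because the boundary is mandatory, "María del Carmen asjdjhs" and "María hghdshgsd" yield no capture (a lowercase word follows the name), while the other examples capture the full run of capitalized words.</p>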
|
<python><python-3.x><regex><string><regex-group>
|
2023-01-19 15:58:05
| 1
| 875
|
Matt095
|
75,174,904
| 9,911,256
|
Python3 get key from dict.items() maximum value
|
<p>I'm not a Python programmer. Having a Python3 dictionary like this,</p>
<pre><code>d={"a":"1", "b":"2"}
</code></pre>
<p>How can I get the key for the largest value (that is, 'b') in a simple form?</p>
<p>Of course, I can write some spaghetti,</p>
<pre><code>def get_max_key(data):
    MAX = ''
    MAXKEY = ''
    for x in data.items():
        if x[1] > MAX:
            MAX = x[1]
            MAXKEY = x[0]
    return MAXKEY
</code></pre>
<p>But that's silly. I know there should be a pythonic way to do it, possibly a one-liner.</p>
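<p>The idiomatic one-liner is <code>max</code> with a key function. Note that with the string values from the example the comparison is lexicographic; a sketch of both variants:</p>

```python
d = {"a": "1", "b": "2"}

# key=d.get ranks keys by their values (lexicographically for strings)
best = max(d, key=d.get)

# if the string values represent numbers, compare them numerically instead
best_numeric = max(d, key=lambda k: int(d[k]))
```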
|
<python><python-3.x><dictionary>
|
2023-01-19 15:53:40
| 1
| 954
|
RodolfoAP
|
75,174,753
| 17,969,875
|
Universal method in python for trying to parse a value to a chosen type
|
<p>Is there a method in Python that could allow me to parse a given value to a chosen type?</p>
<p>A library that works in a similar manner would also be fine.</p>
<p>If there is no such method, how can I create something like it?</p>
<p>Example (parsing list[str] to list[int]):</p>
<pre class="lang-py prettyprint-override"><code># could be a value of ANY type
value: list[str] = ['1', '2', '3']
# can be any other type
parse_to_type = list[int]
# makes best effort to parse
parsed_value = universal_parser(value, parse_to_type)
# parses or raises error
# parsed_value = [1, 2, 3]
</code></pre>
<p>Bear in mind I don't want a miracle function; something with parsing capabilities comparable to the Pydantic lib would be enough. Pydantic itself is able to parse a list[str] into a list[int], but I was not able to locate inside its code how it does this for any type.</p>
<p>I know I can use something like <code>int(value)</code> or <code>float(value)</code> ( <code>annotation(value)</code> in general). But that won't <strong>work for complex nested types</strong> like: <code>list[int]</code>, <code>dict[str, str | int]</code>.</p>
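<p>As a starting point (not a library API, just a sketch built on <code>typing.get_origin</code>/<code>get_args</code>; it assumes Python 3.9+ for the <code>list[int]</code> syntax, and note that with unions the first candidate that succeeds wins, so ordering matters):</p>

```python
import types
from typing import Union, get_args, get_origin

def universal_parser(value, target_type):
    """Best-effort recursive parser for simple nested annotations."""
    origin = get_origin(target_type)
    if origin is None:                      # a plain type such as int or str
        return target_type(value)
    args = get_args(target_type)
    if origin is list:
        (item_type,) = args
        return [universal_parser(v, item_type) for v in value]
    if origin is dict:
        key_type, val_type = args
        return {universal_parser(k, key_type): universal_parser(v, val_type)
                for k, v in value.items()}
    # handle both typing.Union and the Python 3.10+ "X | Y" form
    if origin is Union or origin is getattr(types, "UnionType", None):
        for candidate in args:
            try:
                return universal_parser(value, candidate)
            except (TypeError, ValueError):
                continue
        raise ValueError(f"cannot parse {value!r} as {target_type}")
    raise TypeError(f"unsupported annotation: {target_type}")
```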
|
<python><python-3.x><pydantic>
|
2023-01-19 15:41:48
| 1
| 731
|
PedroDM
|
75,174,679
| 7,446,003
|
Using django form wizard with allauth
|
<p>Currently my user sign up process is implemented with allauth. Does anyone have any experience of how to make this a multipage process e.g. with form wizard from formtools?</p>
<p>forms.py (stored at users/forms.py)</p>
<pre><code>
class UserCreationForm1(forms.UserCreationForm):

    error_message = forms.UserCreationForm.error_messages.update(
        {
            "duplicate_username": _(
                "This username has already been taken."
            )
        }
    )

    username = CharField(label='User Name',
                         widget=TextInput(attrs={'placeholder': 'User Name'})
                         )

    class Meta(forms.UserCreationForm.Meta):
        model = User
        fields = ['username', 'email', 'title', 'first_name', 'last_name', 'country', ]

    field_order = ['username', 'email', 'title', 'first_name', 'last_name', 'country', ]

    def clean_username(self):
        username = self.cleaned_data["username"]
        if self.instance.username == username:
            return username
        try:
            User._default_manager.get(username=username)
        except User.DoesNotExist:
            return username
        raise ValidationError(
            self.error_messages["duplicate_username"]
        )


class UserCreationForm2(forms.UserCreationForm):

    class Meta(forms.UserCreationForm.Meta):
        model = User
        fields = ['occupation', 'password1', 'password2', 'terms']

    field_order = ['occupation', 'password1', 'password2', 'terms']

    def clean_terms(self):
        is_filled = self.cleaned_data['terms']
        if not is_filled:
            raise forms.ValidationError('This field is required')
        return is_filled
</code></pre>
<p>For the views.py i then have</p>
<pre><code>SIGNUP_FORMS = [('0', UserCreationForm),
                ('1', UserCreationForm2)]

TEMPLATES = {'0': 'account/signup_1.html',
             '1': 'account/signup_2.html'}


class SignupWizard(SessionWizardView):

    def get_template_names(self):
        return [TEMPLATES[self.steps.current]]

    def done(self, form_list, **kwargs):
        for form in form_list:
            if isinstance(form, UserCreationForm):
                print('form1')
                user = form.save(self.request)
            elif isinstance(form, UserCreationForm2):
                userprofile = form.save(commit=False)
                user = self.request.user
                userprofile.user = user
                userprofile.save()
                print('form2')
        return HttpResponseRedirect(settings.LOGIN_REDIRECT_URL)
</code></pre>
<p>What I find, though, is that after completing the first form and clicking to continue to the second, the user is redirected to the built-in allauth view at accounts/signup/.</p>
<p>Does anyone have any advice?</p>
|
<python><django><django-allauth><django-formtools>
|
2023-01-19 15:37:09
| 0
| 422
|
RobMcC
|
75,174,553
| 9,257,578
|
GOT TypeError: deprecated() got an unexpected keyword argument 'name' in boto3
|
<p>Boto3 was running successfully the day before, but today I am getting these errors when I run <code>import boto3</code>:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/root1/.local/lib/python3.10/site-packages/boto3/__init__.py", line 16, in <module>
from boto3.session import Session
File "/home/root1/.local/lib/python3.10/site-packages/boto3/session.py", line 17, in <module>
import botocore.session
File "/home/root1/.local/lib/python3.10/site-packages/botocore/session.py", line 29, in <module>
import botocore.credentials
File "/home/root1/.local/lib/python3.10/site-packages/botocore/credentials.py", line 35, in <module>
from botocore.config import Config
File "/home/root1/.local/lib/python3.10/site-packages/botocore/config.py", line 16, in <module>
from botocore.endpoint import DEFAULT_TIMEOUT, MAX_POOL_CONNECTIONS
File "/home/root1/.local/lib/python3.10/site-packages/botocore/endpoint.py", line 24, in <module>
from botocore.awsrequest import create_request_object
File "/home/root1/.local/lib/python3.10/site-packages/botocore/awsrequest.py", line 24, in <module>
import botocore.utils
File "/home/root1/.local/lib/python3.10/site-packages/botocore/utils.py", line 32, in <module>
import botocore.httpsession
File "/home/root1/.local/lib/python3.10/site-packages/botocore/httpsession.py", line 28, in <module>
from urllib3.contrib.pyopenssl import orig_util_SSLContext as SSLContext
File "/usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py", line 50, in <module>
import OpenSSL.SSL
File "/home/root1/.local/lib/python3.10/site-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import SSL, crypto
File "/home/root1/.local/lib/python3.10/site-packages/OpenSSL/SSL.py", line 19, in <module>
from OpenSSL.crypto import (
File "/home/root1/.local/lib/python3.10/site-packages/OpenSSL/crypto.py", line 3224, in <module>
utils.deprecated(
TypeError: deprecated() got an unexpected keyword argument 'name'
</code></pre>
<p>I tried uninstalling and reinstalling, and also tried installing a specific version, but I am still getting the error.</p>
<p>I am using it on Ubuntu 22.0.
<a href="https://i.sstatic.net/Vhjdj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vhjdj.png" alt="enter image description here" /></a></p>
<p>The Python version I am running is <code>Python 3.10.6</code>.</p>
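<p>For reference: this particular <code>TypeError</code> typically means the installed <code>pyOpenSSL</code> is too old for the installed <code>cryptography</code> package (whose <code>utils.deprecated</code> signature changed). A hedged sketch of the usual fix, assuming pip-managed packages:</p>

```shell
# Upgrade the mismatched pair together
pip install --upgrade pyopenssl cryptography

# If the error persists, try a newer pyOpenSSL explicitly
# (the version bound here is illustrative, not verified for your setup)
pip install "pyopenssl>=22.0"
```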
|
<python><boto3>
|
2023-01-19 15:27:49
| 0
| 533
|
Neetesshhr
|
75,174,474
| 12,244,355
|
Python: How to sum up the values of a dictionary list in a column
|
<p>I have a DataFrame like this:</p>
<pre><code>timestamp asks
2022-01-01 00:00:00 [{'price':'0.99', 'size':'12311'},{'price':'0.991', 'size':'20013'}]
2022-01-01 01:00:00 [{'price':'0.99', 'size':'3122'},{'price':'0.991', 'size':'43221'}]
...
</code></pre>
<p>What I want to do is sum up the values of <code>size</code> for each <code>timestamp</code> to get the following DataFrame:</p>
<pre><code>timestamp asks
2022-01-01 00:00:00 32324
2022-01-01 01:00:00 46343
...
</code></pre>
<p>i.e. <code>12311+20013= 32324</code>.</p>
<p>How can this be done (using pandas ideally)?</p>
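<p>Since each cell holds a plain Python list of dicts, one approach (a sketch; it assumes the <code>size</code> values are strings, as in the sample, hence the <code>int()</code> conversion) is a row-wise <code>apply</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "timestamp": ["2022-01-01 00:00:00", "2022-01-01 01:00:00"],
    "asks": [
        [{"price": "0.99", "size": "12311"}, {"price": "0.991", "size": "20013"}],
        [{"price": "0.99", "size": "3122"}, {"price": "0.991", "size": "43221"}],
    ],
})

# Sum the "size" entries of each row's list of dicts
df["asks"] = df["asks"].apply(lambda lst: sum(int(d["size"]) for d in lst))
```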
|
<python><pandas><list><dataframe><dictionary>
|
2023-01-19 15:20:53
| 1
| 785
|
MathMan 99
|
75,174,405
| 3,099,733
|
Is it possible to remove unecessary nested structure in yaml file?
|
<p>I need to set a param that is deep inside a yaml object like below:</p>
<pre class="lang-yaml prettyprint-override"><code>executors:
  hpc01:
    context:
      cp2k:
        charge: 0
</code></pre>
<p>Is it possible to make it more clear, for example</p>
<pre class="lang-yaml prettyprint-override"><code>executors: hpc01: context: cp2k: charge: 0
</code></pre>
<p>I am using <code>ruamel.yaml</code> in Python to parse the file and it fails to parse the example. Is there some yaml dialect can support such style, or is there better way to write such configuration in standard yaml spec?</p>
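<p>For reference, standard YAML does allow a compact one-line form, but only with explicit flow-mapping braces; the bare colon chain in the question is not valid YAML. A sketch of the equivalent flow style:</p>

```yaml
executors: {hpc01: {context: {cp2k: {charge: 0}}}}
```

<p><code>ruamel.yaml</code> and any spec-compliant parser load this to the same nested structure as the block form.</p>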
|
<python><yaml>
|
2023-01-19 15:15:18
| 2
| 1,959
|
link89
|
75,174,228
| 2,244,766
|
python argparser help in case of positional + REMAINDER
|
<p>Given code like the below:</p>
<pre><code>>>> import argparse
>>> parser=argparse.ArgumentParser()
>>> parser.add_argument('must_have', help='this is a required arg')
>>> parser.add_argument('-o', '--optional', help='some optional arg')
>>> parser.add_argument('--others', nargs=argparse.REMAINDER, help='some other options')
</code></pre>
<p>and displaying help will display sth like:</p>
<pre><code>usage: [-h] [-o OPTIONAL] [--others ...] must_have
positional arguments:
must_have this is a required arg
optional arguments:
-h, --help show this help message and exit
-o OPTIONAL, --optional OPTIONAL
some optional arg
--others ... some other options
</code></pre>
<p>which is misleading, because passing arguments in the order shown in the help causes an error:</p>
<pre><code>>>> parser.parse_args(['-o', 'foo', '--others', 'foo=1', 'bar=2', 'ye_mandatory_arg'])
usage: [-h] [-o OPTIONAL] [--others ...] must_have
: error: the following arguments are required: must_have
</code></pre>
<p>which of course is correct. To make it work, we have to pass the mandatory positional argument before the one using REMAINDER:</p>
<pre><code>>>> parser.parse_args(['-o', 'foo', 'ye-mandatory-arg', '--others', 'foo=1', 'bar=2'])
Namespace(must_have='ye-mandatory-arg', optional='foo', others=['foo=1', 'bar=2'])
</code></pre>
<p>Is it even possible to fix the help message?</p>
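<p>One workaround (a sketch, not the only option) is to override the auto-generated usage string so the positional is shown before the REMAINDER option, in the order the parser actually accepts:</p>

```python
import argparse

parser = argparse.ArgumentParser(
    prog="tool",  # placeholder program name
    # hand-written usage putting must_have before --others
    usage="%(prog)s [-h] [-o OPTIONAL] must_have [--others ...]",
)
parser.add_argument("must_have", help="this is a required arg")
parser.add_argument("-o", "--optional", help="some optional arg")
parser.add_argument("--others", nargs=argparse.REMAINDER, help="some other options")

# the positional now precedes --others, matching the displayed usage
args = parser.parse_args(["-o", "foo", "mandatory", "--others", "foo=1", "bar=2"])
```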
|
<python><argparse>
|
2023-01-19 15:01:23
| 1
| 4,035
|
murison
|
75,174,096
| 6,054,066
|
How to get pandas to roll through entire series?
|
<h1>Pandas Rolling function</h1>
<h2>Last elements when window_size == step_size</h2>
<p>I can't seem to get the last three element of an example 9 element series to be rolled on, when my window size and step size are both 3.</p>
<p>Is the below an intended behaviour of <code>pandas</code>?</p>
<h3>My desired outcome</h3>
<p>If so how can I roll over the <code>Series</code> so that:</p>
<p><code>pd.Series([1., 1., 1., 2., 2., 2., 3., 3., 3.]).rolling(window=3, step=3).mean()</code></p>
<p>evaluate to <code>pd.Series([1., 2., 3.,])</code>?</p>
<h3>Example</h3>
<pre class="lang-py prettyprint-override"><code> import pandas as pd
def print_mean(x):
print(x)
return x.mean()
df = pd.DataFrame({"A": [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]})
df["left"] = (
df["A"].rolling(window=3, step=3, closed="left").apply(print_mean, raw=False)
)
df["right"] = (
df["A"].rolling(window=3, step=3, closed="right").apply(print_mean, raw=False)
)
df["both"] = (
df["A"].rolling(window=3, step=3, closed="both").apply(print_mean, raw=False)
)
df["neither"] = (
df["A"].rolling(window=3, step=3, closed="neither").apply(print_mean, raw=False)
)
</code></pre>
<p>This evaluates to:</p>
<pre><code> A left right both neither
0 0.0 NaN NaN NaN NaN
1 1.0 NaN NaN NaN NaN
2 2.0 NaN NaN NaN NaN
3 3.0 1.0 2.0 1.5 NaN
4 4.0 NaN NaN NaN NaN
5 5.0 NaN NaN NaN NaN
6 6.0 4.0 5.0 4.5 NaN
7 7.0 NaN NaN NaN NaN
8 8.0 NaN NaN NaN NaN
</code></pre>
<p>and prints:</p>
<pre><code>0 0.0
1 1.0
2 2.0
dtype: float64
3 3.0
4 4.0
5 5.0
dtype: float64
</code></pre>
<pre><code>1 1.0
2 2.0
3 3.0
dtype: float64
4 4.0
5 5.0
6 6.0
dtype: float64
</code></pre>
<pre><code>0 0.0
1 1.0
2 2.0
3 3.0
dtype: float64
3 3.0
4 4.0
5 5.0
6 6.0
dtype: float64
</code></pre>
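<p>If the goal is simply one mean per non-overlapping block of 3, one workaround (a sketch; it sidesteps <code>rolling</code> and its <code>closed</code> semantics entirely) is to group by integer-divided positions:</p>

```python
import pandas as pd

s = pd.Series([1., 1., 1., 2., 2., 2., 3., 3., 3.])

# Non-overlapping windows of 3: rows 0-2, 3-5, 6-8 each form one group
block_means = s.groupby(s.index // 3).mean()
```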
|
<python><pandas><dataframe><pandas-rolling>
|
2023-01-19 14:49:23
| 1
| 450
|
semyd
|
75,174,062
| 639,676
|
How to implement a fast type inference procedure for SKI combinators in Python?
|
<p>How to implement a fast simple type inference procedure for <a href="https://en.wikipedia.org/wiki/SKI_combinator_calculus" rel="nofollow noreferrer">SKI combinators</a> in Python?</p>
<p>I am interested in 2 functions:</p>
<ol>
<li><p><code>typable</code>: returns true if a given SKI term has a type (I suppose it should work faster than searching for a concrete type).</p>
</li>
<li><p><code>principle_type</code>: returns <a href="https://en.wikipedia.org/wiki/Principal_type" rel="nofollow noreferrer">principle type</a> if it exists and False otherwise.</p>
</li>
</ol>
<br>
<pre><code>typable(SKK) = True
typable(SII) = False # (I=SKK). This term does not have a type. Similar to \x.xx
principle_type(S) = (t1 -> t2 -> t3) -> (t1 -> t2) -> t1 -> t3
principle_type(K) = t1 -> t2 -> t1
principle_type(SK) = (t3 -> t2) -> t3 -> t3
principle_type(SKK) = principle_type(I) = t1 -> t1
</code></pre>
<p>Theoretical questions:</p>
<ol>
<li><p>I read about the <a href="https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner_type_system" rel="nofollow noreferrer">Hindley–Milner type system</a>. There are 2 algorithms: Algorithm J and Algorithm W. Do I understand correctly that they are used for a more complex type system, System F, i.e. a system with parametric polymorphism? Are there combinators typable in System F but not typable in the simple type system?</p>
</li>
<li><p>As I understand, to find a principle type we need to solve a system of equations between symbolic expressions. Is it possible to simplify the algorithm and speed up the process by using SMT solvers like <a href="https://github.com/Z3Prover/z3" rel="nofollow noreferrer">Z3</a>?</p>
</li>
</ol>
<p>My implementation of basic combinators, reduction and parsing:</p>
<pre><code>from __future__ import annotations
import typing
from dataclasses import dataclass
@dataclass(eq=True, frozen=True)
class S:
    def __str__(self):
        return "S"

    def __len__(self):
        return 1


@dataclass(eq=True, frozen=True)
class K:
    def __str__(self):
        return "K"

    def __len__(self):
        return 1


@dataclass(eq=True, frozen=True)
class App:
    left: Term
    right: Term

    def __str__(self):
        return f"({self.left}{self.right})"

    def __len__(self):
        return len(str(self))


Term = typing.Union[S, K, App]


def parse_ski_string(s):
    # remove spaces
    s = ''.join(s.split())
    stack = []
    for c in s:
        # print(stack, len(stack))
        if c == '(':
            pass
        elif c == 'S':
            stack.append(S())
        elif c == 'K':
            stack.append(K())
        # elif c == 'I':
        #     stack.append(I())
        elif c == ')':
            x = stack.pop()
            if len(stack) > 0:
                # S(SK)
                f = stack.pop()
                node = App(f, x)
                stack.append(node)
            else:
                # S(S)
                stack.append(x)
        else:
            raise Exception('wrong c = ', c)
    if len(stack) != 1:
        raise Exception('wrong stack = ', str(stack))
    return stack[0]


def simplify(expr: Term):
    if isinstance(expr, S) or isinstance(expr, K):
        return expr
    elif isinstance(expr, App) and isinstance(expr.left, App) and isinstance(expr.left.left, K):
        return simplify(expr.left.right)
    elif isinstance(expr, App) and isinstance(expr.left, App) and isinstance(expr.left.left, App) and isinstance(
            expr.left.left.left, S):
        return simplify(App(App(expr.left.left.right, expr.right), (App(expr.left.right, expr.right))))
    elif isinstance(expr, App):
        l2 = simplify(expr.left)
        r2 = simplify(expr.right)
        if expr.left == l2 and expr.right == r2:
            return App(expr.left, expr.right)
        else:
            return simplify(App(l2, r2))
    else:
        raise Exception('Wrong type of combinator', expr)

# simplify(App(App(K(),S()),K())) = S
# simplify(parse_ski_string('(((SK)K)S)')) = S
</code></pre>
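<p>As a starting point for <code>typable</code>/<code>principle_type</code>, here is a hedged sketch of simple-type inference by Robinson unification. It uses a minimal tuple representation (<code>'S'</code>, <code>'K'</code>, and <code>('app', f, x)</code>) instead of the dataclasses above, and it relies on the occurs check raising, which is the only way an SKI term can fail to be simply typable (there is only one type constructor, the arrow):</p>

```python
counter = 0

def fresh():
    """Return a new type variable."""
    global counter
    counter += 1
    return ("var", counter)

def arrow(a, b):
    return ("arrow", a, b)

def apply_subst(s, t):
    """Fully resolve a type through the substitution map s."""
    if t[0] == "var":
        u = s.get(t[1])
        return apply_subst(s, u) if u is not None else t
    return ("arrow", apply_subst(s, t[1]), apply_subst(s, t[2]))

def occurs(v, t, s):
    t = apply_subst(s, t)
    if t[0] == "var":
        return t[1] == v
    return occurs(v, t[1], s) or occurs(v, t[2], s)

def unify(a, b, s):
    """Robinson unification; mutates and returns the substitution s."""
    a, b = apply_subst(s, a), apply_subst(s, b)
    if a[0] == "var":
        if b == a:
            return s
        if occurs(a[1], b, s):
            raise TypeError("occurs check failed")
        s[a[1]] = b
        return s
    if b[0] == "var":
        return unify(b, a, s)
    unify(a[1], b[1], s)
    return unify(a[2], b[2], s)

def infer(term, s):
    """Infer the type of 'S', 'K', or ('app', f, x) under substitution s."""
    if term == "S":
        t1, t2, t3 = fresh(), fresh(), fresh()
        return arrow(arrow(t1, arrow(t2, t3)),
                     arrow(arrow(t1, t2), arrow(t1, t3)))
    if term == "K":
        t1, t2 = fresh(), fresh()
        return arrow(t1, arrow(t2, t1))
    f_type = infer(term[1], s)
    x_type = infer(term[2], s)
    r = fresh()
    unify(f_type, arrow(x_type, r), s)
    return apply_subst(s, r)

def typable(term):
    try:
        infer(term, {})
        return True
    except TypeError:
        return False
```

<p>For example, <code>SKK</code> (i.e. <code>I</code>) infers to <code>t -> t</code>, while <code>SII</code> fails the occurs check, matching the expectations in the question.</p>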
|
<python><algorithm><functional-programming><lambda-calculus><combinators>
|
2023-01-19 14:45:51
| 1
| 4,143
|
Oleg Dats
|
75,173,924
| 4,954,079
|
Fastest way to apply function along axes
|
<p>In a time-critical code fragment, I need to apply a function along different axes of a tensor and sum results. A peculiar feature is that the number of axes of the tensor (<code>ns_test</code>) can be large. I came up with two implementations, where I move the current axis (<code>moveaxis</code>) to either zeroth (<code>h_zero</code>) or last (<code>h_last</code>) position, apply the function, and move the axis back. I am not sure it is the best way.</p>
<pre><code>import numpy as np
import time
def h_last(state, km, ns):
    new_state = np.zeros_like(state)
    for i in range(ns):
        a = np.moveaxis(state, i+1, -1).copy()
        for k in range(km):
            a[..., k] = (k+0.5) * a[..., k]
        new_state += np.moveaxis(a, -1, i+1)
    return new_state


def h_zero(state, km, ns):
    new_state = np.zeros_like(state)
    for i in range(ns):
        a = np.moveaxis(state, i+1, 0).copy()
        for k in range(km):
            a[k, ...] = (k+0.5) * a[k, ...]
        new_state += np.moveaxis(a, 0, i+1)
    return new_state


# ==================== init ============================
km_test = 4
ns_test = 7
nreps = 100
dims = tuple([ns_test] + [km_test] * ns_test)
y = np.random.rand(*dims)

# =================== first run =============================
tic = time.perf_counter()
for i in range(nreps):
    yy = h_last(y, km_test, ns_test)
toc = time.perf_counter()
print(f"Run time h_last {toc - tic:0.4f} seconds")

# =================== second run =============================
tic = time.perf_counter()
for i in range(nreps):
    yyy = h_zero(y, km_test, ns_test)
toc = time.perf_counter()
print(f"Run time h_zero {toc - tic:0.4f} seconds")

print(np.linalg.norm(yy - yyy))
</code></pre>
<p>I am a bit surprised that the zeroth axis performs better (I thought NumPy internally uses C-order for storage). But my main question is how to further speed up the code. I looked into <code>apply_along_axis</code>, but this seems to be very slow.</p>
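<p>One way to avoid the axis-moving and copying altogether (a sketch; it assumes the operation is just an elementwise scaling by <code>k + 0.5</code> along each axis, as in the example) is to broadcast a weight vector reshaped onto the target axis:</p>

```python
import numpy as np

def h_vectorized(state, km, ns):
    # weights (k + 0.5) for k = 0..km-1
    w = np.arange(km) + 0.5
    new_state = np.zeros_like(state)
    for i in range(ns):
        shape = [1] * state.ndim
        shape[i + 1] = km          # align the weights with axis i+1
        new_state += state * w.reshape(shape)
    return new_state
```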
|
<python><numpy><optimization><tensor>
|
2023-01-19 14:35:54
| 1
| 367
|
yarchik
|
75,173,896
| 15,366,635
|
How to multiply each digit by a number in a list in order
|
<p>I'm trying to create a program that will multiply each digit with a number in the list in order</p>
<p>So let's say my input number is <code>1234</code>; then</p>
<ul>
<li><code>1</code> will be multiplied by <code>4</code></li>
<li><code>2</code> will be multiplied by <code>8</code></li>
<li><code>3</code> will be multiplied by <code>5</code>, and so on</li>
</ul>
<pre class="lang-py prettyprint-override"><code>frequentNum = input("please enter 4 digit frequent number: ")
arr = [4, 8, 5, 7]
for i in str(frequentNum):
    for j in arr:
        value = int(i) * j
        print(value)
</code></pre>
<p>I tried this code but it multiplies each digit with every number in the list</p>
<p>I was trying to get an output of</p>
<pre><code>4
16
15
28
</code></pre>
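<p>The nested loop pairs every digit with every multiplier; <code>zip</code> pairs them positionally instead. A sketch (using a literal string in place of <code>input()</code>, which also returns a string):</p>

```python
frequent_num = "1234"
arr = [4, 8, 5, 7]

# zip walks both sequences in lockstep: '1'*4, '2'*8, '3'*5, '4'*7
result = [int(digit) * factor for digit, factor in zip(frequent_num, arr)]
print(result)  # [4, 16, 15, 28]
```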
|
<python>
|
2023-01-19 14:33:25
| 0
| 1,341
|
I_love_vegetables
|
75,173,872
| 12,181,868
|
Semantic Search regardless of database type
|
<p>I can apply semantic search to a specific table of a database, but my goal is a module that can route search queries to the relevant table and database. Any database should be flexibly integrable with the system.</p>
<ul>
<li><strong>Example:</strong>
My question might be ambiguous, so here is an example:</li>
</ul>
<p><strong>Use Case:</strong>
The search query is "Property rates in Vancouver, BC", in this case, the logic will definitely understand, this query is related to the property database and the location in Vancouver BC. On the other hand, if the search query is "How to turn off the windows computer", the semantic search will understand the relation with the Computer/OS info database. The problem is how can the system choose which database should be used. and what if the "system understood database" is not integrated with the system?</p>
|
<python><database><elasticsearch><semantics><opensemanticsearch>
|
2023-01-19 14:31:54
| 0
| 348
|
Muhammad Afzaal
|
75,173,822
| 5,743,711
|
The sklearn.manifold.TSNE gives different results for same input vectors
|
<p>I give TSNE a list of vectors; some of these vectors are exactly the same, but the output of the fit_transform() function can be different for each! Is this expected behavior? How can I ensure that identical input vectors are mapped to the same output vector?</p>
<p>To clarify:
I cannot tell for sure, but I have even noticed that the first entry in the input list of vectors always gets a different, unexpected value.</p>
<p>Consider the following simple example.</p>
<p>Notice that the first three vectors are the same, the random_state is static but the first three 2D vectors in the output can be different from each others.</p>
<pre><code>from sklearn import manifold
import numpy as np
X = np.array([[2, 1, 3, 5],
              [2, 1, 3, 5],
              [2, 1, 3, 5],
              [2, 1, 3, 5],
              [12, 1, 3, 5],
              [87, 22, 3, 5],
              [3, 23, 9, 5],
              [43, 87, 3, 5],
              [121, 65, 3, 5]])

m = manifold.TSNE(
    n_components=2,
    perplexity=0.666666,
    verbose=0,
    random_state=42,
    angle=.99,
    init='pca',
    metric='cosine',
    n_iter=1000)

X_embedded = m.fit_transform(X)

# The following might fail
assert sum(X_embedded[1] - X_embedded[2]) == 0
assert sum(X_embedded[0] - X_embedded[1]) == 0
</code></pre>
<p><strong>Update:</strong>
<code>sklearn.__version__</code> is '1.2.0'</p>
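<p>t-SNE optimizes a non-convex objective, so even with a fixed <code>random_state</code> identical rows are treated as separate points and can land in different places. One workaround (a sketch; <code>embed</code> stands in for any <code>fit_transform</code>-style callable, such as <code>m.fit_transform</code>, and a trivial column-slice is used here for demonstration) is to embed only the unique rows and broadcast the result back:</p>

```python
import numpy as np

def embed_consistently(X, embed):
    # Collapse duplicate rows, embed once, then expand back so that
    # identical inputs are guaranteed identical outputs.
    uniq, inverse = np.unique(X, axis=0, return_inverse=True)
    return embed(uniq)[inverse]

X = np.array([[2, 1, 3, 5], [2, 1, 3, 5], [12, 1, 3, 5]], dtype=float)
# a stand-in embedding for demonstration (first two columns)
Y = embed_consistently(X, lambda a: a[:, :2])
```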
|
<python><scikit-learn><tsne>
|
2023-01-19 14:28:27
| 1
| 1,408
|
Samer Aamar
|
75,173,552
| 3,450,064
|
Matplotlib, plot a vector of numbers as a rectangle filled with numbers
|
<p>So let's say I have a vector of numbers.</p>
<pre class="lang-py prettyprint-override"><code>np.random.randn(5).round(2).tolist()
[2.05, -1.57, 1.07, 1.37, 0.32]
</code></pre>
<p>I want to draw a rectangle that shows these elements as numbers in a rectangle.
Something like this:</p>
<p><a href="https://i.sstatic.net/f1Wzp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f1Wzp.png" alt="enter image description here" /></a></p>
<p>Is there an easy way to do this in matplotlib?</p>
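<p>One simple approach (a sketch using <code>ax.text</code> plus grid lines for the cell borders; the figure size and tick choices are arbitrary):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this line for GUI use
import matplotlib.pyplot as plt

vals = [2.05, -1.57, 1.07, 1.37, 0.32]

fig, ax = plt.subplots(figsize=(len(vals), 1))
for i, v in enumerate(vals):
    # one centered text label per unit-wide cell
    ax.text(i + 0.5, 0.5, str(v), ha="center", va="center")

# grid lines at integer positions form the cell borders
ax.set_xlim(0, len(vals))
ax.set_ylim(0, 1)
ax.set_xticks(range(len(vals) + 1))
ax.set_yticks([0, 1])
ax.grid(True)
ax.tick_params(labelbottom=False, labelleft=False, length=0)
fig.savefig("vector_box.png", bbox_inches="tight")
```

<p>matplotlib also ships <code>ax.table</code>, which may be closer to a ready-made solution for tabular cells.</p>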
|
<python><matplotlib>
|
2023-01-19 14:10:07
| 3
| 11,300
|
CentAu
|
75,173,357
| 4,691,830
|
deep copy of GeoDataFrame becomes Pandas DataFrame
|
<p>When I <a href="https://docs.python.org/3/library/copy.html" rel="nofollow noreferrer">deepcopy</a> a <code>geopandas.GeoDataFrame</code> without a "geometry" column, the copy becomes a <code>pandas.DataFrame</code>. Why does this happen? I looked on the main branches on Github and neither Pandas nor Geopandas override <code>__deepcopy__</code>.</p>
<pre><code>import copy
import geopandas as gpd
empty = gpd.GeoDataFrame()
print("original plain:", type(empty))
print("copied plain:", type(copy.deepcopy(empty)))
geom = gpd.GeoDataFrame(columns=["geometry"])
print("original with geometry:", type(geom))
print("copied with geometry:", type(copy.deepcopy(geom)))
</code></pre>
<p>Output:</p>
<pre><code>original plain: <class 'geopandas.geodataframe.GeoDataFrame'>
copied plain: <class 'pandas.core.frame.DataFrame'>
original with geometry: <class 'geopandas.geodataframe.GeoDataFrame'>
copied with geometry: <class 'geopandas.geodataframe.GeoDataFrame'>
</code></pre>
|
<python><pandas><geopandas><deep-copy>
|
2023-01-19 13:57:21
| 0
| 4,145
|
Joooeey
|
75,173,255
| 11,502,612
|
Is it possible to keep the format of INI file after change it with ConfigParser?
|
<p>Is it possible for <code>ConfigParser</code> to keep the format of an <code>INI</code> config file? I have config files which have comments and specific <code>section</code>/<code>option</code> names, and if I read and change the content of a file, <code>ConfigParser</code> re-formats it (I can solve the <code>section</code>/<code>option</code> names).</p>
<p>I am familiar with the way of working of <code>ConfigParser</code> (Read key/value pairs to a <code>dict</code> and dumping it to the file after change). But I am interested if there is solution to keep the original format and comments in the <code>INI</code> file.</p>
<p><strong>Example:</strong></p>
<p><strong><code>test.ini</code></strong></p>
<pre class="lang-ini prettyprint-override"><code># Comment line
; Other Comment line
[My-Section]
Test-option = Test-Variable
</code></pre>
<p><strong><code>test.py</code></strong></p>
<pre class="lang-py prettyprint-override"><code>import configparser as cp
parser: cp.ConfigParser = cp.ConfigParser()
parser.read("test.ini")
parser.set("My-Section", "New-Test_option", "TEST")
with open("test.ini", "w") as configfile:
parser.write(configfile)
</code></pre>
<p><strong><code>test.ini</code> after run script</strong></p>
<pre class="lang-ini prettyprint-override"><code>[My-Section]
test-option = Test-Variable
new-test_option = TEST
</code></pre>
<p>As you can see above the comment lines (both types of comments) have been removed. Furthermore, the <code>option</code> names have been re-formatted.</p>
<p>If I add the following line to source code then I can keep the format of the <code>options</code> but the comments are still removed:</p>
<pre><code>parser.optionxform = lambda option: option
</code></pre>
<p>So the <code>test.ini</code> file after run the script with above line:</p>
<pre class="lang-ini prettyprint-override"><code>[My-Section]
Test-option = Test-Variable
New-Test_option = TEST
</code></pre>
<p><strong>So my question(s):</strong></p>
<ul>
<li>Is it possible to keep the comments in the <code>INI</code> file after change it?</li>
<li>Is it possible to keep the formatting of file eg.: spaces, tabs, new lines etc...?</li>
</ul>
<p><strong>Note:</strong></p>
<ul>
<li>I have already checked the <code>RawConfigParser</code> module but as I saw that also doesn't support the format keeping.</li>
</ul>
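<p>With the standard library alone, one partial workaround (a known trick with real limitations: it only preserves comments that appear inside a section, and it stores them as valueless keys) is to disable comment parsing so comment lines survive the round trip:</p>

```python
import configparser
import io

# "/" is a comment prefix we never use, so "#" and ";" lines are read
# as keys with no value instead of being discarded
parser = configparser.ConfigParser(comment_prefixes="/", allow_no_value=True)
parser.optionxform = str  # keep the original option-name casing

src = "[My-Section]\n# Comment line\n; Other Comment line\nTest-option = Test-Variable\n"
parser.read_string(src)
parser.set("My-Section", "New-Test_option", "TEST")

out = io.StringIO()
parser.write(out)
```

<p>For full round-tripping (comments before the first section header, blank lines, delimiters), a third-party library such as configupdater is the usual suggestion.</p>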
|
<python><python-3.x><configuration><ini><configparser>
|
2023-01-19 13:50:20
| 3
| 5,389
|
milanbalazs
|
75,173,213
| 7,084,115
|
Python how to iterate over a list inside another list
|
<p>I have the following list of lists as below:</p>
<pre><code>my_list = [
['first-column', 'DisplayName', 'FLOW TRIGGERED: 636e56d390c8c0910d592cc6', 'ClassificationType', 'NLU', 'KeyPhrases', 'MetaIntent', 'Description', 'test description', 'SampleSentences', [], 'Regexes'],
['first-column', 'DisplayName', 'FLOW TRIGGERED: 636e56d390c8c0910d592cc6', 'ClassificationType', 'NLU', 'KeyPhrases', 'MetaIntent', 'Description', 'test description', 'SampleSentences', [], 'Regexes'],
['first-column', 'DisplayName', 'FLOW TRIGGERED: 636e56d490c8c01802592cd1', 'ClassificationType', 'NLU', 'KeyPhrases', 'MetaIntent', 'Description', 'test description', 'SampleSentences', ['Pressemitteilung?\n', 'Pressemeldung?\n', 'Wo finde ich den Schlussbericht zur Messe?\n'], 'Regexes'],
['first-column', 'DisplayName', 'FLOW TRIGGERED: 636e56d490c8c0edac592cd8', 'ClassificationType', 'NLU', 'KeyPhrases', 'MetaIntent', 'Description', 'test description', 'SampleSentences', ['Aussteller?\n', 'Ausstellerverzeichnis 2022?\n', 'Welche Aussteller waren 2022 dabei?\n', 'Ausstellerliste 2022?\n', 'Welche Unternehmen waren als Aussteller vertreten?\n'], 'Regexes'],
['first-column', 'DisplayName', 'FLOW TRIGGERED: 636e56d490c8c01739592ce0', 'ClassificationType', 'NLU', 'KeyPhrases', 'MetaIntent', 'Description', 'test description', 'SampleSentences', ['Wie hoch war die Ausstellerzahl 2022?\n', 'Wie viele Unternehmen waren vor Ort\n', 'Anzahl Aussteller?\n', 'Ausstellerzahl?\n', 'Wie viele Aussteller waren auf der Messe vertreten?\n'], 'Regexes']
]
</code></pre>
<p>I am using the above list to write a CSV file as below:</p>
<pre><code>rows = zip(*my_list)

with open('test.csv', "w") as f:
    writer = csv.writer(f, lineterminator='\n\n')
    for row in rows:
        writer.writerow(row)
</code></pre>
<p>So my CSV looks like below. Which is the format I need.</p>
<pre><code>first-column,first-column,first-column
[],[],"['Pressemitteilung?', 'Pressemeldung?', 'Wo finde ich den Schlussbericht zur Messe?']"
Regexes,Regexes,Regexes
</code></pre>
<p>But above is not exactly what I need my CSV to looks like,</p>
<p><strong>I need it like below:</strong></p>
<pre><code>first-column,first-column,first-column,first-column,first-column
DisplayName,DisplayName,DisplayName,DisplayName,DisplayName
FLOW TRIGGERED: 636e56d390c8c0910d592cc6,FLOW TRIGGERED: 636e56d390c8c0910d592cc6,FLOW TRIGGERED: 636e56d490c8c01802592cd1,FLOW TRIGGERED: 636e56d490c8c0edac592cd8,FLOW TRIGGERED: 636e56d490c8c01739592ce0
ClassificationType,ClassificationType,ClassificationType,ClassificationType,ClassificationType
NLU,NLU,NLU,NLU,NLU
KeyPhrases,KeyPhrases,KeyPhrases,KeyPhrases,KeyPhrases
MetaIntent,MetaIntent,MetaIntent,MetaIntent,MetaIntent
Description,Description,Description,Description,Description
test description,test description,test description,test description,test description
SampleSentences,SampleSentences,SampleSentences,SampleSentences,SampleSentences
[],[],Pressemitteilung?,Aussteller?,Wie hoch war die Ausstellerzahl 2022?
[],[],Pressemeldung?,Ausstellerverzeichnis 2022?,Wie viele Unternehmen waren vor Ort
[],[],Wo finde ich den Schlussbericht zur Messe?,Welche Aussteller waren 2022 dabei?,Anzahl Aussteller?
[],[],[],Ausstellerliste 2022?,Ausstellerzahl?
[],[],[],Welche Unternehmen waren als Aussteller vertreten?,Wie viele Aussteller waren auf der Messe vertreten?
Regexes,Regexes,Regexes,Regexes,Regexes
</code></pre>
<p>How can I iterate over the inner array so my CSV looks like above?</p>
<pre><code>with open('test.csv', "w") as f:
    writer = csv.writer(f, lineterminator='\n\n')
    for row in rows:
        writer.writerow(row)
        writer.writerow(row[1])
</code></pre>
<p>But this produces weird output. I am new to <code>python</code>; can someone help me fix this?</p>
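<p>The core problem is that one cell holds a nested list of sentences while the others hold scalars, so <code>zip(*my_list)</code> cannot flatten it. One approach (a sketch on simplified columns; it pads the shorter sentence lists with the placeholder <code>"[]"</code>, and the index of the nested cell is assumed known):</p>

```python
import csv
import io

# simplified stand-ins for the real columns: scalar, nested list, scalar
columns = [
    ["first-column", ["x?", "y?"], "Regexes"],
    ["first-column", [], "Regexes"],
]

NESTED = 1  # index of the SampleSentences cell in each column
width = max(len(col[NESTED]) for col in columns)

# splice each padded sentence list into its column so all columns align
expanded = []
for col in columns:
    sentences = col[NESTED] + ["[]"] * (width - len(col[NESTED]))
    expanded.append(col[:NESTED] + sentences + col[NESTED + 1:])

buf = io.StringIO()
writer = csv.writer(buf, lineterminator="\n")
for row in zip(*expanded):
    writer.writerow(row)
```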
|
<python><csv>
|
2023-01-19 13:47:16
| 1
| 4,101
|
Jananath Banuka
|
75,173,128
| 10,967,204
|
QVideoWidget() is not working with Frameless Window and Translucent Background
|
<p>I am making a video player with <strong>QMediaPlayer and QVideoWidget</strong> using <strong>PySide2</strong>.
Everything works as expected, but when I use:</p>
<pre><code>self.setWindowFlags(QtCore.Qt.FramelessWindowHint)
self.setAttribute(QtCore.Qt.WA_TranslucentBackground)
</code></pre>
<p>The sound is reproduced but the video is not displayed. Is there a way to use FramelessWindowHint and WA_TranslucentBackground with QVideoWidget?</p>
<p>NOTE: I am using QStackedWidget and QFrames</p>
<p>Here is minimal and reproducible <strong>example</strong>:</p>
<p><em>testMP.py</em></p>
<pre><code>import sys
from PySide2.QtCore import Qt, QUrl
from PySide2.QtMultimedia import QMediaContent, QMediaPlayer
from PySide2.QtMultimediaWidgets import QVideoWidget
from PySide2.QtWidgets import (QApplication, QFileDialog, QVBoxLayout,QMainWindow)
from testMP_GUI import Ui_MainWindow
class MainWindow(QMainWindow, Ui_MainWindow):
def __init__(self):
super(MainWindow, self).__init__()
self.setupUi(self)
self.setWindowFlags(Qt.FramelessWindowHint) # <-------------------
self.setAttribute(Qt.WA_TranslucentBackground) # <-------------------
self.mediaPlayer = QMediaPlayer(None, QMediaPlayer.VideoSurface)
self.videoWidget = QVideoWidget()
self.mediaPlayer.setVideoOutput(self.videoWidget)
layout = QVBoxLayout()
layout.addWidget(self.videoWidget)
self.video_viewer.setLayout(layout)
openButton = self.btn_open
openButton.clicked.connect(self.open_song)
self.playButton = self.btn_play
self.playButton.clicked.connect(self.play)
def open_song(self):
fileName, _ = QFileDialog.getOpenFileName(self, "Select files",
".", "Video Files (*.mp4 *.flv *.ts *.mts *.avi)")
if fileName != '':
self.mediaPlayer.setMedia(
QMediaContent(QUrl.fromLocalFile(fileName)))
self.playButton.setEnabled(True)
self.play()
def play(self):
if self.mediaPlayer.state() == QMediaPlayer.PlayingState:
self.mediaPlayer.pause()
else:
self.mediaPlayer.play()
if __name__ == '__main__':
app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()
</code></pre>
<p><em>testMP_GUI.py</em></p>
<pre><code># -*- coding: utf-8 -*-
################################################################################
## Form generated from reading UI file 'testMP_GUI.ui'
##
## Created by: Qt User Interface Compiler version 5.15.2
##
## WARNING! All changes made in this file will be lost when recompiling UI file!
################################################################################
from PySide2.QtCore import *
from PySide2.QtGui import *
from PySide2.QtWidgets import *
import files_rc
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
if not MainWindow.objectName():
MainWindow.setObjectName(u"MainWindow")
MainWindow.resize(1000, 720)
MainWindow.setMinimumSize(QSize(1000, 720))
palette = QPalette()
brush = QBrush(QColor(255, 255, 255, 255))
brush.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.WindowText, brush)
brush1 = QBrush(QColor(0, 0, 0, 0))
brush1.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.Button, brush1)
brush2 = QBrush(QColor(66, 73, 90, 255))
brush2.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.Light, brush2)
brush3 = QBrush(QColor(55, 61, 75, 255))
brush3.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.Midlight, brush3)
brush4 = QBrush(QColor(22, 24, 30, 255))
brush4.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.Dark, brush4)
brush5 = QBrush(QColor(29, 32, 40, 255))
brush5.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.Mid, brush5)
brush6 = QBrush(QColor(210, 210, 210, 255))
brush6.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.Text, brush6)
palette.setBrush(QPalette.Active, QPalette.BrightText, brush)
palette.setBrush(QPalette.Active, QPalette.ButtonText, brush)
palette.setBrush(QPalette.Active, QPalette.Base, brush1)
palette.setBrush(QPalette.Active, QPalette.Window, brush1)
brush7 = QBrush(QColor(0, 0, 0, 255))
brush7.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.Shadow, brush7)
brush8 = QBrush(QColor(85, 170, 255, 255))
brush8.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.Highlight, brush8)
palette.setBrush(QPalette.Active, QPalette.Link, brush8)
brush9 = QBrush(QColor(255, 0, 127, 255))
brush9.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.LinkVisited, brush9)
palette.setBrush(QPalette.Active, QPalette.AlternateBase, brush4)
brush10 = QBrush(QColor(44, 49, 60, 255))
brush10.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Active, QPalette.ToolTipBase, brush10)
palette.setBrush(QPalette.Active, QPalette.ToolTipText, brush6)
palette.setBrush(QPalette.Inactive, QPalette.WindowText, brush)
palette.setBrush(QPalette.Inactive, QPalette.Button, brush1)
palette.setBrush(QPalette.Inactive, QPalette.Light, brush2)
palette.setBrush(QPalette.Inactive, QPalette.Midlight, brush3)
palette.setBrush(QPalette.Inactive, QPalette.Dark, brush4)
palette.setBrush(QPalette.Inactive, QPalette.Mid, brush5)
palette.setBrush(QPalette.Inactive, QPalette.Text, brush6)
palette.setBrush(QPalette.Inactive, QPalette.BrightText, brush)
palette.setBrush(QPalette.Inactive, QPalette.ButtonText, brush)
palette.setBrush(QPalette.Inactive, QPalette.Base, brush1)
palette.setBrush(QPalette.Inactive, QPalette.Window, brush1)
palette.setBrush(QPalette.Inactive, QPalette.Shadow, brush7)
palette.setBrush(QPalette.Inactive, QPalette.Highlight, brush8)
palette.setBrush(QPalette.Inactive, QPalette.Link, brush8)
palette.setBrush(QPalette.Inactive, QPalette.LinkVisited, brush9)
palette.setBrush(QPalette.Inactive, QPalette.AlternateBase, brush4)
palette.setBrush(QPalette.Inactive, QPalette.ToolTipBase, brush10)
palette.setBrush(QPalette.Inactive, QPalette.ToolTipText, brush6)
palette.setBrush(QPalette.Disabled, QPalette.WindowText, brush4)
palette.setBrush(QPalette.Disabled, QPalette.Button, brush1)
palette.setBrush(QPalette.Disabled, QPalette.Light, brush2)
palette.setBrush(QPalette.Disabled, QPalette.Midlight, brush3)
palette.setBrush(QPalette.Disabled, QPalette.Dark, brush4)
palette.setBrush(QPalette.Disabled, QPalette.Mid, brush5)
palette.setBrush(QPalette.Disabled, QPalette.Text, brush4)
palette.setBrush(QPalette.Disabled, QPalette.BrightText, brush)
palette.setBrush(QPalette.Disabled, QPalette.ButtonText, brush4)
palette.setBrush(QPalette.Disabled, QPalette.Base, brush1)
palette.setBrush(QPalette.Disabled, QPalette.Window, brush1)
palette.setBrush(QPalette.Disabled, QPalette.Shadow, brush7)
brush11 = QBrush(QColor(51, 153, 255, 255))
brush11.setStyle(Qt.SolidPattern)
palette.setBrush(QPalette.Disabled, QPalette.Highlight, brush11)
palette.setBrush(QPalette.Disabled, QPalette.Link, brush8)
palette.setBrush(QPalette.Disabled, QPalette.LinkVisited, brush9)
palette.setBrush(QPalette.Disabled, QPalette.AlternateBase, brush10)
palette.setBrush(QPalette.Disabled, QPalette.ToolTipBase, brush10)
palette.setBrush(QPalette.Disabled, QPalette.ToolTipText, brush6)
MainWindow.setPalette(palette)
font = QFont()
font.setFamily(u"Segoe UI")
font.setPointSize(10)
MainWindow.setFont(font)
MainWindow.setStyleSheet(u"QMainWindow {background: transparent; }\n"
"QToolTip {\n"
" color: #ffffff;\n"
" background-color: rgba(27, 29, 35, 160);\n"
" border: 1px solid rgb(40, 40, 40);\n"
" border-radius: 2px;\n"
"}")
self.centralwidget = QWidget(MainWindow)
self.centralwidget.setObjectName(u"centralwidget")
self.centralwidget.setStyleSheet(u"background: transparent;\n"
"color: rgb(210, 210, 210);")
self.horizontalLayout = QHBoxLayout(self.centralwidget)
self.horizontalLayout.setSpacing(0)
self.horizontalLayout.setObjectName(u"horizontalLayout")
self.horizontalLayout.setContentsMargins(10, 10, 10, 10)
self.frame_main = QFrame(self.centralwidget)
self.frame_main.setObjectName(u"frame_main")
self.frame_main.setStyleSheet(u"/* LINE EDIT */\n"
"QLineEdit {\n"
" background-color: rgb(27, 29, 35);\n"
" border-radius: 5px;\n"
" border: 2px solid rgb(27, 29, 35);\n"
" padding-left: 10px;\n"
"}\n"
"QLineEdit:hover {\n"
" border: 2px solid rgb(64, 71, 88);\n"
"}\n"
"QLineEdit:focus {\n"
" border: 2px solid rgb(91, 101, 124);\n"
"}\n"
"\n"
"/* SCROLL BARS */\n"
"QScrollBar:horizontal {\n"
" border: none;\n"
" background: rgb(52, 59, 72);\n"
" height: 14px;\n"
" margin: 0px 21px 0 21px;\n"
" border-radius: 0px;\n"
"}\n"
"QScrollBar::handle:horizontal {\n"
" background: rgb(85, 170, 255);\n"
" min-width: 25px;\n"
" border-radius: 7px\n"
"}\n"
"QScrollBar::add-line:horizontal {\n"
" border: none;\n"
" background: rgb(55, 63, 77);\n"
" width: 20px;\n"
" border-top-right-radius: 7px;\n"
" border-bottom-right-radius: 7px;\n"
" subcontrol-position: right;\n"
" subcontrol-origin: margin;\n"
"}\n"
"QScrollBar::sub-line:horizontal {\n"
" border: none;\n"
" background: rgb(55, 63, 77);\n"
" width: 20px;\n"
""
" border-top-left-radius: 7px;\n"
" border-bottom-left-radius: 7px;\n"
" subcontrol-position: left;\n"
" subcontrol-origin: margin;\n"
"}\n"
"QScrollBar::up-arrow:horizontal, QScrollBar::down-arrow:horizontal\n"
"{\n"
" background: none;\n"
"}\n"
"QScrollBar::add-page:horizontal, QScrollBar::sub-page:horizontal\n"
"{\n"
" background: none;\n"
"}\n"
" QScrollBar:vertical {\n"
" border: none;\n"
" background: rgb(52, 59, 72);\n"
" width: 14px;\n"
" margin: 21px 0 21px 0;\n"
" border-radius: 0px;\n"
" }\n"
" QScrollBar::handle:vertical { \n"
" background: rgb(85, 170, 255);\n"
" min-height: 25px;\n"
" border-radius: 7px\n"
" }\n"
" QScrollBar::add-line:vertical {\n"
" border: none;\n"
" background: rgb(55, 63, 77);\n"
" height: 20px;\n"
" border-bottom-left-radius: 7px;\n"
" border-bottom-right-radius: 7px;\n"
" subcontrol-position: bottom;\n"
" subcontrol-origin: margin;\n"
" }\n"
" QScrollBar::sub-line:vertical {\n"
" border: none;\n"
" background: rgb(55, 63"
", 77);\n"
" height: 20px;\n"
" border-top-left-radius: 7px;\n"
" border-top-right-radius: 7px;\n"
" subcontrol-position: top;\n"
" subcontrol-origin: margin;\n"
" }\n"
" QScrollBar::up-arrow:vertical, QScrollBar::down-arrow:vertical {\n"
" background: none;\n"
" }\n"
"\n"
" QScrollBar::add-page:vertical, QScrollBar::sub-page:vertical {\n"
" background: none;\n"
" }\n"
"\n"
"/* CHECKBOX */\n"
"QCheckBox::indicator {\n"
" border: 3px solid rgb(52, 59, 72);\n"
" width: 15px;\n"
" height: 15px;\n"
" border-radius: 10px;\n"
" background: rgb(44, 49, 60);\n"
"}\n"
"QCheckBox::indicator:hover {\n"
" border: 3px solid rgb(58, 66, 81);\n"
"}\n"
"QCheckBox::indicator:checked {\n"
" background: 3px solid rgb(52, 59, 72);\n"
" border: 3px solid rgb(52, 59, 72); \n"
" background-image: url(:/16x16/icons/16x16/cil-check-alt.png);\n"
"}\n"
"\n"
"/* RADIO BUTTON */\n"
"QRadioButton::indicator {\n"
" border: 3px solid rgb(52, 59, 72);\n"
" width: 15px;\n"
" height: 15px;\n"
" border-radius"
": 10px;\n"
" background: rgb(44, 49, 60);\n"
"}\n"
"QRadioButton::indicator:hover {\n"
" border: 3px solid rgb(58, 66, 81);\n"
"}\n"
"QRadioButton::indicator:checked {\n"
" background: 3px solid rgb(94, 106, 130);\n"
" border: 3px solid rgb(52, 59, 72); \n"
"}\n"
"\n"
"/* COMBOBOX */\n"
"QComboBox{\n"
" background-color: rgb(27, 29, 35);\n"
" border-radius: 5px;\n"
" border: 2px solid rgb(27, 29, 35);\n"
" padding: 5px;\n"
" padding-left: 10px;\n"
"}\n"
"QComboBox:hover{\n"
" border: 2px solid rgb(64, 71, 88);\n"
"}\n"
"QComboBox::drop-down {\n"
" subcontrol-origin: padding;\n"
" subcontrol-position: top right;\n"
" width: 25px; \n"
" border-left-width: 3px;\n"
" border-left-color: rgba(39, 44, 54, 150);\n"
" border-left-style: solid;\n"
" border-top-right-radius: 3px;\n"
" border-bottom-right-radius: 3px; \n"
" background-image: url(:/16x16/icons/16x16/cil-arrow-bottom.png);\n"
" background-position: center;\n"
"    background-repeat: no-repeat;\n"
" }\n"
"QComboBox QAbstractItemView {\n"
" color: rgb("
"85, 170, 255); \n"
" background-color: rgb(27, 29, 35);\n"
" padding: 10px;\n"
" selection-background-color: rgb(39, 44, 54);\n"
"}\n"
"\n"
"/* SLIDERS */\n"
"QSlider::groove:horizontal {\n"
" border-radius: 9px;\n"
" height: 18px;\n"
" margin: 0px;\n"
" background-color: rgb(52, 59, 72);\n"
"}\n"
"QSlider::groove:horizontal:hover {\n"
" background-color: rgb(55, 62, 76);\n"
"}\n"
"QSlider::handle:horizontal {\n"
" background-color: rgb(85, 170, 255);\n"
" border: none;\n"
" height: 18px;\n"
" width: 18px;\n"
" margin: 0px;\n"
" border-radius: 9px;\n"
"}\n"
"QSlider::handle:horizontal:hover {\n"
" background-color: rgb(105, 180, 255);\n"
"}\n"
"QSlider::handle:horizontal:pressed {\n"
" background-color: rgb(65, 130, 195);\n"
"}\n"
"\n"
"QSlider::groove:vertical {\n"
" border-radius: 9px;\n"
" width: 18px;\n"
" margin: 0px;\n"
" background-color: rgb(52, 59, 72);\n"
"}\n"
"QSlider::groove:vertical:hover {\n"
" background-color: rgb(55, 62, 76);\n"
"}\n"
"QSlider::handle:verti"
"cal {\n"
" background-color: rgb(85, 170, 255);\n"
" border: none;\n"
" height: 18px;\n"
" width: 18px;\n"
" margin: 0px;\n"
" border-radius: 9px;\n"
"}\n"
"QSlider::handle:vertical:hover {\n"
" background-color: rgb(105, 180, 255);\n"
"}\n"
"QSlider::handle:vertical:pressed {\n"
" background-color: rgb(65, 130, 195);\n"
"}\n"
"\n"
"")
self.frame_main.setFrameShape(QFrame.NoFrame)
self.frame_main.setFrameShadow(QFrame.Raised)
self.verticalLayout = QVBoxLayout(self.frame_main)
self.verticalLayout.setSpacing(0)
self.verticalLayout.setObjectName(u"verticalLayout")
self.verticalLayout.setContentsMargins(0, 0, 0, 0)
self.frame_center = QFrame(self.frame_main)
self.frame_center.setObjectName(u"frame_center")
sizePolicy = QSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.frame_center.sizePolicy().hasHeightForWidth())
self.frame_center.setSizePolicy(sizePolicy)
self.frame_center.setStyleSheet(u"background-color: rgb(40, 44, 52);")
self.frame_center.setFrameShape(QFrame.NoFrame)
self.frame_center.setFrameShadow(QFrame.Raised)
self.horizontalLayout_2 = QHBoxLayout(self.frame_center)
self.horizontalLayout_2.setSpacing(0)
self.horizontalLayout_2.setObjectName(u"horizontalLayout_2")
self.horizontalLayout_2.setContentsMargins(0, 0, 0, 0)
self.frame_content_right = QFrame(self.frame_center)
self.frame_content_right.setObjectName(u"frame_content_right")
self.frame_content_right.setStyleSheet(u"background-color: rgb(44, 49, 60);")
self.frame_content_right.setFrameShape(QFrame.NoFrame)
self.frame_content_right.setFrameShadow(QFrame.Raised)
self.verticalLayout_4 = QVBoxLayout(self.frame_content_right)
self.verticalLayout_4.setSpacing(0)
self.verticalLayout_4.setObjectName(u"verticalLayout_4")
self.verticalLayout_4.setContentsMargins(0, 0, 0, 0)
self.frame_content = QFrame(self.frame_content_right)
self.frame_content.setObjectName(u"frame_content")
self.frame_content.setMaximumSize(QSize(16777215, 16777215))
self.frame_content.setFrameShape(QFrame.NoFrame)
self.frame_content.setFrameShadow(QFrame.Raised)
self.verticalLayout_9 = QVBoxLayout(self.frame_content)
self.verticalLayout_9.setSpacing(0)
self.verticalLayout_9.setObjectName(u"verticalLayout_9")
self.verticalLayout_9.setContentsMargins(5, 5, 5, 5)
self.stackedWidget = QStackedWidget(self.frame_content)
self.stackedWidget.setObjectName(u"stackedWidget")
self.stackedWidget.setStyleSheet(u"/*background: transparent;*/")
self.stackedWidget.setLineWidth(-1)
self.page_1 = QWidget()
self.page_1.setObjectName(u"page_1")
self.page_1.setAutoFillBackground(False)
self.verticalLayout_12 = QVBoxLayout(self.page_1)
self.verticalLayout_12.setObjectName(u"verticalLayout_12")
self.btn_play = QPushButton(self.page_1)
self.btn_play.setObjectName(u"btn_play")
self.verticalLayout_12.addWidget(self.btn_play)
self.btn_open = QPushButton(self.page_1)
self.btn_open.setObjectName(u"btn_open")
self.verticalLayout_12.addWidget(self.btn_open)
self.video_viewer = QWidget(self.page_1)
self.video_viewer.setObjectName(u"video_viewer")
self.verticalLayout_12.addWidget(self.video_viewer)
self.stackedWidget.addWidget(self.page_1)
self.page_2 = QWidget()
self.page_2.setObjectName(u"page_2")
self.verticalLayout_10 = QVBoxLayout(self.page_2)
self.verticalLayout_10.setObjectName(u"verticalLayout_10")
self.stackedWidget.addWidget(self.page_2)
self.page_3 = QWidget()
self.page_3.setObjectName(u"page_3")
self.verticalLayout_13 = QVBoxLayout(self.page_3)
self.verticalLayout_13.setObjectName(u"verticalLayout_13")
self.stackedWidget.addWidget(self.page_3)
self.page_4 = QWidget()
self.page_4.setObjectName(u"page_4")
self.verticalLayout_6 = QVBoxLayout(self.page_4)
self.verticalLayout_6.setObjectName(u"verticalLayout_6")
self.stackedWidget.addWidget(self.page_4)
self.verticalLayout_9.addWidget(self.stackedWidget)
self.verticalLayout_4.addWidget(self.frame_content)
self.horizontalLayout_2.addWidget(self.frame_content_right)
self.verticalLayout.addWidget(self.frame_center)
self.horizontalLayout.addWidget(self.frame_main)
MainWindow.setCentralWidget(self.centralwidget)
self.retranslateUi(MainWindow)
self.stackedWidget.setCurrentIndex(0)
QMetaObject.connectSlotsByName(MainWindow)
# setupUi
def retranslateUi(self, MainWindow):
MainWindow.setWindowTitle(QCoreApplication.translate("MainWindow", u"MainWindow", None))
self.btn_play.setText(QCoreApplication.translate("MainWindow", u"Play", None))
self.btn_open.setText(QCoreApplication.translate("MainWindow", u"Open", None))
# retranslateUi
</code></pre>
<p>My problem is that with WA_TranslucentBackground active, the QVideoWidget does not show up at all.</p>
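<p>One workaround that may help (untested sketch): <code>QVideoWidget</code> typically renders through a native window that cannot be composited when <code>WA_TranslucentBackground</code> is set, whereas <code>QGraphicsVideoItem</code> is painted by a <code>QGraphicsView</code> and can be. The fragment below would replace the <code>QVideoWidget</code> setup inside <code>MainWindow.__init__</code>; it is a sketch, not a verified fix:</p>

```python
# Untested sketch for MainWindow.__init__: render video via QGraphicsVideoItem,
# which is composited by the view instead of using a native video window.
from PySide2.QtMultimediaWidgets import QGraphicsVideoItem
from PySide2.QtWidgets import QGraphicsScene, QGraphicsView, QVBoxLayout

self.videoItem = QGraphicsVideoItem()
scene = QGraphicsScene(self)
scene.addItem(self.videoItem)
view = QGraphicsView(scene, self)
view.setStyleSheet("background: transparent; border: none;")
self.mediaPlayer.setVideoOutput(self.videoItem)  # instead of self.videoWidget

layout = QVBoxLayout()
layout.addWidget(view)
self.video_viewer.setLayout(layout)
```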
|
<python><pyside2><qvideowidget>
|
2023-01-19 13:39:05
| 1
| 7,216
|
ncica
|
75,173,034
| 10,596,488
|
Remove key but keep value in JSON [Python]
|
<p>I need to "semi-flatten" a JSON object that has nested items. I have tried flat_json with pandas and other "flatten_json" and json_normalize code from Stack Overflow, but I end up with completely flattened JSON (something I do not need).</p>
<p>Here is the JSON structure:</p>
<pre><code>[{
"Stat": {
"event": "03458188-abf9-431c-8144-ad49c1d069ed",
"id": "102e1bb1f28ca44b70d02d33380b13",
"number": "1121",
"source": "",
"datetime": "2023-01-13T00:00:00Z",
"status": "ok"
},
"Goal": {
"name": "goalname"
},
"Fordel": {
"company": "companyname"
},
"Land": {
"name": "landname"
}
}, {
"Stat": {
"event": "22222",
"id": "44444",
"number": "5555",
"source": "",
"datetime": "2023-01-13T00:00:00Z",
"status": "ok"
},
"Goal": {
"name": "goalname2"
},
"Fordel": {
"company": "companyname2"
},
"Land": {
"name_land": "landname2"
}
}]
</code></pre>
<p><strong>The result i need is this:</strong></p>
<pre><code>[{
"event": "03458188-abf9-431c-8144-ad49c1d069ed",
"id": "102e1bb1f28ca44b70d02d33380b13",
"number": "1121",
"source": "",
"datetime": "2023-01-13T00:00:00Z",
"status": "ok",
"name": "goalname",
"company": "companyname",
"name_land": "landname"
}, {
"event": "22222",
"id": "44444",
"number": "5555",
"source": "",
"datetime": "2023-01-13T00:00:00Z",
"status": "ok",
"name": "goalname2",
"company": "companyname2",
"name_land": "landname2"
}]
</code></pre>
<p>If this can be done with pandas or another JSON package, that would be great.</p>
<p>Code I have tried (copied from another question/answer):</p>
<pre><code>def flatten_data(y):
out = {}
def flatten(x, name=''):
if type(x) is dict:
for a in x:
flatten(x[a], name + a + '_')
elif type(x) is list:
i = 0
for a in x:
flatten(a, name + str(i) + '_')
i += 1
else:
out[name[:-1]] = x
flatten(y)
return out
</code></pre>
<p>That gives me:</p>
<pre><code>{
"0_event": "03458188-abf9-431c-8144-ad49c1d069ed",
"0_id": "102e1bb1f28ca44b70d02d33380b13",
......
"1_event": "102e1bb1f28ca44b70d02d33380b13",
"1_id": "102e1bb1f28ca44b70d02d33380b13",
etc...
}
</code></pre>
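<p>For this particular shape (one level of named sections, each a flat dict), no flattening library is needed: merging each record's section dicts drops the top-level keys while keeping their contents. A minimal sketch (sample data abbreviated from the question):</p>

```python
# Merge each record's one-level-deep sections into a single flat dict,
# discarding the wrapper keys ("Stat", "Goal", ...) but keeping their fields.
data = [
    {"Stat": {"event": "22222", "id": "44444", "status": "ok"},
     "Goal": {"name": "goalname2"},
     "Fordel": {"company": "companyname2"},
     "Land": {"name_land": "landname2"}},
]

flattened = [
    {key: value for section in record.values() for key, value in section.items()}
    for record in data
]
print(flattened[0]["name_land"])  # landname2
```

<p>Note that this assumes the field names never collide across sections; a duplicate key (e.g. two sections both containing <code>name</code>) would silently overwrite the earlier value.</p>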
|
<python><json><pandas><dataframe>
|
2023-01-19 13:32:36
| 1
| 353
|
Ali Durrani
|
75,173,005
| 12,845,199
|
Check for previous value condition, and grabbing sucessor value with condition
|
<p>I have the following sample series:</p>
<pre><code>s = {0: 'feedback ratings-positive-unexpected origin',
1: 'decision-tree identified-regex input',
2: 'feedback ratings-options input',
3: 'feedback ratings-options-unexpected origin',
4: 'checkout order-placed input',
5: 'decision-tree identified-regex input'}
</code></pre>
<p>What I want to do is mark the values that contain the "input" string and directly follow a value containing the "unexpected" keyword. For example, if 'feedback ratings-positive-unexpected origin' is followed by a value containing "input", that next value maps to True.
So in this case I want to mark 'decision-tree identified-regex input' and 'checkout order-placed input'.</p>
<p>The wanted map would be something like this:</p>
<pre><code>want = {0: False,
1: True,
2: False,
3: False,
4: True,
5: False}
</code></pre>
<p>I built the following map with a loop; I was wondering if there is a way to do this with the pandas library.</p>
<pre><code>mapi = []
for i in np.arange(s.shape[0]):
if 'input' in s.iloc[i] and 'unexpected' not in s.iloc[i]:
if 'unexpected' in s.iloc[i-1]:
mapi.append(True)
else:
mapi.append(False)
else:
mapi.append(False)
</code></pre>
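<p>This can be vectorised with <code>shift</code>, which also avoids a subtle bug in the loop above: at <code>i = 0</code>, <code>s.iloc[i-1]</code> wraps around to the last element. A sketch:</p>

```python
import pandas as pd

s = pd.Series({
    0: 'feedback ratings-positive-unexpected origin',
    1: 'decision-tree identified-regex input',
    2: 'feedback ratings-options input',
    3: 'feedback ratings-options-unexpected origin',
    4: 'checkout order-placed input',
    5: 'decision-tree identified-regex input',
})

# Current value must contain "input" but not "unexpected",
# and the previous value (shift by 1) must contain "unexpected".
mask = (
    s.str.contains('input')
    & ~s.str.contains('unexpected')
    & s.shift(fill_value='').str.contains('unexpected')
)
print(mask.tolist())  # [False, True, False, False, True, False]
```

<p><code>shift(fill_value='')</code> supplies an empty string for the first row, so no NaN handling is needed.</p>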
|
<python><pandas>
|
2023-01-19 13:30:48
| 1
| 1,628
|
INGl0R1AM0R1
|
75,172,995
| 5,947,182
|
ModuleNotFoundError in regular .py files, but working when run with pytest .py files
|
<p>I'm writing a concurrency test where the code opens two separate playwright browsers and runs them separately and concurrently. The structure is as follows:</p>
<pre><code>MoveMouse
|
----__init__.py
----MoveMouse.py
UnitTest
|
----__init__.py
----TestMoveMouse
|
----__init__.py
----test_move_mouse.py
----concurrent_browsers.py
</code></pre>
<p>In both <code>test_move_mouse.py</code> and <code>concurrent_browsers.py</code> I have</p>
<pre><code>import os
import sys
current = os.path.dirname(os.path.realpath(__file__))
parent = os.path.dirname(current)
sys.path.append(parent)
from MoveMouse.MoveMouse import wind_mouse as wm
</code></pre>
<p><strong>From the terminal, running <code>pytest test_move_mouse.py</code> works, but running <code>python concurrent_browsers.py</code> fails with <code>ModuleNotFoundError: No module named 'MoveMouse'</code>.</strong>
Here are the files <code>test_move_mouse.py</code> and <code>concurrent_browsers.py</code></p>
<p>test_move_mouse.py:</p>
<pre><code>import os
import sys
import pytest
from playwright.sync_api import Page, expect, sync_playwright
current = os.path.dirname(os.path.realpath(__file__))
parent = os.path.dirname(current)
sys.path.append(parent)
from MoveMouse.MoveMouse import wind_mouse as wm
def test_drawing_board():
rel_path = r"/mats/drawing_board.html"
file_path = "".join([r"file://", os.getcwd(), rel_path])
with sync_playwright() as playwright:
# Fetch drawing board
browser = playwright.chromium.launch(headless=False, slow_mo=0.5)
page = browser.new_page()
page.mouse.move(400,50) # Place mouse in a random position in the browser before fetching the page
page.goto(file_path)
# Start points
start_point = 100
x = 1200
# Move mouse
page.mouse.down()
for y in range(100, 1000, 100):
# Generate mouse points
points = []
wm(start_point, y, x, y, M_0=15, D_0=12, move_mouse=lambda x, y: points.append([x, y]))
# Draw
page.mouse.move(start_point, y)
page.mouse.down()
for point in points:
page.mouse.move(point[0], point[1])
page.mouse.up()
</code></pre>
<p>concurrent_browsers.py:</p>
<pre><code>import os
import sys
from concurrent.futures import ProcessPoolExecutor
from playwright.sync_api import sync_playwright
current = os.path.dirname(os.path.realpath(__file__))
parent = os.path.dirname(current)
sys.path.append(parent)
from MoveMouse.MoveMouse import wind_mouse as wm
def run_drawing_board(mouse_move, start_point):
rel_path = r"/mats/drawing_board.html"
file_path = "".join([r"file://", os.getcwd(), rel_path])
with sync_playwright() as playwright:
# Fetch drawing board
browser = playwright.chromium.launch(headless=False, slow_mo=0.5)
page = browser.new_page()
page.mouse.move(400, 50) # Place mouse in a random position in the browser before fetching the page
page.goto(file_path)
# Move mouse
x = 1200
page.mouse.down()
for y in range(100, 1000, 100):
# Generate mouse points
points = []
wm(start_point, y, x, y, M_0=15, D_0=12, move_mouse=lambda x, y: points.append([x, y]))
# Draw
page.mouse.move(start_point, y)
page.mouse.down()
for point in points:
page.mouse.move(point[0], point[1])
page.mouse.up()
def main():
with ProcessPoolExecutor() as executor:
start_points = [100, 400]
executor.map(run_drawing_board, start_points)
if __name__ == '__main__':
main()
</code></pre>
<p><strong>Both have the same MoveMouse import shown above and are in the same directory, so why does it work for <code>test_move_mouse.py</code> but not <code>concurrent_browsers.py</code>?</strong></p>
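<p>A likely explanation (assuming pytest's default import mode): because <code>UnitTest</code> and <code>TestMoveMouse</code> both contain <code>__init__.py</code>, pytest walks up past them and inserts the first directory <em>without</em> an <code>__init__.py</code> — the project root — into <code>sys.path</code>, so <code>MoveMouse</code> becomes importable. Running <code>python concurrent_browsers.py</code> only adds the script's own directory, and <code>parent = os.path.dirname(current)</code> is <code>UnitTest</code>, one level short of the root. A sketch of a path helper that climbs the right number of levels (the example path is hypothetical):</p>

```python
import os
import sys

def project_root(script_path, levels_up):
    """Return the directory `levels_up` levels above the one containing `script_path`."""
    path = os.path.dirname(os.path.realpath(script_path))
    for _ in range(levels_up):
        path = os.path.dirname(path)
    return path

# In concurrent_browsers.py one would then climb two levels, not one:
#     sys.path.append(project_root(__file__, 2))
print(project_root("/project/UnitTest/TestMoveMouse/concurrent_browsers.py", 2))
```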
|
<python><modulenotfounderror>
|
2023-01-19 13:30:05
| 1
| 388
|
Andrea
|
75,172,921
| 6,282,576
|
Groupby and count number of children where each child has more than a specific number of children itself
|
<p>I have three models, <code>Business</code>, <code>Employee</code>, and <code>Client</code>, where each business can have many employees and each employee can have many clients:</p>
<pre class="lang-py prettyprint-override"><code>class Business(models.Model):
name = models.CharField(max_length=128)
menu = models.CharField(max_length=128, default="")
slogan = models.CharField(max_length=128, default="")
slug = models.CharField(max_length=128, default="")
class Employee(models.Model):
first_name = models.CharField(max_length=100)
last_name = models.CharField(max_length=100)
business = models.ForeignKey(
Business,
related_name="employees",
on_delete=models.CASCADE
)
class Client(models.Model):
first_name = models.CharField(max_length=100)
last_name = models.CharField(max_length=100)
employee = models.ForeignKey(
Employee,
related_name="clients",
on_delete=models.CASCADE
)
</code></pre>
<p>A sample data:</p>
<pre class="lang-py prettyprint-override"><code>Business.objects.create(name="first company")
Business.objects.create(name="second company")
Business.objects.create(name="third company")
Employee.objects.create(first_name="f1", last_name="l1", business_id=1)
Employee.objects.create(first_name="f2", last_name="l2", business_id=1)
Employee.objects.create(first_name="f3", last_name="l3", business_id=2)
Employee.objects.create(first_name="f4", last_name="l4", business_id=3)
Employee.objects.create(first_name="f5", last_name="l5", business_id=3)
Employee.objects.create(first_name="f6", last_name="l6", business_id=3)
Client.objects.create(first_name="cf1", last_name="cl1", employee_id=1)
Client.objects.create(first_name="cf2", last_name="cl2", employee_id=1)
Client.objects.create(first_name="cf3", last_name="cl3", employee_id=2)
Client.objects.create(first_name="cf4", last_name="cl4", employee_id=2)
Client.objects.create(first_name="cf5", last_name="cl5", employee_id=3)
Client.objects.create(first_name="cf6", last_name="cl6", employee_id=3)
Client.objects.create(first_name="cf7", last_name="cl7", employee_id=4)
Client.objects.create(first_name="cf8", last_name="cl8", employee_id=5)
Client.objects.create(first_name="cf9", last_name="cl9", employee_id=6)
</code></pre>
<p>If I wanted to see how many employees each business has, I could run a query like this:</p>
<pre class="lang-py prettyprint-override"><code>Business.objects.annotate(
employee_count=Count("employees")
).values(
"name", "employee_count"
).order_by("-employee_count")
<QuerySet [
{'name': 'third company', 'employee_count': 3},
{'name': 'first company', 'employee_count': 2},
{'name': 'second company', 'employee_count': 1}
]>
</code></pre>
<p>Similarly, if I wanted to see how many clients each employee has, I could run a query like this:</p>
<pre class="lang-py prettyprint-override"><code>Employee.objects.annotate(
client_count=Count("clients")
).values(
"first_name", "client_count"
).order_by("-client_count")
<QuerySet [
{'first_name': 'f1', 'client_count': 2},
{'first_name': 'f2', 'client_count': 2},
{'first_name': 'f3', 'client_count': 2},
{'first_name': 'f4', 'client_count': 1},
{'first_name': 'f5', 'client_count': 1},
{'first_name': 'f6', 'client_count': 1}
]>
</code></pre>
<p>But I want to see, for every business, the number of employees that have more than one client. I'm expecting an output like this:</p>
<pre class="lang-py prettyprint-override"><code><QuerySet [
{'name': 'first company', 'employee_count': 2},
{'name': 'second company', 'employee_count': 1},
{'name': 'third company', 'employee_count': 0}
]>
</code></pre>
<p>I tried to use <code>Count</code> with <code>Subquery</code>, but the result isn't what I expected.</p>
<pre class="lang-py prettyprint-override"><code>employees_with_multiple_clients = Employee.objects.annotate(
client_count=Count("clients")
).filter(client_count__gt=1)
Business.objects.annotate(
employee_count=Count(Subquery(employees_with_multiple_clients.values('id')))
).values("name", "employee_count").order_by("-employee_count")
<QuerySet [
{'name': 'first company', 'employee_count': 1},
{'name': 'second company', 'employee_count': 1},
{'name': 'third company', 'employee_count': 1}]>
</code></pre>
<p>How can I retrieve, for every business, the number of employees that have more than one client?</p>
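<p>The attempt above likely misbehaves because <code>Count(Subquery(...))</code> counts the single value the subquery returns per business row, not the matching employees. One standard pattern (a sketch, untested against this exact schema) is a correlated subquery that counts qualifying employees per business:</p>

```python
from django.db.models import Count, IntegerField, OuterRef, Subquery
from django.db.models.functions import Coalesce

# For each business (OuterRef), count its employees having more than one client.
qualifying = (
    Employee.objects
    .filter(business=OuterRef("pk"))
    .annotate(client_count=Count("clients"))
    .filter(client_count__gt=1)
    .order_by()
    .values("business")          # group by business
    .annotate(n=Count("pk"))
    .values("n")
)

Business.objects.annotate(
    employee_count=Coalesce(
        Subquery(qualifying, output_field=IntegerField()), 0
    )
).values("name", "employee_count").order_by("-employee_count")
```

<p><code>Coalesce</code> turns the empty-subquery case ("third company", where no employee qualifies) into <code>0</code> instead of <code>NULL</code>.</p>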
|
<python><django><django-models><django-orm>
|
2023-01-19 13:24:20
| 1
| 4,313
|
Amir Shabani
|
75,172,878
| 17,561,414
|
explode function on struct type pyspark
|
<p>I would like to get a separate <code>df</code> per level from a <code>JSON</code> file. The code below allows me to go two levels deep, but later I can't use the <code>explode</code> function due to a <code>data type mismatch: input to function explode should be array or map type, not struct</code> error on <code>df_4</code>.</p>
<p>My code and <code>JSON</code> structure:</p>
<pre><code>from pyspark.sql.functions import explode
start_df = spark.read.json(f'/mnt/bronze/products/**/*.json')
# Select everything under _embedded
df = start_df.select('_embedded.*')
# Explode the items array
df1 = df.select(explode(df.items).alias('required'))
# Get all fields from the exploded items
df2 = df1.select('required.*')
# Create a separate df including only "values" and "identifier"
df3 = df2.select("identifier", "values")
df3.display()
# Explode "values" to present its key-value pairs as col(key) and col(value)
df_4 = df3.select("identifier", explode("values"))
</code></pre>
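<p>Since <code>values</code> is a <code>struct</code> (one field per attribute), <code>explode</code> cannot be applied to it directly — it only accepts arrays and maps. An untested sketch against the <code>df3</code> above: flatten the struct's fields into ordinary columns with <code>values.*</code>, then explode the individual array-typed columns as needed (<code>Contrex_table</code> is one such field from the schema below):</p>

```python
# Flatten the struct's fields into ordinary columns instead of exploding it;
# the array-typed fields can then be exploded individually.
df_4 = df3.select("identifier", "values.*")
df_contrex = df_4.select("identifier", explode("Contrex_table").alias("contrex_row"))
```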
<p>Structure of <code>JSON</code></p>
<pre><code> root
|-- _embedded: struct (nullable = true)
| |-- items: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- _links: struct (nullable = true)
| | | | |-- self: struct (nullable = true)
| | | | | |-- href: string (nullable = true)
| | | |-- values: struct (nullable = true)
| | | | |-- Contrex_table: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- UFI_Table: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: array (nullable = true)
| | | | | | | |-- element: struct (containsNull = true)
| | | | | | | | |-- UFI: string (nullable = true)
| | | | | | | | |-- company: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- add_reg_info: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- chem_can_product_id: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- chemical_name: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- chemical_name_marketing: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- chemical_stability: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- chemical_weapon_precursor: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- chempax_302_dewolf_id: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- chempax_311_case_id: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- chempax_312_ross_id: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- ci_number: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- coating_type: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: array (nullable = true)
| | | | | | | |-- element: string (containsNull = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- colour: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- commodity_code: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- compendial_grade: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: array (nullable = true)
| | | | | | | |-- element: string (containsNull = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- conditions_to_avoid: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- conflict_minerals: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- contains_nano_particles: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- country_of_origin: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- dmf_status: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: array (nullable = true)
| | | | | | | |-- element: string (containsNull = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
|-- _links: struct (nullable = true)
| |-- first: struct (nullable = true)
| | |-- href: string (nullable = true)
| |-- next: struct (nullable = true)
| | |-- href: string (nullable = true)
| |-- self: struct (nullable = true)
| | |-- href: string (nullable = true)
</code></pre>
|
<python><json><pyspark><flatten><melt>
|
2023-01-19 13:20:47
| 0
| 735
|
Greencolor
|
75,172,585
| 7,762,646
|
Python decorator to determine the memory size of objects in a function
|
<p>I would like to develop a Python decorator that tells me the <a href="https://stackoverflow.com/questions/449560/how-do-i-determine-the-size-of-an-object-in-python/30316760#30316760">real memory size</a> of the objects in a Python function.</p>
<pre><code>from pympler import asizeof
import functools
def get_size(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
func(*args, **kwargs)
print(f"Memory allocation (KB) of variables of the function {func.__name__!r}")
for key in locals().keys():
print(f"size of object {key} is {asizeof.asizeof(key)/10**3}")
return wrapper
@get_size
def my_function():
nums = [1,2,3,4,5]
more_nums = [nums, nums, nums, nums, nums]
my_function()
</code></pre>
<p>The current output is not what I am looking for, as it treats the function as the only object and ignores the objects it contains inside (<code>nums, more_nums</code>), which are the ones of interest to me.</p>
<pre><code>Memory allocation (KB) of variables of the function 'my_function'
size of object args is 0.056
size of object kwargs is 0.056
size of object func is 0.056
</code></pre>
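<p>A note on what I have figured out so far: <code>locals()</code> inside <code>wrapper</code> only ever sees the wrapper's own variables (<code>args</code>, <code>kwargs</code>, <code>func</code>), never the decorated function's. One way I can think of to peek at the decorated function's locals is a trace function. A sketch, using stdlib <code>sys.getsizeof</code> as a stand-in for <code>asizeof</code> (<code>sizes_of_locals</code> is a name I made up):</p>

```python
import sys

def sizes_of_locals(func, *args, **kwargs):
    """Call func and return {local_name: size_in_bytes} of its locals at return."""
    captured = {}

    def tracer(frame, event, arg):
        # Only trace the frame belonging to the target function
        if frame.f_code is not func.__code__:
            return None
        if event == "return":
            captured.update(frame.f_locals)
        return tracer

    old = sys.gettrace()
    sys.settrace(tracer)
    try:
        func(*args, **kwargs)
    finally:
        sys.settrace(old)
    return {name: sys.getsizeof(value) for name, value in captured.items()}

def my_function():
    nums = [1, 2, 3, 4, 5]
    more_nums = [nums, nums, nums, nums, nums]

print(sorted(sizes_of_locals(my_function)))  # ['more_nums', 'nums']
```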
|
<python><python-3.x><memory-management><python-decorators>
|
2023-01-19 12:56:32
| 0
| 1,541
|
G. Macia
|
75,172,484
| 2,318,839
|
UTF-8 support in reportlab (Python)
|
<p><strong>Problem</strong></p>
<p>I can't create a PDF from <code>UTF-8</code> encoded text using <a href="https://docs.reportlab.com/reportlab/userguide/ch1_intro/" rel="nofollow noreferrer">reportlab</a>. What I get is a document full of black squares.</p>
<p>See the screenshot below:</p>
<p><a href="https://i.sstatic.net/o48I7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o48I7.png" alt="enter image description here" /></a></p>
<p><strong>Prerequisites</strong></p>
<pre class="lang-bash prettyprint-override"><code>pip install faker reportlab
</code></pre>
<p><strong>Code</strong></p>
<pre class="lang-py prettyprint-override"><code>
import tempfile
from faker import Faker
from reportlab.lib.pagesizes import letter
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Paragraph
# FAKER = Faker() # Latin based text generator
FAKER = Faker(locale="hy-AM") # UTF-8 text generator
text = FAKER.text(max_nb_chars=1_000)
filename = tempfile.NamedTemporaryFile(suffix=".pdf").name
styles = getSampleStyleSheet()
style_paragraph = styles["Normal"]
story = []
doc = SimpleDocTemplate(
filename,
pagesize=letter,
bottomMargin=.4 * inch,
topMargin=.6 * inch,
rightMargin=.8 * inch,
leftMargin=.8 * inch,
)
paragraph = Paragraph(text, style_paragraph)
story.append(paragraph)
doc.build(story)
</code></pre>
<p><strong>Also tried</strong></p>
<p>I also tried a TTF font (Vera), but it didn't work either:</p>
<pre class="lang-py prettyprint-override"><code># ...
from reportlab.pdfbase import pdfmetrics
# ...
pdfmetrics.registerFont(TTFont("Vera", "Vera.ttf"))
pdfmetrics.registerFont(TTFont("VeraBd", "VeraBd.ttf"))
pdfmetrics.registerFont(TTFont("VeraIt", "VeraIt.ttf"))
pdfmetrics.registerFont(TTFont("VeraBI", "VeraBI.ttf"))
# ...
doc = SimpleDocTemplate(
filename,
pagesize=letter,
bottomMargin=.4 * inch,
topMargin=.6 * inch,
rightMargin=.8 * inch,
leftMargin=.8 * inch,
)
# ...
</code></pre>
|
<python><pdf><utf-8><reportlab>
|
2023-01-19 12:48:10
| 1
| 14,526
|
Artur Barseghyan
|
75,172,467
| 848,292
|
Extrude a concave, complex polygon in PyVista
|
<p>I wish to take a concave and complex (containing holes) polygon and extrude it 'vertically' into a polyhedron, purely for visualisation. I begin with a shapely <a href="https://shapely.readthedocs.io/en/stable/manual.html?highlight=Polygon#Polygon" rel="nofollow noreferrer"><code>Polygon</code></a>, like below:</p>
<pre class="lang-py prettyprint-override"><code>poly = Polygon(
[(0,0), (10,0), (10,10), (5,8), (0,10), (1,7), (0,5), (1,3)],
holes=[
[(2,2),(4,2),(4,4),(2,4)],
[(6,6), (7,6), (6.5,6.5), (7,7), (6,7), (6.2,6.5)]])
</code></pre>
<p>which I correctly plot (reorienting the exterior coordinates to be clockwise, and the hole coordinates to be counterclockwise) in matplotlib as:</p>
<p><a href="https://i.sstatic.net/wSqKut.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wSqKut.png" alt="enter image description here" /></a></p>
<p>I then seek to render this polygon extruded out-of-the-page (along <code>z</code>), using <a href="https://docs.pyvista.org/" rel="nofollow noreferrer">PyVista</a>. There are a few hurdles; PyVista doesn't directly support concave (nor complex) input to its <a href="https://docs.pyvista.org/api/core/_autosummary/pyvista.PolyData.html" rel="nofollow noreferrer"><code>PolyData</code></a> type. So we first create an extrusion of simple (hole-free) concave polygons, as per <a href="https://github.com/pyvista/pyvista/discussions/2398" rel="nofollow noreferrer">this discussion</a>.</p>
<pre class="lang-py prettyprint-override"><code>def extrude_simple_polygon(xy, z0, z1):
# force counter-clockwise ordering, so PyVista interprets polygon correctly
xy = _reorient_coords(xy, clockwise=False)
# remove duplication of first & last vertex
xyz0 = [(x,y,z0) for x,y in xy]
if (xyz0[0] == xyz0[-1]):
xyz0.pop()
# explicitly set edge_source
base_vert = [len(xyz0)] + list(range(len(xyz0)))
base_data = pyvista.PolyData(xyz0, base_vert)
base_mesh = base_data.delaunay_2d(edge_source=base_data)
vol_mesh = base_mesh.extrude((0, 0, z1-z0), capping=True)
# force triangulation, so PyVista allows boolean_difference
return vol_mesh.triangulate()
</code></pre>
<p>Observe this works when extruding the outer polygon and each of its internal polygons in-turn:</p>
<pre class="lang-py prettyprint-override"><code>extrude_simple_polygon(list(poly.exterior.coords), 0, 5).plot()
</code></pre>
<p><a href="https://i.sstatic.net/qhetCt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qhetCt.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>extrude_simple_polygon(list(poly.interiors[0].coords), 0, 5).plot()
extrude_simple_polygon(list(poly.interiors[1].coords), 0, 5).plot()
</code></pre>
<p><a href="https://i.sstatic.net/QD0e7t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QD0e7t.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/V7O9ct.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V7O9ct.png" alt="enter image description here" /></a></p>
<p>I reasoned that to create an extrusion of the original <em>complex</em> polygon, I could compute the <a href="https://docs.pyvista.org/api/core/_autosummary/pyvista.PolyDataFilters.boolean_difference.html#pyvista.PolyDataFilters.boolean_difference" rel="nofollow noreferrer"><code>boolean_difference</code></a>. Alas, the result of</p>
<pre><code>outer_vol = extrude_simple_polygon(list(poly.exterior.coords), 0, 5)
for hole in poly.interiors:
hole_vol = extrude_simple_polygon(list(hole.coords), 0, 5)
outer_vol = outer_vol.boolean_difference(hole_vol)
outer_vol.plot()
</code></pre>
<p>is erroneous:</p>
<p><a href="https://i.sstatic.net/5xskim.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5xskim.png" alt="enter image description here" /></a></p>
<p>The doc advises to inspect the normals via <a href="https://docs.pyvista.org/api/core/_autosummary/pyvista.PolyData.plot_normals.html" rel="nofollow noreferrer"><code>plot_normals</code></a>, revealing that all extruded volumes have inward-pointing (or else, unexpected) normals:</p>
<p><a href="https://i.sstatic.net/WG8Hnt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WG8Hnt.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/FgytJt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FgytJt.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/WFaCUt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WFaCUt.png" alt="enter image description here" /></a></p>
<p>The <a href="https://docs.pyvista.org/api/core/_autosummary/pyvista.PolyData.extrude.html" rel="nofollow noreferrer"><code>extrude</code></a> doc mentions nothing of the extruded surface normals nor the original object (in this case, a polygon) orientation.</p>
<p>We could be forgiven for expecting our polygons must be <em>clockwise</em>, so we set <code>clockwise=True</code> in the first line of <code>extrude_simple_polygon</code> and try again. Alas, <code>PolyData</code> now misinterprets our base polygon; calling <code>base_mesh.plot()</code> reveals (what <em>should</em> look like our original blue outer polygon):</p>
<p><a href="https://i.sstatic.net/pDtdNt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pDtdNt.png" alt="enter image description here" /></a></p>
<p>with extrusion</p>
<p><a href="https://i.sstatic.net/NpYvnt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NpYvnt.png" alt="enter image description here" /></a></p>
<ul>
<li>Does PyVista always expect counter-clockwise polygons?</li>
<li>Why does extrude create volumes with inward-pointing surface normals?</li>
<li>How can I correct the extruded surface normals?</li>
<li>Otherwise, how can I make PyVista correctly visualise what should be an incredibly simply-extruded concave complex polygon??</li>
</ul>
|
<python><cad><pyvista>
|
2023-01-19 12:47:03
| 1
| 4,931
|
Anti Earth
|
75,172,022
| 12,226,377
|
Token indices sequence length warning while using pretrained Roberta model for sentiment analysis
|
<p>I am presently using a pretrained Roberta model to identify the sentiment scores and categories for my dataset. I am truncating the length to 512 but I still get the warning. What is going wrong here? I am using the following code to achieve this:</p>
<pre><code>from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
from scipy.special import softmax
model = f"j-hartmann/sentiment-roberta-large-english-3-classes"
tokenizer = AutoTokenizer.from_pretrained(model, model_max_length=512,truncation=True)
automodel = AutoModelForSequenceClassification.from_pretrained(model)
</code></pre>
<p>The warning that I am getting here:</p>
<pre><code>Token indices sequence length is longer than the specified maximum sequence length for this model (627 > 512). Running this sequence through the model will result in indexing errors
</code></pre>
|
<python><sentiment-analysis><roberta-language-model><roberta>
|
2023-01-19 12:08:53
| 1
| 807
|
Django0602
|
75,171,845
| 19,270,168
|
Discord API cloudflare banning my repl.it repo
|
<pre class="lang-py prettyprint-override"><code>#type: ignore
import os
from keep_alive import keep_alive
from discord.ext import commands
import discord
import asyncio
import datetime
import re
DC_TOK = os.environ['DC_TOK']
bot = commands.Bot(command_prefix='!', intents=discord.Intents(4194303))
@bot.event
async def on_message(msg: discord.Message):
if msg.author == bot.user:
return
guild = msg.guild
if msg.content == '!quack':
await msg.channel.send('''!ticket open subject - Creates a new ticket
**Ticket Support Team++**
!ticket close - Closes on-going ticket
!invite ping_message - Invites user to ticket
!remove ping_message - Removes user from ticket
!mark rank - Makes the ticket exclusive for specific rank
!category TITLE - Briefly categorises the ticket
''')
elif msg.content[:12] == '!ticket open':
chn = msg.channel
ticket_title = msg.content[13:]
if ticket_title.strip() == '':
ticket_title = 'None'
ticketer_acc = msg.author
ticketer = ticketer_acc.display_name
category = discord.utils.get(guild.categories, name="Tickets 2")
tcc = discord.utils.get(guild.channels, name="ticket-creation-logs")
elem = None
_ = None
async for mg in tcc.history():
if mg.content.startswith('CATEG'):
continue
elem, _ = mg.content.split(' ')
elem = int(elem)
_ = int(_)
break
assert (elem is not None)
elem += 1
await tcc.send(str(elem) + ' ' + str(msg.author.id))
tck_channel = await guild.create_text_channel(f'{elem}-{ticketer}',
category=category)
await tck_channel.set_permissions(ticketer_acc,
read_messages=True,
send_messages=True)
await chn.send(
f'**TICKET {elem}**\n\nYour ticket has been created. <#{tck_channel.id}>'
)
await tck_channel.send(
f'<@{ticketer_acc.id}> Hello emo! Your ticket has been created, subject: `{ticket_title}`.\nOur support team will be with you soon! Meanwhile please address your problem because those emos are really busy!!!'
)
elif msg.content == '!ticket close':
category = discord.utils.get(guild.categories, name="Tickets 2")
if msg.channel.category != category:
return
if not (discord.utils.get(guild.roles, name='Ticket support team')
in msg.author.roles or discord.utils.get(
guild.roles, name='Administrator') in msg.author.roles
or discord.utils.get(guild.roles,
name='Co-owners (A.K.A. super admin)')
in msg.author.roles or discord.utils.get(
guild.roles, name='Owner') in msg.author.roles):
return
closed_cat = discord.utils.get(guild.categories, name="Tickets 3")
nam = msg.channel.name.lstrip('🔱🛡️')
tick_id = int(nam[:nam.find('-')])
tcc = discord.utils.get(guild.channels, name="ticket-creation-logs")
elem = None
creator = None
async for mg in tcc.history():
if mg.content.startswith('CATEG'):
continue
elem, creator = mg.content.split(' ')
elem = int(elem)
creator = int(creator)
if elem == tick_id:
break
assert (elem is not None)
await msg.channel.send('Closing ticket...')
counter = {}
async for mg in msg.channel.history():
if mg.author.bot or mg.author.id == creator:
continue
if mg.author.id not in counter.keys():
counter[mg.author.id] = 1
else:
counter[mg.author.id] += 1
max_num = 0
max_authors = []
for key, value in counter.items():
if value > max_num:
max_num = value
max_authors = [key]
elif value == max_num:
max_authors.append(key)
user_ping_list = ' '.join([f'<@{usr}>' for usr in max_authors
]) + ' contributed the most.'
if user_ping_list == ' contributed the most.':
user_ping_list = 'No one contributed.'
await msg.channel.send(user_ping_list)
await msg.channel.send('We hope we were able to solve your problem.')
await asyncio.sleep(3)
tick_creator = discord.utils.get(guild.members, id=creator)
assert (tick_creator is not None)
await msg.channel.set_permissions(tick_creator, overwrite=None)
await msg.channel.edit(category=closed_cat)
dms = discord.utils.get(guild.members, id=creator)
assert (dms is not None)
DM = dms
dms = DM._user.dm_channel
if dms is None:
dms = await DM._user.create_dm()
del DM
cr = msg.channel.created_at
assert (cr is not None)
tick_created: datetime.datetime = cr.replace(tzinfo=None)
time_cur = datetime.datetime.utcnow().replace(tzinfo=None)
td = time_cur - tick_created
mm, ss = divmod(td.seconds, 60)
hh, mm = divmod(mm, 60)
timestr = ''
if td.days > 0:
timestr += f'{td.days}d '
if hh > 0:
timestr += f'{hh}h '
if mm > 0:
timestr += f'{mm}m '
if ss > 0:
timestr += f'{ss}s'
await dms.send(f'''Your ticket has been closed by <@{msg.author.id}>
Your ticket lasted `{timestr}`.
We hope we were able to solve your problem. :partying_face:''')
elif msg.content == '!mark co':
category = discord.utils.get(guild.categories, name="Tickets 2")
if msg.channel.category != category:
return
if not (discord.utils.get(guild.roles, name='Ticket support team')
in msg.author.roles or discord.utils.get(
guild.roles, name='Administrator') in msg.author.roles
or discord.utils.get(guild.roles,
name='Co-owners (A.K.A. super admin)')
in msg.author.roles or discord.utils.get(
guild.roles, name='Owner') in msg.author.roles):
return
await msg.channel.send('Requesting to mark this ticket for: `Co-owner`')
nam = msg.channel.name
if nam.startswith('🛡️'):
nam = nam[1:]
assert (nam is not None)
if nam.startswith('🔱'):
await msg.channel.send('Ticket already marked for `Co-owner`')
else:
await msg.channel.edit(name='🔱' + nam)
await msg.channel.send(
f'<@{msg.author.id}> marked this ticket for: `Co-owner`')
await msg.channel.send(':trident:')
elif msg.content == '!mark admin':
category = discord.utils.get(guild.categories, name="Tickets 2")
if msg.channel.category != category:
return
if not (discord.utils.get(guild.roles, name='Ticket support team')
in msg.author.roles or discord.utils.get(
guild.roles, name='Administrator') in msg.author.roles
or discord.utils.get(guild.roles,
name='Co-owners (A.K.A. super admin)')
in msg.author.roles or discord.utils.get(
guild.roles, name='Owner') in msg.author.roles):
return
await msg.channel.send(
'Requesting to mark this ticket for: `Administrator`')
nam = msg.channel.name
assert (nam is not None)
if nam.startswith('🛡️'):
await msg.channel.send('Ticket already marked for `Administrator`')
elif nam.startswith('🔱'):
await msg.channel.send('Ticket already marked for `Co-owner`')
else:
await msg.channel.edit(name='🛡️' + nam)
await msg.channel.send(
f'<@{msg.author.id}> marked this ticket for: `Adiministrator`')
await msg.channel.send(':shield:')
elif msg.content[:7] == '!invite':
category = discord.utils.get(guild.categories, name="Tickets 2")
if msg.channel.category != category:
return
if not (discord.utils.get(guild.roles, name='Ticket support team')
in msg.author.roles or discord.utils.get(
guild.roles, name='Administrator') in msg.author.roles
or discord.utils.get(guild.roles,
name='Co-owners (A.K.A. super admin)')
in msg.author.roles or discord.utils.get(
guild.roles, name='Owner') in msg.author.roles):
return
usr_ping = msg.content[8:]
if not (usr_ping.startswith('<@') and usr_ping.endswith('>')
and usr_ping[2:-1].isdigit()):
return
invited_usr = discord.utils.get(guild.members, id=int(usr_ping[2:-1]))
assert (invited_usr is not None)
await msg.channel.set_permissions(invited_usr,
read_messages=True,
send_messages=True)
await msg.channel.send(f'{usr_ping} was invited into the ticket.')
elif msg.content[:7] == '!remove':
category = discord.utils.get(guild.categories, name="Tickets 2")
if msg.channel.category != category:
return
if not (discord.utils.get(guild.roles, name='Ticket support team')
in msg.author.roles or discord.utils.get(
guild.roles, name='Administrator') in msg.author.roles
or discord.utils.get(guild.roles,
name='Co-owners (A.K.A. super admin)')
in msg.author.roles or discord.utils.get(
guild.roles, name='Owner') in msg.author.roles):
return
usr_ping = msg.content[8:]
if not (usr_ping.startswith('<@') and usr_ping.endswith('>')
and usr_ping[2:-1].isdigit()):
return
invited_usr = discord.utils.get(guild.members, id=int(usr_ping[2:-1]))
assert (invited_usr is not None)
await msg.channel.set_permissions(invited_usr, overwrite=None)
await msg.channel.send(f'{usr_ping} was removed from the ticket.')
elif msg.content[:9] == '!category':
category = discord.utils.get(guild.categories, name="Tickets 2")
if msg.channel.category != category:
return
if not (discord.utils.get(guild.roles, name='Ticket support team')
in msg.author.roles or discord.utils.get(
guild.roles, name='Administrator') in msg.author.roles
or discord.utils.get(guild.roles,
name='Co-owners (A.K.A. super admin)')
in msg.author.roles or discord.utils.get(
guild.roles, name='Owner') in msg.author.roles):
return
categ = msg.content[10:]
tcc = discord.utils.get(guild.channels, name="ticket-creation-logs")
nam = msg.channel.name.lstrip('🔱🛡️')
tick_id = int(nam[:nam.find('-')])
async for mg in tcc.history():
if mg.content.startswith(f'CATEG {tick_id}'):
await msg.channel.send(
f'''This ticket is already marked with category **{mg.content[len(f'CATEG{tick_id}')+2:]}**'''
)
return
await tcc.send(f'CATEG {tick_id} {categ}')
await msg.channel.send(
f'''<@{msg.author.id}> marked this ticket with category **{categ}**''')
else:
category = discord.utils.get(guild.categories, name="Tickets 2")
if msg.channel.category != category:
return
PING_PTN = '<@[0-9]+>'
pings = re.findall(PING_PTN, msg.content, re.UNICODE)
usrs = [
discord.utils.get(guild.members, id=int(ping[2:-1])) for ping in pings
]
remainder = msg.content
for ping in pings:
remainder = remainder.replace(str(ping), '')
for usr in usrs:
assert (usr is not None)
DM = usr
dms = DM._user.dm_channel
if dms is None:
dms = await DM._user.create_dm()
del DM
await dms.send(
f'''`{msg.author.name}#{msg.author.discriminator}`Pinged you in a ticket: <#{msg.channel.id}>
`{remainder}`''')
keep_alive()
bot.run(DC_TOK)
</code></pre>
<p>main.py</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
from threading import Thread
app = Flask('')
@app.route('/')
def home():
return "Hello. I am alive!"
def run():
app.run(host='0.0.0.0', port=8080)
def keep_alive():
t = Thread(target=run)
t.start()
</code></pre>
<p>keep_alive.py</p>
<p>I am hosting my project on repl.it and using UptimeRobot to ping it every 10 minutes so that it does not go dormant. But Discord's Cloudflare bans it. I hear some say it's because of UptimeRobot, but how does that even affect the Discord API? Also, any solution you suggest must not require payment or payment methods of any type. <strong>I am a minor.</strong></p>
|
<python><discord><discord.py><replit>
|
2023-01-19 11:53:40
| 1
| 1,196
|
openwld
|
75,171,569
| 4,908,844
|
Why is read_csv so much slower to parse dates than to_datetime
|
<p>I am using jupyter notebook and reading a .csv file with pandas read_csv. When using the following code, it takes a reeeeeeeally long time (more than 10 minutes). The dataset has 70821 entries.</p>
<pre><code> [Input]
df = pd.read_csv("file.csv", parse_dates=[0])
df.head()
[Output]
created_at field1 field2
2022-09-16T22:53:19+07:00 100.0 NaN
2022-09-16T22:54:46+07:00 100.0 NaN
2022-09-16T22:56:14+07:00 100.0 NaN
2022-09-16T23:02:01+07:00 100.0 NaN
2022-09-16T23:03:28+07:00 100.0 NaN
</code></pre>
<p>If I just use <code>parse_dates=True</code> it does not detect the date column and keeps it as object.</p>
<p>If I read the dataset without <code>parse_dates</code> it goes much faster, as I would expect (~1 second).
When I then use a separate line of code to parse the date, like</p>
<pre><code>df["created_at"]=pd.to_datetime(df['created_at'])
</code></pre>
<p>it goes faster than using <code>parse_dates</code> in read_csv but still takes a couple of minutes (around 3 minutes).</p>
<p>Using the following</p>
<pre><code>df["created_at"]=pd.to_datetime(df['created_at'], format="%Y-%m-%d %H:%M:%S%z")
</code></pre>
<p>or with the <code>T</code> in the format string</p>
<pre><code>df["created_at"]=pd.to_datetime(df['created_at'], format="%Y-%m-%dT%H:%M:%S%z")
</code></pre>
<p>or</p>
<pre><code>df["created_at"]=pd.to_datetime(df['created_at'], infer_datetime_format=True)
</code></pre>
<p>does not increase the speed (still around 3 minutes)</p>
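<p>For reference, a minimal self-contained reproduction of the data shape (synthetic rows standing in for <code>file.csv</code>; the <code>utc=True</code> variant is something I have read about but not yet timed on the real data):</p>

```python
import io
import pandas as pd

# Synthetic stand-in for file.csv (same datetime shape, tiny size)
csv = io.StringIO(
    "created_at,field1,field2\n"
    "2022-09-16T22:53:19+07:00,100.0,\n"
    "2022-09-16T22:54:46+07:00,100.0,\n"
)
df = pd.read_csv(csv)
# utc=True converts the offset-aware strings into a single
# tz-aware dtype in one pass instead of per-element objects
df["created_at"] = pd.to_datetime(
    df["created_at"], format="%Y-%m-%dT%H:%M:%S%z", utc=True
)
print(df["created_at"].dtype)  # datetime64[ns, UTC]
```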
<p>So my questions are the following:</p>
<ol>
<li>Why is the slowest way to parse the date directly with read_csv?</li>
<li>Why is the way with <code>pd.to_datetime</code> faster and</li>
<li>Why does something like <code>format="%Y-%m-%d %H:%M:%S%z"</code> or <code>infer_datetime_format=True</code> not speed up the process</li>
<li>And finally how do I do it in a better way? If I read the file once and parsed it to datetime, what would be the best way to write it back to a csv file so I don't have to go through this process all over again? I assume I have to write a function and manually change every entry to a better formatted date?</li>
</ol>
<p>Can someone help me figure out why these approaches take such different amounts of time, and how to speed things up, e.g. with one of the variants I tried in 3.?
Thanks a lot.</p>
<hr />
<p>EDIT:</p>
<p>I have now tried manually adjusting the date format to see where it causes trouble. It turns out that when I delete <code>+07:00</code> from the date string, parsing is fast again (~500 ms).</p>
<p>Under the following link I have uploaded two CSV files. <code>example1</code> is the file with the problematic datetime format. In <code>example2_no_timezone</code> I deleted the <code>+07:00</code> part from every entry, which makes the parsing fast again (expected behaviour).</p>
<p><a href="https://1drv.ms/u/s!AhMadgxHtPSBg6EuAY0mopdmwCtsPw?e=RerSd6" rel="nofollow noreferrer">Folder with two example datasets</a></p>
<p>Sadly, the questions above remain.</p>
<ol>
<li>Why is pandas not able to read the original date string in a reasonable time?</li>
<li>Why is <code>to_datetime</code> faster (but still too slow) with the original dataset?</li>
<li>How do I fix this without changing the format in the original dataset (e.g., by means of <code>to_datetime</code> with an explicit <code>format=...</code>)?</li>
</ol>
|
<python><pandas><csv><datetime>
|
2023-01-19 11:29:46
| 0
| 361
|
Mrob
|
75,171,558
| 13,489,354
|
Python subprocess.Popen() is not starting the subprocess properly
|
<p>I have a Python test that starts a TCP server as a subprocess.</p>
<pre><code>def test_client(self):
if sys.platform.startswith('linux'):
proc = subprocess.Popen([f'{self.bin_output_path}/CaptureUnitHalServer'], stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
else:
proc = subprocess.Popen([f'{self.bin_output_path}/CaptureUnitHalServer'], stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
hal_access = HalAccess(port=12309)
hal = CaptureUnitHal(hal_access)
response = hal.test_scratch(TestScratchReq([1]))
assert TestScratchCnf(verdict=True, return_value=5) == response
print(f"\nRESPONSE: {response}\n")
# The first proc.communicate waits for the subprocess to finish. As the server runs forever a TimeoutExpired
# error is thrown the second proc.communicate in the exception handler get stdout and stderr from the server
try:
outs, errs = proc.communicate(timeout=2)
print(f'{outs.decode()}\n')
except subprocess.TimeoutExpired:
proc.kill()
outs, errs = proc.communicate()
print("CaptureUnitHalServer stdout:")
print(f'{outs.decode()}\n')
print("CaptureUnitHalServer stderr:")
print(f'{errs.decode()}\n')
</code></pre>
<p>The <code>hal</code> is a simple TCP client that sends a test request (<code>TestScratchReq</code>) to the server and receives the response.
On Linux this works perfectly fine, but when I run it on Windows the code blocks forever at the line <code>response = hal.test_scratch(TestScratchReq([1]))</code>.
This line calls</p>
<pre><code>def send_and_receive(self, request: str) -> str:
self._send(request)
return self._receive()
</code></pre>
<p>and blocks in the <code>socket.recv()</code> call inside <code>self._receive()</code>:</p>
<pre><code>data = self._socket.recv(1024).decode(encoding=self.MESSAGE_ENCODING) # blocking
</code></pre>
<p>So it seems like the server is not started properly as a subprocess on Windows when calling <code>subprocess.Popen()</code>.
However, the following command shows the port as listening:</p>
<pre><code>Get-NetTCPConnection -State Listen | grep 12309
0.0.0.0 12309 0.0.0.0 0 Listen
</code></pre>
<p>I have a second implementation that is also working on windows:</p>
<pre><code>def test_client(self):
daemon = Thread(target=self.server_thread, daemon=True, name='HalServer')
daemon.start()
hal_access = HalAccess(port=12309)
hal = CaptureUnitHal(hal_access)
response = hal.test_scratch(TestScratchReq([1]))
print(response)
assert TestScratchCnf(verdict=True, return_value=5) == response
daemon.join() # wait for daemon timeout
def server_thread(self):
if sys.platform.startswith('linux'):
result = subprocess.run([f'{self.bin_output_path}/CaptureUnitHalServer'], timeout=5, stdout=subprocess.PIPE)
else:
result = subprocess.run([f'{self.bin_output_path}/CaptureUnitHalServer'], timeout=5, creationflags=subprocess.CREATE_NEW_CONSOLE)
print(result.stdout.decode())
print(result.stderr.decode())
pass
</code></pre>
<p>But it seems overcomplicated to me to have another Thread just to start the blocking subprocess.run call that throws a TimeoutExpired error after 5 seconds. Another problem with this solution is that I don't get the stdout and stderr of the subprocess: the following two lines of code don't print anything, as the thread is killed by the exception before it reaches them.</p>
<pre><code>print(result.stdout.decode())
print(result.stderr.decode())
</code></pre>
<p>EDIT:</p>
<p>My question is: Why is the Windows variant in the first version of the code blocking? How does <code>subprocess.Popen()</code> differ between Linux and Windows? Is the subprocess not started properly in the Windows case?</p>
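<p>For context, one failure mode I suspect (a hypothesis, not confirmed for my server): with <code>stdout=subprocess.PIPE</code> and nothing reading the pipe, a child can block once the OS pipe buffer fills, which is easier to hit on Windows. A sketch with a stand-in chatty child process:</p>

```python
import subprocess
import sys
import threading

# A stand-in child that writes many lines; with stdout=PIPE and no reader,
# such a child can stall once the OS pipe buffer is full.
child_code = "for i in range(20000): print('x' * 80)"
proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)

lines = []

def drain(stream, sink):
    # Read until EOF so the child is never blocked on a full pipe
    for line in stream:
        sink.append(line)

# Draining in a background thread lets the parent do other work meanwhile
t = threading.Thread(target=drain, args=(proc.stdout, lines), daemon=True)
t.start()
proc.wait()
t.join()
print(len(lines))  # 20000
```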
|
<python><windows><subprocess>
|
2023-01-19 11:28:41
| 0
| 501
|
Simon Rechermann
|
75,171,541
| 13,345,744
|
How to convert categorical data into indices with NaN values present in Python?
|
<p><strong>Context</strong></p>
<p>I have created a function that converts <code>categorical data</code> into its unique indices. This works great with all values except <code>NaN</code>.
It seems that comparisons with <code>NaN</code> do not work, which results in the two problems shown below.</p>
<hr />
<p><strong>Code</strong></p>
<pre class="lang-py prettyprint-override"><code>     col1
0    male
1  female
2     NaN
3  female

def categorial(series: pandas.Series) -> pandas.Series:
    series = series.copy()

    for index, value in enumerate(series.unique()):
        # Problem 1: The output for the value NaN is always 0.0 %, even though NaN is present in the given series.
        print(index, value, round(series[series == value].count() / len(series) * 100, 2), '%')

    for index, value in enumerate(series.unique()):
        # Problem 2: Every unique value is converted to its index except NaN.
        series[series == value] = index

    return series.astype(pandas.Int64Dtype())
</code></pre>
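<p>A minimal demonstration of the comparison behaviour I mean:</p>

```python
import numpy as np
import pandas as pd

s = pd.Series(["male", "female", np.nan])
# Equality against NaN is always False, so `series == value` never
# matches the NaN entries:
print((s == np.nan).any())   # False
# A NaN-aware check does see them:
print(s.isna().tolist())     # [False, False, True]
```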
<hr />
<p><strong>Question</strong></p>
<ul>
<li>How can I solve the two problems seen in the code above?</li>
</ul>
|
<python><pandas><numpy><series><categorical-data>
|
2023-01-19 11:26:49
| 2
| 1,721
|
christophriepe
|
75,171,486
| 10,122,801
|
Python loop and reassign quantities based on limits
|
<p>Community!</p>
<p>I've been stuck on solving a problem for quite a while now. I guess it's a simple error in my for-loop, but I can't seem to locate where it fails.</p>
<p><strong>What am I trying to accomplish?</strong></p>
<ol>
<li>Loop through a dataset (from .csv) in format shown below.</li>
<li>Assess limits provided in the <code># Capacity constraints</code>-part.</li>
<li>For each line in the list of data, check its level category (<code>level_capacity = {"HIGH": 30, "MED": 100, "LOW": 100}</code>). The "Level" column contains one of the three keys defined with limits in the dict <code>level_capacity</code>.</li>
<li>If the item has not been assigned a week (column 'Week' is None) and the capacity for the row's level has not yet been reached, assign it to <code>current_week</code> and write it to column 'Week'.</li>
<li>This loop should repeat until all keys in <code>level_capacities</code> meet the requirement specified in <code>level_capacity</code>.</li>
<li>When <code>level_capacities.values()</code> == <code>level_capacity</code>, increment <code>current_week</code> by 1 (moving to next week) and repeat the process.</li>
</ol>
<p><strong>What's not working - output</strong></p>
<p>I'm trying to achieve this, but it seems to be incomplete. Somehow the code stops and breaks the loop once the DataFrame has been fully looped through. My goal is to have it keep looping until all rows have been assigned a week in column 'Week' and all weeks are at maximum quantity (i.e. <code>level_capacity == level_capacities.values()</code>)</p>
<p>For the sake of keeping the question compressed, below you'll find the first 8 rows of my .csv-file:</p>
<pre><code> Week Quantity Level
1 1 LOW
19 4 LOW
39 1 LOW
4 2 HIGH
9 18 MED
12 23 HIGH
51 11 MED
</code></pre>
<p>The actual dataset contains 1703 rows of data. I've ran the code, and extracted to Excel to see the distribution:</p>
<p><a href="https://i.sstatic.net/mw0vO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mw0vO.png" alt="PivotTable of output" /></a></p>
<p>However, as you can see, the distribution does not align with the limits specified above.
Any help would be greatly appreciated!</p>
<p><strong>Code</strong></p>
<pre><code>import pandas as pd

# Define capacity constraints
level_capacity = {"HIGH": 30, "MED": 100, "LOW": 100}
weekly_capacity = sum(level_capacity.values())

# Init variables
current_week = 1
current_weekly_capacity = 0
level_capacities_met = {"HIGH": False, "MED": False, "LOW": False}

# Load data to DataFrame
d = {'Week': [1, 19, 39, 4, 9, 12, 51], 'Quantity': [1, 4, 1, 2, 18, 23, 11], 'Level': ['LOW', 'LOW', 'LOW', 'HIGH', 'MED', 'HIGH', 'MED']}
data = pd.DataFrame(data=d)
max_week = data["Week"].max()

while current_week <= max_week:
    for index, row in data.iterrows():
        # Check if the level capacity is met
        if not level_capacities_met.get(row["Level"], False):
            if current_weekly_capacity + row["Quantity"] <= weekly_capacity:
                # Assign the current week
                data.at[index, "Week"] = current_week
                current_weekly_capacity += row["Quantity"]
                if current_weekly_capacity == weekly_capacity:
                    level_capacities_met[row["Level"]] = True
            else:
                # Move to next week and reset capacities
                current_week += 1
                current_weekly_capacity = 0
        elif current_weekly_capacity + row["Quantity"] <= weekly_capacity:
            # Assign the current week
            data.at[index, "Week"] = current_week
            current_weekly_capacity += row["Quantity"]

    # check if all level capacities are met
    if all(level_capacities_met.values()):
        current_week += 1
        current_weekly_capacity = 0
        level_capacities_met = {"HIGH": False, "MED": False, "LOW": False}

print(data)
</code></pre>
|
<python><python-3.x><pandas><csv>
|
2023-01-19 11:22:03
| 1
| 431
|
Havard Kleven
|
75,171,353
| 9,749,124
|
K-Means model always predicts the same class after saving and loading
|
<p>I have created a K-Means clustering model with Python and sklearn.
Now I want to save the model and vectoriser so that I can make predictions.</p>
<p>This is my model:</p>
<pre><code># Vectoriser - CountVectorizer
transformerVectoriser = CountVectorizer(analyzer = 'word', ngram_range = (1, 4), vocabulary = vocab_list)
vectorized_features = transformerVectoriser.fit_transform(features)
# PCA - TruncatedSVD
pca = TruncatedSVD(n_components = 2, random_state = None)
vec_matrix_pca = pca.fit_transform(vectorized_features)
# Clustering Model
kmeans = KMeans(n_clusters = 100, max_iter = 1500, init = 'k-means++', random_state = None)
kmeans.fit(vec_matrix_pca)
</code></pre>
<p>My question is: what do I need to save, and how?
I know that I need to save the model (how?), but should I also save <code>transformerVectoriser</code> (how?) and <code>pca</code> (how?)?
Also, when I call/load the model, should I do <code>fit_transform</code> for <code>transformerVectoriser</code> and <code>pca</code>, or only <code>transform</code>?</p>
<p>The issue that I have that after saving and loading all 3 things (model and 2 vectorisers) my model always predicts only 1 cluster.</p>
<p>I got <code>silhouette_score</code> of 0.6 for number of 100 clusters.</p>
<p>I have saved like this:</p>
<pre><code>model_filename = 'kmeans_clustering_recommender_model.pkl'
pickle.dump(kmeans, open(model_filename, 'wb'))
vectorizer_filename = 'kmeans_clustering_recommender_vectoriser.pkl'
pickle.dump(transformerVectoriser, open(vectorizer_filename, 'wb'))
pca_filename = 'kmeans_clustering_recommender_PCA.pkl'
pickle.dump(pca, open(pca_filename, 'wb'))
</code></pre>
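<p>To make the fit-vs-transform part of the question concrete, here is a minimal pickle round-trip on a toy corpus (not my real data):</p>

```python
import pickle
from sklearn.feature_extraction.text import CountVectorizer

# Fit once on training data and persist the *fitted* object...
vec = CountVectorizer().fit(["red apple", "green apple"])
blob = pickle.dumps(vec)

# ...then at prediction time only transform(); calling fit_transform here
# would relearn the vocabulary from the single incoming document.
vec_loaded = pickle.loads(blob)
X = vec_loaded.transform(["red apple"])
print(X.toarray())  # [[1 0 1]] with vocabulary ['apple', 'green', 'red']
```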
<p>This is how I make prediction:</p>
<pre><code>model = pickle.load(open("kmeans_clustering_recommender_model.pkl", 'rb'))
vectorizer = pickle.load(open("kmeans_clustering_recommender_vectoriser.pkl", 'rb'))
pca = pickle.load(open("kmeans_clustering_recommender_PCA.pkl", 'rb'))
def preprocessing(news_body):
    # Simple text preprocessing
    text = news_body.lower().strip()
    vectorized_text = vectorizer.transform([text])
    vectorized_text_pca = pca.transform(vectorized_text)
    return vectorized_text_pca

text = "SAMPLE TEXT FOR CLUSTERING"
preprocessed_text = preprocessing(text)
result = model.predict(preprocessed_text)
print(result)  # I am always getting the same number
</code></pre>
<p>I have only 7 clusters (out of 100) that has less than 20 datapoints, 80% of clusters has between 50 and 300 datapoints.</p>
<p>This is how it looks on the graph:</p>
<p><a href="https://i.sstatic.net/Mys0M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mys0M.png" alt="enter image description here" /></a></p>
<p>The thing is, I have tried with other algorithm, such as <code>GaussianMixture</code>, and still, I am getting the same class, and when I do this, before saving and loading the model:</p>
<pre><code>kmeans.labels_
</code></pre>
<p>Everything works fine.</p>
|
<python><machine-learning><scikit-learn>
|
2023-01-19 11:10:43
| 1
| 3,923
|
taga
|
75,171,352
| 2,706,344
|
Inserting an array with less elements as new column to an existing DataFrame
|
<p>Using the <code>df.insert()</code> function I want to add to an existing <code>DataFrame</code> a new column which I have as a plain array (without any indexing). I know that my array has fewer entries than the DataFrame has rows. This is intended. I want NaN values at the end of the new column to match the size of the DataFrame. However, I don't know how to accomplish this since I get the</p>
<pre><code>ValueError: Length of values (999) does not match length of index (1000)
</code></pre>
<p>Any ideas how to pad my array with one NaN (or NaT) value to make it match?</p>
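<p>To show the kind of padding I mean, a small sketch with made-up data (my real column is a timedelta, but the idea is the same):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": range(5)})
arr = np.arange(4.0)  # one element short of the frame

# Pad the tail with NaN so the lengths match before inserting:
padded = np.pad(arr, (0, len(df) - len(arr)), constant_values=np.nan)
df.insert(1, "b", padded)
print(df["b"].tolist())  # [0.0, 1.0, 2.0, 3.0, nan]
```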
<p><strong>Edit:</strong> Here the code which generates the problems with @Tranbi's answer:</p>
<pre><code>import numpy as np
import pandas as pd
from datetime import datetime,timedelta
times=np.random.randint(365*24*60,size=1000)
dates=datetime(2022,1,1)+timedelta(minutes=1)*times
data=pd.DataFrame({'date':np.sort(dates)})
data.set_index('date',inplace=True)
diff=(data.index[1:]-data.index[:-1]).array
pd.Series(diff).reindex(data.index)
</code></pre>
|
<python><pandas>
|
2023-01-19 11:10:30
| 1
| 4,346
|
principal-ideal-domain
|
75,171,296
| 3,586,606
|
Benefit of using generators while sending API calls
|
<p>The code below is sending request to some server, and getting a JSON object as a response.
Then, it is iterating over the response, and printing it.</p>
<p>I don't understand, what is the benefit of using generator in this case?</p>
<pre><code>def crawl():
    response = requests.get('https://swapi.co/api/people')
    api_results = json.loads(response.content)
    for character in api_results['results']:
        yield character['name']

generator = crawl()
for res in generator:
    print(res)
</code></pre>
<p>I know that by using generators, we can save memory as the items are produced when required, unlike a normal function.</p>
<p>But in this case, it doesn't matter as the response of the query:</p>
<pre><code>response = requests.get('https://swapi.co/api/people')
</code></pre>
<p>Is already being loaded into the memory, isn't it?</p>
<p>So it means that <strong>response</strong> now references a place in memory which holds all the records returned from the query (could be 10 records or 10 million records).</p>
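<p>A tiny illustration of my point — the generator only defers the per-item work, not the download itself:</p>

```python
def crawl(results):
    # `results` is already fully in memory here; yielding just delays
    # the per-item processing.
    for item in results:
        yield item["name"]

gen = crawl([{"name": "Luke"}, {"name": "Leia"}])
print(next(gen))  # Luke -- the second item has not been touched yet
```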
|
<python><python-requests><generator>
|
2023-01-19 11:05:45
| 1
| 487
|
CleverDev
|
75,171,196
| 12,403,550
|
pyjanitor.pivot_longer: Unpivot multiple sets of columns with common prefix
|
<p>Reprex csv:</p>
<pre><code>col,ref_number_of_rows,ref_count,ref_unique,cur_number_of_rows,cur_count,ref_unique
region,2518,2518,42,212,212,12
country,2518,2518,6,212,212,2
year,2518,2518,15,212,212,15
</code></pre>
<p>I want to unpivot the dataset so that a <code>type</code> column contains the prefix of each column name: (cur|ref). My solution below does not match the first part of the string before <code>_</code> to fill the type column, though it handles the rest.</p>
<pre><code>column_summary_frame \
    .pivot_longer(
        column_names="*_*",
        names_to=("type", ".value"),
        names_sep=r"^[^_]+(?=_)")
</code></pre>
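<p>For comparison, a plain-pandas sketch of the reshape I'm after (column names abbreviated, so this is illustrative only):</p>

```python
import pandas as pd

df = pd.DataFrame({"col": ["region"], "ref_count": [2518], "cur_count": [212]})
long = df.melt(id_vars="col")
# Split only on the first underscore so 'ref'/'cur' becomes the type:
long[["type", "metric"]] = long["variable"].str.split("_", n=1, expand=True)
out = long.pivot(index=["col", "type"], columns="metric", values="value").reset_index()
print(out)
```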
|
<python><janitor><pyjanitor>
|
2023-01-19 10:56:50
| 1
| 433
|
prayner
|
75,170,806
| 12,267,943
|
How to mark duplicate rows with the index of the first occurrence in Pandas?
|
<p>I'm trying to write a script that finds duplicate rows in a spreadsheet. I'm using the <strong>Pandas</strong> library. This is the initial dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'title': [1, 2, 3, 4, 5, 6, 7, 8],
'val1': [1.1, 1.1, 2.1, 8.8, 1.1, 1.1, 8.8, 8.8],
'val2': [2.2, 3.3, 5.5, 6.2, 2.2, 3.3, 6.2, 6.2],
'val3': [3.4, 4.4, 5.5, 8.4, 0.5, 3.4, 1.9, 3.7]
})
print(df)
title val1 val2 val3
1 1.1 2.2 3.4
2 1.1 3.3 4.4
3 2.1 5.5 5.5
4 8.8 6.2 8.4
5 1.1 2.2 0.5
6 1.1 3.3 3.4
7 8.8 6.2 1.9
8 8.8 6.2 3.7
</code></pre>
<p>I have found all duplicate rows using the <strong>duplicated</strong> method based on the indicated columns and marked them by adding a new column e.g.</p>
<pre class="lang-py prettyprint-override"><code>df['duplicate'] = df.duplicated(keep=False, subset=['val1', 'val2'])
print(df)
title val1 val2 duplicate
1 1.1 2.2 true
2 1.1 3.3 true
3 2.1 5.5 false
4 8.8 6.2 true
5 1.1 2.2 true
6 1.1 3.3 true
7 8.8 6.2 true
8 8.8 6.2 true
</code></pre>
<p>In the last step, I would like to mark all duplicate rows by adding information with the title of the first occurrence. This way I want to make it easier to sort and group them later. This is what the result would look like:</p>
<pre><code>title val1 val2 first_occurence
1 1.1 2.2 true
2 1.1 3.3 true
3 2.1 5.5 false
4 8.8 6.2 true
5 1.1 2.2 title1
6 1.1 3.3 title2
7 8.8 6.2 title4
8 8.8 6.2 title4
</code></pre>
<p>I tried to find a similar topic, but was unsuccessful. Does anyone have an idea how to do it?</p>
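<p>In case it clarifies the mapping I have in mind, a tiny sketch (the "titleN" labels are just how I'd like to format the first occurrence):</p>

```python
import pandas as pd

df = pd.DataFrame({"title": [1, 2, 5],
                   "val1": [1.1, 1.1, 1.1],
                   "val2": [2.2, 3.3, 2.2]})
# Title of the first row within each (val1, val2) group:
first = df.groupby(["val1", "val2"])["title"].transform("first")
df["first_occurrence"] = "title" + first.astype(str)
print(df["first_occurrence"].tolist())  # ['title1', 'title2', 'title1']
```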
|
<python><excel><pandas><dataframe>
|
2023-01-19 10:24:36
| 3
| 349
|
nsog8sm43x
|
75,170,611
| 959,936
|
How to run a docker container with a previously determined ip address with the python docker sdk?
|
<p>On console this does the trick:</p>
<pre><code>docker run --net mynet --ip 172.18.0.22 --dns="8.8.8.8" -d testimage
</code></pre>
<p>is there an easy equivalent with the python docker sdk like this?</p>
<pre><code>container = client.containers.run("alpine", "ls /", detach=True, ipv4_address=ip_address)
</code></pre>
<p>but there is no ipv4_address param in the run function...</p>
|
<python><docker>
|
2023-01-19 10:09:17
| 1
| 9,263
|
Jurudocs
|
75,170,495
| 1,436,800
|
How to apply permissions on perform_create in ViewSet DRF
|
<p>This is my View Set:</p>
<pre><code>class MyViewSet(ModelViewSet):
    serializer_class = MySerializer
    queryset = MyClass.objects.all()

    def get_serializer_class(self):
        if self.request.user.is_superuser:
            return self.serializer_class
        return serializers.MyUserSerializer

    def perform_create(self, serializer):
        employee = models.Employee.objects.get(user=self.request.user)
        serializer.save(employee=employee)
</code></pre>
<p>I want to apply a permission before perform_create: this perform_create() should only be called if the currently logged-in user is not a superuser. If the currently logged-in user is a superuser, the default perform_create function should be called.</p>
<p>Edit:
Basically I want to implement the following logic, but with the help of permissions.</p>
<pre><code>def perform_create(self, serializer):
    if self.request.user.is_superuser:
        serializer.save()
    else:
        employee = models.Employee.objects.get(user=self.request.user)
        serializer.save(employee=employee)
</code></pre>
<p>How to do that?</p>
|
<python><django><django-rest-framework><django-views><django-permissions>
|
2023-01-19 09:59:29
| 4
| 315
|
Waleed Farrukh
|
75,170,451
| 11,479,825
|
Expand column containing list of tuples into the current dataframe
|
<p>I have a dataframe in the following format:</p>
<pre><code>df = pd.DataFrame({'column_with_tuples': [[('word1', 10), ('word2', 20), ('word3', 30)], [('word4', 40), ('word5', 50), ('word6', 60)]],
'category':['category1','category2']})
</code></pre>
<p>I want to move the tuples into two separate columns and preserve the category column to be able to easily filter the most common words for each category.</p>
<p>So the final result should look like this:</p>
<pre><code>df_new = pd.DataFrame({'word': ['word1','word2', 'word3','word4','word5','word6'],
'frequency': [10, 20, 30, 40, 50, 60],
'category':['category1','category1', 'category1', 'category2', 'category2', 'category2']})
</code></pre>
<p>I tried with this code but the result is not the one I expect:</p>
<pre><code>df_tuples = pd.concat([pd.DataFrame(x) for x in df['column_with_tuples']], ignore_index=True)
df_tuples.columns = ['word', 'frequency']
df.drop(['column_with_tuples'], axis=1, inplace=True)
df = pd.concat([df, df_tuples], axis=1)
</code></pre>
<p>I would appreciate some help here.</p>
|
<python><pandas>
|
2023-01-19 09:56:26
| 3
| 985
|
Yana
|
75,170,207
| 19,041,437
|
How to change decimal format inside a transform_calculate function (Altair)
|
<p>I'm trying to limit the number of digits after the decimal point in the text displayed in my graph, but I'm having a hard time doing so inside a transform_calculate function. This is what it looks like
(I am printing the equation of a linear regression in the form y = ax + b):</p>
<pre><code>params = alt.Chart(df).transform_regression(
    axe_x, axe_y, params=True
).transform_calculate(
    intercept='datum.coef[0]',
    slope='datum.coef[1]'
).transform_calculate(
    text="y = " + alt.datum.intercept + "x + " + alt.datum.slope
).mark_text(
    baseline="top",
    align="left"
).encode(
    x=alt.value(20),  # pixels from left
    y=alt.value(20),  # pixels from top
    text='text:N',
)
</code></pre>
<p>I've tried many things like</p>
<pre><code> text= "y = " + f'format(datum.{intercept},".2f")' + "x + " + alt.datum.slope
</code></pre>
<p>and</p>
<pre><code> text= "y = " + alt.Text("alt.datum.intercept", format=".2f) + "x + " + alt.datum.slope
</code></pre>
<p>Current output :</p>
<p><a href="https://i.sstatic.net/ceNBK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ceNBK.png" alt="enter image description here" /></a></p>
<p>I'm a bit stuck here, help much appreciated thank you</p>
|
<python><linear-regression><altair>
|
2023-01-19 09:37:05
| 1
| 502
|
grymlin
|
75,170,120
| 4,432,671
|
NumPy: create ndarray of objects
|
<p>I like to create a numpy array -- with a shape, say [2, 3], where each array element is a list. Yes, I know this isn't efficient, or even desirable, and all the good reasons for that -- but can it be done?</p>
<p>So I want a 2x3 array where each point is a length 2 vector, <em>not</em> a 2x3x2 array. Yes, really.</p>
<p>If I fudge it by making it ragged, it works:</p>
<pre><code>>>> np.array([[0, 0], [0, 1], [0, 2], [1, 0], [9, 8], [2, 4, -1]], dtype=object).reshape([2, 3])
array([[list([0, 0]), list([0, 1]), list([0, 2])],
[list([1, 0]), list([9, 8]), list([2, 4, -1])]], dtype=object)
</code></pre>
<p>but if numpy <em>can</em> make it fit cleanly, it does so, and so throws an error:</p>
<pre><code>>>> np.array([[0, 0], [0, 1], [0, 2], [1, 0], [9, 8], [2, 4]], dtype=object).reshape([2, 3])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: cannot reshape array of size 12 into shape (2,3)
</code></pre>
<p>So putting aside your instincts that this is a silly thing to want to do (it is, and I know this), can I make the last example work somehow to create a shape 2x3 array where each element is a Python list of length 2, without numpy forcing it into 2x3x2?</p>
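<p>One workaround I'm aware of is to pre-allocate the object array at the target shape and fill it afterwards, so numpy never gets the chance to collapse the lists:</p>

```python
import numpy as np

a = np.empty((2, 3), dtype=object)
vals = [[0, 0], [0, 1], [0, 2], [1, 0], [9, 8], [2, 4]]
for k, v in enumerate(vals):
    a[np.unravel_index(k, a.shape)] = v
print(a.shape, type(a[0, 0]))  # (2, 3) <class 'list'>
```

<p>But I'm still curious whether it can be done in a single constructor call.</p>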
|
<python><numpy>
|
2023-01-19 09:30:13
| 3
| 3,737
|
xpqz
|
75,170,069
| 6,803,114
|
Strip colum values if startswith a specific string pandas
|
<p>I have a pandas dataframe(sample).</p>
<pre><code>id name
1 Mr-Mrs-Jon Snow
2 Mr-Mrs-Jane Smith
3 Mr-Mrs-Darth Vader
</code></pre>
<p>I'm looking to strip the "Mr-Mrs-" from the dataframe. i.e the output should be:</p>
<pre><code>id name
1 Jon Snow
2 Jane Smith
3 Darth Vader
</code></pre>
<p>I tried using</p>
<p><code>df['name'] = df['name'].str.lstrip("Mr-Mrs-")</code></p>
<p>But while doing so, some of the alphabets of names in some rows are also getting stripped out.</p>
<p>I don't want to run a loop and do .loc for every row, is there a better/optimized way to achieve this ?</p>
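<p>To show what I mean by alphabets getting stripped — lstrip treats its argument as a set of characters, not a prefix — and the kind of vectorised alternative I'm hoping exists:</p>

```python
import pandas as pd

# lstrip removes *any* leading character from the set {'M','r','-','s'}:
print("Mr-Mrs-Mark".lstrip("Mr-Mrs-"))  # 'ark' -- the name's leading 'M' is eaten too

s = pd.Series(["Mr-Mrs-Jon Snow", "Mr-Mrs-Mark"])
# An anchored regex removes only the literal prefix, without a loop:
clean = s.str.replace(r"^Mr-Mrs-", "", regex=True)
print(clean.tolist())  # ['Jon Snow', 'Mark']
```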
|
<python><pandas>
|
2023-01-19 09:26:25
| 1
| 7,676
|
Shubham R
|
75,169,981
| 10,695,613
|
Convert huge Polars dataframe to dict without consuming too much RAM
|
<p>When I load my parquet file into a Polars DataFrame, it takes about 5.5 GB of RAM. Polars is great compared to other options I have tried. However, Polars does not support creating indices like Pandas. This is troublesome for me because one column in my DataFrame is unique and the pattern of accessing the data in the df in my application is row lookups based on the unique column (dict-like).</p>
<p>Since the dataframe is massive, filtering is too expensive. However, I also seem to be short on RAM (32 GB). I am currently converting the df in "chunks" like this:</p>
<pre><code>h = df.height  # number of rows
chunk_size = 1_000_000  # rows per chunk
b = np.linspace(1, math.ceil(h / chunk_size), num=math.ceil(h / chunk_size))
new_col = (np.repeat(b, chunk_size))[:-(chunk_size - (h % chunk_size))]
df = df.with_columns(polars.lit(new_col).alias('new_index'))
m = df.partition_by(by="new_index", as_dict=True)
del df
gc.collect()

my_dict = {}
for key, value in list(m.items()):
    my_dict.update(
        {
            uas: frame.select(polars.exclude("unique_col")).to_dicts()[0]
            for uas, frame in
            (
                value
                .drop("new_index")
                .unique(subset=["unique_col"], keep='last')
                .partition_by(by=["unique_col"], as_dict=True)
            ).items()
        }
    )
    m.pop(key)
</code></pre>
<p>RAM consumption does not seem to have changed much. Plus, I get an error saying that the dict size has changed during iteration (true). But what can I do? Is getting more RAM the only option?</p>
|
<python><dictionary><parquet><python-polars>
|
2023-01-19 09:18:07
| 2
| 405
|
BovineScatologist
|
75,169,919
| 12,436,050
|
Explode is not working on pandas dataframe
|
<p>I have a dataframe with the following columns:</p>
<pre><code> col1 col2 col3 col4
0 HP:0005709 ['HP:0001770'] Toe syndactyly SNOMEDCT_US:32113001, C0265660
1 HP:0005709 ['HP:0001780'] Abnormality of toe C2674738
2 EFO:0009136 ['HP:0001507'] Growth abnormality C0262361
</code></pre>
<p>I would like to explode "col4". I tried different ways to do it but nothing is working.
The dtype of the column is "object".</p>
<p>My tries are following:</p>
<ol>
<li><p><code>df.explode('col4')</code></p></li>
<li><p><code>df['col4'] = df['col4'].str.split(',')</code><br />
<code>df = df.set_index(['col2']).apply(pd.Series.explode).reset_index()</code></p></li>
<li><p><code>import ast</code><br />
<code>df[['col4']] = df[['col4']].applymap(ast.literal_eval)</code><br />
<code>df = df.apply(pd.Series.explode)</code></p></li>
</ol>
<p>The expected output is:</p>
<pre><code> col1 col2 col3 col4
0 HP:0005709 ['HP:0001770'] Toe syndactyly SNOMEDCT_US:32113001
0 HP:0005709 ['HP:0001770'] Toe syndactyly C0265660
1 HP:0005709 ['HP:0001780'] Abnormality of toe C2674738
2 EFO:0009136 ['HP:0001507'] Growth abnormality C0262361
</code></pre>
|
<python><pandas><dataframe><split><explode>
|
2023-01-19 09:12:46
| 3
| 1,495
|
rshar
|
75,169,907
| 11,867,978
|
unable to write tuple into xlsx file using python without pandas?
|
<p>I am trying to write the output into an xlsx file, but I am only able to write the headers, not the data below them.</p>
<pre><code>import xlsxwriter

csv_columns = (
    'id', 'name', 'place', 'salary', 'email',
)

details = [{'id': 1, 'name': 'A', 'place': 'B', 'salary': 2, 'email': 'c@d.com'},
           {'id': 3, 'name': 'C', 'place': 'D', 'salary': 4, 'email': 'e@f.com'}]

workbook = xlsxwriter.Workbook(path)
worksheet = workbook.add_worksheet()

for col, name in enumerate(csv_columns):
    worksheet.write(0, col, name)

for row, det in enumerate(details, 1):
    for col, value in enumerate(det):
        worksheet.write(row, col, value)

workbook.close()
</code></pre>
<p>This code only writes the csv_columns to the xlsx file and repeats them on every row, as below:</p>
<pre><code>id name place salary email
id name place salary email
id name place salary email
</code></pre>
<p>How can I solve this issue of repeating columns in the xlsx file? Any help?</p>
<p>I expected like below:</p>
<pre><code> id name place salary email
1 A B 2 c@d.com
3 C D 4 e@f.com
</code></pre>
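<p>I suspect it has to do with how a dict iterates — a quick check of what the inner loop actually sees:</p>

```python
# Iterating a dict yields its *keys*, so `value` in the inner loop is
# 'id', 'name', ... rather than the row's data:
det = {'id': 1, 'name': 'A'}
print(list(enumerate(det)))           # [(0, 'id'), (1, 'name')]
print(list(enumerate(det.values())))  # [(0, 1), (1, 'A')]
```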
|
<python><python-3.x><excel>
|
2023-01-19 09:11:42
| 3
| 448
|
Mia
|
75,169,874
| 9,880,480
|
Is it possible to set the exploration rate to 0, and turn off network training for a Stable Baselines 3 algorithm?
|
<p>After training a stable baselines 3 RL algorithm (I am using mainly PPO) I want to set the exploration rate to 0, and turn off network training so I always get the same output (action) from the model when given the same input (observation). Is it possible to do that? If not, is there a reason for why it should not be possible?</p>
|
<python><reinforcement-learning><stable-baselines>
|
2023-01-19 09:07:59
| 1
| 397
|
HaakonFlaar
|
75,169,785
| 661,716
|
python dataframe number of last consecutive rows less than current
|
<p>For each row, I need to count how many immediately preceding consecutive rows have a value less than the current row's value.</p>
<p>Below is a sample input and the result.</p>
<pre><code>df = pd.DataFrame([10,9,8,11,10,11,13], columns=['value'])
df_result = pd.DataFrame([[10,9,8,11,10,11,13], [0,0,0,3,0,1,6]], columns=['value', 'number of last consequence rows less than current'])
</code></pre>
<p>Is it possible to achieve this without a loop?</p>
<p>Otherwise, a solution with a loop would be fine.</p>
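<p>For reference, this plain loop produces the expected counts, so any vectorised answer should match it:</p>

```python
import pandas as pd

df = pd.DataFrame([10, 9, 8, 11, 10, 11, 13], columns=["value"])

counts = []
for i, v in enumerate(df["value"]):
    c = 0
    j = i - 1
    # Walk backwards while the immediately preceding values are smaller:
    while j >= 0 and df["value"].iloc[j] < v:
        c += 1
        j -= 1
    counts.append(c)
print(counts)  # [0, 0, 0, 3, 0, 1, 6]
```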
<p><strong>A further question</strong></p>
<p>Could I do it with a groupby operation, for the following input?</p>
<pre><code>df = pd.DataFrame([[10,0],[9,0],[7,0],[8,0],[11,1],[10,1],[11,1],[13,1]], columns=['value','group'])
</code></pre>
<p>The following raises an error:</p>
<pre><code>df.groupby('group')['value'].expanding()
</code></pre>
|
<python><pandas><dataframe>
|
2023-01-19 08:59:52
| 1
| 1,226
|
tompal18
|
75,169,714
| 16,396,496
|
change the level of a column, when concat two data frames with the same column
|
<p>Reference URL: <a href="https://stackoverflow.com/questions/40464706/pandas-concatenating-two-multi-index-dataframes">Pandas - concatenating two multi-index dataframes</a></p>
<p>This is the result I want:</p>
<pre><code> Name Q1 Q2 Q3
Student Name IS CC IS CC IS CC
Month Roll No
2016-08-01 0 Save Mithil Vinay ...
1 Abraham Ancy Chandy
2 Barabde Pranjal Sanjiv
3 Bari Siddhesh Kishor
4 Barretto Cleon Domnic
</code></pre>
|
<python><pandas>
|
2023-01-19 08:54:00
| 1
| 341
|
younghyun
|
75,169,594
| 9,749,124
|
Setting minimal number of data that goes into one cluster
|
<p>I have a dataset of 20,000 rows. All rows contain textual data.
I want to use K-Means to check how many clusters I have.
The optimal number of clusters that I get is 120 (based on silhouette_score).</p>
<p>This is my code:</p>
<pre><code>from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score

for i in range(2, 500):
    kmeans = KMeans(n_clusters=i, max_iter=1500, init='k-means++', random_state=None)
    kmeans.fit(vec_matrix_pca)
    inertias.append(kmeans.inertia_)
    cluster_labels = kmeans.fit_predict(vec_matrix_pca)
    silhouette_avg = silhouette_score(vec_matrix_pca, cluster_labels)
    print("Number of clusters", i, "Silhouette Score:", round(silhouette_avg, 4))
</code></pre>
<p>The problem I have is that some clusters have more than 500 data points, and some fewer than 5.
So my question is: is there a way to set a minimal size for each cluster, based on the number of data points?</p>
|
<python><machine-learning><scikit-learn>
|
2023-01-19 08:41:33
| 1
| 3,923
|
taga
|
75,169,458
| 14,353,779
|
Pandas map values from Dictionary efficiently
|
<p>I have a pandas dataframe <code>df</code> :-</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">ID</th>
<th style="text-align: center;">COST</th>
<th style="text-align: center;">1F</th>
<th style="text-align: center;">2F</th>
<th style="text-align: center;">3F</th>
<th>4G</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">362</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td>1</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;">269</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
<td>0</td>
</tr>
<tr>
<td style="text-align: center;">3</td>
<td style="text-align: center;">346</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td>1</td>
</tr>
<tr>
<td style="text-align: center;">4</td>
<td style="text-align: center;">342</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>I have a <code>total_cost</code> dictionary :</p>
<p><code>total_cost = {'1F': 0.047, '2F': 0.03, '3F': 0.023, '4G': 0.025}</code></p>
<p>I want to add a <code>TOTAL_COST</code> column such that, wherever a <code>1</code> is present, <code>COST</code> multiplied by that column's rate from the total_cost dictionary is added to the total.</p>
<p>The dataframe has around a million records; what would be the most efficient way to do this?
Expected <code>df</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">ID</th>
<th style="text-align: center;">COST</th>
<th style="text-align: center;">1F</th>
<th style="text-align: center;">2F</th>
<th>3F</th>
<th>4G</th>
<th>TOTAL_COST</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">362</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td>1</td>
<td>1</td>
<td>28.236</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;">269</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td>0</td>
<td>0</td>
<td>8.07</td>
</tr>
<tr>
<td style="text-align: center;">3</td>
<td style="text-align: center;">346</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td>1</td>
<td>1</td>
<td>43.25</td>
</tr>
<tr>
<td style="text-align: center;">4</td>
<td style="text-align: center;">342</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
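<p>A small sketch reproducing the expected numbers above with a single vectorised dot product, in case it clarifies what I'm after:</p>

```python
import pandas as pd

total_cost = {'1F': 0.047, '2F': 0.03, '3F': 0.023, '4G': 0.025}
df = pd.DataFrame({
    'ID': [1, 2, 3, 4],
    'COST': [362, 269, 346, 342],
    '1F': [0, 0, 1, 0], '2F': [1, 1, 1, 0],
    '3F': [1, 0, 1, 0], '4G': [1, 0, 1, 0],
})
# Dot product of the 0/1 indicator columns with the rate vector,
# scaled by COST -- no row loop:
df['TOTAL_COST'] = df['COST'] * df[list(total_cost)].dot(pd.Series(total_cost))
print(df['TOTAL_COST'].round(3).tolist())  # [28.236, 8.07, 43.25, 0.0]
```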
|
<python><pandas><numpy>
|
2023-01-19 08:26:05
| 1
| 789
|
Scope
|
75,169,404
| 6,653,602
|
Is there a way to use Mongoexport with Airflow?
|
<p>I am trying to write Airflow DAG which will export data from certain collection in the MongoDB database. Is there any way to use Mongoexport with the Airflow?</p>
<p>I was thinking of something like this, based on airflow documentation:</p>
<pre><code>def exportFromMongoCollection():
    try:
        hook = MongoHook(mongo_conn_id=f"mongodb://{os.environ.get('MUSER_NAME', None)}:{os.environ.get('MPASSWORD', None)}@{os.environ.get('HOST_IP', None)}:PORT/?authSource=admin")
        client = hook.get_conn()
        db = client.mongo_db_dev
        mongo_col = db.mongo_col
        print(f"Connected to MongoDB - {client.server_info()}")
        mongo_col.export()  # need to figure out export here
    except Exception as e:
        print(f"Error connecting to MongoDB -- {e}")

with DAG(
    'mongodbexport',
    default_args=default_args,
    description='mongodbexport',
    schedule_interval="0 0 * * *",
    catchup=False,
    tags=['export', 'mongodb'],
) as dag:
    t0 = PythonOperator(
        task_id='export-mongodb',
        python_callable=exportFromMongoCollection,
        dag=dag
    )
</code></pre>
<p>But I am not sure how to call mongoexport from the Python code so that it does the same operation as the following command (example):</p>
<pre><code>mongoexport --uri="URI" --collection=mongo_col type json --out=mongo_col.json
</code></pre>
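<p>At the moment I'm leaning towards just building the CLI command in Python and shelling out (e.g. via a BashOperator or subprocess) — all names below are placeholders:</p>

```python
def build_mongoexport_cmd(uri, collection, out_path):
    # Assemble the same invocation as the manual command; an operator
    # can then execute it as a shell command.
    return [
        "mongoexport",
        f"--uri={uri}",
        f"--collection={collection}",
        "--type=json",
        f"--out={out_path}",
    ]

cmd = build_mongoexport_cmd("mongodb://host:27017/db", "mongo_col", "mongo_col.json")
print(" ".join(cmd))
```

<p>But I'd prefer to know whether there is a more idiomatic Airflow way.</p>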
|
<python><mongodb><airflow>
|
2023-01-19 08:21:08
| 1
| 3,918
|
Alex T
|
75,169,392
| 12,454,639
|
Django app not able to find database engine when engine is specified
|
<p>I have tried everything I can think of to run my initial migrations using <code>python manage.py migrate</code> and get my django app up. I am currently getting this error:</p>
<pre><code>Traceback (most recent call last):
  File "manage.py", line 24, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 364, in execute
    output = self.handle(*args, **options)
  File "/usr/lib/python3/dist-packages/django/core/management/commands/dbshell.py", line 22, in handle
    connection.client.runshell()
  File "/usr/lib/python3/dist-packages/django/db/backends/dummy/base.py", line 20, in complain
    raise ImproperlyConfigured("settings.DATABASES is improperly configured. "
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.
</code></pre>
<p>But this is the DATABASES var in my <code>settings.py</code> file:</p>
<pre><code>DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
</code></pre>
<ul>
<li>The db.sqlite3 file is at the location specified in the 'NAME' key.</li>
<li>The app folder is in a subdirectory and uniquely named.</li>
<li>The BASE_DIR var is constructed correctly and not causing the location string in the NAME key to be malformed.</li>
<li>The <code>models.py</code> file is in the /app folder subdirectory.</li>
</ul>
<p>I am completely out of ideas on how to get this to work and considering giving up on making this project altogether.</p>
<p>In case it helps here is my project tree</p>
<pre><code>.
└── randoplant
    ├── db.sqlite3
    ├── __init__.py
    ├── manage.py
    ├── __pycache__
    │   └── __init__.cpython-38.pyc
    └── randoplant_app
        ├── admin.py
        ├── __init__.py
        ├── models.py
        ├── plant.py
        ├── __pycache__
        │   ├── admin.cpython-38.pyc
        │   ├── __init__.cpython-38.pyc
        │   ├── models.cpython-38.pyc
        │   ├── settings.cpython-38.pyc
        │   └── urls.cpython-38.pyc
        ├── settings.py
        ├── urls.py
        └── wsgi.py
</code></pre>
<p>Please help. I am running Django version 2.2.12 and I am in a WSL shell.</p>
|
<python><python-3.x><django>
|
2023-01-19 08:19:50
| 3
| 314
|
Syllogism
|
75,169,381
| 8,855,527
|
Email Validator in Django Ninja
|
<p>I found that Django Ninja uses Pydantic.
I have created a Schema from a Django model:</p>
<pre><code>class UserSchema(ModelSchema):
    class Config:
        model = User
        model_fields = ["username", "first_name", "last_name", "email"]


class CreateUserSchema(UserSchema):
    password: str
</code></pre>
<p>and I used <code>CreateUserSchema</code> in my view, and import <code>EmailStr</code> from <code>Pydantic</code></p>
<pre><code>...
async def register(request, user_data: schemas.CreateUserSchema):
    email = EmailStr(user_data.pop("email"))
    ...
</code></pre>
<p>I want to validate the <code>EmailField</code>, but it does not validate anything and stores whatever value is given in the field.
How can I fix it?</p>
|
<python><pydantic><django-ninja>
|
2023-01-19 08:18:52
| 3
| 577
|
CC7052
|
75,169,350
| 10,437,110
|
How to replace 0 with "NA" and all other values to 0 in Python DataFrame?
|
<p>Consider a dataframe</p>
<pre><code>dict = {'col1': [1, 2, np.nan, 4],
        'col2': [1, 2, 0, 4],
        'col3': [0, 2, 3, 4],
        'col4': [1, 2, 0, np.nan],
        }
df = pd.DataFrame(dict)
</code></pre>
<pre><code>>> df
   col1  col2  col3  col4
0   1.0     1     0   1.0
1   2.0     2     2   2.0
2   NaN     0     3   0.0
3   4.0     4     4   NaN
</code></pre>
<p>I want to replace all 0 with a string "NA" and all the rest with 0.</p>
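<p>For reference, a hedged sketch of the intended transformation (assuming NaN values should stay NaN): chain two <code>mask</code> calls on an object-dtype copy, since an object column can hold both the string <code>"NA"</code> and the integer <code>0</code>.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [1, 2, np.nan, 4],
                   'col2': [1, 2, 0, 4],
                   'col3': [0, 2, 3, 4],
                   'col4': [1, 2, 0, np.nan]})

out = df.astype(object)
out = out.mask(df.notna() & (df != 0), 0)  # non-zero, non-NaN values -> 0
out = out.mask(df == 0, "NA")              # zeros -> "NA"
print(out)
```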
|
<python><pandas><dataframe>
|
2023-01-19 08:15:27
| 2
| 397
|
Ash
|
75,169,333
| 293,594
|
Is there a way to calculate the half range Fourier series in SymPy?
|
<p>For example, if I want the Fourier series of the function <em>f(x) = x(π-x)</em> on [0, π], I can calculate the coefficients of the sine series:</p>
<p><a href="https://i.sstatic.net/vP4GJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vP4GJ.png" alt="enter image description here" /></a></p>
<p>which works by considering <em>f(x)</em> in a <a href="https://en.wikipedia.org/wiki/Half_range_Fourier_series" rel="nofollow noreferrer">half range Fourier series</a>, the half interval [0, π] extended to [-π, 0] by taking the extension of <em>f(x)</em> to be an odd function (so I don't need the cosine terms in the full Fourier series expansion).</p>
<p>Using SymPy, however, I get a cosine series:</p>
<pre><code>import sympy as sp
x = sp.symbols('x', real=True)
f = x * (sp.pi - x)
s = sp.fourier_series(f, (x, 0, sp.pi))
s.truncate(4)
-cos(2*x) - cos(4*x)/4 - cos(6*x)/9 + pi**2/6
</code></pre>
<p>Even constructing a <code>Piecewise</code> function with the correct parity doesn't work:</p>
<pre><code>p = sp.Piecewise((-f, x < 0), (f, x >= 0))
ps = sp.fourier_series(p, (x, 0, sp.pi))
ps.truncate(4)
-cos(2*x) - cos(4*x)/4 - cos(6*x)/9 + pi**2/6
</code></pre>
<p>The cosine series sort-of approximates <em>f(x)</em> but not nearly as well as the sine one. Is there any way to force SymPy to do what I did with paper and pen?</p>
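<p>As a cross-check of the paper-and-pen result, the half-range sine coefficients b_n = (2/π) ∫<sub>0</sub><sup>π</sup> f(x) sin(nx) dx can be computed directly in SymPy (a sketch that sidesteps <code>fourier_series</code> rather than forcing it):</p>

```python
import sympy as sp

x = sp.symbols('x', real=True)
n = sp.symbols('n', integer=True, positive=True)
f = x * (sp.pi - x)

# Half-range sine coefficient over [0, pi]
bn = sp.simplify(2 * sp.integrate(f * sp.sin(n * x), (x, 0, sp.pi)) / sp.pi)
print(bn)  # equivalent to 4*(1 - (-1)**n)/(pi*n**3): 8/(pi*n**3) for odd n, 0 for even n
```

<p>The even coefficients vanish, leaving exactly the sine series computed by hand.</p>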
|
<python><math><sympy>
|
2023-01-19 08:14:11
| 2
| 25,730
|
xnx
|
75,169,313
| 8,176,763
|
python path directing to older version in PATH
|
<p>In Linux when I issue the following command:</p>
<pre><code>echo $PATH
/opt/python/3.10.8/bin/python3:/home/d5291029/.local/bin:/home/d5291029/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin
</code></pre>
<p>I get this. My <code>.bash_profile</code> is this:</p>
<pre><code># .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

PATH=/opt/python/3.10.8/bin/python3:$PATH
</code></pre>
<p>When I source the changes (<code>source .bash_profile</code>) and run:</p>
<pre><code>python3
Python 3.6.8 (default, Jun 14 2022, 12:54:58)
</code></pre>
<p>So even though my python is version <code>3.10</code> , my path still defaults to the system installation version of python which is <code>3.6.8</code>. Why is that ?</p>
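<p>The likely cause is that the <code>PATH</code> entry names the <code>python3</code> binary itself; <code>PATH</code> is a list of <em>directories</em> to search, so the entry should be <code>/opt/python/3.10.8/bin</code>, not <code>/opt/python/3.10.8/bin/python3</code>. A hedged demonstration with a throw-away fake interpreter (since the <code>/opt</code> path is machine-specific):</p>

```shell
# PATH lookup scans directories; an entry naming a file is silently useless.
dir=$(mktemp -d)
printf '#!/bin/sh\necho fake 3.10\n' > "$dir/python3"
chmod +x "$dir/python3"

export PATH="$dir:$PATH"   # prepend the directory, not the binary
hash -r                    # drop any cached command lookup
command -v python3         # now resolves to the fake interpreter in $dir
```

<p>So changing the <code>.bash_profile</code> line to <code>PATH=/opt/python/3.10.8/bin:$PATH</code> and re-sourcing should make <code>python3</code> resolve to 3.10.8.</p>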
|
<python><linux>
|
2023-01-19 08:12:08
| 1
| 2,459
|
moth
|
75,169,285
| 20,589,275
|
How to change numbers in a list to make it monotonically decreasing?
|
<p>I have got a list:</p>
<pre><code>first = [100, 110, 60]
</code></pre>
<p>How can I make it so that, if a number is greater than the one before it, it is reduced to match the preceding number?
For example, the answer should be:</p>
<pre><code>ans = [100, 100, 60]
</code></pre>
<p>The second example:</p>
<pre><code>arr = [60,50,60]
ans = [60, 50, 50]
</code></pre>
<p>The third example:</p>
<pre><code>arr = [20, 100, 150]
ans = [20, 20, 20]
</code></pre>
<p>I have tried the following, but I don't think it's a good idea:</p>
<pre><code>for i in range(len(arr)-1):
    if arr[i] < arr[i+1]:
        answer.append(a[i+1] - 10)
    if arr[i] < arr[i+1]:
        answer.append(a[i])
    if arr[i] < arr[i+1]:
        answer.append(arr[-1])
</code></pre>
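<p>A hedged sketch of a simpler approach: keep a running minimum, so every element is clamped to the smallest value seen so far. This reproduces all three expected outputs above.</p>

```python
def clamp_non_increasing(arr):
    # Each element becomes min(element, everything before it).
    out = []
    current_min = float('inf')
    for value in arr:
        current_min = min(current_min, value)
        out.append(current_min)
    return out

print(clamp_non_increasing([100, 110, 60]))  # [100, 100, 60]
print(clamp_non_increasing([60, 50, 60]))    # [60, 50, 50]
print(clamp_non_increasing([20, 100, 150]))  # [20, 20, 20]
```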
|
<python>
|
2023-01-19 08:08:45
| 6
| 650
|
Proger228
|
75,168,725
| 5,371,582
|
Memory leak in Midas depthmap computation?
|
<p>I'm using Midas much like in <a href="https://huggingface.co/spaces/akhaliq/DPT-Large" rel="nofollow noreferrer">Huggingface's demo</a>.</p>
<p>My issue is that the RAM usage increase at each depth map computation.</p>
<p>Here is the full code.</p>
<pre><code>#!venv/bin/python3

from pathlib import Path

import psutil
import numpy as np
import torch
import cv2


def make_model():
    model_type = "DPT_BEiT_L_512"  # MiDaS v3.1 - Large
    midas = torch.hub.load("intel-isl/MiDaS", model_type)
    device = torch.device("cuda")
    midas.to(device)
    midas.eval()
    midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
    transform = midas_transforms.dpt_transform
    return {"transform": transform,
            "device": device,
            "midas": midas
            }


def inference(cv_image, model):
    """Make the inference."""
    transform = model['transform']
    device = model["device"]
    midas = model["midas"]
    input_batch = transform(cv_image).to(device)
    with torch.no_grad():
        prediction = midas(input_batch)
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=cv_image.shape[:2],
            mode="bilinear",
            align_corners=False,
        ).squeeze()
    output = prediction.cpu().numpy()
    formatted = (output * 255 / np.max(output)).astype('uint8')
    return formatted


# Create Midas "DPT_BEiT_L_512" - MiDaS v3.1 - Large
model = make_model()

image_dir = Path('.') / "all_images"
for image_file in image_dir.iterdir():
    ram_usage = psutil.virtual_memory()[2]
    print("image", ram_usage)
    cv_image = cv2.imread(str(image_file))
    _ = inference(cv_image, model)
</code></pre>
<p>In short:</p>
<ul>
<li>Create the model "DPT_BEiT_L_512"</li>
<li>Define the function <code>inference</code></li>
<li>loop over the images in the directory <code>all_images</code></li>
<li>for each: <code>cv2.imread</code></li>
<li>compute the depthmap (do not keep the result in memory)</li>
</ul>
<p>I see that the RAM usage keeps raising over and over.</p>
<p>Variation:
I've tried to read only one image (so only one <code>cv2.imread</code>) and, in the loop, only add random noise to that image. Up to the random noise, the inference function always receives the same image.
In that case, the RAM usage is stable.</p>
<p>QUESTIONS:</p>
<ul>
<li>Where does the memory leak come from ?</li>
<li>Do I have to "reset" something between two inferences ?</li>
</ul>
<p>EDIT some variations</p>
<p>Variation 1: always the same image.
Replace the <code>iterdir</code> loop with this:</p>
<pre><code>cv_image = cv2.imread("image.jpg")
for i in range(1, 100):
    ram_usage = psutil.virtual_memory()[2]
    print(i, ram_usage)
    _ = get_depthmap(cv_image, model)
</code></pre>
<p>Here you get no memory leak.</p>
<p>Variation 2: do not compute the depth map</p>
<pre><code>for image_file in image_dir.iterdir():
    ram_usage = psutil.virtual_memory()[2]
    print("image", ram_usage)
    cv_image = cv2.imread(str(image_file))
    # _ = get_depthmap(cv_image, model)
</code></pre>
<p>The memory leak does not occur.</p>
<p>I deduce that <code>cv2.imread</code> itself does not cause the leak.</p>
<p>Variation 3: same image, random noise:</p>
<pre><code>cv_image = cv2.imread("image.jpg")
for i in range(1, 100):
    ram_usage = psutil.virtual_memory()[2]
    print(i, ram_usage)
    noise = np.random.randn(
        cv_image.shape[0], cv_image.shape[1], cv_image.shape[2]) * 20
    noisy_img = cv_image + noise
    noisy_img = np.clip(noisy_img, 0, 255)
    _ = get_depthmap(noisy_img, model)
</code></pre>
<p>No leak in this version.</p>
|
<python><opencv><pytorch><memory-leaks>
|
2023-01-19 07:02:27
| 0
| 705
|
Laurent Claessens
|
75,168,665
| 794,535
|
"Unsupported number of image dimensions" while using image_utils from Transformers
|
<p>I'm trying to follow this HuggingFace tutorial <a href="https://huggingface.co/blog/fine-tune-vit" rel="nofollow noreferrer">https://huggingface.co/blog/fine-tune-vit</a></p>
<p>Using their "beans" dataset everything works, but if I use my own dataset with my own images, I'm hitting "Unsupported number of image dimensions". I'm wondering if anyone here would have pointers for how to debug this.</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_2042949/883871373.py in <module>
----> 1 train_results = trainer.train()
2 trainer.save_model()
3 trainer.log_metrics("train", train_results.metrics)
4 trainer.save_metrics("train", train_results.metrics)
5 trainer.save_state()
~/miniconda3/lib/python3.9/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1532 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1533 )
-> 1534 return inner_training_loop(
1535 args=args,
1536 resume_from_checkpoint=resume_from_checkpoint,
~/miniconda3/lib/python3.9/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1754
1755 step = -1
-> 1756 for step, inputs in enumerate(epoch_iterator):
1757
1758 # Skip past any already trained steps if resuming training
~/miniconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py in __next__(self)
626 # TODO(https://github.com/pytorch/pytorch/issues/76750)
...
--> 119 raise ValueError(f"Unsupported number of image dimensions: {image.ndim}")
120
121 if image.shape[first_dim] in (1, 3):
ValueError: Unsupported number of image dimensions: 2
</code></pre>
<p><a href="https://github.com/huggingface/transformers/blob/main/src/transformers/image_utils.py" rel="nofollow noreferrer">https://github.com/huggingface/transformers/blob/main/src/transformers/image_utils.py</a></p>
<p>I tried looking at the shape of my data and theirs and it's the same.</p>
<pre><code>$ prepared_ds['train'][0:2]['pixel_values'].shape
torch.Size([2, 3, 224, 224])
</code></pre>
<p>I followed the stack trace and found that the error was in the <code>infer_channel_dimension_format</code> function, so I wrote this filth to find the problematic image:</p>
<pre><code>from transformers.image_utils import infer_channel_dimension_format

try:
    for i, img in enumerate(prepared_ds["train"]):
        infer_channel_dimension_format(img["pixel_values"])
except ValueError as ve:
    print(i+1)
</code></pre>
<p>When I inspected that image, I saw that its not RGB like the others.</p>
<pre><code>$ ds["train"][8]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=390x540>,
'image_file_path': '/data/alamy/img/00000/000001069.jpg',
'labels': 0}
</code></pre>
<p>So the solution for me was to add a <code>convert('RGB')</code> to my transform:</p>
<pre><code>def transform(example_batch):
    # Take a list of PIL images and turn them to pixel values
    inputs = feature_extractor([x.convert("RGB") for x in example_batch['image']], return_tensors='pt')

    # Don't forget to include the labels!
    inputs['labels'] = example_batch['labels']
    return inputs
</code></pre>
<p>I will try to find some time to come back here and clean this up with a fully reproducible example. (Sorry)</p>
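<p>For anyone hitting the same error, the mismatch can be reproduced without a dataset: grayscale images (PIL mode <code>L</code>) become 2-D arrays, while the image utilities expect 3-D. A small hedged illustration:</p>

```python
import numpy as np
from PIL import Image

gray = Image.new("L", (8, 8))   # grayscale, like the offending JPEG above
rgb = gray.convert("RGB")       # the fix applied in the transform

print(np.asarray(gray).ndim)  # 2 -> triggers "Unsupported number of image dimensions: 2"
print(np.asarray(rgb).ndim)   # 3 -> (H, W, 3), what the feature extractor expects
```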
|
<python><numpy><machine-learning><pytorch><huggingface-transformers>
|
2023-01-19 06:53:46
| 3
| 400
|
Pablo Mendes
|
75,168,484
| 7,218,871
|
PIL Python: A rounded rectangle image converted to numpy array is all zeros
|
<pre><code>a_rect = Image.new('RGBA', (400, 100))
draw = ImageDraw.Draw(a_rect)
draw.rounded_rectangle((0, 0, 400, 100),
                       outline=None,
                       radius=75,
                       fill='blue'
                       )
a_rect
</code></pre>
<p><a href="https://i.sstatic.net/dyS2M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dyS2M.png" alt="enter image description here" /></a></p>
<p>When i convert the above image to array using <code>np.asarray(a_rect)</code> I get following:</p>
<pre><code>array([[[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        ...,
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]],

       [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        ...,
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]],

       [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        ...,
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]],

       ...,

       [[0, 0, 0, 0],
        [0, 0, 0, 0],
        ...,
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]], dtype=uint8)
</code></pre>
<p>This is not the expected behaviour; I am unable to manipulate this array or apply animation to it due to it being all zeros.</p>
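<p>It turns out the array may not actually be all zeros: the truncated NumPy repr only shows the corner pixels, which genuinely are transparent. A hedged check on interior pixels (radius reduced to 40 so the corners clearly fit in the 100px height; <code>rounded_rectangle</code> needs Pillow ≥ 8.2, which the question already uses):</p>

```python
import numpy as np
from PIL import Image, ImageDraw

a_rect = Image.new('RGBA', (400, 100))
draw = ImageDraw.Draw(a_rect)
draw.rounded_rectangle((0, 0, 399, 99), radius=40, fill='blue')

arr = np.asarray(a_rect)
print(arr[0, 0])      # corner pixel: fully transparent zeros
print(arr[50, 200])   # centre pixel: opaque blue
print(arr.sum() > 0)  # True
```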
|
<python><python-imaging-library>
|
2023-01-19 06:30:08
| 0
| 620
|
Abhishek Jain
|
75,168,470
| 7,657,180
|
Read numbers on image using OCR python
|
<p>I am trying to extract numbers from images using OpenCV in Python and Tesseract. Here's my attempt, but it returns nothing; the code doesn't return the expected numbers.</p>
<pre><code>import fitz, pytesseract, os, re
import cv2

sTemp = "Number.png"
directory = '.\MyFolder'

def useMagick(img):
    pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
    command = 'magick convert {} -resize 1024x640 -density 300 -quality 100 {}'.format(img, sTemp)
    os.system(command)

def readNumber(img):
    img = cv2.imread(img)
    gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    txt = pytesseract.image_to_string(gry)
    print(txt)
    try:
        return re.findall(r'\d+\s?\/\s?(\d+)', txt)[0]
    except:
        blur = cv2.GaussianBlur(gry, (3,3), 0)
        txt = pytesseract.image_to_string(blur)
        try:
            return re.findall(r'\d+\s?\/\s?(\d+)', txt)[0]
        except:
            return 'REVIEW'

sPath = os.path.join(directory, sTemp)
useMagick(sPath)
x = readNumber(sPath)
print(x)
</code></pre>
<p>Here's sample of the images
<a href="https://i.sstatic.net/lRwxM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lRwxM.png" alt="enter image description here" /></a></p>
<p>The code doesn't return any digits. How can I improve the quality of such an image to be able to extract the numbers?</p>
|
<python><opencv><imagemagick><ocr><tesseract>
|
2023-01-19 06:28:32
| 1
| 9,608
|
YasserKhalil
|
75,168,443
| 10,062,025
|
How to fix chromedriver status code -6 in colab
|
<p>I've been running this in collab without a trouble until just today. No changes was made to the code, however the error just appear today. The error shown:</p>
<pre><code>Message: Service chromedriver unexpectedly exited. Status code was: -6
</code></pre>
<p>My code is as follow</p>
<pre><code># install chromium, its driver, and selenium
!apt-get update
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
!pip install selenium-wire

# set options to be headless, ..
from seleniumwire import webdriver

options = webdriver.ChromeOptions()
options.set_capability(
    "goog:loggingPrefs", {"performance": "ALL", "browser": "ALL"}
)
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')

# open it, go to a website, and get results
driver = webdriver.Chrome('chromedriver', options=options)
driver.get('url')
</code></pre>
<p>Please do help. Thank you.</p>
|
<python><selenium-chromedriver>
|
2023-01-19 06:23:48
| 0
| 333
|
Hal
|
75,168,396
| 13,061,012
|
How to create a function in Python to offset a datetime object by a certain number of months
|
<p>How can I create a function in Python that takes a datetime object and an integer as inputs, and returns a new datetime object with the desired offset?</p>
<p>I tried before with this:</p>
<pre><code>from datetime import datetime, timedelta

def get_offset_datetime(source_date_time, month_offset):
    year, month, day = (source_date_time.year + month_offset // 12,
                        source_date_time.month + month_offset % 12,
                        source_date_time.day)
    if month > 12:
        year += 1
        month -= 12
    elif month < 1:
        year -= 1
        month += 12
    offset_datetime = datetime(year, month, day, source_date_time.hour, source_date_time.minute, source_date_time.second)
    return offset_datetime
</code></pre>
<p>but it raise error for some dates. for example:</p>
<pre><code>source_date_time = datetime(2022,1,31)
month_offset = 1
offset_datetime = get_offset_datetime(source_date_time, month_offset)
print(source_date_time)
print(offset_datetime)
</code></pre>
<p>I expected the code print this:</p>
<pre><code>2022-02-28 00:00:00
</code></pre>
<p>but i got this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/user/Desktop/exam/main.py", line 42, in <module>
offset_datetime = get_offset_datetime2(source_date_time, month_offset)
File "/home/user/Desktop/exam/main.py", line 27, in get_offset_datetime2
offset_datetime = datetime(year, month, day, source_date_time.hour, source_date_time.minute, source_date_time.second)
ValueError: day is out of range for month
</code></pre>
<p>Please suggest another, cleaner way to do this.</p>
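<p>A common clean approach (a sketch using only the standard library; the helper name is mine) is to clamp the day to the last valid day of the target month with <code>calendar.monthrange</code>:</p>

```python
import calendar
from datetime import datetime

def add_months(dt, months):
    total = dt.month - 1 + months
    year = dt.year + total // 12
    month = total % 12 + 1
    # Clamp e.g. Jan 31 + 1 month to Feb 28 (or 29 in leap years).
    day = min(dt.day, calendar.monthrange(year, month)[1])
    return dt.replace(year=year, month=month, day=day)

print(add_months(datetime(2022, 1, 31), 1))   # 2022-02-28 00:00:00
print(add_months(datetime(2022, 1, 31), -1))  # 2021-12-31 00:00:00
```

<p>If a third-party dependency is acceptable, <code>dateutil.relativedelta</code> performs the same end-of-month clamping.</p>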
|
<python><datetime>
|
2023-01-19 06:17:02
| 1
| 1,108
|
Hossein Vejdani
|
75,168,219
| 15,632,586
|
How do we create GV files with non-overlapping subgraphs from a GV file?
|
<h3>Graph</h3>
<p>I am having a GV file containing 3 subgraphs:</p>
<ol>
<li><code>cluster_1</code></li>
<li><code>cluster_2</code></li>
<li><code>cluster_3</code></li>
</ol>
<p>Source of <code>Final_Graph.gv</code>:</p>
<pre><code>digraph Final_Graph {
    graph [center=true rankdir=LR ratio=compress size="15,10"]
    a
    b
    c
    d
    a -> b [label = 1]
    a -> c [label = 2]
    a -> d [label = 3]
    b -> d [label = 4]
    c -> d [label = 5]
    subgraph cluster_1{
        color=lightgrey style=filled
        label="A"
        a
        b
    }
    subgraph cluster_2{
        color=lightgrey style=filled
        label="B"
        a
        b
    }
    subgraph cluster_3{
        color=lightgrey style=filled
        label="C"
        c
        d
    }
}
</code></pre>
<p>Rendered:</p>
<p><img src="https://i.sstatic.net/rUB2D.png" alt="rendered GV" /></p>
<h3>Wanted</h3>
<p>I am looking to create other GV files with subgraphs being non-overlapping (that is subgraphs with no similar nodes, so for this case, the first file could have clusters 1 and 3, and the second file could have clusters 2 and 3).</p>
<h3>Code</h3>
<p>I am using this function in Python to do this task:</p>
<pre><code>import networkx as nx
import itertools

def draw_graph_combinations():
    # Load the original graph
    G = nx.drawing.nx_agraph.read_dot("Final_Graph.gv")

    # Create an empty dictionary to store the subgraphs
    subgraphs = {}

    # Iterate over the edges of the graph
    for u, v, data in G.edges(data=True):
        label = data.get("label")
        if label not in subgraphs:
            subgraphs[label] = nx.DiGraph()

    for node in G.nodes:
        # Add the node to each subgraph
        for label, subgraph in subgraphs.items():
            subgraph.add_node(node)

    for label, subgraph in subgraphs.items():
        for edge in G.edges:
            subgraph.add_edge(edge[0], edge[1])

    # Get all combinations of subgraphs
    combinations = itertools.combinations(subgraphs.items(), len(subgraphs))

    # Iterate over the combinations
    for i, subgraph_items in enumerate(combinations):
        combined_subgraph = nx.DiGraph()
        for label, subgraph in subgraph_items:
            combined_subgraph = nx.compose(combined_subgraph, subgraph)
        nx.drawing.nx_agraph.write_dot(combined_subgraph, f"combined_subgraph_{i}.gv")
</code></pre>
<h3>Issue</h3>
<p>However, when I run this function in Python, the files printed out only contains the nodes and edges of the original file, without the subgraphs being shown.</p>
<h3>Question</h3>
<p>Is there any method in Python to divide this GV file into other files with non-overlapping subgraphs?</p>
|
<python><graphviz>
|
2023-01-19 05:51:09
| 1
| 451
|
Hoang Cuong Nguyen
|
75,168,100
| 2,616,753
|
How can I define a Python Callable type for annotating a decorator, where the decorator uses some parameters and absorbs the rest in **kw?
|
<p>I'm looking to annotate a decorator in Python 3.11, where</p>
<ol>
<li>the decorated functions use keyword-only parameters,</li>
<li>the decorated functions share a small number of keyword-only parameters, say <code>a</code> & <code>b</code>,</li>
<li>the decorator's inner function uses those parameters, and</li>
<li>each function may specify the number of additional keyword parameters (which, besides forwarding them along, the decorator does not care about).</li>
</ol>
<p>How can I annotate this in a way that will satisfy mypy 0.991?</p>
<p>I expected the following would work:</p>
<pre><code>import functools
from typing import Protocol


class MyCallable(Protocol):
    async def __call__(self, *, a: int, b: str, **kw):
        ...


def deco(f: MyCallable) -> MyCallable:
    @functools.wraps(f)
    async def wrapper(*, a: int, b: str, **kw):
        return await f(a=a, b=b, **kw)
    return wrapper


@deco
async def func1(*, a: int, b: str):
    """
    error: Argument 1 to "deco" has incompatible type "Callable[[NamedArg(int, 'a'), NamedArg(str, 'b')], Coroutine[Any, Any, Any]]"; expected "MyCallable" [arg-type]
    """


@deco
async def func2(*, a: int, b: str, c: int):
    """
    error: Argument 1 to "deco" has incompatible type "Callable[[NamedArg(int, 'a'), NamedArg(str, 'b'), NamedArg(int, 'c')], Coroutine[Any, Any, Any]]"; expected "MyCallable" [arg-type]
    """


@deco
async def func3(*, a: int, b: str, d: int, e: int):
    """
    error: Argument 1 to "deco" has incompatible type "Callable[[NamedArg(int, 'a'), NamedArg(str, 'b'), NamedArg(int, 'd'), NamedArg(int, 'e')], Coroutine[Any, Any, Any]]"; expected "MyCallable" [arg-type]
    """
</code></pre>
<p>but all three cases give errors (see docstrings).</p>
<p>Adding <code>**kw</code> to <code>func1</code> does the trick there:</p>
<pre><code>@deco
async def func1(*, a: int, b: str, **kw):
    ...
</code></pre>
<p>The same move fails (expectedly) for <code>func2</code>:</p>
<blockquote>
<p>error: Argument 1 to "deco" has incompatible type "Callable[[NamedArg(int, 'a'), NamedArg(str, 'b'), NamedArg(int, 'c'), KwArg(Any)], Coroutine[Any, Any, Any]]"; expected "MyCallable" [arg-type]</p>
</blockquote>
<p>I suspect the issue is due to my protocol "saying" it allows any number of keyword parameters but my funcs not actually accepting that. However, since <code>deco</code> neither injects nor removes parameters, I naively assumed mypy would see that a+b+kw perfectly overlaps the function signature and we'd be good to go.</p>
<hr />
<p>The following works in the general case:</p>
<pre><code>MyCallable: TypeAlias = Callable[..., T]
</code></pre>
<p>but I'm not happy with the amount of information this discards. I.e., since the decorator relies on <code>a</code> & <code>b</code>, I want to be warned during the static analysis pass if the decorated function is missing either of those.</p>
<hr />
<p>Is this possible?</p>
<p>If not, what's the best alternative? (I have a preference for leaving the <code>func</code> signatures as they are, with no <code>**kw</code>, if possible.)</p>
|
<python><python-3.x><python-decorators><mypy><python-typing>
|
2023-01-19 05:33:24
| 0
| 2,083
|
chris
|
75,168,079
| 9,090,340
|
filter data based on a condition in json
|
<p>I am working on a requirement where I need to filter JSON data into a data frame in Python when a condition is satisfied. When I use the code below, I run into the following error. The condition here is arbitrary: I check whether the country code is "US", and if so I want the data frame populated with all the records where the country is the United States.</p>
<p><em><strong>Code:</strong></em></p>
<pre><code>import json

data = {
    "demographic": [
        {
            "id": 1,
            "country": {
                "code": "AU",
                "name": "Australia"
            },
            "state": {
                "name": "New South Wales"
            },
            "location": {
                "time_zone": {
                    "name": "(UTC+10:00) Canberra, Melbourne, Sydney",
                    "standard_name": "AUS Eastern Standard Time",
                    "symbol": "AUS Eastern Standard Time"
                }
            },
            "address_info": {
                "address_1": "",
                "address_2": "",
                "city": "",
                "zip_code": ""
            }
        },
        {
            "id": 2,
            "country": {
                "code": "AU",
                "name": "Australia"
            },
            "state": {
                "name": "New South Wales"
            },
            "location": {
                "time_zone": {
                    "name": "(UTC+10:00) Canberra, Melbourne, Sydney",
                    "standard_name": "AUS Eastern Standard Time",
                    "symbol": "AUS Eastern Standard Time"
                }
            },
            "address_info": {
                "address_1": "",
                "address_2": "",
                "city": "",
                "zip_code": ""
            }
        },
        {
            "id": 3,
            "country": {
                "code": "US",
                "name": "United States"
            },
            "state": {
                "name": "Illinois"
            },
            "location": {
                "time_zone": {
                    "name": "(UTC-06:00) Central Time (US & Canada)",
                    "standard_name": "Central Standard Time",
                    "symbol": "Central Standard Time"
                }
            },
            "address_info": {
                "address_1": "",
                "address_2": "",
                "city": "",
                "zip_code": "60611"
            }
        }
    ]
}

jd = json.loads(data)
df = [cnt for cnt in jd["demographic"] if cnt["country"]["code"] == "US"]
print(df)
</code></pre>
<p><em><strong>Error:</strong></em></p>
<pre><code>TypeError: string indices must be integers
</code></pre>
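<p>For what it's worth, the error most likely comes from mixing up a JSON <em>string</em> and an already-parsed <code>dict</code>: <code>json.loads</code> expects a string, and <code>data</code> here is already a dictionary, so no parsing call is needed. A small hedged sketch of the distinction:</p>

```python
import json

payload = '{"demographic": [{"id": 3, "country": {"code": "US"}}, {"id": 1, "country": {"code": "AU"}}]}'

parsed = json.loads(payload)       # loads() takes a JSON *string*...
us_only = [d for d in parsed["demographic"] if d["country"]["code"] == "US"]
print([d["id"] for d in us_only])  # [3]

# ...whereas `data` in the question is already a dict, so it can be
# filtered directly, with no json.loads call at all.
```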
|
<python><json><pandas><dataframe>
|
2023-01-19 05:30:31
| 1
| 937
|
paone
|
75,167,963
| 14,923,024
|
polars slower than numpy?
|
<p>I was thinking about using <code>polars</code> in place of <code>numpy</code> in a parsing problem where I turn a structured text file into a character table and operate on different columns. However, it seems that <code>polars</code> is about 5 times slower than <code>numpy</code> in most operations I'm performing. I was wondering why that's the case and whether I'm doing something wrong given that <code>polars</code> is supposed to be faster.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>import requests
import numpy as np
import polars as pl
# Download the text file
text = requests.get("https://files.rcsb.org/download/3w32.pdb").text
# Turn it into a 2D array of characters
char_tab_np = np.array(text.splitlines()).view(dtype=(str,1)).reshape(-1, 80)
# Create a polars DataFrame from the numpy array
char_tab_pl = pl.DataFrame(char_tab_np)
# Sort by first column with numpy
char_tab_np[np.argsort(char_tab_np[:,0])]
# Sort by first column with polars
char_tab_pl.sort(by="column_0")
</code></pre>
<p>Using <code>%%timeit</code> in <code>Jupyter</code>, the <code>numpy</code> sorting takes about <strong>320 microseconds</strong>, whereas the <code>polars</code> sort takes about <strong>1.3 milliseconds</strong>, i.e. about five times slower.</p>
<p>I also tried <code>char_tab_pl.lazy().sort(by="column_0").collect()</code>, but it had no effect on the duration.</p>
<p>Another example (Take all rows where the first column is equal to 'A'):</p>
<pre class="lang-py prettyprint-override"><code># with numpy
%%timeit
char_tab_np[char_tab_np[:, 0] == "A"]
</code></pre>
<pre class="lang-py prettyprint-override"><code># with polars
%%timeit
char_tab_pl.filter(pl.col("column_0") == "A")
</code></pre>
<p>Again, <code>numpy</code> takes 226 microseconds, whereas <code>polars</code> takes 673 microseconds, about three times slower.</p>
<h2>Update</h2>
<p>Based on the comments I tried two other things:</p>
<p><strong>1. Making the file 1000 times larger to see whether polars performs better on larger data.</strong></p>
<p>Results: <code>numpy</code> was still about 2 times faster (1.3 ms vs. 2.1 ms). In addition, creating the character array took <code>numpy</code> about 2 seconds, whereas <code>polars</code> needed about <strong>2 minutes</strong> to create the dataframe, i.e. 60 times slower.</p>
<p>To re-produce, just add <code>text *= 1000</code> before creating the numpy array in the code above.</p>
<p><strong>2. Casting to integer.</strong></p>
<p>For the original (smaller) file, casting to int sped up the process for both <code>numpy</code> and <code>polars</code>. The filtering in <code>numpy</code> was still about 5 times faster than <code>polars</code> (30 microseconds vs. 120), whereas the sorting times became more similar (150 microseconds for numpy vs. 200 for polars).</p>
<p>However, for the large file, <code>polars</code> was marginally faster than <code>numpy</code>, but the huge instantiation time makes it worthwhile only if the dataframe is to be queried thousands of times.</p>
|
<python><numpy><python-polars>
|
2023-01-19 05:08:38
| 1
| 457
|
AAriam
|
75,167,941
| 326,389
|
Python inequality comparison dealing with floating point precision
|
<p>I need to compare floating point numbers with inequalities:</p>
<pre class="lang-py prettyprint-override"><code>if x >= y:
</code></pre>
<p>However, due to floating point precision issues, sometimes this fails when it should succeed (<code>x = 0.21</code> and <code>y= 0.21000000000000002</code>). I was thinking to create an epsilon:</p>
<pre class="lang-py prettyprint-override"><code>epsilon = 0.000000001
if x >= y - epsilon:
</code></pre>
<p>I'd rather use a standard mechanism for this. Python has a <code>math.isclose</code> function that works for equality, but I couldn't find anything for inequality. So I have to write something like this:</p>
<pre class="lang-py prettyprint-override"><code>import math
if x > y or math.isclose(x, y):
</code></pre>
<p>I have to do this a <em>ton</em> in my application... easy enough, I'll just create a function. My question is if there's a standard way to deal with inequalities and floats? Is there a <code>numpy.greater_or_equal(x, y)</code> type function?</p>
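<p>There doesn't seem to be a dedicated stdlib or NumPy function for a tolerant <code>>=</code>; the usual route is to wrap the pattern once (for arrays, combining <code>></code> with <code>np.isclose</code> plays the same role element-wise). A hedged sketch of such a helper:</p>

```python
import math

def ge(x, y, rel_tol=1e-9, abs_tol=0.0):
    """Approximate >=: true when x > y or x is within tolerance of y."""
    return x > y or math.isclose(x, y, rel_tol=rel_tol, abs_tol=abs_tol)

print(ge(0.21, 0.21000000000000002))  # True (a plain >= comparison is False here)
print(ge(0.1, 0.2))                   # False
```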
|
<python><python-3.x><numpy><floating-accuracy><inequality>
|
2023-01-19 05:04:43
| 1
| 52,820
|
at.
|
75,167,840
| 758,982
|
pyspark dataframe schema, not able to set nullable false for csv files
|
<p>I am trying to load a CSV file using PySpark.
I am supplying my own schema with some columns set to nullable false, yet when I print the schema they still show as nullable true.
I checked the file data; there are no null entries for the columns that are nullable false.</p>
<p><strong>Code</strong></p>
<pre><code>from pyspark.sql.types import *

udemy_comments_file = '/Users/harbeerkadian/Documents/workspace/learn-spark/source_data/udemy/comments_spark.csv'
schema = StructType([StructField("id", StringType(), False),
                     StructField("course_id", StringType(), True),
                     StructField("rate", DoubleType(), True),
                     StructField("date", TimestampType(), True),
                     StructField("display_name", StringType(), True),
                     StructField("comment", StringType(), True),
                     StructField("new_id", StringType(), True)])

comments_df = spark.read.format('csv').option('header', 'true').schema(schema).load(udemy_comments_file)
comments_df.printSchema()
print("non null record count for id", comments_df.filter(comments_df.id.isNull()).count())
</code></pre>
<p><strong>output</strong></p>
<pre><code>root
|-- id: string (nullable = true)
|-- course_id: string (nullable = true)
|-- rate: double (nullable = true)
|-- date: timestamp (nullable = true)
|-- display_name: string (nullable = true)
|-- comment: string (nullable = true)
|-- new_id: string (nullable = true)
non null record count for id 0
</code></pre>
<p>Ideally the id column's nullable property should be false, as there are zero null records for it.</p>
|
<python><dataframe><csv><pyspark><schema>
|
2023-01-19 04:45:32
| 2
| 394
|
Harbeer Kadian
|
75,167,811
| 17,101,330
|
Unwanted additional/current matplotlib window while embedding in tkinter gui
|
<p>I'm plotting streamed data with tkinter but my app opens the current plot in an additional window:</p>
<p><a href="https://i.sstatic.net/kR4a5.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kR4a5.jpg" alt="" /></a></p>
<p>My App looks something like this:</p>
<pre><code>import tkinter as tk
from tkinter import ttk
from random import randint
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
class App(tk.Tk):
def __init__(self):
super().__init__()
self.button_run = tk.Button(master=self, text='run', bg='grey', command=self.run)
self.button_run.pack()
self.fig = plt.Figure()
self.fig, self.axes_dict = plt.subplot_mosaic([['o', 'o'], ['_', '__']])
self.canvas = FigureCanvasTkAgg(figure=self.fig, master=self)
self.canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
self.canvas.draw_idle()
self.fig.canvas.flush_events()
def run(self):
S = Streamer(parent=self)
S.start()
</code></pre>
<p>And I stream data like this:</p>
<pre><code>class Streamer:
def __init__(self, parent):
self.parent = parent
def start(self):
# plot random data
for x in range(6):
self.parent.axes_dict['o'].add_patch(Rectangle((x, randint(x, x*2)), height=0.4, width=0.4))
self.parent.axes_dict['o'].relim()
self.parent.axes_dict['o'].autoscale_view(True, True, True)
self.parent.fig.canvas.draw_idle()
self.parent.fig.canvas.flush_events()
plt.pause(0.4)
</code></pre>
<p>Starting the app:</p>
<pre><code>if __name__ == '__main__':
A = App()
A.mainloop()
</code></pre>
<p>How can I close this matplotlib window and why does it appear?</p>
|
<python><matplotlib><tkinter>
|
2023-01-19 04:39:41
| 1
| 530
|
jamesB
|
75,167,660
| 11,065,874
|
why two equal ranges are different objects in cpython?
|
<p>I am wondering why</p>
<pre><code>a = "abcdghf23"
b = "abcdghf23"
c = range(100)
d = range(100)
e = []
f = []
</code></pre>
<p><code>a is b</code> returns True</p>
<p>while <code>c is d</code> returns False</p>
<p>and <code>e is f</code> returns False</p>
<p>Actually, the question is why the ids of <code>a</code> and <code>b</code> are the same, while the ids of the rest are different.</p>
<p>They are all iterables.</p>
<p><code>a</code>, <code>b</code>, <code>c</code> and <code>d</code> are immutable;</p>
<p><code>e</code> and <code>f</code> are mutable.</p>
<p>I cannot figure out the pattern or the cause of this.</p>
|
<python><object><cpython>
|
2023-01-19 04:06:22
| 0
| 2,555
|
Amin Ba
|
75,167,317
| 134,044
|
Make Pydantic BaseModel fields optional including sub-models for PATCH
|
<p>As already asked in <em><strong>similar</strong></em> questions, I want to support <code>PATCH</code> operations for a FastApi application where the caller can specify as many or as few fields as they like, of a Pydantic <code>BaseModel</code> <em><strong>with sub-models</strong></em>, so that efficient <code>PATCH</code> operations can be performed, without the caller having to supply an entire valid model just in order to update two or three of the fields.</p>
<p>I've discovered there are <em><strong>2 steps</strong></em> in Pydantic <code>PATCH</code> from the <a href="https://fastapi.tiangolo.com/tutorial/body-updates/#partial-updates-with-patch" rel="nofollow noreferrer">tutorial</a> that <em><strong>don't support sub-models</strong></em>. However, Pydantic is far too good for me to criticise it for something that it seems can be built using the tools that Pydantic provides. This question is to request implementation of those 2 things <em><strong>while also supporting sub-models</strong></em>:</p>
<ol>
<li>generate a new DRY <code>BaseModel</code> with all fields optional</li>
<li>implement deep copy with update of <code>BaseModel</code></li>
</ol>
<p>These problems are already recognised by Pydantic.</p>
<ul>
<li>There is <a href="https://github.com/pydantic/pydantic/discussions/3089" rel="nofollow noreferrer">discussion</a> of a class based solution to the optional model</li>
<li>And there <a href="https://github.com/pydantic/pydantic/issues/4177" rel="nofollow noreferrer">two</a> <a href="https://github.com/pydantic/pydantic/issues/3785" rel="nofollow noreferrer">issues</a> open on the deep copy with update</li>
</ul>
<p>A <em><strong>similar</strong></em> <a href="https://stackoverflow.com/q/67699451">question</a> has been asked one or two times here on SO and there are some great answers with different approaches to generating an all-fields optional version of the nested <code>BaseModel</code>. After considering them all <a href="https://stackoverflow.com/a/72365032">this particular answer</a> by <a href="https://stackoverflow.com/users/10416012/ziur-olpa">Ziur Olpa</a> seemed to me to be the best, providing a function that takes the existing model with optional and mandatory fields, and returning a new model with <em>all fields optional</em>: <a href="https://stackoverflow.com/a/72365032">https://stackoverflow.com/a/72365032</a></p>
<p>The beauty of this approach is that you can hide the (actually quite compact) little function in a library and just use it as a dependency so that it appears in-line in the path operation function and there's no other code or boilerplate.</p>
<p>But the implementation provided in the previous answer did not take the step of dealing with sub-objects in the <code>BaseModel</code> being patched.</p>
<p><strong>This question therefore requests an improved implementation of the all-fields-optional function that also deals with sub-objects, as well as a deep copy with update.</strong></p>
<p>I have a simple example as a demonstration of this use-case, which although aiming to be simple for demonstration purposes, also includes a number of fields to more closely reflect the real world examples we see. Hopefully this example provides a test scenario for implementations, saving work:</p>
<pre><code>import logging
from datetime import datetime, date
from collections import defaultdict
from pydantic import BaseModel
from fastapi import FastAPI, HTTPException, status, Depends
from fastapi.encoders import jsonable_encoder
app = FastAPI(title="PATCH demo")
logging.basicConfig(level=logging.DEBUG)
class Collection:
collection = defaultdict(dict)
def __init__(self, this, that):
logging.debug("-".join((this, that)))
self.this = this
self.that = that
def get_document(self):
document = self.collection[self.this].get(self.that)
if not document:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Not Found",
)
logging.debug(document)
return document
def save_document(self, document):
logging.debug(document)
self.collection[self.this][self.that] = document
return document
class SubOne(BaseModel):
original: date
verified: str = ""
source: str = ""
incurred: str = ""
reason: str = ""
attachments: list[str] = []
class SubTwo(BaseModel):
this: str
that: str
amount: float
plan_code: str = ""
plan_name: str = ""
plan_type: str = ""
meta_a: str = ""
meta_b: str = ""
meta_c: str = ""
class Document(BaseModel):
this: str
that: str
created: datetime
updated: datetime
sub_one: SubOne
sub_two: SubTwo
the_code: str = ""
the_status: str = ""
the_type: str = ""
phase: str = ""
process: str = ""
option: str = ""
@app.get("/endpoint/{this}/{that}", response_model=Document)
async def get_submission(this: str, that: str) -> Document:
collection = Collection(this=this, that=that)
return collection.get_document()
@app.put("/endpoint/{this}/{that}", response_model=Document)
async def put_submission(this: str, that: str, document: Document) -> Document:
collection = Collection(this=this, that=that)
return collection.save_document(jsonable_encoder(document))
@app.patch("/endpoint/{this}/{that}", response_model=Document)
async def patch_submission(
document: Document,
# document: optional(Document), # <<< IMPLEMENT optional <<<
this: str,
that: str,
) -> Document:
collection = Collection(this=this, that=that)
existing = collection.get_document()
existing = Document(**existing)
update = document.dict(exclude_unset=True)
updated = existing.copy(update=update, deep=True) # <<< FIX THIS <<<
updated = jsonable_encoder(updated)
collection.save_document(updated)
return updated
</code></pre>
<p>This example is a working FastAPI application, following the tutorial, and can be run with <code>uvicorn example:app --reload</code>. Except it doesn't work, because there's no all-optional fields model, and Pydantic's deep copy with update actually <em><strong>overwrites</strong></em> sub-models rather than <em><strong>updating</strong></em> them.</p>
<p>In order to test it the following Bash script can be used to run <code>curl</code> requests. Again I'm supplying this just to hopefully make it easier to get started with this question.
Just comment out the other commands each time you run it so that the command you want is used.
To demonstrate this initial state of the example app working you would run <code>GET</code> (expect 404), <code>PUT</code> (document stored), <code>GET</code> (expect 200 and same document returned), <code>PATCH</code> (expect 200), <code>GET</code> (expect 200 and updated document returned).</p>
<pre><code>host='http://127.0.0.1:8000'
path="/endpoint/A123/B456"
method='PUT'
data='
{
"this":"A123",
"that":"B456",
"created":"2022-12-01T01:02:03.456",
"updated":"2023-01-01T01:02:03.456",
"sub_one":{"original":"2022-12-12","verified":"Y"},
"sub_two":{"this":"A123","that":"B456","amount":0.88,"plan_code":"HELLO"},
"the_code":"BYE"}
'
# method='PATCH'
# data='{"this":"A123","that":"B456","created":"2022-12-01T01:02:03.456","updated":"2023-01-02T03:04:05.678","sub_one":{"original":"2022-12-12","verified":"N"},"sub_two":{"this":"A123","that":"B456","amount":123.456}}'
method='GET'
data=''
if [[ -n "$data" ]]; then data=" --data '$data'"; fi
curl="curl -K curlrc -X $method '$host$path' $data"
echo $curl >&2
eval $curl
</code></pre>
<p>This <code>curlrc</code> will need to be co-located to ensure the content type headers are correct:</p>
<pre><code>--cookie "_cookies"
--cookie-jar "_cookies"
--header "Content-Type: application/json"
--header "Accept: application/json"
--header "Accept-Encoding: compress, gzip"
--header "Cache-Control: no-cache"
</code></pre>
<p>So what I'm looking for is the implementation of <code>optional</code> that is commented out in the code, and a fix for <code>existing.copy</code> with the <code>update</code> parameter, that will enable this example to be used with <code>PATCH</code> calls that omit otherwise mandatory fields.
The implementation does not have to conform precisely to the commented out line, I just provided that based on <a href="https://stackoverflow.com/users/10416012/ziur-olpa">Ziur Olpa's</a> previous <a href="https://stackoverflow.com/a/72365032">answer</a>.</p>
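For the copy-with-update half, the merge semantics I'm after can be sketched on plain dicts (a hypothetical helper of my own, not Pydantic's <code>copy</code>): nested mappings should be merged key-by-key instead of replaced wholesale.

```python
def deep_update(original: dict, update: dict) -> dict:
    """Return a copy of `original` with `update` merged in recursively."""
    merged = dict(original)
    for key, value in update.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_update(merged[key], value)  # merge sub-objects
        else:
            merged[key] = value  # scalars and new keys are simply replaced
    return merged

existing = {"the_code": "BYE", "sub_one": {"original": "2022-12-12", "verified": "Y"}}
patch = {"sub_one": {"verified": "N"}}
print(deep_update(existing, patch))
# sub_one keeps "original" and only "verified" changes
```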
|
<python><fastapi><crud><pydantic>
|
2023-01-19 02:51:30
| 1
| 4,109
|
NeilG
|
75,167,258
| 12,709,566
|
Discord Bot joins voice channel muted
|
<p>I am trying to make my discord bot play an MP3 file in a voice channel; however, upon joining, the bot remains muted, and therefore the audio file isn't audible, even if it may be playing. Here is my code:</p>
<pre><code>@client.event
async def on_ready():
print(f'{client.user} is now running')
channel = discord.utils.get(client.get_all_channels(), id=<vc id>)
voice = await channel.connect()
source = FFmpegPCMAudio('<filename>.mp3')
player = voice.play(source)
</code></pre>
<p>Does anyone know how to unmute the discord bot?
I'm confident that I had set the bot's permission to speak in voice channels before I invited it to my discord server.</p>
|
<python><discord><discord.py><bots>
|
2023-01-19 02:39:44
| 3
| 603
|
hmood
|
75,167,203
| 2,138,610
|
ML model fit with training data outperforms model fit with generator
|
<p>My ultimate goal is to fit a ML autoencoder by inputting a data generator into the <em>fit</em> method within the keras API. However, I am finding that models fit with a generator are inferior to models fit on the raw data themselves.</p>
<p>To demonstrate this, I've taken the following steps:</p>
<ol>
<li>define a data generator to create a set of variably damped
sine waves. Importantly, I define the batch size of the generator to equal the entire training dataset. In this way, I can eliminate the batch size as a possible reason for poor performance of the model fit with the generator.</li>
<li>define a very simple ML autoencoder. Note the
latent space of the autoencoder is larger than the size of the
original data so it should learn how to reproduce the original
signal relatively quickly.</li>
<li>Train one model using the generator</li>
<li>Create a set of training data with the generator's <code>__getitem__</code> method and use those data to fit the same ML model.</li>
</ol>
<p>Results from the model trained on the generator are far inferior to those trained on the data themselves.</p>
<p>My formulation of the generator must be wrong but, for the life of me, I can not find my mistake. For reference, I emulated generators discussed <a href="https://stanford.edu/%7Eshervine/blog/keras-how-to-generate-data-on-the-fly" rel="nofollow noreferrer">here</a> and <a href="https://medium.com/analytics-vidhya/write-your-own-custom-data-generator-for-tensorflow-keras-1252b64e41c3" rel="nofollow noreferrer">here</a>.</p>
<h2>Update:</h2>
<p>I simplified the problem such that the generator, instead of producing a series of randomly parameterized damped sine waves, now produces a vector of ones (i.e., <code>np.ones((batch_size, 1000, 1))</code>). I fit my autoencoder model and, as before, the model fit with the generator still underperforms relative to the model fit on the raw data themselves.</p>
<p>Side note: I edited the originally posted code to reflect this update.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv1D, Conv1DTranspose, MaxPool1D
import tensorflow as tf
""" Generate training/testing data data (i.e., a vector of ones) """
class DataGenerator(keras.utils.Sequence):
def __init__(
self,
batch_size,
vector_length,
):
self.batch_size = batch_size
self.vector_length = vector_length
def __getitem__(self, index):
x = np.ones((self.batch_size, self.vector_length, 1))
y = np.ones((self.batch_size, self.vector_length, 1))
return x, y
def __len__(self):
return 1 #one batch of data
vector_length = 1000
train_gen = DataGenerator(800, vector_length)
test_gen = DataGenerator(200, vector_length)
""" Machine Learning Model and Training """
# Class to hold ML model
class MLModel:
def __init__(self, n_inputs):
self.n_inputs = n_inputs
visible = Input(shape=n_inputs)
encoder = Conv1D(
filters=1,
kernel_size=100,
padding="same",
strides=1,
activation="LeakyReLU",
)(visible)
encoder = MaxPool1D(pool_size=2)(encoder)
# decoder
decoder = Conv1DTranspose(
filters=1,
kernel_size=100,
padding="same",
strides=2,
activation="linear",
)(encoder)
model = Model(inputs=visible, outputs=decoder)
model.compile(optimizer="adam", loss="mse")
self.model = model
""" EXPERIMENT 1 """
# instantiate a model
n_inputs = (vector_length, 1)
model1 = MLModel(n_inputs).model
# train first model!
model1.fit(x=train_gen, epochs=10, validation_data=test_gen)
""" EXPERIMENT 2 """
# use the generator to create training and testing data
train_x, train_y = train_gen.__getitem__(0)
test_x, test_y = test_gen.__getitem__(0)
# instantiate a new model
model2 = MLModel(n_inputs).model
# train second model!
history = model2.fit(train_x, train_y, validation_data=(test_x, test_y), epochs=10)
""" Model evaluation and plotting """
pred_y1 = model1.predict(test_x)
pred_y2 = model2.predict(test_x)
plt.ion()
plt.clf()
n = 0
plt.plot(test_y[n, :, 0], label="True Signal")
plt.plot(pred_y1[n, :, 0], label="Model1 Prediction")
plt.plot(pred_y2[n, :, 0], label="Model2 Prediction")
plt.legend()
</code></pre>
|
<python><tensorflow><machine-learning><keras><autoencoder>
|
2023-01-19 02:27:21
| 1
| 872
|
Alex Witsil
|
75,167,166
| 10,200,497
|
Delete rows from pandas dataframe by using regex
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame(
{
'a': [
'#x{LA 0.098:abc}',
'#x{LA abc:0.31}',
'#x{BC abc:0.1231}',
'#x{LA 0.333:abc}',
'#x{CN 0.031:abc}',
'#x{YM abc:12345}',
'#x{YM 1222:abc}',
]
}
)
</code></pre>
<p>I have two lists of ids that are needed in order to delete rows based on the position of "<code>abc</code>" relative to the colon, that is, whether <code>abc</code> is on the right or the left side of the colon.
These are my lists:</p>
<pre><code>labels_that_abc_is_right = ['LA', 'CN']
labels_that_abc_is_left = ['YM', 'BC']
</code></pre>
<p>For example, I want to omit rows that contain <code>LA</code> where <code>abc</code> is on the right side of the colon; the same applies to <code>CN</code>. I want to delete rows that contain <code>YM</code> where <code>abc</code> is on the left side of the colon. This is just a sample; I have hundreds of ids.
This is the output that I want after deleting rows:</p>
<pre><code> a
1 #x{LA abc:0.31}
6 #x{YM 1222:abc}
</code></pre>
<p>I have tried the solutions of these two answers: <a href="https://stackoverflow.com/a/53622146/10200497">answer1</a> and <a href="https://stackoverflow.com/a/37011978/10200497">answer2</a>. I know that I probably need to use <code>df.a.str.contains</code> with a regex, but it still doesn't work.</p>
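To make the attempt concrete, here is one way the deletion pattern could be built with plain <code>re</code> (the exact pattern is my assumption about the data format, inferred from the sample rows):

```python
import re

labels_that_abc_is_right = ['LA', 'CN']
labels_that_abc_is_left = ['YM', 'BC']

right = '|'.join(labels_that_abc_is_right)
left = '|'.join(labels_that_abc_is_left)

# matches rows to DELETE: (LA|CN) with abc right of the colon,
# or (YM|BC) with abc left of the colon
pattern = rf'#x\{{(?:(?:{right}) [^:]*:abc\}}|(?:{left}) abc:[^}}]*\}})'

rows = [
    '#x{LA 0.098:abc}',
    '#x{LA abc:0.31}',
    '#x{BC abc:0.1231}',
    '#x{LA 0.333:abc}',
    '#x{CN 0.031:abc}',
    '#x{YM abc:12345}',
    '#x{YM 1222:abc}',
]
kept = [r for r in rows if not re.search(pattern, r)]
print(kept)  # ['#x{LA abc:0.31}', '#x{YM 1222:abc}']
```

With pandas this would presumably become <code>df[~df.a.str.contains(pattern, regex=True)]</code>.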
|
<python><pandas>
|
2023-01-19 02:20:00
| 1
| 2,679
|
AmirX
|
75,167,124
| 9,934,276
|
Check if it's a table inside a td
|
<p>Suppose I have HTML like this:</p>
<pre><code><tr>
<td colspan="2"><br>
<blockquote>
<p align="center"><br>Test<br></p>
<p align="center"><b>Test2</b></p><b>
</b>
<p>Test 3</p>
<table>
<tbody>
<tr valign="top">
<td>Header1</td>
<td>Header2</td>
<td>Header3</td>
</tr>
<tr valign="top">
<td>val1</td>
<td>val2</td>
<td>val3</td>
</tr>
</tbody>
</table>
<p align="justify">test 5</p>
<table>
<tbody>
<tr align="center">
<td>test z</td>
</tr>
</tbody>
</table>
<p align="justify">ee</p>
<p>qq</p>
<p>yy</p>
</blockquote>
</td>
</tr>
</code></pre>
<p>I want to check whether each element is a table and, if it is not, print its text. The output should look like this:</p>
<pre><code>Test
Test2
Test 3
test 5
ee
qq
yy
</code></pre>
<p>This is the code that i have right now:</p>
<pre><code>soup = BeautifulSoup(html, 'html.parser')
td = soup.find('td')
for child in td.find_all(recursive=False):
if child.name != 'table':
print(child.text)
</code></pre>
<p>However it still gives me the text inside the table:</p>
<pre><code>Test
Test2
Test 3
Header1
Header2
Header3
val1
val2
val3
test 5
test z
ee
qq
yy
</code></pre>
<p>I don't want to strip the table inside the <code>td</code> because I have something else to do with it. Thanks</p>
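For comparison, the same top-level filtering can be sketched with the standard-library <code>HTMLParser</code> instead of BeautifulSoup (a sketch, not a drop-in replacement for the code above):

```python
from html.parser import HTMLParser

class TopLevelText(HTMLParser):
    """Collect text, skipping anything nested inside a <table>."""

    def __init__(self):
        super().__init__()
        self.table_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == 'table':
            self.table_depth += 1

    def handle_endtag(self, tag):
        if tag == 'table' and self.table_depth:
            self.table_depth -= 1

    def handle_data(self, data):
        # only keep text when we are not inside any table
        if self.table_depth == 0 and data.strip():
            self.chunks.append(data.strip())

parser = TopLevelText()
parser.feed('<td><p>Test</p><table><tr><td>val1</td></tr></table><p>test 5</p></td>')
print(parser.chunks)  # ['Test', 'test 5']
```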
|
<python><beautifulsoup>
|
2023-01-19 02:09:44
| 1
| 1,750
|
Beginner
|
75,167,112
| 11,930,470
|
How to create a dictionary from list with single for loop?
|
<p>The list is <code>a = [1, 2, 3]</code>.</p>
<p>How can I get a dictionary like the one below from that list?</p>
<p>{1: 1, 1: 2, 1: 3, 2: 1, 2: 2, 2: 3, 3: 1, 3: 2, 3: 3}</p>
<p>Condition is to do not use two for loops.</p>
<p>It was asked in an interview, but I was not able to implement it without two for loops, so I thought the interviewer might be wrong.</p>
<p>Note: List a is of variable length</p>
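For the record, a dict cannot hold duplicate keys, so the literal output shown above is impossible; with that caveat, the closest single-loop constructions I can think of (assuming the interviewer meant all key/value combinations) look like this:

```python
from itertools import product

a = [1, 2, 3]

# every (key, value) combination, with one explicit loop over the product
pairs = [pair for pair in product(a, repeat=2)]

# or a dict mapping each element to all of its "values"
d = {x: list(a) for x in a}

print(pairs)  # [(1, 1), (1, 2), (1, 3), (2, 1), ..., (3, 3)]
print(d)      # {1: [1, 2, 3], 2: [1, 2, 3], 3: [1, 2, 3]}
```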
|
<python><python-3.x>
|
2023-01-19 02:07:08
| 2
| 407
|
Kalyan Kumar
|
75,167,035
| 17,900,955
|
JWT authentication does not work in Django Rest Framework after deploying it to IIS on live server
|
<p>I built a REST API using Django REST framework and deployed it on IIS on Windows. When I deploy the application to the live IIS server, JWT authentication fails, even though it works well on the local IIS server. I used the <a href="https://pypi.org/project/wfastcgi/" rel="nofollow noreferrer">wfastcgi</a> package to deploy my application to IIS.</p>
<p>DEFAULT_AUTHENTICATION_CLASSES in <code>settings.py</code>:</p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework_simplejwt.authentication.JWTAuthentication',
)
}
</code></pre>
<p>Here's an example request in postman:
<a href="https://i.sstatic.net/bWE80.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bWE80.png" alt="enter image description here" /></a></p>
|
<python><django><windows><iis><django-rest-framework>
|
2023-01-19 01:47:28
| 1
| 497
|
Parth Panchal
|
75,167,022
| 8,089,312
|
AttributeError: 'XGBRegressor' object has no attribute 'feature_names_in_'
|
<p>I'm building a model with XGBRegressor.
After fitting the model, I want to visualise the feature importance.</p>
<pre><code>reg = xgb.XGBRegressor(base_score=0.5, booster='gbtree', n_estimators=1000)
reg.fit(X_train, y_train, eval_set=[(X_train, y_train), (X_test, y_test)], verbose=100)
fi = pd.DataFrame(data=reg.feature_importances_, index=reg.feature_names_in_, columns=['importance'])
fi.sort_values('importance').plot(kind='barh', title='Feature Importance')
plt.show()
</code></pre>
<p>I received the error <code>AttributeError: 'XGBRegressor' object has no attribute 'feature_names_in_'</code>. I've already upgraded sklearn and restarted the jupyter notebook but still receiving this error. All my feature names are string.</p>
<p>I have no issue getting <code>feature_importances_</code>.</p>
|
<python><xgboost>
|
2023-01-19 01:45:59
| 1
| 1,744
|
Osca
|
75,166,900
| 8,869,570
|
df.index vs df["index"] after resetting index
|
<pre><code>import pandas as pd
df1 = pd.DataFrame({
"value": [1, 1, 1, 2, 2, 2]})
print(df1)
print("-------------------------")
print(df1.reset_index())
print("-------------------------")
print(df1.reset_index().index)
print("-------------------------")
print(df1.reset_index()["index"])
</code></pre>
<p>produces the output</p>
<pre><code> value
0 1
1 1
2 1
3 2
4 2
5 2
-------------------------
index value
0 0 1
1 1 1
2 2 1
3 3 2
4 4 2
5 5 2
-------------------------
RangeIndex(start=0, stop=6, step=1)
-------------------------
0 0
1 1
2 2
3 3
4 4
5 5
Name: index, dtype: int64
</code></pre>
<p>I am wondering why <code>print(df1.reset_index().index)</code> and
<code>print(df1.reset_index()["index"])</code> print different things in this case. The latter prints the "index" column, while the former prints the row labels.</p>
<p>If we want to access the reset indices (the column), it seems we have to use brackets?</p>
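A minimal illustration of the two lookups (my understanding, worth verifying against the docs): <code>.index</code> is the attribute holding the row labels, while <code>["index"]</code> selects the ordinary column that <code>reset_index</code> created.

```python
import pandas as pd

df1 = pd.DataFrame({"value": [1, 1, 2]})
r = df1.reset_index()

print(type(r.index).__name__)     # RangeIndex -- the new row labels
print(type(r["index"]).__name__)  # Series -- the column of old labels

# renaming the column at reset time removes the ambiguity
r2 = df1.reset_index().rename(columns={"index": "old_index"})
print(r2.columns.tolist())  # ['old_index', 'value']
```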
|
<python><pandas><dataframe>
|
2023-01-19 01:15:40
| 3
| 2,328
|
24n8
|
75,166,877
| 5,212,614
|
How can we count items greater than a value and less than a value?
|
<p>I have this DF.</p>
<pre><code>import pandas as pd
import scipy.optimize as sco
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
data = [['ATTICA',1,2,590,680],['ATTICA',1,2,800,1080],['AVON',14,2,950,1250],['AVON',15,3,500,870],['AVON',20,4,1350,1700]]
df = pd.DataFrame(data, columns=['cities','min_workers','max_workers','min_minutes','max_minutes'])
df
df['Non_HT_Outages'] = (df['min_workers'] < 15).groupby(df['cities']).transform('count')
df['HT_Outages'] = (df['min_workers'] >= 15).groupby(df['cities']).transform('count')
df
</code></pre>
<p><a href="https://i.sstatic.net/KZR7p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KZR7p.png" alt="enter image description here" /></a></p>
<p>I am trying to count the items in the 'min_workers' column: if a value is less than 15 it should be counted in 'Non_HT_Outages', and if it is 15 or more it should be counted in 'HT_Outages'. My counts seem to be off.</p>
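If I understand groupby correctly, the issue may be that <code>'count'</code> counts every non-NA value in the group (True and False alike), so both columns end up holding the group size; summing the boolean mask counts only the True rows. A sketch of that change on the same data:

```python
import pandas as pd

data = [['ATTICA', 1], ['ATTICA', 1], ['AVON', 14], ['AVON', 15], ['AVON', 20]]
df = pd.DataFrame(data, columns=['cities', 'min_workers'])

# 'sum' over the boolean mask counts only the rows where the condition holds
df['Non_HT_Outages'] = (df['min_workers'] < 15).groupby(df['cities']).transform('sum')
df['HT_Outages'] = (df['min_workers'] >= 15).groupby(df['cities']).transform('sum')
print(df)
```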
|
<python><python-3.x><pandas><dataframe>
|
2023-01-19 01:09:53
| 1
| 20,492
|
ASH
|
75,166,819
| 2,487,509
|
how do install python packages using python3.11 -m pip
|
<p>I am trying to install a python application (elastalert2) which requires the latest version of python on an Ubuntu 20.04 machine.</p>
<p>I have managed to install Python 3.11, and my searching strongly suggested that to install packages for this interpreter I should use <code>python3.11 -m pip install</code>, but when I try I get:</p>
<pre><code>elastalert@secmgrtst02:~$ /usr/bin/python3.11 -m pip install elastalert2==2.9.0
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/lib/python3/dist-packages/pip/__main__.py", line 16, in <module>
from pip._internal.cli.main import main as _main # isort:skip # noqa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 10, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py", line 9, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py", line 7, in <module>
from pip._internal.cli import cmdoptions
File "/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py", line 19, in <module>
from distutils.util import strtobool
ModuleNotFoundError: No module named 'distutils.util'
</code></pre>
<p>I have very limited experience with Python and do not know what the problem is.</p>
<p>I actually want the app installed in the current directory. Initially (before I realised I needed 3.11) I used <code>pip3 install -t . elastalert2</code>, which installed fine but the app would not run...</p>
|
<python><pip>
|
2023-01-19 00:57:54
| 1
| 610
|
Russell Fulton
|