| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
75,769,241
| 2,587,816
|
How do you format a list of Python values to be compatible with the COM SAFEARRAY format?
|
<p>I am sort of surprised this hasn't been covered before.</p>
<p>The signature of the method (in C) is:</p>
<pre><code>SetValues(BSTR Keyword, SAFEARRAY * Data)
</code></pre>
<p>I have tried:</p>
<pre><code>handle = win32com.client.Dispatch("My.Application")
vals = (1.1, 2.2, 3.3)
safe_vals = win32com.client.VARIANT(pythoncom.VT_ARRAY | pythoncom.VT_R8, vals)
handle.SetValues("PUT_IT_HERE", safe_vals)
</code></pre>
<p>This gives me the error:</p>
<pre><code>TypeError: Objects for SAFEARRAYS must be sequences (of sequences), or a buffer object.
</code></pre>
<p>If I just try entering 'vals':</p>
<pre><code> res = handle.SetValues("PUT_IT_HERE", vals)
File "<COMObject My.Application>", line 2, in SetValues
pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, None, None, None, 0, -2147220988), None)
</code></pre>
<p>I assume there is some kind of conversion needed to make it compatible with (SAFEARRAY *) but no one has been very clear about this.</p>
|
<python><com><safearray>
|
2023-03-17 15:11:56
| 1
| 5,170
|
Jiminion
|
75,769,145
| 5,847,235
|
Difference between echoing static text and echoing a variable for Java JSch exec command
|
<p>I have this bash script "run_transcribe_py.sh":</p>
<pre><code>RESULT=$(/usr/bin/docker run --mount type=bind,source="/usr/src/Projects/docker/transcribe/audio",target=/home/audio --mount type=bind,source="/usr/src/Projects/docker/transcribe",target=/home/transcribe -it --rm --name transcribe transcribe python /home/transcribe/transcribe.py $1)
echo $RESULT
echo "ABC"
</code></pre>
<p>The Python script run in the docker:</p>
<pre><code>import sys
transcript="abcdef"
print(transcript)
</code></pre>
<p>If I run the bash script in BASH:</p>
<pre><code># /bin/bash /usr/src/Projects/docker/transcribe/run_transcribe_py.sh /home/audio/102303160955qhhs5zq.mp3
abcdef
ABC
#
</code></pre>
<p>However, if run via a JSCh exec command from Java:</p>
<pre><code>public boolean commitTranscription(ArrayList<transcriptionMetaData> pRecordingsInfo) {
boolean retVal = false;
JSch localJsch = null;
localJsch = new JSch();
Session localSession = initJSch(localJsch, AppSettings.getPbxServer(), AppSettings.getPbxUser(), AppSettings.getPbxServerPassword(), AppSettings.getPbxPort());
try {
for (transcriptionMetaData iterateRecData : pRecordingsInfo) {
ArrayList<String> transcribeLines = new ArrayList<String>();
ChannelExec shellChannel = (ChannelExec) localSession.openChannel("exec");
try ( BufferedReader resultOfTranscription = new BufferedReader(new InputStreamReader(shellChannel.getInputStream()))) {
shellChannel.setCommand("/bin/bash /usr/src/Projects/docker/transcribe/run_transcribe_py.sh /home/audio/"
+ iterateRecData.getCallLogNo() + ".mp3");
shellChannel.connect((int) TimeUnit.SECONDS.toMillis(10));
String resultLine = null;
while ((resultLine = resultOfTranscription.readLine()) != null) {
transcribeLines.add(resultLine);
}
iterateRecData.setTranscript(transcribeLines.toString());
if (shellChannel != null) {
if (shellChannel.isConnected()) {
shellChannel.disconnect();
}
shellChannel = null;
}
}
transcribeLines = null;
}
} catch (JSchException jex) {
localLogger.error((String) logEntryRefNumLocal.get()
+ "JSch exception in commitTranscription() method in ExperimentalRecordingsTranscription. JSch exception: " + jex.toString() + ". Contact software support." + jex.getMessage(), jex);
} catch (RuntimeException rex) {
localLogger.error((String) logEntryRefNumLocal.get()
+ "Runtime exception in commitTranscription() method in ExperimentalRecordingsTranscription. Runtime exception: " + rex.toString() + ". Contact software support." + rex.getMessage(), rex);
} catch (Exception ex) {
localLogger.error((String) logEntryRefNumLocal.get()
+ "Exception in commitTranscription() method in ExperimentalRecordingsTranscription. Exception: " + ex.toString() + ". Contact software support." + ex.getMessage(), ex);
} finally {
if (localSession != null) {
if (localSession.isConnected()) {
localSession.disconnect();
}
localSession = null;
}
localJsch = null;
}
return retVal;
}
</code></pre>
<p>Java literally sees</p>
<pre><code>#
ABC
#
</code></pre>
<p>The output from the docker Python script (e.g. "abcdef") is simply blank in Java, while both lines are present if the script is run from Bash itself.</p>
<p>Why is this</p>
<pre><code>RESULT=$(/usr/bin/docker run --mount type=bind,source="/usr/src/Projects/docker/transcribe/audio",target=/home/audio --mount type=bind,source="/usr/src/Projects/docker/transcribe",target=/home/transcribe -it --rm --name transcribe transcribe python /home/transcribe/transcribe.py $1)
echo $RESULT
</code></pre>
<p>invisble to Java only, but shows in BASH in the console, just above "abc"?</p>
<p>Anybody got any idea?</p>
<p>Thanks!</p>
<p>EDIT: From feedback from Charles Duffy (thanks Charles!) I've changed my BASH script as referenced above to</p>
<pre><code>#!/bin/bash
declare RESULT
RESULT="$(/usr/bin/docker run --mount type=bind,source="/usr/src/Projects/docker/transcribe/audio",target=/home/audio --mount type=bind,source="/usr/src/Projects/docker/transcribe",target=/home/transcribe -it --rm --name transcribe transcribe python /home/transcribe/transcribe.py "$1")"
printf '%s\n' "$RESULT"
echo "ABC"
</code></pre>
<p>However, this still results in the exact same blank output in Java if the Bash script is called from the JSch exec method:</p>
<pre><code>
ABC
</code></pre>
<p>while running it straight in BASH results in</p>
<pre><code>abcdef
ABC
</code></pre>
<p>I literally just want the "abcdef" to be "visible" to Java in the Java code above... so nothing changes even if I clean up the variable instantiation in BASH and the output of it via printf instead of echo as advised by the link Charles gave...</p>
<p>EDIT: I also tried calling the dockerised Python instance directly from Java, skipping Bash altogether - the behaviour remains exactly the same. Java never sees the output printed to stdout by docker from running the Python script inside docker.</p>
<p>E.g.</p>
<pre><code>shellChannel.setCommand("/bin/bash /usr/src/Projects/docker/transcribe/run_transcribe_py.sh /home/audio/" + iterateRecData.getCallLogNo() + ".mp3");
</code></pre>
<p>changed to</p>
<pre><code>shellChannel.setCommand("/usr/bin/docker run --mount type=bind,source=\"/usr/src/Projects/docker/transcribe/audio\",target=/home/audio --mount type=bind,source=\"/usr/src/Projects/docker/transcribe\",target=/home/transcribe -it --rm --name transcribe transcribe python /home/transcribe/transcribe.py " + iterateRecData.getCallLogNo() + ".mp3");
</code></pre>
<p>still gives the same blank result. The Java BufferedReader never sees the output printed to stdout by Python running inside docker. If run from the terminal directly with the above commandline, result is as expected - the letters "abcdef" appears in the terminal.</p>
|
<python><java><bash>
|
2023-03-17 15:03:17
| 1
| 408
|
Stefan
|
75,769,005
| 1,478,905
|
Normalize a nested json file in Python
|
<p>Suppose a list of objects: one, two, three,...</p>
<p>Every object is composed of name, foo1, and foo2 fields.</p>
<pre><code>[{
'name':'one',
'foo2':[
{'id':'1.1'},
{'id':'1.2'}
],
'foo1':[
{
'foo2':{
'id':'1.1',
'name':'one.one'
}
},
{
'foo2':{
'id':'1.2',
'name':'one.two'
}
}
]
}]
</code></pre>
<p>If the object is normalized using:</p>
<pre><code>obj_row_normalize = pd.json_normalize( object_row,
record_path =['foo2'],
meta=['name'],
record_prefix='num_',
)
</code></pre>
<p>The result is:</p>
<pre><code>id num_name
1.1 one
1.2 one
</code></pre>
<p>Given that, how can foo1.foo2.name be added to each resulting row as below:</p>
<pre><code>id num_name name
1.1 one one.one
1.2 one one.two
</code></pre>
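For reference, a hedged sketch (column names are my own choosing) of normalizing along <code>foo1</code> instead, which yields one row per nested name, as in the desired output:

```python
import pandas as pd

object_row = [{
    'name': 'one',
    'foo1': [
        {'foo2': {'id': '1.1', 'name': 'one.one'}},
        {'foo2': {'id': '1.2', 'name': 'one.two'}},
    ],
}]

# Walking record_path down foo1 gives one row per nested foo2 dict, while
# meta carries the top-level name into every row.
flat = pd.json_normalize(object_row, record_path=['foo1'], meta=['name'])
flat = flat.rename(columns={'foo2.id': 'id',
                            'foo2.name': 'name_nested',
                            'name': 'num_name'})
```

Whether this fits depends on the real nesting of the data; the renames above are purely cosmetic to match the layout asked for.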
|
<python><json><json-normalize>
|
2023-03-17 14:49:59
| 1
| 997
|
Diego Quirós
|
75,768,894
| 13,296,497
|
How to create a NULL Boolean column in a pyspark dataframe
|
<p>I have a Boolean column that is sometimes NULL and want to assign it as such. My code:</p>
<pre><code>from pyspark.sql import functions as F
df = df.withColumn('my_column_name', F.lit(None).cast("string"))
</code></pre>
<p>My error: Column type: BOOLEAN, Parquet schema: optional byte_array</p>
<p>My attempted solution was
<code>df = df.withColumn('my_column_name', F.lit(None).cast('boolean'))</code> but this doesn't seem right</p>
|
<python><dataframe><pyspark>
|
2023-03-17 14:38:59
| 1
| 1,092
|
DSolei
|
75,768,794
| 1,100,060
|
Text file line count - different number in python2 vs python3
|
<p>I am reading a text file using this command:</p>
<pre><code>print(len(list(open(filename))))
</code></pre>
<p>Now, when I run it in Python2 I get 5622862 lines, but when I read it with Python3 I get 5622865 lines. How can it be? Btw, when I do in command line <code>cat file.txt | wc -l</code> I get same result as Python2.</p>
<p>This is driving me nuts. Two files that should have the same line count (5622862) indeed match in Python 2, but not in Python 3 (5622862 vs 5622865). Can it be that Python 3 has a bug?</p>
<p>Doing a different read in python3 does not work either:</p>
<pre><code>list(open(filename, "r", encoding="utf-8"))
</code></pre>
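For what it's worth, the usual culprit with a count that differs by a few lines is universal newlines: Python 3's text mode treats a lone `\r` (and `\r\n`) as a line ending, while Python 2's default `open()` and `wc -l` only count `\n`. A quick self-contained sketch of the difference:

```python
import os
import tempfile

# A lone \r is a line break in Python 3's universal-newlines text mode,
# but wc -l and Python 2's default open() only count \n characters.
payload = b"line1\nline2\rstill-line2\nline3\n"
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)
    path = tmp.name

with open(path) as fh:                 # text mode: \n, \r and \r\n all split
    text_mode_lines = len(list(fh))
with open(path, "rb") as fh:           # binary: count raw \n bytes, wc -l style
    binary_newlines = fh.read().count(b"\n")

os.unlink(path)
```

If this is the cause, opening with `open(filename, newline="\n")` in Python 3 makes only `\n` terminate lines and should reproduce the Python 2 / `wc -l` count.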
|
<python><python-3.x><text-files><python-2.x><line-count>
|
2023-03-17 14:27:57
| 1
| 1,869
|
Nathan G
|
75,768,779
| 11,004,423
|
numpy get start, exact, end of convergence
|
<p>I have a numpy array of floats:</p>
<pre class="lang-py prettyprint-override"><code>[..., 50.0, 51.0, 52.2, ..., 59.3, 60.4, 61.3, 62.1, ..., 67.9, 68.1, 69.2, ...]
</code></pre>
<p>You can see that the numbers are first converging to 60, and then diverging from it. There is a range: from 52.0 to 68.0; in the middle of this range is 60.0; 60.0 is called the <em>exact</em>, 52.0 is called the start, 68.0 is called the end.</p>
<p>I'm trying to find indexes of start, exact, and end. In the array above: 52.2 is the start, 60.4 is the exact, 67.9 is the end. You can see that 52.2 is not exactly 52.0, but it is still the start because it's closest to 52.0; 60.4 is not quite the same as 60.0, but it's closest to 60.0 so it works as exact; same goes for 67.9 which is closest to 68.0</p>
<p>Is there a way to find the indexes of start, exact, end with numpy? What complicates the problems a bit is: we can have an array, floats of which are descending instead of ascending:</p>
<pre class="lang-py prettyprint-override"><code>[..., 69.2, 68.1, 67.9, ..., 62.1, 61.3, 60.4, 59.3, ..., 52.2, 51.0, 50.0, ...]
</code></pre>
<p>In this case: 67.9 is the start, 60.4 is the exact, and 52.2 is the end.</p>
<p>So the numbers can be ascending or descending, and in any case I need to find the indexes.</p>
<p>I can more or less easily do it with loops, keeping track of current and next number, comparing them, and seeing whether it's converging to 60.0 and then diverging from it. But I have lots of numbers, even more: I have an array of arrays, and for each array I need to get the indexes. That will be very slow with plain loops. And I'm a newbie in numpy, so I'm not sure how to vectorize it.</p>
<p>Please help if you can. If you need more information, you can ask it in the comments; I will be answering as quickly as possible. Thank you.</p>
<p><strong>UPDATE</strong></p>
<p>Thank you Sajad Safarveisi! Your solution works, and you are absolutely awesome!</p>
<p>However, it turns out that my arrays are not that simple. Sometimes I would have arrays with multiple ranges, so like this:</p>
<pre class="lang-py prettyprint-override"><code>[..., 52.2, ..., 60.4, ..., 67.9, ..., 52.4, ..., 60.0, ..., 67.7, ..., 52.0, ..., 60.6, ..., 67.3, ...]
</code></pre>
<p>So in this case I would need a list of lists of indexes:</p>
<pre class="lang-py prettyprint-override"><code>[
[start0, exact0, end0],
[start1, exact1, end1],
[start2, exact2, end2],
]
</code></pre>
<p>Sajad, do you know a way to do this? I'd appreciate it. The problem is that different arrays can have different number of ranges: sometimes it can be 1 range, sometimes 3, and sometimes 0. I'm sorry for not posting the whole problem.</p>
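Not an answer from the thread, just a sketch of the vectorized idea for the single-range case: the index nearest each target value can be found with `argmin` over the absolute difference. (The multi-range case from the update would need the array split into ranges first, e.g. at the points where the values jump back.)

```python
import numpy as np

arr = np.array([50.0, 51.0, 52.2, 59.3, 60.4, 61.3, 62.1, 67.9, 68.1, 69.2])

# For each target value, take the index whose entry is nearest; this is
# direction-agnostic, so it works for ascending and descending arrays alike.
targets = np.array([52.0, 60.0, 68.0])  # start, exact, end
idx = np.abs(arr[None, :] - targets[:, None]).argmin(axis=1)
start_i, exact_i, end_i = idx
```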
|
<python><arrays><numpy><convergence>
|
2023-03-17 14:26:55
| 2
| 1,117
|
astroboy
|
75,768,729
| 10,192,593
|
Create dictionary in a loop
|
<p>I would like to create a dictionary in a loop. Also, I would like to name each element in the dictionary using the loop.</p>
<pre><code>table = {}
elasticity = np.array([[1,2],[3,4]])
elasticity.shape
genders = ['female', 'male']
income = ['i0','i1']
for i in range(len(genders)):
for j in range(len(income)):
key = genders[i]+"_"+income[j]
table = ["key"=elasticity[i,j]]
</code></pre>
<p>I know I have an error in the last line (somehow the "=" is wrong), but I can't fix it.</p>
<p>this is what I would like to have:</p>
<pre><code>female_i0 = 1
female_i1 = 2
male_i0 = 3
male_i1 = 4
</code></pre>
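For reference, a minimal sketch of what the last line presumably intends: a dict keyed by the composed name, filled with plain index assignment rather than an `=` inside a list literal:

```python
import numpy as np

elasticity = np.array([[1, 2], [3, 4]])
genders = ['female', 'male']
income = ['i0', 'i1']

table = {}
for i, gender in enumerate(genders):
    for j, inc in enumerate(income):
        # Use the composed string as the dict key; dict entries are
        # written with table[key] = value.
        table[f"{gender}_{inc}"] = elasticity[i, j]
```

Afterwards `table['female_i0']` is `1`, `table['male_i1']` is `4`, and so on.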
|
<python><numpy>
|
2023-03-17 14:22:04
| 4
| 564
|
Stata_user
|
75,768,713
| 6,730,854
|
Tensorboard error while training with Yolov7
|
<p>I'm trying to use tensorboard according to the training instructions of yolov7 to see the results as I'm training.</p>
<p>The instructions said to run this command to view it at http://localhost:6006/</p>
<p>However, I get this error when I run it:</p>
<pre><code>(yolov7) C:\Users\user\Documents\Python\yolov7>tensorboard --logdir=runs/train
Traceback (most recent call last):
File "C:\Users\user\anaconda3\envs\yolov7\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\user\anaconda3\envs\yolov7\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\user\anaconda3\envs\yolov7\Scripts\tensorboard.exe\__main__.py", line 4, in <module>
File "C:\Users\user\anaconda3\envs\yolov7\lib\site-packages\tensorboard\main.py", line 27, in <module>
from tensorboard import default
File "C:\Users\user\anaconda3\envs\yolov7\lib\site-packages\tensorboard\default.py", line 33, in <module>
from tensorboard.plugins.audio import audio_plugin
File "C:\Users\user\anaconda3\envs\yolov7\lib\site-packages\tensorboard\plugins\audio\audio_plugin.py", line 23, in <module>
from tensorboard import plugin_util
File "C:\Users\user\anaconda3\envs\yolov7\lib\site-packages\tensorboard\plugin_util.py", line 24, in <module>
import markdown
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\markdown\__init__.py", line 29, in <module>
from .core import Markdown, markdown, markdownFromFile # noqa: E402
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\markdown\core.py", line 26, in <module>
from . import util
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\markdown\util.py", line 87, in <module>
INSTALLED_EXTENSIONS = metadata.entry_points().get('markdown.extensions', ())
AttributeError: 'EntryPoints' object has no attribute 'get'
</code></pre>
<p>Couldn't find any documentation to see what I'm missing.</p>
|
<python><pytorch><torch><tensorboard><yolov7>
|
2023-03-17 14:21:00
| 0
| 472
|
Mike Azatov
|
75,768,651
| 8,294,752
|
Assign and re-use quantile-based buckets by group in Pandas
|
<h2>What I am trying to achieve</h2>
<p>I have a pandas DataFrame in a long format, containing values for different groups. I want to compute and apply a quantile based-binning (e.g. quintiles in this example) to each group of the DataFrame.</p>
<p>I also need to be able to keep the bins edges for each group and apply the same labelling (via <code>pd.cut</code>) to a new DataFrame.</p>
<p>E.g. for each group, find the quintiles and assign them to a new column <code>value_label</code>.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
df1 = pd.DataFrame({"group": "A", "val": np.random.normal(loc=10, scale=5, size=100)})
df2 = pd.DataFrame({"group": "B", "val": np.random.normal(loc=5, scale=3, size=100)})
df = pd.concat([df1, df2], ignore_index=True)
# apply qcut
labels_and_bins = df.groupby("group")["val"].apply(
lambda x: pd.qcut(x, q=5, duplicates="drop", retbins=True)
)
# where e.g.
labels_and_bins["A"][0] # are the applied labels to all the rows in group "A"
labels_and_bins["A"][1] # are the bin edges to apply the same segmentation going forward
for group in df.group.unique():
df.loc[df["group"] == group, "value_label"] = labels_and_bins[group][0]
</code></pre>
<p>When I try to run this, at the second iteration I get the following error:
<code>TypeError: Cannot set a Categorical with another, without identical categories</code></p>
<p>So, essentially I need Pandas to accept extending the categories belonging to the column dtype.</p>
<h2>What I have tried/considered</h2>
<h4>Transform</h4>
<p>Using <code>.transform()</code> would probably solve the issue of assigning the label on the first DataFrame, but it's not clear to me how I could re-use the identified bins in future iterations</p>
<h4>Unioning the categorical dtype</h4>
<p>I tried two approaches:</p>
<p><strong>add_categories()</strong></p>
<p><code>labels_and_bins['A'][0].cat.add_categories(labels_and_bins['B'][0].cat.as_unordered())</code></p>
<p>This results in <code>ValueError: Categorical categories must be unique</code></p>
<p><strong>union_categoricals()</strong></p>
<pre class="lang-py prettyprint-override"><code>pd.api.types.union_categoricals(
    [labels_and_bins["A"][0].cat.as_unordered(), labels_and_bins["B"][0].cat.as_unordered()]
)
</code></pre>
<p>This results in <code>InvalidIndexError: cannot handle overlapping indices; use IntervalIndex.get_indexer_non_unique</code></p>
<h2>One solution</h2>
<p>Get rid of the Interval object by calling qcut without labels, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>labels_and_bins = df.groupby("group")["val"].apply(
lambda x: pd.qcut(x, q=5, duplicates="drop", retbins=True, labels=False)
)
</code></pre>
<p>However I would be interested in a way to keep the Interval if possible for better interpretability</p>
<hr />
<p><strong>Overall this feels like a big anti-pattern, so I'm confident I'm missing a much more basic solution to this problem!</strong></p>
<p>Thanks in advance for your inputs!</p>
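Not from the thread, but one hedged sketch of a workable pattern (variable names are my own): keep only the bin edges per group, and let <code>pd.cut</code> rebuild interval labels on each assignment, stringifying them so no two categorical dtypes ever need to be unioned while the interval text stays readable:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df1 = pd.DataFrame({"group": "A", "val": rng.normal(10, 5, 100)})
df2 = pd.DataFrame({"group": "B", "val": rng.normal(5, 3, 100)})
df = pd.concat([df1, df2], ignore_index=True)

# Keep only the edges; they are what you need to re-apply the same
# segmentation to future data.
edges = {
    g: pd.qcut(s, q=5, duplicates="drop", retbins=True)[1]
    for g, s in df.groupby("group")["val"]
}

# Rebuild interval labels per group with pd.cut and cast to str, which
# sidesteps the "identical categories" clash when writing one column.
df["value_label"] = pd.concat(
    [pd.cut(s, bins=edges[g], include_lowest=True).astype(str)
     for g, s in df.groupby("group")["val"]]
)
```

The same `pd.cut(new_vals, bins=edges[g], include_lowest=True)` call then labels any new frame consistently; the cost is that `value_label` is plain strings rather than an `Interval` dtype.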
|
<python><pandas><dataframe>
|
2023-03-17 14:14:55
| 3
| 1,526
|
arabinelli
|
75,768,472
| 4,772,565
|
How to reopen bokeh html file and restore the last zoomed status of every figures?
|
<p>I have a html file generated by <code>bokeh</code> <code>gridplot</code> which contains multiple figures.</p>
<p>Suppose I open the html file, zoomed-in some (or all) figures to different levels and then closed the file. Next time when I reopen this file, I want all figures automatically set to the previous zoomed-level.</p>
<p>Could you please show me how to do it?</p>
<p>Here is an example code to quickly generate a html file with two figures.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from bokeh.io import save
from bokeh.layouts import gridplot
from bokeh.plotting import figure
x = np.linspace(0, 4 * np.pi, 100)
y = np.sin(x)
TOOLS = "pan,wheel_zoom,box_zoom,reset,save,box_select"
p1 = figure(title="Legend Example", tools=TOOLS)
p1.circle(x, y, legend_label="sin(x)")
p1.circle(x, 2 * y, legend_label="2*sin(x)", color="orange")
p1.circle(x, 3 * y, legend_label="3*sin(x)", color="green")
p1.legend.title = "Markers"
p2 = figure(title="Another Legend Example", tools=TOOLS)
p2.circle(x, y, legend_label="sin(x)")
p2.line(x, y, legend_label="sin(x)")
p2.line(
x,
2 * y,
legend_label="2*sin(x)",
line_dash=(4, 4),
line_color="orange",
line_width=2,
)
p2.square(x, 3 * y, legend_label="3*sin(x)", fill_color=None, line_color="green")
p2.line(x, 3 * y, legend_label="3*sin(x)", line_color="green")
p2.legend.title = "Lines"
p2.x_range = p1.x_range
save(gridplot([p1, p2], ncols=2, width=400, height=400))
</code></pre>
<p>Thanks.</p>
|
<python><bokeh>
|
2023-03-17 13:59:16
| 2
| 539
|
aura
|
75,768,412
| 4,772,565
|
How to save the multiple figures in a bokeh gridplot into separate png files?
|
<p>I have a html file generated by <code>bokeh</code> <code>gridplot</code> containing multiple figures.</p>
<p>My use case is:</p>
<ol>
<li>I eye-check each figure, zoom-in/out individually.</li>
<li>Then, I want to click a button to save all the figures into separate png files. So each png file is for one figure.</li>
</ol>
<p>Could you please show me how to do it?</p>
<p>Here is an example code to quickly generate a gridplot with 2 figures.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from bokeh.io import save
from bokeh.layouts import gridplot
from bokeh.plotting import figure
x = np.linspace(0, 4 * np.pi, 100)
y = np.sin(x)
TOOLS = "pan,wheel_zoom,box_zoom,reset,save,box_select"
p1 = figure(title="Legend Example", tools=TOOLS)
p1.circle(x, y, legend_label="sin(x)")
p1.circle(x, 2 * y, legend_label="2*sin(x)", color="orange")
p1.circle(x, 3 * y, legend_label="3*sin(x)", color="green")
p1.legend.title = "Markers"
p2 = figure(title="Another Legend Example", tools=TOOLS)
p2.circle(x, y, legend_label="sin(x)")
p2.line(x, y, legend_label="sin(x)")
p2.line(
x,
2 * y,
legend_label="2*sin(x)",
line_dash=(4, 4),
line_color="orange",
line_width=2,
)
p2.square(x, 3 * y, legend_label="3*sin(x)", fill_color=None, line_color="green")
p2.line(x, 3 * y, legend_label="3*sin(x)", line_color="green")
p2.legend.title = "Lines"
p2.x_range = p1.x_range
save(gridplot([p1, p2], ncols=2, width=400, height=400))
</code></pre>
<p>Thanks.</p>
|
<python><bokeh>
|
2023-03-17 13:52:59
| 1
| 539
|
aura
|
75,768,357
| 2,804,197
|
Use libvlc log_set_file with Python bindings
|
<p>How can I use libvlc's <code>Instance.log_set_file</code> function from Python bindings?</p>
<p>Using <code>open</code> or <code>os.open</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>import vlc
instance = vlc.Instance()
f = open("/tmp/vlc.log", "w")
instance.log_set_file(f)
</code></pre>
<p>fails with:</p>
<pre><code>ctypes.ArgumentError: argument 2: TypeError: expected LP_FILE instance instead of _io.TextIOWrapper
</code></pre>
|
<python><linux><libvlc>
|
2023-03-17 13:46:55
| 2
| 402
|
user2804197
|
75,768,106
| 3,702,859
|
How to best mix-and-match returning/non-returning tasks on Airflow Taskflow API?
|
<p>Note: All examples below seek a one-line serial execution.</p>
<p>Dependencies in the Taskflow API can easily be made serial if the tasks don't return data that is used afterwards:</p>
<pre><code>t1() >> t2()
</code></pre>
<p>If task T1 returns a value and task T2 uses it, you can also link them like this:</p>
<pre><code>return_value = t1()
t2(return_value)
</code></pre>
<p>However, if you have to mix and match returning and non-returning tasks, this is no longer clear:</p>
<pre><code>t1() >>
returned_value = t2()
t3(returned_value)
</code></pre>
<p>will fail due to syntax error (<code>>></code> operator cannot be used before a returning-value task).</p>
<p>Also, this would work, but it does not generate the required serial (t1 >> t2 >> t3) DAG:</p>
<pre><code>t1()
returned_value = t2()
t3(returned_value)
</code></pre>
<p>since then t1 and t2/t3 will run in parallel.</p>
<p>A way to make this is to force t2 to use a returned value from t1, even if not needed:</p>
<pre><code>returned_fake_t1 = t1()
returned_value_t2 = t2(returned_fake_t1)
t3(returned_value_t2)
</code></pre>
<p>But this is not elegant, since it changes the task logic.
What is the idiomatic way to express this in the Taskflow API? I was not able to find such a scenario in the Airflow documentation.</p>
|
<python><airflow><airflow-2.x><airflow-taskflow>
|
2023-03-17 13:23:39
| 2
| 1,849
|
xmar
|
75,768,039
| 7,848,173
|
Plotting both x and y messes up y values in graph
|
<p>Plotting with just the y plots a graph exactly as intended but I want to use another column from the same dataframe for the x-axis. Doing so messes up my y plots though. I can't figure out why.</p>
<p>Here's the section of the code in question:</p>
<pre><code>temps = df['temps'] # floats
hours = df['timestamps'] # ints
fig, ax = plt.subplots()
ax.plot(hours, temps)
</code></pre>
<p><a href="https://i.sstatic.net/zkMXm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zkMXm.png" alt="enter image description here" /></a></p>
<p>If I leave out <code>hours</code> and just do <code>ax.plot(temps)</code>, it looks right.</p>
<p><a href="https://i.sstatic.net/ZYlVL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZYlVL.png" alt="enter image description here" /></a></p>
<p>Both <code>hours</code> and <code>temps</code> are the same length. How do I get the <code>hours</code> as the x-axis values without ruining my y-plot?</p>
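A common cause (an assumption here, since the data isn't shown): matplotlib connects points in the order given, so if `hours` isn't monotonically increasing the line doubles back on itself and looks scrambled. Sorting both columns by the x values first usually fixes it; a sketch with hypothetical stand-in data, leaving the plotting call commented:

```python
import numpy as np

# Hypothetical stand-ins for df['timestamps'] and df['temps'].
hours = np.array([3, 1, 2, 0])
temps = np.array([30.0, 10.0, 20.0, 5.0])

order = np.argsort(hours)      # permutation that makes the x values ascend
hours_sorted = hours[order]
temps_sorted = temps[order]
# ax.plot(hours_sorted, temps_sorted) then draws strictly left to right.
```

With a DataFrame, `df.sort_values('timestamps')` before plotting achieves the same thing.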
|
<python><pandas><matplotlib>
|
2023-03-17 13:17:01
| 1
| 386
|
breakthatbass
|
75,768,022
| 6,241,997
|
Converting 7 digit dates to normal calendar dates in Databricks python
|
<p>I am generating data using TPC-DS.</p>
<p>I load the customers table to a dataframe. The <code>c_first_sales_date_sk</code> column has values such as <code>2449001</code>, which makes me think they are Julian calendar dates of type <code>yyyyDD</code>.</p>
<p>So far I have tried:</p>
<pre><code>from pyspark.sql.functions import to_date, from_unixtime
df_with_date = df.withColumn("c_first_sales_date", to_date(col("c_first_sales_date_sk"), format="yyyyDDD"))
display(df_with_date)
</code></pre>
<p>Applying this, it will convert <code>2449001</code> to <code>2449-01-01</code>, which is wrong. The online convert at <a href="http://www.longpelaexpertise.com/toolsJulian.php" rel="nofollow noreferrer">http://www.longpelaexpertise.com/toolsJulian.php</a> converts the same date to <code>01-Jan-2024</code>.</p>
<p>What am I doing wrong? How do I convert this column properly?</p>
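Not from the question, but one common interpretation: TPC-DS surrogate keys like <code>c_first_sales_date_sk</code> are plain Julian day numbers (a running day count), not <code>yyyyDDD</code> strings, so a <code>to_date</code> pattern cannot decode them. A stdlib sketch of that conversion, under the assumption that the JDN reading is right (JDN 2440588 corresponds to 1970-01-01):

```python
from datetime import date, timedelta

UNIX_EPOCH_JDN = 2440588  # Julian day number corresponding to 1970-01-01

def jdn_to_date(jdn: int) -> date:
    # A Julian day number is just a running day count, so conversion is a
    # single timedelta from a known anchor date.
    return date(1970, 1, 1) + timedelta(days=jdn - UNIX_EPOCH_JDN)

converted = jdn_to_date(2449001)  # a value from the question's column
```

Under this reading, 2449001 falls in January 1993, which is inside the range the TPC-DS date dimension covers. A Spark-side equivalent might be something along the lines of `F.expr("date_add('1970-01-01', c_first_sales_date_sk - 2440588)")`, though that exact expression is an untested assumption here.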
|
<python><date><datetime><azure-databricks><julian-date>
|
2023-03-17 13:15:03
| 1
| 1,633
|
crystyxn
|
75,767,800
| 7,074,716
|
How to draw a weighted bidirectional graph with at least 3 edges going from one node to the central node?
|
<p>I would like to draw the following graph using networkx with random edge weights.
<a href="https://i.sstatic.net/HKYZE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HKYZE.png" alt="enter image description here" /></a></p>
<p>The point is that at least 3 edges have to go from nodes 2 and 3 to node 1.</p>
<p>I followed the code from <a href="https://stackoverflow.com/questions/22785849/drawing-multiple-edges-between-two-nodes-with-networkx">here</a> and got the following:</p>
<pre><code>G = nx.DiGraph()
edge_list = [(1,2,{'w':'A1'}),(2,1,{'w':'A2'}), (2, 1, {'w':'A3'}), (1,3,{'w':'B'}),(3,1,{'w':'C'})]
G.add_edges_from(edge_list)
pos=nx.spring_layout(G,seed=5)
fig, ax = plt.subplots()
nx.draw_networkx_nodes(G, pos, ax=ax)
nx.draw_networkx_labels(G, pos, ax=ax)
curved_edges = [edge for edge in G.edges() if reversed(edge) in G.edges()]
straight_edges = list(set(G.edges()) - set(curved_edges))
nx.draw_networkx_edges(G, pos, ax=ax, edgelist=straight_edges)
arc_rad = 0.25
nx.draw_networkx_edges(G, pos, ax=ax, edgelist=curved_edges, connectionstyle=f'arc3, rad = {arc_rad}')
</code></pre>
<p><a href="https://i.sstatic.net/PxoHJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PxoHJ.png" alt="enter image description here" /></a></p>
<p>The first issue is that all the edges have to be undirected or at least bidirectional.
Secondly, I added one more edge from 2 to 1 but it does not show up in the final graph.</p>
<p>How do I fix these two issues?</p>
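On the second issue, a hedged guess: <code>nx.DiGraph</code> stores at most one edge per ordered pair, so the second <code>(2, 1)</code> edge silently replaces the first. A <code>MultiDiGraph</code> keeps parallel edges:

```python
import networkx as nx

edge_list = [(1, 2, {'w': 'A1'}), (2, 1, {'w': 'A2'}), (2, 1, {'w': 'A3'}),
             (1, 3, {'w': 'B'}), (3, 1, {'w': 'C'})]

G = nx.DiGraph()
G.add_edges_from(edge_list)        # the duplicate (2, 1) edge is merged away

M = nx.MultiDiGraph()
M.add_edges_from(edge_list)        # parallel edges are preserved
```

Drawing the parallel edges distinctly then needs a different arc radius per edge so they don't overlap; how best to feed those to `draw_networkx_edges` depends on the networkx version.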
|
<python><graph><networkx>
|
2023-03-17 12:50:31
| 1
| 825
|
Curious
|
75,767,773
| 9,537,439
|
Anaconda + VSCode vs Desktop VSCode alone
|
<p><a href="https://stackoverflow.com/questions/54894897/desktop-vscode-vs-anaconda-vscode">Here</a> it says that the only difference between the two is that the Anaconda version comes with the Anaconda extension pack out-of-the-box. However, I have been working on a project with both of them and I have seen some stark differences, and I'm wondering if somebody has an explanation for them (I work with Jupyter notebooks mostly, as I'm a Data Scientist, if that information matters).</p>
<p>So, I have a folder that represents the whole project. Inside that folder I have multiple notebooks, datasets, visualizations, etc. I'm working with one of those notebooks specifically.</p>
<ol>
<li><p>Yesterday I opened that notebook with VSCode from Anaconda (let's call it A-VSCode) and worked normally; at the end of the day I saved the changes. No problem. However, today I opened the VSCode Desktop (let's call it D-VSCode) and it didn't have any of the work I did yesterday! Since it was the same notebook in the same folder, I was scared as hell because I thought I had lost a full day of work. However, when I open the same notebook with A-VSCode the whole work is there. <strong>So, if that's the same notebook, how does A-VSCode (or D-VSCode?) keep a different version? Where is that version saved?</strong></p>
</li>
<li><p>When I try to use the terminal, A-VSCode shows a different path than D-VSCode. Why? And why does A-VSCode automatically activate the base environment if I have a specific environment for that notebook?</p>
</li>
</ol>
<p>A-VSCode terminal:</p>
<pre><code>PS E:\all_my_projects\project_A> E:/anaconda3/Scripts/activate
PS E:\all_my_projects\project_A> conda activate base
PS E:\all_my_projects\project_A>
</code></pre>
<p>D-VSCode terminal:</p>
<pre><code>PS E:\all_my_projects\project_A>
</code></pre>
|
<python><visual-studio-code><anaconda>
|
2023-03-17 12:47:50
| 1
| 2,081
|
Chris
|
75,767,630
| 14,045,537
|
Numpy np.where condition with multiple columns
|
<p>I have a dataframe</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.DataFrame({"col1": [0, 1, 1, 1,1, 0],
"col2": [False, True, False, False, True, False]
})
data
</code></pre>
<p>I'm trying to create a column <code>col3</code> that is <strong>1</strong> where <code>col1==1</code> and <code>col2==True</code>, else <strong>0</strong>.</p>
<p>Using <code>np.where</code>:</p>
<pre><code>data.assign(col3=np.where(data["col1"]==1 & data["col2"], 1, 0))
</code></pre>
<pre><code>col1 col2 col3
0 0 False 1
1 1 True 1
2 1 False 0
3 1 False 0
4 1 True 1
5 0 False 1
</code></pre>
<p>For the first row: col1==0 and col2==False, but I'm getting col3 as 1.</p>
<p>What am I missing??</p>
<p>The desired output:</p>
<pre><code>
col1 col2 col3
0 0 False 0
1 1 True 1
2 1 False 0
3 1 False 0
4 1 True 1
5 0 False 0
</code></pre>
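The likely culprit is operator precedence: in Python, `&` binds tighter than `==`, so `data["col1"]==1 & data["col2"]` is evaluated as `data["col1"] == (1 & data["col2"])`, which reproduces exactly the wrong column shown above. Parenthesizing each comparison restores the intended mask; a sketch:

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({"col1": [0, 1, 1, 1, 1, 0],
                     "col2": [False, True, False, False, True, False]})

# Parentheses around each comparison, because & has higher precedence than ==.
data = data.assign(col3=np.where((data["col1"] == 1) & data["col2"], 1, 0))
```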
|
<python><pandas><numpy>
|
2023-03-17 12:32:16
| 1
| 3,025
|
Ailurophile
|
75,767,562
| 12,436,050
|
Filter triples from a turtle file using SPARQL
|
<p>I have the following triples in my turtle file on which I would like to apply a regex to filter out the last triples.</p>
<pre><code>ttl file:
### https://rmswi/#/lists/100000000001
<https://rmswi/#/lists/100000000001> rdf:type owl:Class ;
rdfs:label "Age Range" .
### https://rmswi/#/lists/100000000001/terms/100000000029
<https://rmswi/#/lists/100000000001/terms/100000000029> rdf:type owl:Class ;
rdfs:subClassOf <https://rmswi/#/lists/100000000001> ;
<http://purl.obolibrary.org/obo/IAO_0000115> "Any human before birth." ;
<http://www.geneontology.org/formats/oboInOwl#hasExactSynonym> "Fetus" ,
"Foetus" ,
"In utero" ;
rdfs:label "In utero" ;
<https://ontology/properties/Domain> "https://rmswi/#/lists/100000000004/terms/100000000012" ;
<https://ontology/properties/Term_Status> "CURRENT" .
</code></pre>
<p>I have the following SPARQL query to extract all the triples (with <code>rdfs:label</code> as predicate).</p>
<pre><code>query = """
prefix oboInOwl: <http://www.geneontology.org/formats/oboInOwl#>
prefix obo: <http://purl.obolibrary.org/obo/>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT distinct ?s ?p ?o
WHERE {
?s rdfs:label ?o ;
FILTER (strstarts(str(?s), 'https://rmswi/#/lists/')) .
FILTER REGEX(?s, 'https:\/\/#\/lists\/\d+$' )
}
"""
qres = g.query(query)
for row in qres:
print (row)
</code></pre>
<p>The expected output is:</p>
<pre><code>ttl file:
(rdflib.term.URIRef('https://#/lists/100000000001'), None, rdflib.term.Literal('Age Range'))
</code></pre>
<p>Any help is highly appreciated</p>
|
<python><sparql><turtle-rdf>
|
2023-03-17 12:24:27
| 0
| 1,495
|
rshar
|
75,767,506
| 5,852,692
|
Generating n-random float values between -x and y summed up to a predefined value
|
<p>I would like to generate n random float values constrained by lower and upper bounds, whose total should be 0 or a predefined value.</p>
<p>For example, I want to generate 4 float values between -10.0 and 2.0 with total=0; then the following values would work:</p>
<pre><code>[-1.0, -1.5, 1.75, 1.75]
</code></pre>
<p>or 5 floats between -10.0 and 1.0, and total=1.0 then:</p>
<pre><code>[0.5, -0.5, 0.0, 0.0, 1.0]
</code></pre>
<p>Is there any generic way or a good distribution method via numpy, or a custom function via vanilla Python? If, because of the constraints, these numbers cannot sum to the predefined value, then the function could raise an error, like:</p>
<p>3 floats between -2.0 and -1.0, total=1.0:</p>
<pre><code>raise ValueError
</code></pre>
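One simple approach, as a sketch (rejection sampling; the helper name is made up, and it can be slow when the constraints are tight):

```python
import random

def bounded_random_sum(n, low, high, total, max_tries=10_000):
    # The total is only reachable if it lies between n*low and n*high.
    if not (n * low <= total <= n * high):
        raise ValueError("total is not reachable with these bounds")
    for _ in range(max_tries):
        # Sample n-1 values freely, then force the last one to hit the total.
        values = [random.uniform(low, high) for _ in range(n - 1)]
        last = total - sum(values)
        if low <= last <= high:
            return values + [last]
    raise RuntimeError("could not satisfy the constraints")

vals = bounded_random_sum(4, -10.0, 2.0, 0.0)
print(vals, sum(vals))
```

Note that fixing only the last element skews the distribution slightly; for a uniform distribution over the constrained simplex a more involved method would be needed.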
|
<python><numpy><random><distribution>
|
2023-03-17 12:18:19
| 2
| 1,588
|
oakca
|
75,767,468
| 1,021,819
|
In pandas, how can I select the second row, unless there is only one row (in which case that row should be returned)?
|
<p>I have a dataframe with <code>n</code> rows.</p>
<p>I want to select the second row, except when there is only one row I want that row.</p>
<p>Here are some dummy data:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df=pd.DataFrame(range(20,28),columns=["day"])
df.sort_values("day",ascending=True,inplace=True)
df
</code></pre>
<p>Things I have tried:</p>
<pre class="lang-py prettyprint-override"><code>df.head(2).tail(1)
</code></pre>
<p>This fails for <code>n</code> = 1 - e.g.</p>
<pre class="lang-py prettyprint-override"><code>df.head(1).iloc[1]
</code></pre>
<blockquote>
<p>IndexError: single positional indexer is out-of-bounds</p>
</blockquote>
<p>Thanks as ever</p>
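One defensive option, as a sketch: cap the positional index with `min`, so a one-row frame falls back to its only row (the helper name is made up):

```python
import pandas as pd

def second_or_only(df):
    # min() caps the positional index at the last existing row, so a
    # one-row frame returns its single row instead of raising IndexError.
    return df.iloc[min(1, len(df) - 1)]

df = pd.DataFrame(range(20, 28), columns=["day"])
print(second_or_only(df)["day"])           # 21 (the second row)
print(second_or_only(df.head(1))["day"])   # 20 (the only row)
```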
|
<python><pandas>
|
2023-03-17 12:15:04
| 5
| 8,527
|
jtlz2
|
75,767,309
| 1,901,071
|
Python Speeding Up a Loop for text comparisons
|
<p>I have a loop which compares street addresses.
It uses fuzzy matching to tokenise the addresses and compare them; I have tried this with both <code>fuzzywuzzy</code> and <code>rapidfuzz</code>.
It then returns how close the match is.
The aim is to take all my street addresses (30k or so) and match each variation of a street address to a structured street address in my dataset.
The end result would be a reference table with two columns:</p>
<ul>
<li>Column A is the reference column address</li>
<li>Column B is an address where the match is good enough to be associated with column A</li>
<li>Column A can have many associated addresses.</li>
</ul>
<p>I am not a huge python user but do know that for loops are the last resort for most problems (<a href="https://stackoverflow.com/questions/45670242/loop-through-dataframe-one-by-one-pandas">third answer</a>). With that in mind, I have used for loops. However, my loops will take approx. 235 hours, which is sub-optimal to say the least. I have created a reproducible example below. Can anyone see where I can make any tweaks? I have added a progress bar to give you an idea of the speed. You can increase the number of addresses by changing the line <code>for _ in range(20):</code></p>
<pre><code>import pandas as pd
from tqdm import tqdm
from faker import Faker
from rapidfuzz import process, fuzz
# GENERATE FAKE ADDRESSES FOR THE REPRODUCIBLE EXAMPLE -----------------------------------------------
fake = Faker()
fake_addresses = pd.DataFrame()
for _ in range(20):
# Generate fake address
d = {'add':fake.address()}
df = pd.DataFrame(data = [d])
# Append it to the addresses dataframe
fake_addresses = pd.concat([fake_addresses, df])
fake_addresses = fake_addresses.reset_index(drop=True)
# COMPARE ADDRESSES ---------------------------------------------------------------------------------
# Here we are making a "dictionary" of the addresses where we use left side as a reference address
# We use the right side as all the different variations of the address. The addresses have to be
# 0% similar. Normally this is 95% similarity
reference = fake_addresses['add'].drop_duplicates()
ref_addresses = pd.DataFrame()
# This takes a long time. I have added tqdm to show how long when the number of addresses is increased dramatically
for address in tqdm(reference):
for raw_address in reference:
result = fuzz.token_sort_ratio(address, raw_address)
d = {'reference_address': address,
'matched_address': raw_address,
'matched_result': result}
df = pd.DataFrame(data = [d])
if len(df.index) > 0:
filt = df['matched_result'] >= 0
df = df.loc[filt]
ref_addresses = pd.concat([ref_addresses, df], ignore_index=True)
else:
ref_addresses = ref_addresses
</code></pre>
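One likely source of the slowdown, as a sketch: `pd.concat` inside the loop copies the whole frame on every iteration, which is quadratic in the number of rows. Collecting plain dicts and building one DataFrame at the end avoids that. Here `difflib.SequenceMatcher` stands in for `fuzz.token_sort_ratio` (an assumption, to keep the example self-contained):

```python
import difflib
import pandas as pd

def build_reference(addresses, threshold=0):
    # Collect plain dicts and build ONE DataFrame at the end instead of
    # concatenating a one-row frame per pair.
    rows = []
    for address in addresses:
        for raw_address in addresses:
            # Stand-in scorer; swap in fuzz.token_sort_ratio in real code.
            score = 100 * difflib.SequenceMatcher(None, address, raw_address).ratio()
            if score >= threshold:
                rows.append({"reference_address": address,
                             "matched_address": raw_address,
                             "matched_result": score})
    return pd.DataFrame(rows)

ref = build_reference(["1 Main St", "1 Main Street", "42 Oak Ave"])
print(ref)
```

For a further speedup, rapidfuzz can score all pairs at once with `process.cdist`, which removes the inner Python loop entirely.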
|
<python><pandas><loops><fuzzywuzzy><rapidfuzz>
|
2023-03-17 11:53:49
| 3
| 2,946
|
John Smith
|
75,767,083
| 17,795,398
|
SQLite: how to concatenate JOIN
|
<p>I have three tables in my database, related to physical exercises:</p>
<pre><code>exercises(id, name)
tags(id, name)
exercises_tags(id_exercise, id_tag)
</code></pre>
<p>The <code>exercises_tags</code> table stores which tags corresponds to which exercise. One exercise can have multiple tags.</p>
<p>Example:</p>
<pre><code>exercises(id, name):
(1, 'squat')
tags(id, name)
(1, 'legs')
(2, 'quads')
    (3, 'triceps')
exercises_tags(id_exercise, id_tag)
(1, 1)
(1, 2)
</code></pre>
<p>I want a query that, for a given exercise name, it returns the corresponding tag names, in the example input: <code>squat</code>; output = <code>['legs', 'quads']</code> . I know how to get the exercise <code>id</code> in a query and the corresponding <code>id_tag</code> in another query, but I don't know how to put all together and get the final result. I think I have to use <code>JOIN</code>, but I don't know how to perform two <code>JOIN</code>s.</p>
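A sketch of the two chained JOINs through the junction table, runnable against an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE exercises (id INTEGER, name TEXT);
    CREATE TABLE tags (id INTEGER, name TEXT);
    CREATE TABLE exercises_tags (id_exercise INTEGER, id_tag INTEGER);
    INSERT INTO exercises VALUES (1, 'squat');
    INSERT INTO tags VALUES (1, 'legs'), (2, 'quads'), (3, 'triceps');
    INSERT INTO exercises_tags VALUES (1, 1), (1, 2);
""")

# Chain two JOINs: exercise -> junction row -> tag.
cur.execute("""
    SELECT t.name
    FROM exercises e
    JOIN exercises_tags et ON et.id_exercise = e.id
    JOIN tags t ON t.id = et.id_tag
    WHERE e.name = ?
    ORDER BY t.id
""", ("squat",))
tags = [row[0] for row in cur.fetchall()]
print(tags)  # ['legs', 'quads']
```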
|
<python><sql><list><sqlite><inner-join>
|
2023-03-17 11:30:19
| 1
| 472
|
Abel Gutiérrez
|
75,766,931
| 5,390,316
|
How to uniformize dictionary keys casing in Python
|
<p>I have a consumer lambda that consumes data from two different services, which provide the "same-ish" responses that I'll need to reconcile, for instance:</p>
<p>Response from service A: (There will be N items in each array, showing one as example).</p>
<pre class="lang-json prettyprint-override"><code>[{
"property_type": "something",
"property": "something_nah",
"property_context": {
"some_thing": "nasty"
},
"value": null,
"priority": 42
}]
</code></pre>
<p>Response from service B:</p>
<pre class="lang-json prettyprint-override"><code>[{
"propertyType": "something",
"propertyId": "something_different",
"propertyContext": {
"someThing": "even more nasty",
"anotherThing": "Unlimited power ... nested items"
  },
"value": null,
"priority": 12
}]
</code></pre>
<p>Now, I need to create a single list with those responses, order and filter based on their <code>property_type</code>/<code>propertyType</code>, context and Id. Basically, I need to sanitise/uniformize the casing on the property names.</p>
<p>Being a TypeScript/C# developer, I can imagine some ways of doing this, but all of them involve iterating over each key recursively to fix the casing. However, I heard there's something called the "Pythonic" way of doing things that could help me here. So here's the question:</p>
<p>How can I uniformize the property name casing on a <code>requests</code> (lib that does http calls) response?</p>
<p>For instance, transform the response from service A above into this (matching response from service B):</p>
<pre class="lang-json prettyprint-override"><code>[{
"propertyType": "something",
"propertyId": "something_nah",
"propertyContext": {
"someThing": "nasty"
},
"value": null,
"priority": 42
}]
</code></pre>
<p><a href="https://stackoverflow.com/questions/60148175/convert-all-keys-in-a-nested-dictionary-from-camelcase-to-snake-case">This</a> Seems like a solution but I'm looking for something that <strong>does not enumerate</strong> all the responses (or, at least, postpone it), as they're usually many and I handle tons of them.</p>
<p>I was thinking about something like a special type casting on the <code>requests</code> side, that I could cast it to a type and, I don't know, when serialising such type it would use a different casing on the serialisation, or a Class that receives the response as dict and has getters that do the casing change.</p>
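A minimal eager sketch (it does enumerate every key, which the question hopes to avoid; the helper names are made up, and only the casing is handled, not renames like <code>property</code> → <code>propertyId</code>):

```python
import re

def snake_to_camel(name):
    # "property_type" -> "propertyType"
    return re.sub(r"_([a-z0-9])", lambda m: m.group(1).upper(), name)

def camelize(obj):
    # Walk dicts and lists recursively, renaming every dict key.
    if isinstance(obj, dict):
        return {snake_to_camel(k): camelize(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [camelize(item) for item in obj]
    return obj

response_a = [{"property_type": "something",
               "property_context": {"some_thing": "nasty"},
               "value": None, "priority": 42}]
print(camelize(response_a))
```

To postpone the enumeration, the same key translation could instead live in a wrapper class's `__getitem__`/`__getattr__`, so keys are translated lazily on access.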
|
<python><python-requests><casing>
|
2023-03-17 11:15:24
| 0
| 879
|
Eduardo Elias Saléh
|
75,766,873
| 6,733,421
|
Celery - No matching distribution found after pip update
|
<p>I am unable to install celery/kombu. When installing, I get the following error.</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement kombu<6.0,>=5.2.3 (from celery) (from versions: none)
ERROR: No matching distribution found for kombu<6.0,>=5.2.3
</code></pre>
<hr />
<p>I am using:</p>
<ul>
<li>Python: 3.8</li>
<li>Pip: 23.0.1</li>
</ul>
|
<python><pip><celery><kombu>
|
2023-03-17 11:07:26
| 1
| 989
|
Sai Chander
|
75,766,842
| 724,395
|
I need to create a linear color map with a color for "errors"
|
<p>I need to create a color map that behaves linearly between 0 and 1 on a grey scale.
Then I also want any number outside the range [0,1] to be drawn red.</p>
<p>Would it be possible?</p>
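A sketch using Matplotlib's under/over colours (assumes Matplotlib >= 3.5 for the `matplotlib.colormaps` registry):

```python
import matplotlib

# Copy the built-in grey colormap (linear black-to-white on [0, 1]) and
# register red as the colour for anything outside the normalised range.
cmap = matplotlib.colormaps["gray"].copy()
cmap.set_under("red")
cmap.set_over("red")

# Calling the colormap directly shows the effect:
print(cmap(0.5))   # a mid-grey RGBA tuple
print(cmap(1.5))   # (1.0, 0.0, 0.0, 1.0) -> red
```

When plotting, pass the range explicitly and leave clipping off, e.g. `plt.imshow(data, cmap=cmap, vmin=0, vmax=1)`, so out-of-range values hit the under/over colours.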
|
<python><matplotlib><colormap>
|
2023-03-17 11:05:12
| 1
| 10,402
|
Aslan986
|
75,766,380
| 7,479,675
|
Cannot upload YouTube video with thumbnails and tags
|
<p>This is the code that I used to upload a video with its title, description, thumbnails, and tags. While the video and its title and description were successfully uploaded, the thumbnails and tags were not. As a result, the video has no thumbnails and tags</p>
<pre><code>import sys
import os
import base64
import io
import google_auth_oauthlib.flow
import googleapiclient.discovery
import googleapiclient.errors
from google.oauth2.credentials import Credentials
from googleapiclient.errors import HttpError
from googleapiclient.http import MediaFileUpload
from oauth2client.file import Storage
from google.auth.transport.requests import Request
# Define scopes for authorization
scopes = ["https://www.googleapis.com/auth/youtube.upload"]
CLIENT_SECRETS_FILE = "client_secrets.json"
VIDEO_FILE = "video.mkv"
CREDENTIALS_FILE = "credentials.json"
def get_credentials():
creds = None
if os.path.exists(CREDENTIALS_FILE):
# Load existing credentials from the storage file
with open(CREDENTIALS_FILE, 'r') as f:
creds = Credentials.from_authorized_user_info(eval(f.read()))
if not creds or not creds.valid:
# If there are no (valid) credentials available, let the user log in.
flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file(
CLIENT_SECRETS_FILE, scopes)
creds = flow.run_local_server(port=0)
# Save the credentials for the next run
with open(CREDENTIALS_FILE, 'w') as f:
f.write(str(creds.to_json()))
return creds
def main():
# Disable OAuthlib's HTTPS verification when running locally.
# *DO NOT* leave this option enabled in production.
os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"
# Get credentials and create an API client
credentials = get_credentials()
youtube = googleapiclient.discovery.build(
"youtube", "v3", credentials=credentials)
# Define request body for video resource
request_body = {
"snippet": {
"title": "My awesome video",
"description": "This is a video that I made for YouTube.",
"thumbnails": {
"maxres": {
"url": "https://iili.io/HXD7zTg.jpg"
}
},
"tags": ["youtube", "video", "awesome"],
"categoryId": "22"
},
"status": {
"privacyStatus": "public"
}
}
media_type = 'video/*'
# Upload the video
request = youtube.videos().insert(
part="snippet,status",
body=request_body,
media_body=googleapiclient.http.MediaFileUpload(VIDEO_FILE,
mimetype=media_type,
resumable=True)
)
response = request.execute()
if __name__ == "__main__":
main()
</code></pre>
<p>I am using the JSON structure format for video uploading from the following link: <a href="https://developers.google.com/youtube/v3/docs/videos" rel="nofollow noreferrer">https://developers.google.com/youtube/v3/docs/videos</a>.</p>
<p><em>maxres - The highest resolution version of the thumbnail image. This image size is available for some videos and other resources that refer to videos, like playlist items or search results. This image is 1280px wide and 720px tall.</em></p>
<p>The size of the image is 1280x720px</p>
<p>What is wrong with this code? Help find the error.</p>
|
<python><youtube><youtube-api><youtube-data-api>
|
2023-03-17 10:15:33
| 1
| 392
|
Oleksandr Myronchuk
|
75,766,250
| 5,952,166
|
Pass Ipython / Jupyter variables to Powershell commands
|
<p>When on Linux (bash) to list the contents of a directory in Jupyter I can simply do:</p>
<pre class="lang-py prettyprint-override"><code>path = './foo/bar'
%ls {path}
</code></pre>
<p>I could not find a way to achieve the same when running Jupyter on Windows (powershell). Is there one?</p>
|
<python><windows><jupyter-notebook><jupyter><ipython>
|
2023-03-17 10:05:35
| 1
| 776
|
ezatterin
|
75,766,189
| 5,890,300
|
How to show column names of Pyspark joined DataFrame with dataframe aliases?
|
<p>Let's say we join 2 dataframes in pyspark, each one has its alias and they have the same columns:</p>
<pre><code> joined_df = source_df.alias("source").join(target_df.alias("target"), \
(col("source.A_column") == col("target.A_column")), 'outer')
</code></pre>
<p>How can I get a list of column names of <em><strong>joined_df</strong></em> dataframe with aliases from dataframes, something like:</p>
<blockquote>
<p>[source.A_column, target.A_column, source.B_column, target.B_column, source.C_column, target.C_column]</p>
</blockquote>
<p>The names show up in an analysis exception, so the information is obviously stored somewhere, but I haven't found a way to show it without triggering the exception...</p>
<p>What I tried:</p>
<ol>
<li>Get the names as above with some direct method or property, but there is no such thing as a <em>df.columns_with_alias</em> property</li>
<li>Get the list of columns from the DataFrame as instances of the Column class (because this class stores the alias info), but df.columns just gives you strings... and I found no other way.</li>
</ol>
<p>Is there any way to show these column names?</p>
|
<python><dataframe><pyspark>
|
2023-03-17 10:00:49
| 2
| 566
|
Lohi
|
75,765,862
| 8,844,500
|
Fast way of getting all positions of the elements of a sub-list
|
<p>I'm trying to obtain all positions of a sub-list of elements taken from a big list.</p>
<p>In Python, using numpy, say I have</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime as dt
import numpy as np
from numba import jit, int64
n, N = 20, 120000
int_vocabulary = np.array(range(N))
np.random.shuffle(int_vocabulary) # to make the problem non-trivial
int_sequence = np.random.choice(int_vocabulary, n, replace=False)
</code></pre>
<p>and I want to get the positions that the integers in <code>int_sequence</code> have in the big array <code>int_vocabulary</code>. I'm interested in fast computation.</p>
<p>So far I've tried using numba brute force research, numpy mask approach, list comprehension brute force (for baseline), and list comprehension and numpy mask mixing.</p>
<pre class="lang-py prettyprint-override"><code>@jit(int64[:](int64[:], int64[:], int64, int64))
def check(int_sequence, int_vocabulary, n, N):
all_indices = np.full(n, N)
for xi in range(n):
for i in range(N):
if int_sequence[xi] == int_vocabulary[i]:
all_indices[xi] = i
return all_indices
t0 = dt.now()
for _ in range(10):
all_indices0 = check(int_sequence, int_vocabulary, n, N)
t0 = (dt.now() - t0).total_seconds()
print("numba : ", t0)
t0 = dt.now()
for _ in range(10):
mask = np.full(len(int_vocabulary), False)
for x in int_sequence:
mask += int_vocabulary == x
all_indices1 = np.flatnonzero(mask)
t0 = (dt.now() - t0).total_seconds()
print("numpy :", t0)
t0 = dt.now()
for _ in range(10):
all_indices2 = np.array([i for i, x in enumerate(int_vocabulary)
if x in int_sequence])
t0 = (dt.now() - t0).total_seconds()
print("list comprehension : ", t0)
t0 = dt.now()
for _ in range(10):
mask = np.sum(np.array([int_vocabulary == x for x in int_sequence]), axis=0)
all_indices3 = np.flatnonzero(mask)
t0 = (dt.now() - t0).total_seconds()
print("mixed numpy + list comprehension : ", t0)
assert np.sum(all_indices0) == np.sum(all_indices1)
assert np.sum(all_indices1) == np.sum(all_indices2)
assert np.sum(all_indices2) == np.sum(all_indices3)
</code></pre>
<p>each time I do the calculation 10 times to get comparable statistics. The outcome is</p>
<pre class="lang-bash prettyprint-override"><code>numba : 0.028039
numpy : 0.011616
list comprehension : 3.116753
mixed numpy + list comprehension : 0.032301
</code></pre>
<p>I nevertheless wonder whether there is a faster algorithm for this problem.</p>
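A likely faster approach, as a sketch: sort the vocabulary once (O(N log N)), then binary-search each query (O(n log N)) instead of scanning the whole vocabulary per query. Note it returns positions in query order, unlike the mask-based variants, which return them sorted:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 20, 120_000
int_vocabulary = rng.permutation(N)
int_sequence = rng.choice(int_vocabulary, n, replace=False)

# argsort once, then searchsorted with sorter= does a binary search per
# query; order[...] maps sorted positions back to original indices.
order = np.argsort(int_vocabulary)
positions = order[np.searchsorted(int_vocabulary, int_sequence, sorter=order)]

print(np.array_equal(int_vocabulary[positions], int_sequence))  # True
```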
|
<python><numpy><numba>
|
2023-03-17 09:31:11
| 1
| 329
|
FraSchelle
|
75,765,499
| 10,722,169
|
Convert rows containing certain characters to columns in Python
|
<p>I've a pandas dataframe which contains data like this:-</p>
<pre><code>name value
Data size Building name
Data size Empire State
Data size Petronas Tower
Data size Eiffel Tower
Data size USA
Data size 20
Data size 24
Data size 32
Data size Brazil
Data size 38
Data size 42
Data size 87
Data size France
Data size 37
Data size 43
Data size 18
</code></pre>
<p>I want to convert this data into this form:-</p>
<pre><code>Building Name Country Name Data Sizes
Empire State USA 20
Empire State Brazil 38
Empire State France 37
Petronas Tower USA 24
Petronas Tower Brazil 42
Petronas Tower France 43
Eiffel Tower USA 32
Eiffel Tower Brazil 87
Eiffel Tower France 18
</code></pre>
<p>I tried the <code>unstack()</code> method, but to no avail.
It'd be great if someone could help me figure this out.</p>
|
<python><pandas><dataframe>
|
2023-03-17 08:54:26
| 1
| 435
|
vesuvius
|
75,765,430
| 5,308,851
|
Share test verifier in various test modules with pytest
|
<p>I'm completely new to python testing and currently run into the following problem using pytest:</p>
<p>I have around 20 different types of objects (Let's call them A, B, C, ...) and I want to create a separate test-module (test_a, test_b, ...) for each of them. After playing around with the tests I found out that the final verification function is identical for each object, so I refactored the code into a class and put this into a separate module (let's call it utils.py):</p>
<pre><code>import pytest
class Verifyer:
def __init__(self) -> None:
# Lots of state variables, limited to 2 for simplicity
self.obj1 = None
self.obj2 = None
def verify(self, line_1: str, line_2: str):
# Lots of comparisons based on internal state ...
assert line_1 == "Hello, World"
assert line_2 == "Goodbye, World"
@pytest.fixture
def verifier():
return Verifyer()
</code></pre>
<p>In my test module (here only for test_a.py) I can then do the following:</p>
<pre><code>from .utils import verifier
def test_a_essential(verifier):
line_1 = '*A XYZ %'
line_2 = '*A XYZ %'
verifier.verify(line_1, line_2)
</code></pre>
<p>In principle, this all works, but as soon as a test fails, I do not get any detailed error information (even with <code>-vv</code>). If the string comparison fails, I only see the following message:</p>
<pre><code> def verify(self, line_1: str, line_2: str):
# Lots of comparisons based on internal state ...
> assert line_1 == "Hello, World"
E AssertionError
</code></pre>
<p>As soon as I inline the content of utils.py into test_a.py the error-message looks like this:</p>
<pre><code>Expected :'Hello, World'
Actual :'*A XYZ %'
</code></pre>
<p>Is there a possible way to share the verify method among all 20+ test modules? If not, I can only copy+paste the code 20+ times or throw all tests for each object type into one big test module. Both options seem highly unattractive to me.</p>
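A likely explanation, as a sketch: pytest only rewrites `assert` statements (to produce the detailed Expected/Actual output) in test modules and registered plugins, not in plain helper modules. Registering the helper in a `conftest.py` may restore the introspection (the exact module path, e.g. a `tests.utils` package prefix, depends on your layout and is an assumption here):

```python
# conftest.py -- must run before `utils` is imported anywhere.
# pytest's assertion rewriting only applies to test modules and plugins by
# default; helper modules have to be registered explicitly.
import pytest

pytest.register_assert_rewrite("utils")
```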
|
<python><pytest>
|
2023-03-17 08:46:06
| 0
| 345
|
Markus Moll
|
75,765,316
| 4,473,615
|
Nested for loop in Python to get equivalent result
|
<p>I have two for loops to get the result as below.</p>
<p>First loop:</p>
<pre><code> for row1 in value1:
print(row1)
The result is:
A
B
C
</code></pre>
<p>Second loop:</p>
<pre><code> for row2 in value2:
print(row2)
The result is:
A
B
C
</code></pre>
<p>The result is expected as below using both the loops:</p>
<pre><code>A A
B B
C C
</code></pre>
<p>I have tried using nested loops:</p>
<pre><code>for row1 in value1:
for row2 in value2:
print(row1, row2)
The result is coming in as Cartesian values:
A A
A B
A C
B A
B B
B C
C A
C B
C C
</code></pre>
<p>What can I try next?</p>
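One way to get the pairwise result, as a sketch: `zip()` pairs the i-th element of each iterable instead of forming the Cartesian product that nested loops produce:

```python
value1 = ["A", "B", "C"]
value2 = ["A", "B", "C"]

# A single loop over zip() walks both sequences in lockstep.
pairs = []
for row1, row2 in zip(value1, value2):
    print(row1, row2)
    pairs.append((row1, row2))
```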
|
<python><python-3.x><for-loop>
|
2023-03-17 08:34:32
| 1
| 5,241
|
Jim Macaulay
|
75,765,235
| 3,723,197
|
Error when importing the processor library
|
<p>I am trying to use the <code>processor</code> library but whenever I do <code>import processor</code>, it gives me this error:</p>
<pre><code>File c:\users\souregi\appdata\local\programs\python\python38\lib\site-packages\hy\macros.py:67 in import
parse_tree = pattern.parse(args)
File c:\users\souregi\appdata\local\programs\python\python38\lib\site-packages\funcparserlib\parser.py:221 in parse
(tree, _) = self.run(tokens, State(0, 0, None))
File c:\users\souregi\appdata\local\programs\python\python38\lib\site-packages\funcparserlib\parser.py:383 in _shift
(v, s2) = self.run(tokens, s)
File c:\users\souregi\appdata\local\programs\python\python38\lib\site-packages\funcparserlib\parser.py:306 in _add
(v2, s3) = other.run(tokens, s2)
File c:\users\souregi\appdata\local\programs\python\python38\lib\site-packages\funcparserlib\parser.py:534 in finished
raise NoParseError("got unexpected token", s2)
NoParseError: got unexpected token: hy.models.List([
hy.models.Expression([
hy.models.Symbol('.'),
hy.models.Symbol('collections'),
hy.models.Symbol('abc')]),
hy.models.List([
hy.models.Symbol('Iterable')])]), expected: end of input
</code></pre>
<p>Steps to reproduce: <code>pip install processor</code> and then <code>import processor</code>. This error happens with Python 3.8 and 3.9.</p>
|
<python>
|
2023-03-17 08:25:11
| 1
| 3,273
|
Mehdi Souregi
|
75,765,233
| 14,045,537
|
Pandas drop duplicates based on one group and keep the last value
|
<p>I have a dataframe:</p>
<pre><code>import pandas as pd
data = pd.DataFrame({"col1": ["a", "a", "a", "a", "a", "a"],
"col2": [0,0,0,1,1, 1],
"col3": [1,2,3,4,5, 6]})
</code></pre>
<p><strong>data</strong></p>
<pre><code>
col1 col2 col3
0 a 0 1
1 a 0 2
2 a 0 3
3 a 1 4
4 a 1 5
5 a 1 6
</code></pre>
<p>I'm trying to remove the duplicates based on <code>col2 == 1</code> and keep the last entry</p>
<p>Using the below code I was able to keep the first and drop others.</p>
<pre><code>data[~(data.duplicated(["col2"]) & data.col2.eq(1))]
</code></pre>
<pre><code>col1 col2 col3
0 a 0 1
1 a 0 2
2 a 0 3
3 a 1 4
</code></pre>
<p>How to remove duplicates based on one category in a column and keep the last entry?</p>
<p><strong>Desired Output</strong></p>
<pre><code> col1 col2 col3
0 a 0 1
1 a 0 2
2 a 0 3
3 a 1 6
</code></pre>
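A likely fix, as a sketch: `duplicated(..., keep="last")` marks every duplicate except the last one, so restricting the mask to `col2 == 1` drops all but the final 1-row:

```python
import pandas as pd

data = pd.DataFrame({"col1": ["a"] * 6,
                     "col2": [0, 0, 0, 1, 1, 1],
                     "col3": [1, 2, 3, 4, 5, 6]})

# keep="last" flags rows 3 and 4 (but not 5) as duplicates within col2==1.
out = data[~(data.duplicated(["col2"], keep="last") & data.col2.eq(1))]
print(out)
```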
|
<python><pandas><data-manipulation><drop-duplicates>
|
2023-03-17 08:25:01
| 2
| 3,025
|
Ailurophile
|
75,765,087
| 12,436,050
|
Join/merge multiple JSON files into one file using Python 3.7
|
<p>I have multiple JSON files in a folder (~200) which I would like to combine and generate a single JSON file. I am trying following lines of code.</p>
<pre><code>result = ''
for f in glob.glob("*.json"):
with open(f, "r", encoding='utf-8') as infile:
result += infile.read()
with open("merged_file.json", "w", encoding='utf-8') as outfile:
outfile.writelines(result)
</code></pre>
<p>However, it generates the file with the JSON content, but all the content is on a single line. How can I append the content as it appears in the files?</p>
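One approach, as a sketch: parse each file and dump the collected list once, since concatenating raw text produces a file that is not itself valid JSON, and `indent=` restores the multi-line formatting (the function name is made up):

```python
import glob
import json
import os

def merge_json(folder, out_path):
    # Parse each file instead of concatenating raw text: N top-level JSON
    # documents glued together are not valid JSON.
    merged = []
    for path in sorted(glob.glob(os.path.join(folder, "*.json"))):
        with open(path, "r", encoding="utf-8") as infile:
            merged.append(json.load(infile))
    # indent=4 pretty-prints, so the output is not one long line.
    with open(out_path, "w", encoding="utf-8") as outfile:
        json.dump(merged, outfile, ensure_ascii=False, indent=4)
```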
|
<python><json><glob>
|
2023-03-17 08:07:28
| 3
| 1,495
|
rshar
|
75,765,000
| 4,902,679
|
Python - Optimal way to check for a condition with a timeout?
|
<p>Wrote two functions: 1. <code>uploads a gzip file to Artifactory</code> and 2. <code>listens to the Kafka topic</code>.
As further steps, I wanted to validate whether the upload is successful within 5 minutes by listening to the Kafka topic for the keyword <code>push_status: True</code>, else fail the script.
I'm kind of stumped on what logic would be more elegant and Pythonic for the <code>else</code> part.</p>
<p>Code snippet:</p>
<pre><code>upload_status = fetch_kafka_topic("topic", "brokers") #Returns messages from particular topic.
if bool(upload_status) == True: #Checks if the returned dictionary is empty or not
    if upload_status['file_sha'] == EXPECTED_SHA: # Check if the SHA value matches the one I have in prior.
if upload_status['push_status'] == True:
print("upload is successful ...")
else:
Monitor and query the kafka topic again for the keyword for 5 mins and fail the build if it exceeds the timeout.
</code></pre>
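One common pattern for the else branch, as a sketch: poll against a monotonic-clock deadline (the function names are made up, and the lambda stands in for `fetch_kafka_topic`):

```python
import time

def wait_for_push(fetch, expected_sha, timeout=300, poll_interval=10):
    # Poll until the topic reports a successful push for the expected SHA,
    # or the deadline passes; time.monotonic() is immune to clock changes.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch()
        if status and status.get("file_sha") == expected_sha \
                and status.get("push_status") is True:
            return True
        time.sleep(poll_interval)
    return False

# Hypothetical stand-in for fetch_kafka_topic:
ok = wait_for_push(lambda: {"file_sha": "abc", "push_status": True},
                   "abc", timeout=5, poll_interval=1)
print(ok)  # True
```

The caller can then fail the build when the function returns `False`.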
|
<python><python-3.x><kafka-python>
|
2023-03-17 07:55:55
| 1
| 544
|
Goku
|
75,764,504
| 5,210,052
|
How can we remove the dummy layers in the SAP BOM?
|
<p>In the bill of materials (BOM) in SAP, there are typically dummy or fake materials that are used to bridge relationships or create BOM hierarchies. One example is in the <code>df</code> below.</p>
<p>In this example, to produce <code>product1</code>, we need material <code>A</code> and <code>dummy1</code>.</p>
<p><code>A</code> is already an actual material so we may stop here.</p>
<p><code>dummy1</code> is actually a list or group of more materials (<code>B</code>, <code>C</code> and <code>dummy2</code> in this example)</p>
<p><code>dummy2</code> then includes <code>D</code> and <code>E</code>.</p>
<p>Finally, the actual information is just <code>product1</code> -> <code>A</code>,<code>B</code>,<code>C</code>,<code>D</code>,<code>E</code>.</p>
<p>My sample code actually breaks the structure and fails to generate what we need.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'father': ['product1', 'product1', 'dummy1', 'dummy1', 'dummy1', 'product2', 'dummy2', 'dummy2'],
'child': ['A', 'dummy1', 'B', 'C', 'dummy2','F', 'D', 'E']
})
# father child
# 0 product1 A
# 1 product1 dummy1
# 2 dummy1 B
# 3 dummy1 C
# 4 dummy1 dummy2
# 5 product2 F
# 6 dummy2 D
# 7 dummy2 E
df_desired = pd.DataFrame({
'father': ['product1', 'product1','product1','product1','product1', 'product2'],
'child': ['A', 'B', 'C', 'D', 'E', 'F']
})
# Desired Data Format:
# father child
# 0 product1 A
# 1 product1 B
# 2 product1 C
# 3 product1 D
# 4 product1 E
# 5 product2 F
dummy_list = ['dummy1', 'dummy2']
L = pd.DataFrame()
for dummy in dummy_list:
print(f'----{dummy}----')
the_father_list = df[df.child==dummy]
the_children_list = df[df.father==dummy]
print(the_father_list)
print(the_children_list)
updated_df = the_children_list.copy()
updated_df['father'] = the_father_list['father'].values[0]
print(updated_df)
L = pd.concat([L, updated_df], axis=0)
</code></pre>
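One recursive approach, as a sketch: build a father-to-children lookup, then expand each dummy into its (recursively resolved) real materials. It assumes the dummy links form no cycles:

```python
import pandas as pd

df = pd.DataFrame({
    "father": ["product1", "product1", "dummy1", "dummy1", "dummy1",
               "product2", "dummy2", "dummy2"],
    "child":  ["A", "dummy1", "B", "C", "dummy2", "F", "D", "E"],
})
dummies = {"dummy1", "dummy2"}

# father -> list of direct children
children = df.groupby("father")["child"].apply(list).to_dict()

def resolve(node):
    # A dummy expands into its recursively resolved children;
    # a real material stands for itself.
    if node in dummies:
        out = []
        for child in children.get(node, []):
            out.extend(resolve(child))
        return out
    return [node]

rows = [(father, leaf)
        for father, kids in children.items() if father not in dummies
        for kid in kids
        for leaf in resolve(kid)]
result = pd.DataFrame(rows, columns=["father", "child"])
print(result)
```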
|
<python><pandas>
|
2023-03-17 06:46:34
| 3
| 1,828
|
John
|
75,764,419
| 12,931,358
|
How to convert a json file to parser by argparse?
|
<p>I have a json file like,</p>
<pre><code>{
"model_name_or_path": "microsoft/layoutlmv3-base",
"config_name": null,
"tokenizer_name": null,
"cache_dir": null,
"model_revision": "main",
"use_auth_token": false
}
</code></pre>
<p>I want to convert this JSON file into a parser-style object, so that I can use it like this:</p>
<pre><code>print(model_args.model_name_or_path)
print(model_args.tokenizer_name) #etc..
</code></pre>
<p>I followed some instructions using <code>argparse</code> in Python, but I still have no idea what to do after creating a new parser.</p>
<pre><code>with open("model_args.json", 'r') as f:
model_json = json.load(f)
model_args = argparse.ArgumentParser()
# model_args.add_argument(default=model_json)
# model_args.parse_args(namespace=argparse.Namespace(**json.loads(f)))
print(model_args.model_name_or_path)
</code></pre>
<p>What should I do next if I want to load this JSON file into a parser?</p>
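One likely shortcut, as a sketch: `argparse.ArgumentParser` builds a command-line interface, but for attribute-style access to an already-parsed dict a `Namespace` (the object `parse_args()` returns) is enough. The dict is inlined here so the example is self-contained; in the real script it would come from `json.load`:

```python
import argparse

model_json = {
    "model_name_or_path": "microsoft/layoutlmv3-base",
    "config_name": None,
    "tokenizer_name": None,
    "cache_dir": None,
    "model_revision": "main",
    "use_auth_token": False,
}
# In the real script: model_json = json.load(open("model_args.json"))

# Unpack the dict straight into a Namespace for dot access:
model_args = argparse.Namespace(**model_json)

print(model_args.model_name_or_path)  # microsoft/layoutlmv3-base
print(model_args.tokenizer_name)      # None
```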
|
<python><json><python-3.x><argparse>
|
2023-03-17 06:34:18
| 1
| 2,077
|
4daJKong
|
75,764,111
| 7,735,772
|
Why is the calculated time returned in the __enter__ method using context manager in python?
|
<pre><code># timing.py
from time import perf_counter
class Timer:
def __enter__(self):
self.start = perf_counter()
self.end = 0.0
return lambda: self.end - self.start # Does it not evaluate to 0.0 - self.start???
def __exit__(self, *args):
self.end = perf_counter()
with Timer() as time:
# do something
print(time())
</code></pre>
<p>In the above code, it seems to me that the return statement should be placed in the <code>__exit__()</code> method, as it holds the true value of the end time.</p>
<p>I know that the <code>__enter__()</code> method is called once at the beginning of the <code>with</code> block, so it should hold the starting time. Logically it makes sense to me that the <code>__exit__()</code>, which is also called once, can calculate the ending time and then return the elapsed time.</p>
<p>The above code gives me the impression that the <code>__enter__()</code> method is called twice.</p>
<p>Kindly clarify what is going on.</p>
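A demonstration of why this works, as a sketch: the lambda is not evaluated inside `__enter__`; it only captures `self`, and `self.end - self.start` is computed each time the lambda is called, by which point `__exit__` has already updated `self.end`:

```python
from time import perf_counter, sleep

class Timer:
    def __enter__(self):
        self.start = perf_counter()
        self.end = 0.0
        # Returning the lambda does NOT evaluate it; it closes over `self`,
        # so the subtraction happens only when the lambda is CALLED later.
        return lambda: self.end - self.start

    def __exit__(self, *args):
        self.end = perf_counter()

with Timer() as elapsed:
    sleep(0.05)
# By now __exit__ has run, so the closure sees the final self.end.
print(elapsed())  # roughly 0.05
```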
|
<python><python-3.x>
|
2023-03-17 05:42:48
| 1
| 737
|
Monojit Sarkar
|
75,764,062
| 1,230,724
|
Pickle custom classes and unpickle without access to these classes
|
<p>Is it possible to include the object's class definitions into the pickled file, so that objects can be unpickled without providing the source code to these classes? Or perhaps there's another serialisation method/standard that can be used instead...</p>
<p>Pickling:</p>
<pre><code>
>>> class A(object):
... def __init__(self, foo):
... self.foo = foo
...
>>>
>>> a = A('abc')
>>> a
<__main__.A object at 0x7f78ce7f3550>
>>> a.foo
'abc'
>>> import pickle
>>> with open('test.pickle', 'wb') as handle:
... pickle.dump(a, handle, protocol=pickle.HIGHEST_PROTOCOL)
...
>>>
</code></pre>
<p>Unpickle without access to class A:</p>
<pre><code>>>> import pickle
>>>
>>>
>>> with open('test.pickle', 'rb') as handle:
... b = pickle.load(handle)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
AttributeError: Can't get attribute 'A' on <module '__main__' (built-in)>
</code></pre>
<p>I'm trying to create test data for some test cases and am wondering whether those test cases can be executed separately from the code that produced this data.</p>
|
<python><pickle>
|
2023-03-17 05:31:03
| 2
| 8,252
|
orange
|
75,764,014
| 18,551,983
|
Generate list of values from nested dictionary as values
|
<p>I have a nested dictionary like this:</p>
<pre><code>d = {'A': {'A': 0.11, 'C': 0.12, 'D': 1.0}, 'B': {'B': 0.13, 'C': 0.14}}
</code></pre>
<p>I want to generate this output, where the values are just the lists of keys of the inner dictionaries.</p>
<p>output dictionary</p>
<pre><code>output = {'A':['A', 'C', 'D'], 'B':['B', 'C']}
</code></pre>
<p>Is there any way to do this?</p>
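One concise way, as a sketch: a dict comprehension over the outer items, with `list()` collecting each inner dict's keys:

```python
d = {'A': {'A': 0.11, 'C': 0.12, 'D': 1.0}, 'B': {'B': 0.13, 'C': 0.14}}

# list(inner) iterates a dict's keys, so each value becomes a key list.
output = {outer: list(inner) for outer, inner in d.items()}
print(output)  # {'A': ['A', 'C', 'D'], 'B': ['B', 'C']}
```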
|
<python><python-3.x><list><dictionary>
|
2023-03-17 05:23:18
| 2
| 343
|
Noorulain Islam
|
75,763,930
| 7,700,802
|
Faster way of fillna() with a group by clause
|
<p>I am trying to perform a fillna() but following a group by clause. I found this one-liner</p>
<pre><code>df["val"] = df.groupby(["group_by1", "group_by2"]).transform(lambda x: x.fillna(x.mean()))
</code></pre>
<p>We could just write a couple for loops (I know not Pythonic!)</p>
<pre><code>l = []
for f, df_f in df.groupby('group_by1'):
for h, df_h in df_f.groupby('group_by2'):
df_copy = df_h.copy()
df_copy['val'].fillna(df_copy['val'].median(),inplace=True)
l.append(df_copy)
</code></pre>
<p>then just concat</p>
<pre><code>df_output = pd.concat(l)
</code></pre>
<p>I noticed the for loops are a lot faster. Although when I was testing this, the for loop approach still contained some NaN values, which I thought was odd (or most likely I did something wrong). Thus, my question is simple: what method should I use? Is there something faster?</p>
<p>Here is an example dataframe to use</p>
<pre><code>N = 20_000 ; df = pd.DataFrame({'val': np.random.choice([1, np.nan], p=[0.9, 0.1], size=N), 'group_by1': np.random.randint(1, 10, size=N), 'group_by2': np.random.randint(1, 10, size=N),})
</code></pre>
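<p>For reference, a common pattern (a sketch; whether it actually beats the loops should be measured on real data) is to compute the group statistic with a named <code>transform</code>, avoiding a Python lambda per group, and fill in one vectorized step. Swap <code>'mean'</code> for <code>'median'</code> as needed:</p>

```python
import numpy as np
import pandas as pd

# Same shape of data as in the question, seeded for reproducibility
N = 20_000
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'val': rng.choice([1, np.nan], size=N, p=[0.9, 0.1]),
    'group_by1': rng.integers(1, 10, size=N),
    'group_by2': rng.integers(1, 10, size=N),
})

# Named aggregation ('mean') stays in optimized pandas code paths
group_mean = df.groupby(['group_by1', 'group_by2'])['val'].transform('mean')
df['val'] = df['val'].fillna(group_mean)
```

Note that a group whose values are all NaN has a NaN mean and stays NaN under any of these approaches, which may explain leftover NaNs.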
|
<python><pandas><dataframe>
|
2023-03-17 05:05:03
| 1
| 480
|
Wolfy
|
75,763,811
| 5,192,123
|
how to insert csv.gz to table in clickhouse?
|
<p>I created a table called <code>test_table</code> and I am trying to insert a <code>csv.gz</code> file data into it. My clickhouse db is running locally on a docker container with port 18123:8123. This is what I did using their python client:</p>
<pre class="lang-py prettyprint-override"><code>import clickhouse_connect
client = clickhouse_connect.get_client(host='localhost', port=18123)
stmt = f"""
INSERT INTO test_table
FROM INFILE 'data.csv.gz'
COMPRESSION 'gzip' FORMAT CSV
"""
client.command(stmt)
</code></pre>
<p>but I get an error saying</p>
<pre><code>clickhouse_connect.driver.exceptions.DatabaseError: :HTTPDriver for http://localhost:18123 returned response code 404)
DB::Exception: Query has infile and was send directly to server. (UNKNOWN_TYPE_OF_QUERY) (version 23.2.1.2537 (official build))
</code></pre>
<p>Any clue to what I'm doing wrong and how I can fix it? Thanks in advance!</p>
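<p>The error text suggests <code>INFILE</code> is a client-side feature of <code>clickhouse-client</code> rather than something the HTTP interface accepts, so one workaround (a sketch; the <code>client.insert</code> call and table name are assumptions, left commented out) is to decompress and parse the file in Python and hand the rows to the driver:</p>

```python
import csv
import gzip
import os
import tempfile

# Build a tiny gzipped CSV standing in for data.csv.gz (hypothetical data)
path = os.path.join(tempfile.mkdtemp(), "data.csv.gz")
with gzip.open(path, "wt", newline="") as f:
    csv.writer(f).writerows([[1, "a"], [2, "b"]])

# Decompress and parse client-side ...
with gzip.open(path, "rt", newline="") as f:
    rows = list(csv.reader(f))

# ... then insert through the driver instead of using INFILE:
# client.insert('test_table', rows)
```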
|
<python><clickhouse>
|
2023-03-17 04:37:00
| 3
| 2,633
|
MoneyBall
|
75,763,654
| 12,125,395
|
How to use lag function inside a pyspark agg function
|
<p>I would like to use <code>lag</code> function inside <code>agg</code> function. My data is as follow:</p>
<pre><code>from pyspark.sql.functions import *
from pyspark.sql import SparkSession
from pyspark.sql import Window
data = [
('a', 10, 15),
('a', 40, 60),
('a', 70, 100),
('b', 10, 20),
('b', 30, 50),
('b', 60, 80)
]
schema = ['id', 'start', 'end']
df = spark.createDataFrame(data, schema=schema)
</code></pre>
<p>I want to calculate the sum of difference of current start and previous end. Expected result:</p>
<pre><code>a: (40 - 15) + (70 - 60) = 35
b: (30 - 20) + (60 - 50) = 20
</code></pre>
<p>This is what I have tried:</p>
<pre><code>def duration(start, end):
end_lag = lag(end, 1).over(Window.orderBy(end)) # <--- error
d = start - end_lag
return sum(d)
df.groupBy('id').agg(duration(col('start'), col('end'))).show()
</code></pre>
<p>But it shows an error:</p>
<pre><code>pyspark.sql.utils.AnalysisException: It is not allowed to use a window function inside an aggregate function. Please use the inner window function in a sub-query.
</code></pre>
<p>Is there any way I can put <code>lag</code> function inside a <code>agg</code> function?</p>
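<p>The usual workaround for that error (an assumption sketched here, not run against Spark) is to materialize the lag first, e.g. <code>withColumn('prev_end', lag('end', 1).over(Window.partitionBy('id').orderBy('end')))</code>, and only then <code>groupBy('id').agg(sum(col('start') - col('prev_end')))</code>. The arithmetic that two-step version should produce can be checked in plain Python:</p>

```python
data = [
    ('a', 10, 15), ('a', 40, 60), ('a', 70, 100),
    ('b', 10, 20), ('b', 30, 50), ('b', 60, 80),
]

# Group rows by id, keeping (start, end) pairs ordered by 'end'
groups = {}
for id_, start, end in data:
    groups.setdefault(id_, []).append((start, end))

# sum of (current start - previous end), i.e. start minus lag(end, 1)
result = {
    id_: sum(cur[0] - prev[1] for prev, cur in zip(rows, rows[1:]))
    for id_, rows in groups.items()
}
print(result)  # {'a': 35, 'b': 20}
```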
|
<python><apache-spark><pyspark>
|
2023-03-17 04:01:10
| 1
| 889
|
wong.lok.yin
|
75,763,556
| 235,698
|
sys.getrefcount() returning very large reference counts
|
<p>In CPython 3.11, the following code returns very large reference counts for some objects. It seems to affect pre-cached objects such as the integers -5 to 256; CPython 3.10 does not show this behavior:</p>
<pre class="lang-py prettyprint-override"><code>Python 3.11.2 (tags/v3.11.2:878ead1, Feb 7 2023, 16:38:35) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> for i in (-6, -5, 0, 255, 256, 257):
... print(i, sys.getrefcount(i))
...
-6 5
-5 1000000004
0 1000000535
255 1000000010
256 1000000040
257 5
</code></pre>
<pre class="lang-py prettyprint-override"><code>Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> for i in (-6, -5, 0, 255, 256, 257):
... print(i, sys.getrefcount(i))
...
-6 5
-5 6
0 234
255 8
256 26
257 5
</code></pre>
<p><a href="https://peps.python.org/pep-0683/" rel="noreferrer">PEP 683 - Immortal Objects, Using a Fixed Refcount</a> may be related, but it isn't mentioned in <a href="https://docs.python.org/3/whatsnew/3.11.html" rel="noreferrer">What's New in Python 3.11</a>, nor is a change in <code>sys.getrefcount()</code> documented.</p>
<p>Anybody have knowledge about this change?</p>
|
<python><cpython><python-3.11>
|
2023-03-17 03:39:16
| 1
| 180,513
|
Mark Tolonen
|
75,763,234
| 3,059,546
|
Send StringSet to DynamoDB from JSON file using Boto3
|
<h2>Problem description</h2>
<p>How can I create an item to send to DynamoDB from a JSON file, and have the type in DynamoDB end up as a StringSet as defined in <a href="https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html#API_PutItem_RequestSyntax" rel="nofollow noreferrer">AWS documentation</a> using Python?</p>
<p>I am using the Boto3 <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb/client/put_item.html" rel="nofollow noreferrer">put_item</a> API.</p>
<h2>Approaches attempted</h2>
<p>I've found that if I define the JSON object as a JSON list it gets converted to a List of Objects, which is not what I am going for here - I know all of the types are the same, and I want them to be treated as a Set of one type of Objects. I don't think JSON has an explicit way to define a Set, but please let me know if there is one.</p>
<p>If I define the item within my Python script itself, and make the object a Python Set explicitly the data will not be loaded through the <code>put_item</code> call.</p>
<p>I did find some reference to using the <a href="https://stackoverflow.com/questions/44489485/api-gateway-and-dynamodb-putitem-for-string-set">JS Document Client</a> to create a StringSet, but I haven't seen something similar for Python.</p>
<p>I've seen some references to putting the literal values like "SS" or "M" as the keys, with the object as values for those keys, but I've tried that in a variety of ways and haven't found it makes it through the <code>put_item</code> call at all. If that's a way to do this I'm fine with that as well.</p>
<h2>Example JSON</h2>
<pre><code>{
"mykey":
{
"secondkey": ["val1", "val2", "val3"]
}
}
</code></pre>
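<p>Since JSON has no set type, one approach (a sketch; the key names come from the example above, and the low-level <code>put_item</code> call is left commented out) is to load the list from JSON and tag it explicitly with the low-level <code>"SS"</code> attribute-value descriptor:</p>

```python
import json

doc = json.loads('{"mykey": {"secondkey": ["val1", "val2", "val3"]}}')

# Low-level DynamoDB attribute-value format: "M" marks a map, "SS" a
# string set; the low-level client's put_item accepts this shape.
item = {
    "mykey": {
        "M": {
            "secondkey": {"SS": doc["mykey"]["secondkey"]}
        }
    }
}

# boto3.client('dynamodb').put_item(TableName='my_table', Item=item)
```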
<h2>Motivation:</h2>
<p>Having the data stored as a StringSet makes it easier to read the objects out in a Step Function further down the line. Instead of needing to Flow Map through the items in the list, I can instead hand them directly to something else that needs them. Alternatives to this approach are also welcome, especially if this doesn't turn out to be possible.</p>
|
<python><json><amazon-web-services><amazon-dynamodb>
|
2023-03-17 02:20:17
| 2
| 1,010
|
WarSame
|
75,763,168
| 207,791
|
How to print field offsets in scapy?
|
<p>I have a <code>scapy</code> protocol and a packet.</p>
<p>Like "First steps" say, it's easy to print a packet with its fields, and the packet's binary dump:</p>
<pre><code>>>> a=Ether()/IP(dst="www.slashdot.org")/TCP()/"GET /index.html HTTP/1.0 \n\n"
>>> a
<Ether type=IPv4 |<IP frag=0 proto=6 dst=Net("www.slashdot.org/32") |<TCP
|<Raw load='GET /index.html HTTP/1.0 \n\n' |>
>>>
>>> hexdump(a)
0000 FF FF FF FF FF FF 02 42 AC 1F 80 3C 08 00 45 00 .......B...<..E.
0010 00 43 00 01 00 00 40 06 C9 F0 AC 1F 80 3C 68 12 .C....@......<h.
0020 1C 56 00 14 00 50 00 00 00 00 00 00 00 00 50 02 .V...P........P.
0030 20 00 0C EE 00 00 47 45 54 20 2F 69 6E 64 65 78 .....GET /index
0040 2E 68 74 6D 6C 20 48 54 54 50 2F 31 2E 30 20 0A .html HTTP/1.0 .
0050 0A .
>>>
</code></pre>
<p>Now, I want to know, for example, what offset the <code>dport</code> field in <code>TCP</code> has - or the offsets of all the fields in all the layers.</p>
<p>Can I print them with <code>scapy</code>? Is there a single way to do it for all protocols, including custom ones?</p>
|
<python><scapy>
|
2023-03-17 02:06:52
| 2
| 13,707
|
Victor Sergienko
|
75,763,163
| 3,869,543
|
How to do this relative import in Python?
|
<pre><code>folder0
| __init__.py
| folder1
| | __init__.py
| | folder2
| | | __init__.py
| | | script.py
| folder3
| | __init__.py
| | module.py
</code></pre>
<p>How to import <code>module.py</code> in <code>script.py</code>? I tried having <code>from ...folder3 import module</code> in <code>script.py</code> and running <code>python script.py</code> in <code>folder2</code> gives me <code>ImportError: attempted relative import with no known parent package</code>. I am using Python 3.7. I have spent many hours reading many posts on the topic and it is still not working. Please help.</p>
|
<python><relative-import>
|
2023-03-17 02:03:28
| 0
| 833
|
Lei
|
75,763,086
| 4,582,240
|
spark dataframe convert a few flattened columns to one array of struct column
|
<p>I'd like some guidance on which Spark DataFrame functions, together with Scala/Python code, achieve this transformation.</p>
<p>given a dataframe which has below columns</p>
<pre><code>columnA, columnB, columnA1, ColumnB1, ColumnA2, ColumnB2 .... ColumnA10, ColumnB10
eg.
Fat Value, Fat Measure, Salt Value, Salt Measure, Iron Value, Iron Measure
10, mg, 2 mg etc etc
</code></pre>
<p>I'd like to convert it to a column of type array of struct,
e.g.:</p>
<pre><code>type=Fat
amount=10
measure=mg
type = Salt
amount=2
measure=mg
</code></pre>
|
<python><dataframe><scala><apache-spark><pyspark>
|
2023-03-17 01:45:22
| 1
| 1,045
|
soMuchToLearnAndShare
|
75,762,852
| 2,666,270
|
Training sentence transformers with MultipleNegativesRankingLoss
|
<p>I have to fine tune a <code>sentence-transformers</code> model with only positive data. Because of that, I want to use MNR Loss.</p>
<p>I can run this toy example from <code>sentence-transformers</code>, and it all works fine:</p>
<pre><code>from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader
#Define the model. Either from scratch of by loading a pre-trained model
model = SentenceTransformer('all-mpnet-base-v2')
#Define your train examples. You need more than just two examples...
train_examples = [InputExample(texts=['My first sentence', 'My second sentence'], label=0.8),
InputExample(texts=['Another pair', 'Unrelated sentence'], label=0.3)]
#Define your train dataset, the dataloader and the train loss
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)
#Tune the model
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=10, warmup_steps=100)
</code></pre>
<p><a href="https://i.sstatic.net/gsOW2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gsOW2.png" alt="enter image description here" /></a>
Once I try to adapt this toy example to the problem I need to solve, following the strategy presented <a href="https://towardsdatascience.com/fine-tuning-sentence-transformers-with-mnr-loss-cd6a26685b81" rel="nofollow noreferrer">here</a>, the training just hangs:</p>
<pre><code>from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader
#Define the model. Either from scratch of by loading a pre-trained model
model = SentenceTransformer('all-mpnet-base-v2')
#Define your train examples. You need more than just two examples...
train_examples = [InputExample(texts=['My first sentence', 'My second sentence']),
InputExample(texts=['Another pair', 'Unrelated sentence'])]
#Define your train dataset, the dataloader and the train loss
train_loss = losses.MultipleNegativesRankingLoss(model)
train_dataloader = sentence_transformers.datasets.NoDuplicatesDataLoader(train_examples, batch_size=16)
#Tune the model
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=10, warmup_steps=100)
</code></pre>
<p><a href="https://i.sstatic.net/Ab9MB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ab9MB.png" alt="enter image description here" /></a></p>
<p>It seems the issue is related to my data loader or loss (or both), but so far I haven't figured out what exactly the issue is.</p>
|
<python><machine-learning><loss-function><sentence-transformers>
|
2023-03-17 00:44:57
| 1
| 9,924
|
pceccon
|
75,762,739
| 9,801,811
|
Xarray interpolate returns nan for all values
|
<p>I'm trying to extract the value of a variable (rainrate) for a given point from a NetCDF dataset. The shape of the NetCDF dataset is <code>24 (time) x 720 (outlat) x 1680 (outlon).</code> The data can be accessed <a href="https://www.dropbox.com/s/373j0i9xth621jc/ST4.20170823.newregridded.nc?dl=0" rel="nofollow noreferrer">here</a>.</p>
<p><strong>What I have tried:</strong></p>
<pre><code>import xarray as xr
import pandas as pd
ds = xr.open_dataset(path_to_NetCDF.nc)
data_ts = ds.interp(outlat=[35.5], outlon=[-97.33], method='nearest') #or method ='linear'
data_rain_rate = data_ts['rainrate'][:, 0, 0].to_series()
df_rain_rate = pd.DataFrame(data_rain_rate).reset_index()
</code></pre>
<p>All the values returned are nan even though the entire data is not nan. What am I doing wrong, and how can I understand why the code is outputting just nan?</p>
<p><strong>Desired output:</strong> A time series of the variable (rainrate).</p>
<p><a href="https://i.sstatic.net/aqvF7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aqvF7.png" alt="A screenshot of the data showing values" /></a>
A screenshot of the data showing values</p>
|
<python><python-xarray>
|
2023-03-17 00:19:36
| 0
| 447
|
PPR
|
75,762,712
| 12,946,401
|
How to train XGBoost with probabilities instead of class?
|
<p>I am trying to train an XGBoost classifier by inputting the training dataset and the training labels. The labels are one-hot encodings, and instead of sending in classes such as [0,1,0,0], I wanted to send in probabilities like [0,0.6,0.4,0] for a particular training datapoint. The reason is that I want to implement the <a href="https://keras.io/examples/vision/mixup/" rel="nofollow noreferrer">mixup algorithm</a> for data augmentation, which outputs the augmented labels as floating-point one-hot vectors.</p>
<p>However, I get an error on model.fit because it expects hard one-hot class labels, not probabilities for each class. How can I implement the data augmentation algorithm with my XGBoost model?</p>
<pre><code>import xgboost as xgb
import numpy as np
# Generate some random data
X = np.random.rand(100, 16)
# Generate random one-hot encoded target variable
y_one_hot = np.random.randint(0, 4, size=(100,))
y = np.eye(4)[y_one_hot]
# Convert one-hot encoded target variable to probabilities
y_proba = np.zeros((y.shape[0], y.shape[1]))
for i, row in enumerate(y):
y_proba[i] = row / np.sum(row)
# Define the XGBoost model
model = xgb.XGBClassifier(objective='multi:softprob', num_class=4)
# Train the model
model.fit(X, y_proba)
# Generate some test data
X_test = np.random.rand(10, 16)
# Predict the probabilities for each class
y_pred_proba = model.predict_proba(X_test)
# Get the predicted class for each sample
y_pred = np.argmax(y_pred_proba, axis=1)
</code></pre>
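<p>One workaround for soft labels (a sketch of a general technique, not something XGBoost's sklearn API supports natively) is to expand each sample into one row per class with nonzero probability, use that class as a hard label, and pass the probability as the sample weight. The expansion itself is plain NumPy; the fit call is left commented out:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 16))
y_proba = rng.dirichlet(np.ones(4), size=100)  # soft labels; rows sum to 1

# One row per (sample, class) pair with nonzero probability; the
# probability becomes the sample weight for a hard-label fit.
rows, labels, weights = [], [], []
for xi, pi in zip(X, y_proba):
    for cls, p in enumerate(pi):
        if p > 0:
            rows.append(xi)
            labels.append(cls)
            weights.append(p)

X_exp = np.array(rows)
y_exp = np.array(labels)
w_exp = np.array(weights)
# model.fit(X_exp, y_exp, sample_weight=w_exp)  # hard labels + weights
```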
|
<python><machine-learning><xgboost><data-augmentation>
|
2023-03-17 00:12:02
| 1
| 939
|
Jeff Boker
|
75,762,631
| 10,284,437
|
Is there a way to really disable 'notification' popup with Selenium/Python? chromedriver=111.0.5563.64
|
<p>On facebook, if I go to the main page, I get this annoying notification:</p>
<p><a href="https://i.sstatic.net/DGmOY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DGmOY.png" alt="fb" /></a></p>
<p>I tried to bypass it (not working) with:</p>
<pre><code>options = webdriver.ChromeOptions()
options.add_experimental_option(
"prefs",
{
"credentials_enable_service": False,
"profile.password_manager_enabled": False,
"profile.default_content_setting_values.notifications": 2
# with 2 should disable notifications
},
)
# one more time
options.add_argument('--disable-notifications')
options.add_argument("--password-store=basic")
options.add_argument("--disable-infobars")
options.add_argument("--disable-extensions")
options.add_argument("start-maximized")
</code></pre>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2023-03-16 23:55:10
| 2
| 731
|
Mévatlavé Kraspek
|
75,762,593
| 8,655,468
|
How to split python dataframe rows in one to one mapping form?
|
<p>I'm trying to split pandas DataFrame rows into a one-to-one mapping format, such that each new row takes the values in the same positional order across columns; if a column holds a single value, copy that value into each newly created row, like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>valueA1</td>
<td>valueB1 <br /> valueB2 <br /> valueB3</td>
<td>valueC1 <br /> valueC2 <br /> valueC3</td>
</tr>
<tr>
<td>valueA2</td>
<td>valueB4</td>
<td>valueC4</td>
</tr>
</tbody>
</table>
</div><br />
It should split like:
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>valueA1</td>
<td>valueB1</td>
<td>valueC1</td>
</tr>
<tr>
<td>valueA1</td>
<td>valueB2</td>
<td>valueC2</td>
</tr>
<tr>
<td>valueA1</td>
<td>valueB3</td>
<td>valueC3</td>
</tr>
<tr>
<td>valueA2</td>
<td>valueB4</td>
<td>valueC4</td>
</tr>
</tbody>
</table>
</div>
<p>Here all values are string type.</p>
<p>I did try <code>explode()</code>, but it was not splitting in one-to-one format. Any help is really appreciated!</p>
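<p>For reference, in recent pandas (1.3+) <code>explode</code> accepts a list of columns and splits them position by position, which gives the one-to-one mapping. A sketch with stand-in values matching the table:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'A': ['valueA1', 'valueA2'],
    'B': ['valueB1 valueB2 valueB3', 'valueB4'],
    'C': ['valueC1 valueC2 valueC3', 'valueC4'],
})

# Split the multi-value cells into lists, then explode both columns
# together so the positions stay aligned (requires pandas >= 1.3)
out = (
    df.assign(B=df['B'].str.split(), C=df['C'].str.split())
      .explode(['B', 'C'])
      .reset_index(drop=True)
)
```

Multi-column explode requires the lists in each row to have equal lengths, which is exactly the one-to-one assumption here.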
|
<python><python-3.x><pandas><dataframe><split>
|
2023-03-16 23:45:50
| 2
| 550
|
Sakshi Sharma
|
75,762,565
| 4,212,875
|
Regex to match the specified word exactly, or the word with a space, or the word preceded by a specific letter
|
<p>I have a word <code>temp</code> and I would like a regex to match <code>temp</code> exactly, or <code>temp</code> preceded by an arbitrary string followed by either a space or the letter <code>g</code> (i.e. <code>blahblah temp</code> or <code>blahblahgtemp</code>). I thought something like <code>[^g ]temp$</code> would work but it doesn't.</p>
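<p>For reference, <code>[^g ]temp$</code> fails because the character class <em>requires</em> some character other than <code>g</code> or space before <code>temp</code> - the opposite of the goal - and it can never match the bare word. One pattern that covers all three cases (a sketch, anchored with <code>fullmatch</code>):</p>

```python
import re

# Optional prefix that must end in 'g' or a space, then the literal word
pattern = re.compile(r'(?:.*[g ])?temp')

assert pattern.fullmatch('temp')               # exact word
assert pattern.fullmatch('blahblah temp')      # prefix ending in a space
assert pattern.fullmatch('blahblahgtemp')      # prefix ending in 'g'
assert not pattern.fullmatch('blahblahtemp')   # 'h' before temp: no match
assert not pattern.fullmatch('temperature')    # trailing text: no match
```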
|
<python><regex>
|
2023-03-16 23:40:51
| 1
| 411
|
Yandle
|
75,762,486
| 13,296,497
|
How to assign a column to be True or False boolean in a pyspark dataframe
|
<p>My current code to assign a boolean value to my pyspark dataframe is:
<code>df = df.withColumn('my_column_name', True)</code></p>
<p>However, I get the error: "AssertionError: col should be Column"</p>
<p>Do I need to wrap the <code>True</code> value with col(BooleanType(True))? I don't think I should be casting it as a lit string.</p>
|
<python><dataframe><pyspark><boolean>
|
2023-03-16 23:22:45
| 2
| 1,092
|
DSolei
|
75,762,483
| 12,511,801
|
Handling special characters and getting specific text in large HTML file using lxml and Python
|
<p>I'm using Python in Google Colab to extract my YouTube video history and generate a DataFrame with the data obtained from the .html file that contains the video history.</p>
<p>I'm using lxml for parsing the HTML data, but, I'm facing the following problems:</p>
<ul>
<li>The text obtained with lxml cannot decode special characters - i.e. <em><code>"á", "é", "í"</code></em>, emojis, etc. <em><strong>EDIT (17/03/2023):</strong> I've set the <code><meta charset="utf-8" /></code> tag in the HTML file's content to solve this point, though I'm open to alternatives that avoid editing the file.</em></li>
<li>I'm unable to get the date on which I viewed the video. This text is inside a <code>div</code>, but there is no clear or easy way to extract the date. The desired result for each <code>div</code> is to get the date - example: <code>10 feb 2023, 08:03:13 COT</code>, etc.</li>
</ul>
<hr />
<p>Here is the extract of the HTML content:</p>
<pre><code><div class="mdl-grid">
<div class="outer-cell mdl-cell mdl-cell--12-col mdl-shadow--2dp">
<div class="mdl-grid">
<div class="header-cell mdl-cell mdl-cell--12-col">
<p class="mdl-typography--title">
YouTube
<br>
</p>
</div>
<div class="content-cell mdl-cell mdl-cell--6-col mdl-typography--body-1">
Has visto <a href="https://www.youtube.com/watch?v=zj7kUzvqNBk">El BOCON que NINGUNEO a Chavez y fue HUMILLADO frente a 130 000 fanáticos | Chavez vs Haugen</a>
<br>
<a href="https://www.youtube.com/channel/UCivSw2EdxpUfH7vJ21Ob7NA">El Rayo Deportivo</a>
<br>
10 feb 2023, 08:03:13 COT
</div>
<div class="content-cell mdl-cell mdl-cell--6-col mdl-typography--body-1 mdl-typography--text-right"></div>
<div class="content-cell mdl-cell mdl-cell--12-col mdl-typography--caption">
<b>Productos:</b>
<br>
&emsp;YouTube
<br>
<b>¿Por qué se grabó esta actividad?</b>
<br>
&emsp;Se guardó esta actividad en tu cuenta de Google porque estaban habilitadas las siguientes opciones de configuración:&nbsp;Historial de reproducciones de YouTube.&nbsp;Puedes controlar estas opciones de configuración <a href="https://myaccount.google.com/activitycontrols">aquí</a>.
</div>
</div>
</div>
<div class="outer-cell mdl-cell mdl-cell--12-col mdl-shadow--2dp">
<div class="mdl-grid">
<div class="header-cell mdl-cell mdl-cell--12-col">
<p class="mdl-typography--title">
YouTube
<br>
</p>
</div>
<div class="content-cell mdl-cell mdl-cell--6-col mdl-typography--body-1">
Has visto <a href="https://www.youtube.com/watch?v=VArBMOvwiNE">No hay legado más grande, que el que dura para siempre. #LEOGACY</a>
<br>
Visto a las 08:02
<br>
10 feb 2023, 08:02:40 COT
</div>
<div class="content-cell mdl-cell mdl-cell--6-col mdl-typography--body-1 mdl-typography--text-right"></div>
<div class="content-cell mdl-cell mdl-cell--12-col mdl-typography--caption">
<b>Productos:</b>
<br>
&emsp;YouTube
<br>
<b>Detalles:</b>
<br>
&emsp;De los anuncios de Google
<br>
<b>¿Por qué se grabó esta actividad?</b>
<br>
&emsp;Se guardó esta actividad en tu cuenta de Google porque estaban habilitadas las siguientes opciones de configuración:&nbsp;Actividad en la Web y en Aplicaciones,&nbsp;Historial de reproducciones de YouTube,&nbsp;Historial de búsquedas de YouTube.&nbsp;Puedes controlar estas opciones de configuración <a href="https://myaccount.google.com/activitycontrols">aquí</a>.
</div>
</div>
</div>
<div class="outer-cell mdl-cell mdl-cell--12-col mdl-shadow--2dp">
<div class="mdl-grid">
<div class="header-cell mdl-cell mdl-cell--12-col">
<p class="mdl-typography--title">
YouTube
<br>
</p>
</div>
<div class="content-cell mdl-cell mdl-cell--6-col mdl-typography--body-1">
Has visto <a href="https://www.youtube.com/watch?v=VLr8hPtyrIU">Noraver Gripa Fast - Descongestiona las vías respiratorias y elimina los demás síntomas de la gripa</a>
<br>
Visto a las 08:02
<br>
10 feb 2023, 08:02:02 COT
</div>
<div class="content-cell mdl-cell mdl-cell--6-col mdl-typography--body-1 mdl-typography--text-right"></div>
<div class="content-cell mdl-cell mdl-cell--12-col mdl-typography--caption">
<b>Productos:</b>
<br>
&emsp;YouTube
<br>
<b>Detalles:</b>
<br>
&emsp;De los anuncios de Google
<br>
<b>¿Por qué se grabó esta actividad?</b>
<br>
&emsp;Se guardó esta actividad en tu cuenta de Google porque estaban habilitadas las siguientes opciones de configuración:&nbsp;Actividad en la Web y en Aplicaciones,&nbsp;Historial de reproducciones de YouTube,&nbsp;Historial de búsquedas de YouTube.&nbsp;Puedes controlar estas opciones de configuración <a href="https://myaccount.google.com/activitycontrols">aquí</a>.
</div>
</div>
</div>
<div class="outer-cell mdl-cell mdl-cell--12-col mdl-shadow--2dp">
<div class="mdl-grid">
<div class="header-cell mdl-cell mdl-cell--12-col">
<p class="mdl-typography--title">
YouTube
<br>
</p>
</div>
<div class="content-cell mdl-cell mdl-cell--6-col mdl-typography--body-1">
Has visto <a href="https://www.youtube.com/watch?v=YM9kkSFwmg0">Resident Evil 6 Mercenaries No Mercy Requiem for War 2757k Claire Redfield (Cowgirl) PC 1080p</a>
<br>
<a href="https://www.youtube.com/channel/UCqPkDBlhiVGe07S3CZOQGWA">Radical Dreamer</a>
<br>
10 feb 2023, 07:46:44 COT
</div>
<div class="content-cell mdl-cell mdl-cell--6-col mdl-typography--body-1 mdl-typography--text-right"></div>
<div class="content-cell mdl-cell mdl-cell--12-col mdl-typography--caption">
<b>Productos:</b>
<br>
&emsp;YouTube
<br>
<b>¿Por qué se grabó esta actividad?</b>
<br>
&emsp;Se guardó esta actividad en tu cuenta de Google porque estaban habilitadas las siguientes opciones de configuración:&nbsp;Historial de reproducciones de YouTube.&nbsp;Puedes controlar estas opciones de configuración <a href="https://myaccount.google.com/activitycontrols">aquí</a>.
</div>
</div>
</div>
</div>
[...]
</code></pre>
<hr />
<p>This is the code I'm using:</p>
<pre><code>from lxml import etree, html
parser = etree.HTMLParser()
tree = etree.parse("/content/historial de reproducciones.html", parser)
# Get the divs that contains the entry:
divs = tree.xpath("//div[@class='content-cell mdl-cell mdl-cell--6-col mdl-typography--body-1']")
# Store the data here as JSON:
js_data = []
# Loop the divs with the data for extract the values:
# > URL of the video
# > Video title
# > URL of the channel
# > Channel title
# > Watched time: Date where I watched the said video.
# NOTE: In the takeout, there are removed/deleted videos that does not
# retrieve any links - in this case, the data is a empty row in the dataframe.
# If the "watched_time" is obtained, that row instead will contain ONLY the "watched_time" value, and that's the desired output for those cases.
for ind, div in enumerate(divs):
temp_links = div.xpath("a/@href")
temp_links_texts = div.xpath("a")
if (temp_links is not None):
# Here, I've omitted certain code for fill the "temp_links_texts" - it's not the problem
# for get the "watched_time".
video_link = temp_links[0]
channel_link = temp_links[1]
video_text = temp_links_texts[0].text
channel_name = temp_links_texts[1].text
# When I try to get the date, I'm unable to get the desired text = 10 feb 2023, 08:03:13 COT.
# watched_time = "" # I only get the (Has visto ) text - instead of (10 feb 2023, 08:03:13 COT).
# Append the data:
js_data.append({
"Video" : video_text, #OK
"Link": video_link, #OK
"Channel": channel_name,
"URL": channel_link,
"Watched" : watched_time
})
# Clear variables:
video_link = ""
video_text = ""
channel_link = ""
channel_name = ""
watched_time = ""
</code></pre>
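<p>The date is a bare text node of the <code>div</code> (it follows the last <code>&lt;br&gt;</code>), so it never appears when only the <code>&lt;a&gt;</code> children are read. <code>div.xpath("text()")</code> returns the div's direct text nodes, including the tails after <code>&lt;br&gt;</code>, and the last non-empty one is the timestamp. A sketch on a reduced version of the markup above:</p>

```python
from lxml import html

snippet = '''<div class="content-cell mdl-cell mdl-cell--6-col mdl-typography--body-1">
  Has visto <a href="https://www.youtube.com/watch?v=zj7kUzvqNBk">Video title</a>
  <br>
  <a href="https://www.youtube.com/channel/UCivSw2EdxpUfH7vJ21Ob7NA">Channel</a>
  <br>
  10 feb 2023, 08:03:13 COT
</div>'''

div = html.fromstring(snippet)
# text() selects the div's own text nodes (not text inside <a>);
# the timestamp is the last non-empty one
texts = [t.strip() for t in div.xpath("text()") if t.strip()]
watched_time = texts[-1]
```

Inside the loop in the question, the same idea would be <code>div.xpath("text()")</code> on each matched <code>div</code>, guarding for divs where the node is absent.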
<hr />
<p>These are the actual results - <em>shortened for demonstrative purposes</em>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>Video</th>
<th>Link</th>
<th>Channel</th>
<th>URL</th>
<th>Watched</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>El BOCON que NINGUNEO a Chavez y fue HUMILLADO frente a 130 000 fanáticos | Chavez vs Haugen</td>
<td>https://www.youtube.com/watch?v=zj7kUzvqNBk</td>
<td>El Rayo Deportivo</td>
<td>https://www.youtube.com/channel/UCivSw2EdxpUfH7vJ21Ob7NA</td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>No hay legado más grande, que el que dura para siempre. #LEOGACY</td>
<td>https://www.youtube.com/watch?v=VArBMOvwiNE</td>
<td>Gatorade Colombia</td>
<td>https://www.youtube.com/@gatoradecolombia9204</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>Noraver Gripa Fast - Descongestiona las vías respiratorias y elimina los demás síntomas de la gripa</td>
<td>https://www.youtube.com/watch?v=VLr8hPtyrIU</td>
<td>Noraver Colombia</td>
<td>https://www.youtube.com/@noravercolombia3979</td>
<td></td>
</tr>
<tr>
<td>3</td>
<td>Resident Evil 6 Mercenaries No Mercy Requiem for War 2757k Claire Redfield (Cowgirl) PC 1080p</td>
<td>https://www.youtube.com/watch?v=YM9kkSFwmg0</td>
<td>Radical Dreamer</td>
<td>https://www.youtube.com/channel/UCqPkDBlhiVGe07S3CZOQGWA</td>
<td></td>
</tr>
<tr>
<td>4</td>
<td>Mega Retrospectiva: Los Simpson Hit & Run (Parte 4)</td>
<td>https://www.youtube.com/watch?v=Xh9p5fbUgY8</td>
<td>Max Power</td>
<td>https://www.youtube.com/channel/UCZ99bYEb57kXEIaSALrj_6A</td>
<td></td>
</tr>
<tr>
<td>5</td>
<td>Gana más con Cabify</td>
<td>https://www.youtube.com/watch?v=gTwtHdKuozY</td>
<td>Cabify</td>
<td>https://www.youtube.com/@Cabifys</td>
<td></td>
</tr>
<tr>
<td>6</td>
<td>ESTOS DARKLORDS SI PEGAN DURO 🤤</td>
<td>https://www.youtube.com/watch?v=jwahSkgYhYY</td>
<td>Duel Random L 2.0</td>
<td>https://www.youtube.com/channel/UCkty7zBHmasLAwIBc9cgiCA</td>
<td></td>
</tr>
<tr>
<td>7</td>
<td>Office Space - fixed the glitch</td>
<td>https://www.youtube.com/watch?v=BUE0PPQI3is</td>
<td>Peter Ghosh</td>
<td>https://www.youtube.com/channel/UCqNUlAWLXWLf9ZwTRGX6Xqw</td>
<td></td>
</tr>
<tr>
<td>8</td>
<td>Llegamos a más de 800 destinos y a millones de hogares en Colombia</td>
<td>https://www.youtube.com/watch?v=Tq0LDrLZzEI</td>
<td>Homecenter Colombia</td>
<td>https://www.youtube.com/@homecentercolombia</td>
<td></td>
</tr>
<tr>
<td>9</td>
<td>Iron Man: Archivo de Monstruos | Mech Strike</td>
<td>https://www.youtube.com/watch?v=JHa0rLQh4cE</td>
<td>Marvel HQ LA</td>
<td>https://www.youtube.com/@MarvelHQLA</td>
<td></td>
</tr>
<tr>
<td>10</td>
<td>Review Temporada 31: Esto no necesitaba dos partes</td>
<td>https://www.youtube.com/watch?v=UUYebZN4Xv8</td>
<td>Max Power</td>
<td>https://www.youtube.com/channel/UCZ99bYEb57kXEIaSALrj_6A</td>
<td></td>
</tr>
<tr>
<td>11</td>
<td>Vuélvete Todo Claro y recibe más beneficios sin pagar más</td>
<td>https://www.youtube.com/watch?v=a9pverB6qAU</td>
<td>Claro Colombia</td>
<td>https://www.youtube.com/@ClaroColombia</td>
<td></td>
</tr>
<tr>
<td>12</td>
<td>Atomic Heart = Russian Propaganda?</td>
<td>https://www.youtube.com/watch?v=AzDCQzH1sO0</td>
<td>Setarko</td>
<td>https://www.youtube.com/channel/UCcznKF2NAgexfaZtsOz7lFw</td>
<td></td>
</tr>
<tr>
<td>13</td>
<td>Mortal Kombat as an 80's Dark Fantasy Film</td>
<td>https://www.youtube.com/watch?v=gms5pD-EbMk</td>
<td>SON</td>
<td>https://www.youtube.com/channel/UCr0lGe_WRgMxXEQtmmEdsCA</td>
<td></td>
</tr>
<tr>
<td>14</td>
<td>Pirates of the Caribbean as an 80's Dark Fantasy Film</td>
<td>https://www.youtube.com/watch?v=2z36d5eiCcs</td>
<td>SON</td>
<td>https://www.youtube.com/channel/UCr0lGe_WRgMxXEQtmmEdsCA</td>
<td></td>
</tr>
<tr>
<td>15</td>
<td>What Happens When You Are a Coomer</td>
<td>https://www.youtube.com/watch?v=DhXmO64ut4I</td>
<td>Lord Wojak</td>
<td>https://www.youtube.com/channel/UCULzpwL5TRDydBF_bWfJjrw</td>
<td></td>
</tr>
<tr>
<td>16</td>
<td>Wagering our RAREST Yu-Gi-Oh Cards in a Duel! Rare Hunters 11 - Ancient Sanctuary</td>
<td>https://www.youtube.com/watch?v=x29ZRFdCpfE</td>
<td>Team APS</td>
<td>https://www.youtube.com/channel/UCaqlCjzSFmunjBOP4EmYpxg</td>
<td></td>
</tr>
<tr>
<td>17</td>
<td>Yu-Gi-Oh - 4,900 Attack Dark Paladin</td>
<td>https://www.youtube.com/watch?v=XUHJIVvgkVc</td>
<td>GrandMasterKaiba</td>
<td>https://www.youtube.com/channel/UCLj6N6_WjY8upsjJVS9Osug</td>
<td></td>
</tr>
<tr>
<td>18</td>
<td>T̶E̵T̶R̸I̵S̷ B̶E̸A̶T̶B̸O̵X̵ [VOID] {2019}</td>
<td>https://www.youtube.com/watch?v=_xQgxk4OyKI</td>
<td>TimTam60.mp5</td>
<td>https://www.youtube.com/channel/UCsObTxoQ2g90OuxriztJyHg</td>
<td></td>
</tr>
<tr>
<td>19</td>
<td>Wojak's New Year New Me</td>
<td>https://www.youtube.com/watch?v=hlShmY1XrDo</td>
<td>Wojak Life</td>
<td>https://www.youtube.com/channel/UCTi2tQgA_yu2uu_9RiyT2yA</td>
<td></td>
</tr>
<tr>
<td>20</td>
<td>Evils of Instant Gratification</td>
<td>https://www.youtube.com/watch?v=pWQ3LH-DTvk</td>
<td>Wojak Life</td>
<td>https://www.youtube.com/channel/UCTi2tQgA_yu2uu_9RiyT2yA</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>With the information posted here, is there something missing? How can the mentioned issue be solved?</p>
|
<python><lxml>
|
2023-03-16 23:22:36
| 1
| 2,471
|
Marco Aurelio Fernandez Reyes
|
75,762,459
| 9,403,794
|
Pass multiprocessing.managers.ListProxy to subprocess.Popen
|
<p>I have a multiprocessing.managers.ListProxy object. Each ListProxy contains numpy ndarrays.
After concatenating and saving, the file is almost 700 MB.</p>
<p>Until now I have done the concatenation and the save to file in a child process, but the parent's join() has to wait for the child process to finish. The child process that concatenates and saves the file takes 5 times longer than computing those lists.</p>
<p>I think subprocess.Popen() is a solution to the problem of long execution time.</p>
<p>How do I pass a large multiprocessing.managers.ListProxy (700 MB) to subprocess.Popen?</p>
<p>Would json.load be a fast way to do it? Can I pass a ListProxy to json.load?</p>
|
<python><multidimensional-array><multiprocessing><subprocess>
|
2023-03-16 23:17:37
| 1
| 309
|
luki
|
75,762,453
| 7,903,749
|
Unit test in PyCharm fails if imported a class
|
<p>With PyCharm, we are trying to create a unit test file <code>tests/test_poc.py</code> for a proof-of-concept class <code>PocFunctional</code>, and we have added <code>DJANGO_SETTINGS_MODULE=applicationproject.settings</code> into the "Python tests" section of the "Run/Debug Configurations".</p>
<p>However, if we uncomment <code>from app.poc import PocFunctional</code> in the sample source code below to import the class under test, the unit test gets a run-time error: <code>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.</code> The file runs OK when the <code>PocFunctional</code> class is not imported.</p>
<p>We are new to the unit test of Python in PyCharm environment, so we highly appreciate any hints and suggestions.</p>
<p><strong>Technical Details:</strong></p>
<p><strong>Partial unit test source code:</strong></p>
<pre class="lang-py prettyprint-override"><code>from unittest import TestCase
# from app.poc import PocFunctional
class TestPocEav(TestCase):
def test_get_transaction(self):
# self. Fail()
return
# ... other test methods
</code></pre>
<p><strong>Exception:</strong></p>
<pre><code>/data/app-py3/venv3.7/bin/python /var/lib/snapd/snap/pycharm-professional/316/plugins/python/helpers/pycharm/_jb_unittest_runner.py --target test_poc.TestPocFunctional
Testing started at 5:03 PM ...
Launching unittests with arguments python -m unittest test_poc.TestPocFunctional in /data/app-py3/APPLICATION/tests
Traceback (most recent call last):
File "/var/lib/snapd/snap/pycharm-professional/316/plugins/python/helpers/pycharm/_jb_unittest_runner.py", line 35, in <module>
sys.exit(main(argv=args, module=None, testRunner=unittestpy.TeamcityTestRunner, buffer=not JB_DISABLE_BUFFERING))
File "/usr/local/lib/python3.7/unittest/main.py", line 100, in __init__
self.parseArgs(argv)
File "/usr/local/lib/python3.7/unittest/main.py", line 147, in parseArgs
self.createTests()
File "/usr/local/lib/python3.7/unittest/main.py", line 159, in createTests
self.module)
File "/usr/local/lib/python3.7/unittest/loader.py", line 220, in loadTestsFromNames
suites = [self.loadTestsFromName(name, module) for name in names]
File "/usr/local/lib/python3.7/unittest/loader.py", line 220, in <listcomp>
suites = [self.loadTestsFromName(name, module) for name in names]
File "/usr/local/lib/python3.7/unittest/loader.py", line 154, in loadTestsFromName
module = __import__(module_name)
File "/data/app-py3/APPLICATION/tests/test_poc.py", line 3, in <module>
from applicationapp.poc import PocFunctional
File "/data/app-py3/APPLICATION/applicationapp/poc.py", line 2, in <module>
from django.contrib.contenttypes.models import ContentType
File "/data/app-py3/venv3.7/lib/python3.7/site-packages/django/contrib/contenttypes/models.py", line 133, in <module>
class ContentType(models.Model):
File "/data/app-py3/venv3.7/lib/python3.7/site-packages/django/db/models/base.py", line 108, in __new__
app_config = apps.get_containing_app_config(module)
File "/data/app-py3/venv3.7/lib/python3.7/site-packages/django/apps/registry.py", line 253, in get_containing_app_config
self.check_apps_ready()
File "/data/app-py3/venv3.7/lib/python3.7/site-packages/django/apps/registry.py", line 136, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
Process finished with exit code 1
Empty suite
</code></pre>
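A sketch of the usual fix, assuming the settings module name from the run configuration: plain `unittest.TestCase` never initialises Django, so importing a model raises `AppRegistryNotReady`. Bootstrap Django before the model import (or subclass `django.test.TestCase`, which does this and wraps each test in a transaction):

```python
import os

def bootstrap_django(settings_module="applicationproject.settings"):
    """Initialise Django once so model imports work under plain unittest."""
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", settings_module)
    import django            # imported lazily, after the env var is set
    django.setup()

# In tests/test_poc.py, before the import that currently fails:
# bootstrap_django()
# from app.poc import PocFunctional
```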
|
<python><python-3.x><django><pycharm>
|
2023-03-16 23:16:29
| 1
| 2,243
|
James
|
75,762,424
| 5,160,336
|
Libtorrent cannot find peer with trackers
|
<p>I am having some issues with the libtorrent library. Torrents are created successfully; however, trying to download them with any of several torrent clients (BitTorrent, WebTorrent, uTorrent) does not work.</p>
<p>I am using libtorrent for python, version: 2.0.7</p>
<pre class="lang-python prettyprint-override"><code>import sys
import time
import libtorrent as lt
import os
fs = lt.file_storage()
lt.add_files(fs, "test")
t = lt.create_torrent(fs)
t.add_tracker("http://opensharing.org:2710/announce", 0)
t.add_tracker("http://open.acgnxtracker.com:80/announce",1)
t.set_creator('libtorrent %s' % lt.version)
t.set_comment("Test")
lt.set_piece_hashes(t, ".")
torrent = t.generate()
f = open("mytorrent2.torrent", "wb")
f.write(lt.bencode(torrent))
f.close()
print(lt.make_magnet_uri(lt.torrent_info("mytorrent2.torrent")))
#Seed torrent
ses = lt.session()
h = ses.add_torrent({'ti': lt.torrent_info('mytorrent2.torrent'), 'save_path': '.'})
print("Total size: " + str(h.status().total_wanted))
print("Name: " + h.name())
h.resume()
while True:
s = h.status()
state_str = ['queued', 'checking', 'downloading metadata', \
'downloading', 'finished', 'seeding', 'allocating', 'checking fastresume']
print('\r%.2f%% complete (down: %.1f kb/s up: %.1f kB/s peers: %d) %s' % \
(s.progress * 100, s.download_rate / 1000, s.upload_rate / 1000, s.num_peers, state_str[s.state]))
sys.stdout.flush()
time.sleep(1)
</code></pre>
|
<python><seeding><libtorrent>
|
2023-03-16 23:11:20
| 0
| 322
|
EnderNicky
|
75,762,379
| 10,335
|
Jupyter does not find my kernel python before the system python
|
<p>I've created a Python venv and created a <a href="https://queirozf.com/entries/jupyter-kernels-how-to-add-change-remove" rel="nofollow noreferrer">JupyterHub kernel</a> based in it.</p>
<p>Everything works fine, I can start the kernel from JupyterHub interface and it is using my desired Python interpreter and venv.</p>
<p>The problem is that the directory with the global <code>python</code> and <code>pip</code> comes first in the PATH variable.</p>
<p>When I try to run <code>!pip install pandas</code> it finds the global <code>pip</code> instead of my venv pip. If I run <code>!which python</code> it finds my system python, instead of my venv python, so <code>!python -m pip install pandas</code> also <em>does not</em> work.</p>
<p>I know that I can open a terminal, activate the venv and install my packages, but my users are confused. They are used to just run pip install in a notebook cell.</p>
<p>How do I make the command below work in my venv-based kernel notebook?</p>
<pre><code>!pip install some-package
</code></pre>
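The usual fix is the `%pip` line magic, which IPython provides for exactly this case: unlike the `!` shell escape (which spawns a shell and lets PATH pick the system pip), `%pip install pandas` installs into the kernel's own interpreter. A PATH-proof plain-Python equivalent, shown as a sketch:

```python
import subprocess
import sys

def pip_install(package):
    """Install into sys.executable's environment, ignoring PATH order.

    Equivalent to the %pip magic: pip is run as a module of the very
    interpreter executing this code, not whatever 'pip' PATH finds first.
    """
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])
```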
|
<python><pip><jupyter><python-venv><jupyterhub>
|
2023-03-16 23:02:07
| 1
| 40,291
|
neves
|
75,762,371
| 10,284,437
|
Run '$x("XPath")' in Selenium 'driver.execute_script' JavaScript, chromedriver=111.0.5563.64
|
<p>On facebook, if I go to main page, and run in chrome dev tools this command:</p>
<pre><code>$x('//span[contains(text(), "Marketplace")]')[0].click()
</code></pre>
<p>it works well.</p>
<p>If I try in Python/Selenium:</p>
<pre><code> driver.execute_script("""
$x('//span[contains(text(), "Marketplace")]')[0].click()
""")
</code></pre>
<p>I get:</p>
<pre><code>selenium.common.exceptions.JavascriptException: Message: javascript error: $x is not defined
(Session info: chrome=111.0.5563.64)
</code></pre>
<p>If I use:</p>
<pre><code>marketplace_button = self.driver.find_element(By.XPATH, '//span[contains(text(), "Marketplace")]')
marketplace_button.click()
</code></pre>
<p>I get:</p>
<pre><code>selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <span class="x1lliihq x6ikm8r x10wlt62 x1n2onr6" style="-webkit-box-orient:vertical;-webkit-line-clamp:2;display:-webkit-box">...</span> is not clickable at point (204, 306). Other element would receive the click: <div class="x1uvtmcs x4k7w5x x1h91t0o x1beo9mf xaigb6o x12ejxvf x3igimt xarpa2k xedcshv x1lytzrv x1t2pt76 x7ja8zs x1n2onr6 x1qrby5j x1jfb8zj" tabindex="-1">...</div>
</code></pre>
<p>What I missed?</p>
<p>Maybe it's because I have a 'show me notification?' popup that I try to bypass (not working) with:</p>
<pre><code>options = webdriver.ChromeOptions()
options.add_experimental_option(
"prefs",
{
"credentials_enable_service": False,
"profile.password_manager_enabled": False,
"profile.default_content_setting_values.notifications": 2
},
)
options.add_argument('--disable-notifications')
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument("--password-store=basic")
options.add_argument("--disable-infobars")
options.add_argument("--disable-extensions")
options.add_argument("start-maximized")
</code></pre>
<p><a href="https://i.sstatic.net/DGmOY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DGmOY.png" alt="fb" /></a></p>
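`$x()` is a Chrome DevTools Console utility, not part of the page's own JavaScript, which is why `execute_script` reports `$x is not defined`. The page-level equivalent is the standard `document.evaluate()`. A sketch (the XPath is the one from the question):

```python
XPATH = '//span[contains(text(), "Marketplace")]'

# JS equivalent of $x(xpath)[0].click(), using the standard DOM API:
JS_CLICK_BY_XPATH = """
var el = document.evaluate(arguments[0], document, null,
                           XPathResult.FIRST_ORDERED_NODE_TYPE,
                           null).singleNodeValue;
if (el) { el.click(); }
"""

def click_by_xpath(driver, xpath=XPATH):
    driver.execute_script(JS_CLICK_BY_XPATH, xpath)
```

The `ElementClickInterceptedException` from the `find_element` route is a separate issue: an overlay (likely the notification prompt) covers the span. Dismissing it, or waiting with `EC.element_to_be_clickable` before calling `.click()`, usually resolves that.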
|
<python><selenium-webdriver><xpath><selenium-chromedriver>
|
2023-03-16 22:59:33
| 3
| 731
|
Mévatlavé Kraspek
|
75,762,342
| 7,346,393
|
TypeError: object MagicMock can't be used in 'await' expression
|
<p>I'm creating an application using <code>tortoise-orm</code> and <code>FastAPI</code>.</p>
<p>I wrote the following endpoint</p>
<pre><code>@app.get("/status/", response_model=list[Status])
async def get_status() -> list[Status]:
"""
Returns a list of Status objects representing the history/status of filtered messages.
Returns:
list[Status]: a list of Status objects representing the history of filtered messages.
"""
history = await History.all()
return list(map(Status.from_orm, history))
</code></pre>
<p>Now here's my attempt to test it:</p>
<pre><code>import unittest
from unittest.mock import AsyncMock, patch
from fastapi.testclient import TestClient
from source.craft.models import History
from source.main import app
class TestAPI(unittest.TestCase):
def setUp(self) -> None:
self.client = TestClient(app)
@patch("source.craft.models.History.all")
def test_get_status(self, history_mock):
object_mock = AsyncMock()
object_mock.return_value = [
History(id=1, text="Hello!", filter="lower", url=None, status="Done"),
]
history_mock.return_value.__aenter__.return_value = object_mock
response = self.client.get("/status")
self.assertEqual(response.status_code, 200)
</code></pre>
<p>Unfortunately my test does not work. I get the following error</p>
<pre><code>> history = await History.all()
E TypeError: object MagicMock can't be used in 'await' expression
</code></pre>
<p>What am I doing wrong? What is a correct way to test such functions?</p>
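A sketch of the pattern with a stand-in model (in the real test the decorator would be `@patch("source.craft.models.History.all", new_callable=AsyncMock)`): the error appears because a plain `MagicMock` is not awaitable. Patching with `new_callable=AsyncMock` makes `History.all()` return an awaitable whose result is `return_value`; the `__aenter__` dance is for async context managers, which `all()` is not.

```python
import asyncio
from unittest.mock import AsyncMock, patch

class History:                        # stand-in for the tortoise model
    @classmethod
    async def all(cls):
        raise RuntimeError("would hit the database")

async def get_status():
    return await History.all()

# AsyncMock is awaitable; awaiting it yields return_value directly.
with patch.object(History, "all", new_callable=AsyncMock) as mock_all:
    mock_all.return_value = ["row-1"]
    assert asyncio.run(get_status()) == ["row-1"]
```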
|
<python><fastapi><python-unittest>
|
2023-03-16 22:55:29
| 1
| 869
|
Hendrra
|
75,762,315
| 1,693,057
|
How to define a Python dictionary type hint with specific key-value type relationships
|
<p>Is it possible to tell Python I have a <code>dict</code> that either has <code>Fruit</code> keys with <code>int</code> values or <code>Car</code> keys with <code>str</code> values?</p>
<pre class="lang-py prettyprint-override"><code>Fruit = NewType("fruit", int)
Car = NewType("car", str)
# NewDictType = Dict[Union[Fruit, Car], Union[int, str]]
NewDictType = Union[Dict[Fruit, int], Dict[Car, str]]
test:NewDictType = dict()
number: int = test[Fruit(0)]
# Diagnostics:
# 1. Argument of type "fruit" cannot be assigned to parameter "__key" of type "car" in function "__getitem__"
"fruit" is incompatible with "car"
# 2. Expression of type "int | str" cannot be assigned to declared type "int"
Type "int | str" cannot be assigned to type "int"
"str" is incompatible with "int"
</code></pre>
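One sketch of a way out: a single `Dict` type cannot pair key types with value types, and `Union[Dict[Fruit, int], Dict[Car, str]]` means "one dict OR the other", hence both diagnostics. To express "fruit keys map to int, car keys map to str" in one object, wrap a dict in a small class with `@overload`-ed `__getitem__`/`__setitem__` (checkers resolve the overload; at runtime it is just a dict):

```python
from typing import NewType, Union, overload

Fruit = NewType("fruit", int)
Car = NewType("car", str)

class FruitCarDict:
    def __init__(self) -> None:
        self._data: dict = {}

    @overload
    def __getitem__(self, key: Fruit) -> int: ...
    @overload
    def __getitem__(self, key: Car) -> str: ...
    def __getitem__(self, key: Union[Fruit, Car]):
        return self._data[key]

    @overload
    def __setitem__(self, key: Fruit, value: int) -> None: ...
    @overload
    def __setitem__(self, key: Car, value: str) -> None: ...
    def __setitem__(self, key, value) -> None:
        self._data[key] = value

test = FruitCarDict()
test[Fruit(0)] = 1
number: int = test[Fruit(0)]     # checker infers int, not int | str
```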
|
<python><dictionary><typing>
|
2023-03-16 22:51:37
| 1
| 2,837
|
Lajos
|
75,762,274
| 4,772,836
|
Unit test async method python
|
<p>I'm trying to learn about unit testing async methods.</p>
<p>Let's say I have this that I want to test</p>
<pre><code>async def run(self):
while True:
async with DBHook(mssql_conn_id=self.db_conn).get_cursor() as cursor:
cursor.execute(self.sql)
rows = cursor.fetchone()
if rows[0] > 0:
yield TriggerEvent(True)
await asyncio.sleep(self.sleep_interval)
</code></pre>
<p>This is in a class of course.</p>
<p>Now, I would like to assert that</p>
<pre><code>asyncio is called when number of rows affected by fetchone is zero
</code></pre>
<p>So, I am trying to write to test like this (tried other variations as well)</p>
<pre><code>import aiounittest
import mock
import mymodule as st
class SQLTriggerTests(aiounittest.AsyncTestCase):
@mock.patch("mymodule.DBHook")
@mock.patch("mymodule.asyncio")
async def test_run(self, mock_asyncio, mock_MsSqlIntegratedHook):
my_obj = mymodule.classname(sql="select blabla", mssql_conn_id="db")
conn=mock_MsSqlIntegratedHook.return_value.get_cursor.return_value.__enter__.return_value
conn.fetchone.return_value= (0,)
res = my_obj.run()
mock_asyncio.sleep.assert_called()
</code></pre>
<p>The assertion fails saying asyncio is not called.</p>
<p>I also tried using await in the call to my_obj.run(), but that gives me an error saying</p>
<pre><code>can't be awaited on a generator object.
</code></pre>
<p>How should I test this properly? Python version: 3.9.</p>
<p>Update 1:
Since last time I have made some progress, but I am still unable to test the async method successfully.
I have now fixed the await problem by awaiting <strong>__anext__</strong>, since the async generator itself can't be awaited, but now I can't mock the cursor's execute or fetchone. I have tried mocking the coroutine as follows (based on some S.O. posts).</p>
<p>Basically I can't mock <code>rows = cursor.fetchone()</code>, so the test fails when it reaches
<code>if rows[0] > 0</code></p>
<pre><code>def get_mock_coro(return_value):
m = mock.MagicMock()
@asyncio.coroutine
def mock_coro(*args, **kwargs):
return m(*args, **kwargs)
mock_coro.mock = m
mock_coro.execute = mock.MagicMock()
mock_coro.fetchone = mock.MagicMock()
mock_coro.fetchone.return_value = return_value
return Mock(wraps=mock_coro)
@mock.patch("plugin.triggers.sql_trigger.DBHook")
@mock.patch("plugin.triggers.sql_trigger.asyncio")
async def test_run(self, mock_asyncio, mock_MsSqlIntegratedHook):
sql_trigger_test_obj = st.SQLTrigger(sql="select blabla", mssql_conn_id="conn")
mock_record = (0)
mock_MsSqlIntegratedHook().get_cursor.return_value.__aenter__.return_value=get_mock_coro(mock_record)
gen = sql_trigger_test_obj.run()
res=await (gen.__anext__())
mock_asyncio.sleep.assert_called()
</code></pre>
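A sketch of a working shape for this test, with the DBHook dependency replaced by an inline stand-in so it runs on its own: keep `execute`/`fetchone` as plain `MagicMock`s (they are synchronous cursor calls), make only the context manager and `sleep` async, and break the infinite loop by giving the patched `sleep` a side effect that raises:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock, patch

async def run(get_cursor, sql, sleep_interval=0):
    # simplified stand-in for SQLTrigger.run from the question
    while True:
        async with get_cursor() as cursor:
            cursor.execute(sql)
            rows = cursor.fetchone()
            if rows[0] > 0:
                yield True
        await asyncio.sleep(sleep_interval)

def test_sleeps_on_zero_rows():
    cursor = MagicMock()                 # execute/fetchone are synchronous
    cursor.fetchone.return_value = (0,)
    cm = MagicMock()                     # MagicMock supports `async with` (3.8+)
    cm.__aenter__.return_value = cursor  # the `as cursor` target

    async def drive():
        gen = run(lambda: cm, "select 1")
        # Patched sleep raises, otherwise __anext__ would never return
        # (rows == 0 means the generator never yields):
        with patch("asyncio.sleep", new_callable=AsyncMock,
                   side_effect=ZeroDivisionError) as slp:
            try:
                await gen.__anext__()
            except ZeroDivisionError:
                pass
            slp.assert_awaited()         # the behaviour under test

    asyncio.run(drive())
```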
|
<python><unit-testing><python-asyncio><python-unittest>
|
2023-03-16 22:43:54
| 2
| 1,060
|
Saugat Mukherjee
|
75,762,158
| 15,392,319
|
Using psycopg2 within AWS Lambda and AWS Amplify
|
<p>Before someone marks this as a duplicate, I've reviewed <a href="https://stackoverflow.com/questions/44855531/no-module-named-psycopg2-psycopg-modulenotfounderror-in-aws-lambda">No module named 'psycopg2._psycopg': ModuleNotFoundError in AWS Lambda</a></p>
<p>and attempted the steps listed in this repo as best I can: <a href="https://github.com/jkehler/awslambda-psycopg2" rel="nofollow noreferrer">https://github.com/jkehler/awslambda-psycopg2</a></p>
<p>I'm building a web application in AWS Amplify, which packages the Lambda itself, so regardless of whether I use his precompiled options or generate the package myself from scratch, I am unsure where I would even put the folder. I tried several places but I keep getting the error:</p>
<pre><code>Unable to import module 'index': No module named 'psycopg2._psycopg
</code></pre>
<p>I also tried a lambda layer from this repo with the same result: <a href="https://github.com/jetbridge/psycopg2-lambda-layer" rel="nofollow noreferrer">https://github.com/jetbridge/psycopg2-lambda-layer</a></p>
<p>So is there a specific place I should put the generated folder? Is there a lambda layer that works?</p>
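A sketch of the packaging step, assuming the default Amplify layout with `<name>` as a placeholder for your function's name: Amplify zips `amplify/backend/function/<name>/src/`, so the Linux-compiled psycopg2 must land inside that folder rather than next to your local interpreter. Recent pip can fetch the Lambda-compatible wheel directly:

```shell
# Install the manylinux wheel of psycopg2-binary into the folder Amplify zips.
# --platform + --only-binary force the Lambda-compatible build, not your OS's.
pip install \
    --platform manylinux2014_x86_64 \
    --only-binary=:all: \
    --target amplify/backend/function/<name>/src \
    psycopg2-binary
```

After this, `import psycopg2` resolves inside the deployed package; no layer is needed.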
|
<python><amazon-web-services><aws-lambda><aws-amplify><psycopg2>
|
2023-03-16 22:24:59
| 2
| 428
|
cmcnphp
|
75,762,144
| 4,980,705
|
After click() Selenium does not retrieve new page_source
|
<p>I have the following code. The first loop works fine, and I'm able to get to the second page, but the results printed are the same as in the first loop.</p>
<p>The first loop works fine:</p>
<pre><code>url_city = "https://www.tripadvisor.com/Restaurants-g189158-Lisbon_Lisbon_District_Central_Portugal.html"
# launch url
url = url_city
# create a new Firefox session
driver = webdriver.Firefox(service=FirefoxService(GeckoDriverManager().install()))
driver.get(url)
s_city = BeautifulSoup(driver.page_source, "lxml")
results = s_city.find(class_="YtrWs")
restaurants = results.find_all("a", class_="Lwqic Cj b")
page_number = s_city.find(class_="pageNum current")["data-page-number"]
print(page_number)
</code></pre>
<p>After the first loop I use click() to go to the next page. Note: Tripadvisor does not update the URL.</p>
<p>Second loop:</p>
<pre><code>#GO TO NEXT PAGE
python_button = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a.nav.next.rndBtn.ui_button.primary.taLnk[data-page-number][onclick*='STANDARD_PAGINATION']")))
python_button.click()
s_city = BeautifulSoup(driver.page_source, "lxml")
results = s_city.find(class_="YtrWs")
restaurants = results.find_all("a", class_="Lwqic Cj b")
page_number = s_city.find(class_="pageNum current")["data-page-number"]
print(page_number)
</code></pre>
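A sketch of why the second parse repeats page 1, and one fix: `click()` swaps content via XHR, so `driver.page_source` read immediately afterwards still holds the old DOM. Poll until the "current" page number changes before re-parsing (`WebDriverWait` with `EC.staleness_of(python_button)` is the built-in Selenium alternative). The regex below assumes the `class` attribute precedes `data-page-number`, as in the markup the question queries:

```python
import re
import time

def current_page(html):
    # matches e.g. <a class="pageNum current" data-page-number="2">
    m = re.search(r'class="pageNum current"[^>]*data-page-number="(\d+)"', html)
    return m.group(1) if m else None

def wait_for_new_page(driver, old_number, timeout=20, poll=0.5):
    """Return page_source once the displayed page number differs."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        src = driver.page_source
        if current_page(src) not in (None, str(old_number)):
            return src
        time.sleep(poll)
    raise TimeoutError("pagination did not advance past page %s" % old_number)
```

Usage after the click: `s_city = BeautifulSoup(wait_for_new_page(driver, page_number), "lxml")`.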
|
<javascript><python><html><selenium-webdriver>
|
2023-03-16 22:22:24
| 1
| 717
|
peetman
|
75,762,121
| 9,927,519
|
How to generate methods to set and get properties at class instantiation with Python?
|
<p>I have a class containing tensors with an unpredictable set of properties.</p>
<p>I would like it to generate properties with getters and setters based on a list given at instantiation.</p>
<p>Do you know how to do that?</p>
<p>Thanks</p>
<pre class="lang-py prettyprint-override"><code>class Items:
def __init___(
self,
names: List[str]
):
#: Property names to be generated
self.names: List[str] = names
#: Tensors in use
self.tensors: Dict[str,Tensor] = {}
def get_tensor(self, name:str) -> Tensor | None:
if name in self.tensors.keys():
return self.tensors[name]
return None
def set_tensor(self, name:str, tensor: Tensor):
self.tensors[name] = tensor
# The exact same setter and getter functions have to be generated for several properties:
# entries, classes, scores, target classes, target scores, embeddings, heatmaps, losses...
# The list is provided by self.names at instantiation
@property
def entries(self) -> Tensor | None:
return self.get_tensor("entries")
@entries.setter
def entries(self, tensor: Tensor):
self.set_tensor("entries", tensor)
</code></pre>
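A minimal sketch of one way to do this (`Tensor` replaced by `object` so it runs standalone): generate a `property` object per name and attach it to the class with `setattr`. Note that properties live on the class, so names from every instance accumulate there; a `__getattr__`/`__setattr__` pair is an alternative if instances must stay isolated.

```python
from typing import Dict, List, Optional

class Items:
    def __init__(self, names: List[str]):
        self.names = names
        self.tensors: Dict[str, object] = {}
        for name in names:
            self._add_property(name)

    @classmethod
    def _add_property(cls, name: str) -> None:
        if hasattr(cls, name):           # generate each property only once
            return
        # _n=name freezes the loop variable in each closure
        def getter(self, _n=name) -> Optional[object]:
            return self.tensors.get(_n)
        def setter(self, tensor, _n=name) -> None:
            self.tensors[_n] = tensor
        setattr(cls, name, property(getter, setter))

items = Items(["entries", "scores"])
items.entries = "tensor-stand-in"        # routed through set_tensor-like setter
```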
|
<python><class><pytorch><instance-variables><python-class>
|
2023-03-16 22:19:25
| 2
| 3,287
|
Xiiryo
|
75,762,068
| 10,526,441
|
Selecting different columns by row for pandas dataframe
|
<p>The closest thing I could find for this question is <a href="https://stackoverflow.com/questions/24833130/how-can-i-select-a-specific-column-from-each-row-in-a-pandas-dataframe">this</a>.</p>
<p>What I'm trying to do is very similar, except I'm basically trying to extract one matrix from another. To use the same example from that link:</p>
<pre><code> a b c
0 1 2 3
1 4 5 6
2 7 8 9
3 10 11 12
4 13 14 15
</code></pre>
<p>Given the above, my extraction matrix would look like:</p>
<pre><code>[ ['a', 'a'],
['a', 'a'],
['a', 'b'],
['c', 'a'],
['b', 'b'] ]
</code></pre>
<p>The expected result would be either the following <code>pd.DataFrame</code> or <code>np.array</code>:</p>
<pre><code>[ [1, 1],
[4, 4],
[7, 8],
[12, 10],
[14, 14] ]
</code></pre>
<p>I feel like this is probably a common manipulation, I just don't know how to do it here. I want to rule out <code>pd.iterrows</code> because my parent matrix is really long, and really wide, and <code>pd.iterrows</code> is remarkably slow on even a fraction of the matrix. I have a decent amount of memory, so I'd like to lean on that a little bit if I can.</p>
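One vectorized way to do this extraction (no `iterrows`): map the label matrix to column positions once with `Index.get_indexer`, then use NumPy fancy indexing over the underlying array, a single pass over memory:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 4, 7, 10, 13],
                   "b": [2, 5, 8, 11, 14],
                   "c": [3, 6, 9, 12, 15]})
labels = np.array([["a", "a"], ["a", "a"], ["a", "b"],
                   ["c", "a"], ["b", "b"]])

# column labels -> integer positions, same shape as `labels`
col_idx = df.columns.get_indexer(labels.ravel()).reshape(labels.shape)
rows = np.arange(len(df))[:, None]       # (n, 1), broadcasts against col_idx
out = df.to_numpy()[rows, col_idx]       # (n, k) result
```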
|
<python><pandas><numpy>
|
2023-03-16 22:10:22
| 4
| 548
|
John Rouhana
|
75,762,061
| 19,583,053
|
How to add arrow labels to points in python geodataframe?
|
<p>My code currently outputs a visualization that looks something like this:</p>
<p><a href="https://i.sstatic.net/bYX9D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bYX9D.png" alt="enter image description here" /></a></p>
<p>And I would like to be able to add a feature which has arrows pointing to each location dot, in which each arrow has a label name. Basically, I want to get output that looks like this:</p>
<p><a href="https://i.sstatic.net/zjKWS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zjKWS.png" alt="enter image description here" /></a></p>
<p>I would want the program to automatically choose arrow spacing so that the label names (like "A", "B", etc.) do not collide with each other. Is this possible to do automatically?</p>
<p>Here is my code so far which displays the first image:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from shapely.geometry import Point
import geopandas as gpd
from geopandas import GeoDataFrame
import json
#This function creates the pandas dataframe where each row is the (latitude, longitude) of a location. The first column is latitude, and the second column is longitude
def createLatitudeLongitudeDataFrame(inputJsonFolder, inputJsonFilename):
data = json.load(open(inputJsonFolder + "\\" + inputJsonFilename))
latitudes = []
longitudes = []
for location in data["locations"]:
latitudes.append(location["coordinates"]["latitude"])
longitudes.append(location["coordinates"]["longitude"])
data_tuples = list(zip(latitudes, longitudes))
df = pd.DataFrame(data_tuples, columns=["Latitudes", "Longitudes"])
return df
#This function creates the map of the locations based on the dataframe.
def createMapOfLocations(df):
geometry = [Point(yx) for yx in zip(df['Longitudes'], df['Latitudes'])]
gdf = GeoDataFrame(df, geometry=geometry)
#This is a simple map that goes with geopandas
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
gdf.plot(ax=world.plot(figsize=(10, 6)), marker='o', color='red', markersize=15)
plt.show()
def main():
inputJsonFolder = "\\inputFolder"
inputJsonFilename = "testing.json"
df = createLatitudeLongitudeDataFrame(inputJsonFolder, inputJsonFilename)
createMapOfLocations(df)
main()
</code></pre>
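For reference, a minimal matplotlib sketch of the arrow-label idea with made-up coordinates (geopandas plots onto a regular matplotlib Axes, so `ax.annotate` works on the map too). Fully automatic collision-free placement is not built in; the third-party adjustText package can push labels apart after the fact:

```python
import matplotlib
matplotlib.use("Agg")                  # headless backend for this sketch
import matplotlib.pyplot as plt

lons = [-0.1, 2.35, 13.4]              # demo coordinates, not real data
lats = [51.5, 48.85, 52.5]
names = ["A", "B", "C"]

fig, ax = plt.subplots()
ax.scatter(lons, lats, color="red", s=15)
texts = []
for x, y, label in zip(lons, lats, names):
    # annotate draws the text at xytext and an arrow back to xy
    texts.append(ax.annotate(label, xy=(x, y), xytext=(x + 1.0, y + 1.0),
                             arrowprops=dict(arrowstyle="->", lw=0.5)))

# Optional automatic de-overlapping (requires `pip install adjustText`):
# from adjustText import adjust_text
# adjust_text(texts, arrowprops=dict(arrowstyle="->"))
```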
|
<python><json><pandas><matplotlib><geopandas>
|
2023-03-16 22:09:04
| 1
| 307
|
graphtheory123
|
75,762,034
| 13,178,155
|
Django signal creates UserProfile from listening to User, but it's empty?
|
<p>I'm new to Django and trying to create a signal that listens to a custom User model I've built. My intention is that when a new User is created, that a profile for them will also be automatically created.</p>
<p>To me it seems that my signal itself works, as when I use the /admin screen I can add a User and it automatically creates their profile. <strong>However</strong>, when I try to add a new User through the front end of the application, it appears that the signal does indeed detect a new User being created and creates a Profile for them. But when I go into the /admin screen to have a look at it, it's empty.</p>
<p>When I say empty, I mean that the Profile record is created, but none of its fields are populated. This causes further errors in my app: I can't use the User to pull up and render their Profile, because it's empty and not connected to a User.</p>
<p>What am I doing wrong? My hunch is that something is wrong with my view function. <code>registerUserParticipant</code> and <code>registerUserSupervisor</code> are called when creating a new user in the front end. <code>editAccount</code> currently throws an error too, as it is triggered after a user is registered, as <code>request.user.profileparticipant</code> can't find the user's profile.</p>
<p><strong>userprofiles/models.py</strong></p>
<pre><code>from django.db import models
from django.contrib.auth.models import AbstractUser
import uuid
# Create your models here.
class User(AbstractUser):
is_supervisor = models.BooleanField(default=False)
is_participant = models.BooleanField(default=False)
created = models.DateTimeField(auto_now_add=True) #auto_now_add, creates automatic timestamp
class ProfileParticipant(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE, null=True, blank=True) #when a user is deleted a profile is deleted
first_name = models.CharField(max_length=200, blank=True, null=True)
last_name = models.CharField(max_length=200, blank=True, null=True)
username = models.CharField(max_length=200, blank=True, null=True)
email = models.EmailField(max_length=500, blank=True, null=True)
profile_image = models.ImageField(null=True, blank=True, upload_to='profiles/', default='profiles/user-default.png')
created = models.DateTimeField(auto_now_add=True) #auto_now_add, creates automatic timestamp
id = models.UUIDField(default=uuid.uuid4, unique= True,
primary_key=True, editable=False) # UUID is a 16 character string of number and letters, helps with scalability
# organisation
# participant_group
def __str__(self):
return str(self.username)
@property
def imageURL(self):
try:
url = self.profile_image.url
except:
url = ''
return url
class ProfileSupervisor(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE, null=True, blank=True) #when a user is deleted a profile is deleted
first_name = models.CharField(max_length=200, blank=True, null=True)
last_name = models.CharField(max_length=200, blank=True, null=True)
username = models.CharField(max_length=200, blank=True, null=True)
email = models.EmailField(max_length=500, blank=True, null=True)
profile_image = models.ImageField(null=True, blank=True, upload_to='profiles/', default='profiles/user-default.png')
title = models.CharField(max_length=200, blank=True, null=True)
short_intro = models.CharField(max_length=200, blank=True, null=True)
description = models.TextField()
created = models.DateTimeField(auto_now_add=True) #auto_now_add, creates automatic timestamp
id = models.UUIDField(default=uuid.uuid4, unique= True,
primary_key=True, editable=False) # UUID is a 16 character string of number and letters, helps with scalability
# org
# supervisor_group
def __str__(self):
return str(self.username)
@property
def imageURL(self):
try:
url = self.profile_image.url
except:
url = ''
return url
</code></pre>
<p><strong>userprofiles/signals.py</strong></p>
<pre><code>from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
from .models import User
from .models import ProfileParticipant, ProfileSupervisor
from django.core.mail import send_mail
from django.conf import settings
#don't forget to register your signals with apps.py
@receiver(post_save, sender=settings.AUTH_USER_MODEL) #listens to user model, we want it to notice if a new user is added and create a profile
def createProfile(sender, instance, created, **kwargs):
print('Profile signal triggered')
if created:
user = instance
if user.is_supervisor == True:
ProfileSupervisor.objects.create(user=user,username=user.username,email=user.email,first_name= user.first_name,last_name = user.last_name).save()
# profile.save()
elif user.is_participant == True:
profile = ProfileParticipant.objects.create(user=user,username=user.username,email=user.email,first_name= user.first_name,last_name = user.last_name)
profile.save()
# subject = 'Welcome to DevSearch'
# message = "We are glad you are here!"
# send_mail(
# subject,
# message,
# settings.EMAIL_HOST_USER,
# [profile.email],
# fail_silently=False,
# )
@receiver(post_save, sender=ProfileParticipant)
def updateUserParticipant(sender, instance, created, **kwargs):
profile = instance
user = profile.user
if created == False: #ensures consistency between user and profile (avoid recursion erorr)
# User.objects.create(username=profile.username, email=profile.username, first_name=profile.first_name, last_name=profile.last_name, is_participant=True)
# else:
user.first_name = profile.first_name
user.last_name = profile.last_name
user.username = profile.username
user.email = profile.email
user.is_participant = True
user.save()
@receiver(post_save, sender=ProfileSupervisor)
def updateUserSupervisor(sender, instance, created, **kwargs):
profile = instance
user = profile.user
if created == False: #ensures consistency between user and profile (avoid recursion erorr)
user.first_name = profile.name
user.username = profile.username
user.email = profile.email
user.save()
@receiver(post_delete, sender=ProfileSupervisor) #listens to profile, we want it to notice if a profile is delete and delete the user too
def deleteUserSupervisor(sender, instance, **kwargs):
try:
user = instance.user
user.delete()
except:
pass
@receiver(post_delete, sender=ProfileParticipant) #listens to profile, we want it to notice if a profile is delete and delete the user too
def deleteUserParticipant(sender, instance, **kwargs):
try:
user = instance.user
user.delete()
except:
pass
# post_save.connect(profileUpdated, sender=Profile) #you can do it like this if you don't want to use the decorator
# post_delete.connect(deleteUser, sender=Profile)
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>from django.shortcuts import render, redirect
from .models import ProfileParticipant, ProfileSupervisor
from django.contrib.auth.models import User
from django.contrib.auth import get_user_model
from django.contrib import messages
from django.contrib.auth import login, authenticate, logout
from .forms import CustomUserCreationForm, ProfileFormParticipant, ProfileFormSupervisor
from django.contrib.auth.decorators import login_required #this decorator can sit above any view we want to block
# Create your views here.
def profiles(request):
profilesParticipant = ProfileParticipant.objects.all()
profilesSupervisor = ProfileSupervisor.objects.all()
context = {'profilesParticipant':profilesParticipant, 'profilesSupervisor':profilesSupervisor}
return render(request,'userprofiles/profiles.html', context)
def userProfileParticipant(request, pk):
profile = ProfileParticipant.objects.get(id=pk)
context = {'profile':profile}
return render(request, 'userprofiles/user-profile.html', context)
def userProfileSupervisor(request, pk):
profile = ProfileSupervisor.objects.get(id=pk)
context = {'profile':profile}
return render(request, 'userprofiles/user-profile.html', context)
def loginUser(request):
page = 'login' #get rid of this?
if request.user.is_authenticated: #if the user if logged in, don't let them be seeing the log in page
return redirect('profiles')
if request.method == "POST":
username = request.POST['username'].lower()
password = request.POST['password']
try:
user = User.objects.get(username=username)
except:
messages.error(request, 'Username does not exist')
user = authenticate(request, username=username, password=password) #makes sure the username and password match, if they do it'll return the user instance, if not, it'll return none
if user is not None:
login(request, user) #this will create a session for theuser in the db, it'll add that session into the browser's cookies
return redirect(request.GET['next'] if 'next' in request.GET else 'account')
else:
messages.error(request, 'Username OR password is incorrect') #basically authenticate returned None
context = {'page':page}
return render(request, 'userprofiles/login_register.html', context)
def logoutUser(request):
logout(request)
messages.info(request, 'User was logged out')
return redirect('login')
def registerLanding(request):
return render(request, 'userprofiles/register-landing.html')
def registerUserParticpant(request):
page= "register"
usertype= "participant"
userform = CustomUserCreationForm()
profileform = ProfileFormParticipant()
if request.method == "POST":
userform = CustomUserCreationForm(request.POST) #send form info to back end
profileform = ProfileFormParticipant(request.POST)#send info to back end
if userform.is_valid():
user = userform.save(commit=False) #commit=False lets a temporary instance hang before processing.
user.username = user.username.lower() #could do this from the form directly and not use commit=false
user.save() #at this point the user will be added to the database and saved, the signals we set up will also take care of automatically setting up the profile
# if profileform.is_valid():
# profileform.save()
messages.success(request, 'User account was created!')
login(request, user)
return redirect('edit-account')
else:
messages.error(request, 'An error has occurred during registration')
context = {'userform':userform, 'profileform':profileform, 'page':page, 'usertype':usertype}
return render(request, 'userprofiles/login_register.html', context)
def registerUserSupervisor(request):
page= "register"
usertype= "supervisor"
userform = CustomUserCreationForm()
profileform = ProfileFormSupervisor()
if request.method == "POST":
userform = CustomUserCreationForm(request.POST) #send form info to back end
profileform = ProfileFormSupervisor(request.POST)#send info to back end
if userform.is_valid():
user = userform.save(commit=False) #commit=False lets a temporary instance hang before processing.
user.username = user.username.lower() #could do this from the form directly and not use commit=false
user.save() #at this point the user will be added to the database and saved, the signals we set up will also take care of automatically setting up the profile
if profileform.is_valid():
profileform.save()
messages.success(request, 'User account was created!')
login(request, user)
return redirect('edit-account')
else:
messages.error(request, 'An error has occurred during registration')
context = { 'userform':userform,'profileform':profileform, 'page':page, 'usertype':usertype}
return render(request, 'userprofiles/login_register.html', context)
@login_required(login_url='login')
def userAccount(request):
profile = request.user.profileparticipant #will get you the logged in user
records = profile.feelingsscalerecord_set.all()
context = {'profile':profile, 'records':records}
return render(request, 'userprofiles/account.html', context)
@login_required(login_url='login')
def editAccount(request):
profile = request.user.profileparticipant
form = ProfileFormParticipant(instance=profile)
if request.method == "POST":
form = ProfileFormParticipant(request.POST, request.FILES, instance=profile)
if form.is_valid():
form.save()
return redirect('account')
context = {'form':form}
return render(request, 'userprofiles/profile_form.html', context)
</code></pre>
|
<python><django><django-views><django-signals><django-custom-user>
|
2023-03-16 22:05:36
| 0
| 417
|
Jaimee-lee Lincoln
|
75,761,938
| 1,438,082
|
ValueError: time data does not match format
|
<p>The following code gives the error ValueError("time data '2023-03-16T21:30:00.0000000Z' does not match format '%Y-%m-%dT%H:%M:%S.%fZ'")
What is wrong?</p>
<pre><code>date_string = '2023-03-16T21:30:00.0000000Z'
day_of_month_in_result = datetime.strptime(date_string, '%Y-%m-%dT%H:%M:%S.%fZ').day
print(day_of_month_in_result)
</code></pre>
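<p>For reference, <code>%f</code> in <code>strptime</code> accepts at most six fractional-second digits, while this string carries seven (100-nanosecond ticks). A minimal sketch of one workaround, assuming the fixed-width format shown, is to trim the fraction to microseconds first:</p>

```python
from datetime import datetime

date_string = '2023-03-16T21:30:00.0000000Z'
# keep only six fractional digits so '%f' can parse it
# (assumes the fixed-width layout of the sample string)
trimmed = date_string[:26] + 'Z'  # '2023-03-16T21:30:00.000000Z'
day = datetime.strptime(trimmed, '%Y-%m-%dT%H:%M:%S.%fZ').day
print(day)  # 16
```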
|
<python>
|
2023-03-16 21:51:06
| 1
| 2,778
|
user1438082
|
75,761,911
| 19,325,656
|
NOT NULL constraint failed null=True blank=True
|
<p>Despite having null=True and blank=True in my models, I always get</p>
<p><em>NOT NULL constraint failed: user_mgmt_profile.user_id</em></p>
<p>I've deleted the database, deleted migrations, run the commands again, and the error still persists.</p>
<p>Where do you see the issue?</p>
<p>Basically I'm automatically connecting the user to a new profile on a POST request.</p>
<p><strong>models</strong></p>
<pre><code>class Profile(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
user = models.OneToOneField(User, on_delete=models.CASCADE)
gym = models.OneToOneField(Gym, null=True, blank=True, on_delete=models.DO_NOTHING)
</code></pre>
<p><strong>create function in viewset</strong></p>
<pre><code>def create(self, request):
request.data['user'] = request.user.id
serializer = ProfileSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
data = serializer.data
return Response(data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p><strong>serializers</strong></p>
<pre><code>class LocationSerializer(serializers.ModelSerializer):
class Meta:
model = Location
fields = '__all__'
class GymSerializer(serializers.ModelSerializer):
place = LocationSerializer(read_only=True)
class Meta:
model = Gym
fields = '__all__'
class ProfileSerializer(WritableNestedModelSerializer, serializers.ModelSerializer):
user = UserSerializer(read_only=True)
gym = GymSerializer(read_only=True)
class Meta:
model = Profile
fields = '__all__'
depth = 1
</code></pre>
<p><strong>edit</strong>
I've printed out the</p>
<pre><code>request.data['user']
</code></pre>
<p>and i can see the UUID of the current user</p>
|
<python><django><django-rest-framework>
|
2023-03-16 21:46:17
| 1
| 471
|
rafaelHTML
|
75,761,838
| 6,354,590
|
Snowflake connectivity by pyodbc
|
<p>I am trying to connect to Snowflake on Linux via <code>pyodbc</code> in Python 2.7.
I have installed <code>snowflake-odbc-2.25.8.x86_64.rpm</code> on my RHEL7 box.
I have made the following changes in my <code>odbc.ini</code> file:</p>
<pre><code>[testodbc2]
Driver = /usr/lib64/snowflake/odbc/lib/libSnowflake.so
Description =
server = <MY_ACCOUNT>.snowflakecomputing.com
#PRIV_KEY_FILE_PWD = <MY_PASSWORD>
AUTHENTICATOR = SNOWFLAKE_JWT
SSL=on
</code></pre>
<p>Now if I uncomment <code>PRIV_KEY_FILE_PWD = <MY_PASSWORD> </code> in my <code>odbc.ini</code> and execute the following lines</p>
<pre><code>conn_str_1 = "DSN={};UID={};PWD={};server={};schema={};role={};warehouse={};database={};PRIV_KEY_FILE={}".format(dsn_name,
user, password_str, "{}.snowflakecomputing.com".format(account), schema, role, warehouse, database, private_key_location)
conn = pyodbc.connect(conn_str_1)
</code></pre>
<p>It works. But as soon as I comment <code>PRIV_KEY_FILE_PWD = <MY_PASSWORD> </code> in my <code>odbc.ini</code> and try to pass password from my code, like this:</p>
<pre><code>conn_str_1 = "DSN={};UID={};PWD={};server={};schema={};role={};warehouse={};database={};PRIV_KEY_FILE={};PRIV_KEY_FILE_PWD ={}".format(dsn_name,
user, password_str, "{}.snowflakecomputing.com".format(account), schema, role, warehouse, database, private_key_location, MY_PASSWORD)
conn = pyodbc.connect(conn_str_1)
</code></pre>
<p>It starts failing with Error:</p>
<blockquote>
<p>"Error: ('HY000', '[HY000] [Snowflake][Snowflake] (44) \n Error finalize setting: Marshaling private key failed.\n (44) (SQLDriverConnect)')"</p>
</blockquote>
<p>MY_PASSWORD has ";" inside it, so I tried to escape it with<br />
<code>MY_PASSWORD = MY_PASSWORD.replace(';', '\\;')</code>
before passing it in the DSN string, but I still get the same error.</p>
<p>What surprises me is that in odbc.ini I use the password as-is and it works, but if I pass it from code in the DSN string it does not work.</p>
<p>Am I missing something here?</p>
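<p>One detail worth checking (an assumption, not verified against the Snowflake driver): in an ODBC connection string, a value containing <code>;</code> is conventionally quoted with braces rather than backslash-escaped. A sketch with a hypothetical password:</p>

```python
# brace-quote an ODBC connection-string value that contains ';'
# (hypothetical password shown)
password = "my;secret"
conn_str = "DSN=testodbc2;PRIV_KEY_FILE_PWD={%s};" % password
print(conn_str)  # DSN=testodbc2;PRIV_KEY_FILE_PWD={my;secret};
```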
|
<python><snowflake-cloud-data-platform><odbc><pyodbc>
|
2023-03-16 21:34:55
| 1
| 1,791
|
gaurav bharadwaj
|
75,761,833
| 5,144,885
|
python replace regex for dynamic string
|
<p>In Wikipedia pages are expressions like:</p>
<blockquote>
<p>[[File Transfer Protocol|FTP server]]
[[server (computing)|server]]</p>
</blockquote>
<p>I can remove it by</p>
<pre><code>pattern = '\[\[.*\|.*\]\]'
text = re.sub(pattern, '', text)
</code></pre>
<p>But I want to replace it with the second part after the pipe character.</p>
<blockquote>
<p>FTP server
server</p>
</blockquote>
<p>or</p>
<blockquote>
<p>[[FTP server]]
[[server]]</p>
</blockquote>
<p>Is this possible with regular expressions?</p>
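<p>A sketch of the usual approach: capture the display text after the pipe in a group and use a backreference in the replacement (the restricted character classes keep each match inside one link):</p>

```python
import re

text = "[[File Transfer Protocol|FTP server]] and [[server (computing)|server]]"
# capture the text after the pipe and substitute the whole link with it
out = re.sub(r'\[\[[^|\]]*\|([^\]]*)\]\]', r'\1', text)
print(out)  # FTP server and server
```

To keep the brackets instead, the replacement would be <code>r'[[\1]]'</code>.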
|
<python><regex>
|
2023-03-16 21:34:09
| 3
| 395
|
Saku
|
75,761,773
| 4,104,728
|
Efficient way to parse a string with different index ranges
|
<p>Suppose I have a string <code>M001P000J0Z987</code> with embedded codes that I want to cut up based on specific indexes. For example,
the first 4 (<code>[0:4]</code>) are a specific code, the next 4 (<code>[4:8]</code>) are another code, and the last 6 are (<code>[8:14]</code>).</p>
<p>This is fairly straightforward by doing:</p>
<pre><code>codes = "M001P000J0Z987"
codes[0:4]
codes[4:8]
codes[8:14]
</code></pre>
<p>However, imagine I have dozens of these index combinations and need to parse thousands of different codes. Is there a more
efficient way to do this instead of writing a long function that does this line-by-line as shown above? The indexes don't change, it's just the codes that change. I need a way to pass in the index ranges (<code>[0:4]</code>), possibly as a list so I can use list comprehensions, but I'm not sure how to do this.</p>
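<p>One sketch: store the (start, stop) pairs once and apply them in a comprehension; <code>operator.itemgetter</code> built from <code>slice</code> objects gives a reusable cutter:</p>

```python
from operator import itemgetter

codes = "M001P000J0Z987"
ranges = [(0, 4), (4, 8), (8, 14)]  # defined once, reused for every code

# plain comprehension
parts = [codes[a:b] for a, b in ranges]
print(parts)  # ['M001', 'P000', 'J0Z987']

# reusable alternative: one itemgetter over slice objects
cut = itemgetter(*(slice(a, b) for a, b in ranges))
print(cut(codes))  # ('M001', 'P000', 'J0Z987')
```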
|
<python><list><indexing>
|
2023-03-16 21:24:54
| 2
| 7,505
|
Vedda
|
75,761,768
| 8,119,664
|
AppConfig.ready() is running before the migrations on manage.py test
|
<p>I am trying to use Django's <code>AppConfig.ready()</code> method to run some some query on one of the models to retrieve some data.</p>
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>class NewsConfig(AppConfig):
name = "apps.news"
verbose_name = "News"
def ready(self):
NewsType = self.models.get("newstype")
NewsType.names = NewsType.objects.values_list("name", flat=True)
</code></pre>
<p>then, on <code>urls.py</code> I'm using them as follow:</p>
<pre><code>news_type_names_regex = generate_regex(NewsType.names)
router = DefaultRouter()
router.register(r'news/' + news_type_names_regex, NewsModelViewSet, basename='news')
</code></pre>
<p>This works fine when the application runs (using uvicorn or runserver), but when running tests, the <code>AppConfig.ready()</code> gets executed before the migrations are run, which results in the following error:</p>
<pre><code>...
django.db.utils.OperationalError: no such table: news_newstype
</code></pre>
<p>I've read <a href="https://docs.djangoproject.com/en/4.1/ref/applications/#django.apps.AppConfig.ready" rel="nofollow noreferrer">the warning on the docs</a>, but I don't think it is related to this issue. The reason why I'm doing this on the <code>AppConfig.ready()</code> is because it needs to be done somewhere after <code>django.setup()</code> but not in an async request context (as I'm using django channels and running the application ASGI).</p>
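<p>One common pattern for this symptom (a sketch only; the ORM call is replaced with a stand-in) is to defer the query until first use instead of executing it inside <code>ready()</code>, so nothing touches the database before migrations run:</p>

```python
class LazyList:
    """Evaluate a zero-argument callable on first access and cache the result."""
    def __init__(self, fetch):
        self._fetch = fetch
        self._cache = None

    def get(self):
        if self._cache is None:
            self._cache = list(self._fetch())
        return self._cache

# stand-in for: lambda: NewsType.objects.values_list("name", flat=True)
names = LazyList(lambda: ["sports", "politics"])
print(names.get())  # ['sports', 'politics']
```

The trade-off is that anything consumed at import time (like URL regex generation) would also have to move to first-request time.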
|
<python><django><django-rest-framework>
|
2023-03-16 21:24:28
| 1
| 480
|
CurlyError
|
75,761,597
| 4,980,705
|
Selenium does not find element using By.CLASS_NAME and raises NoSuchElementException
|
<p>I'm using the following code to identify the Next button, but Selenium does not find it</p>
<pre><code>driver = webdriver.Firefox(service=FirefoxService(GeckoDriverManager().install()))
driver.implicitly_wait(10)
driver.get(url)
python_button = driver.find_element(By.CLASS_NAME, "nav next rndBtn ui_button primary taLnk")
python_button.click()
</code></pre>
<p>The error is the following:</p>
<pre><code>selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .nav next rndBtn ui_button primary taLnk
</code></pre>
<p>Snapshot of the HTML:</p>
<p><a href="https://i.sstatic.net/vlzvm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vlzvm.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><xpath><css-selectors><webdriverwait>
|
2023-03-16 21:00:57
| 1
| 717
|
peetman
|
75,761,591
| 2,302,244
|
Disable menu items in tkinter
|
<p>This is the code snippet I use to create a menu in tkinter.</p>
<pre class="lang-py prettyprint-override"><code> self.fileMenu = Menu(self.menu)
self.fileMenu.add_cascade(label="Open...", menu=openMenu)
self.fileMenu.add_cascade(label="Save...")
self.fileMenu.add_command(label="Reset", command=self.model.reset)
self.fileMenu.add_separator()
self.fileMenu.add_command(label="Exit", state='disabled')
</code></pre>
<p>At a point in my program I want to disable this menu (or its items), then display a messageBox and finally re-instate the menu.</p>
<p>I know that I can do each item individually with <code>self.fileMenu.entryconfig("Open...", state='disabled')</code>.</p>
<p>How do I do it for all, without disabling each option individually?</p>
|
<python><tkinter>
|
2023-03-16 21:00:02
| 2
| 935
|
user2302244
|
75,761,413
| 19,299,757
|
How to show custom description for each test in pytest html report
|
<p>I am looking for a solution to include custom description for each test (I have about 15-20 tests in a test.py file). I want to show the description for each test in pytest html report. Something like "Login test" for test1, "User name input" for test2 etc.
I have a conftest.py in the folder where the .py tests are present.</p>
<pre><code>import pytest
from py.xml import html
@pytest.mark.optionalhook
def pytest_html_results_table_header(cells):
cells.insert(2, html.th('Custom Description'))
@pytest.mark.optionalhook
def pytest_html_results_table_row(report, cells):
custom_description = getattr(report, "custom_description", "")
cells.insert(2, html.td(custom_description))
@pytest.fixture(scope='function')
def custom_description(request):
return request.function.__doc__
</code></pre>
<p>My test functions look like this.</p>
<pre><code>def test_001(custom_description, request):
    print('This is test 1')
    assert True
    request.node.report_custom_description = "This is a login test"

def test_002(custom_description, request):
    print('This is test 2')
    assert True
    request.node.report_custom_description = "This is user name test"
</code></pre>
<p>The HTML report shows the extra column (Custom Description) I added, but nothing appears under that column; it's just empty.
I tried removing the empty quote in <code>custom_description = getattr(report, "custom_description", "")</code> but that throws an error.</p>
<p>Any help is much appreciated.</p>
|
<python><pytest><pytest-html>
|
2023-03-16 20:37:13
| 2
| 433
|
Ram
|
75,761,334
| 979,242
|
Errors when getting JWT headers with jwt.get_unverified_header
|
<p>I am using Google Cloud function, my code:</p>
<pre><code>@functions_framework.http
def hello_http(event):
print('---------')
jwt_header = jwt.get_unverified_header(event)
print(jwt_header)
</code></pre>
<p>The error I am getting is :</p>
<pre><code>jwt.exceptions.DecodeError: Invalid token type. Token must be a <class 'bytes'>"
</code></pre>
<p>To debug, I tried this on my local(mac+PyCharm) and the same code works , I am able to retrieve the JWT header</p>
<p><a href="https://i.sstatic.net/LEGcV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LEGcV.png" alt="enter image description here" /></a></p>
<p>Has anyone seen this? Why is this failing on Google Cloud Functions?</p>
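<p>One likely culprit (an assumption; the local test presumably passed a token string): in a Cloud Function the handler receives a Flask request object, not the JWT itself, so the token has to be extracted first. A sketch with a hypothetical header value:</p>

```python
# stand-in for event.headers.get("Authorization") inside the Cloud Function
auth_header = "Bearer eyJhbGciOiJIUzI1NiJ9.e30.sig"  # hypothetical value
token = auth_header.split(" ", 1)[1]
print(token)  # eyJhbGciOiJIUzI1NiJ9.e30.sig
# usage sketch: jwt.get_unverified_header(token)
```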
|
<python><google-cloud-functions><jwt>
|
2023-03-16 20:25:42
| 1
| 505
|
PiaklA
|
75,761,268
| 11,968,226
|
DeepL API: glossary created with only the first entry
|
<p>I am using the <strong>DeepL API</strong> and would like to create a <code>glossary</code>. I read followed the <a href="https://www.deepl.com/de/docs-api/glossaries/formats/" rel="nofollow noreferrer">documentation</a> and got a function to create a glossary like this:</p>
<pre><code>def create_glossary():
url = 'https://api-free.deepl.com/v2/glossaries'
headers = {
'Authorization': f'DeepL-Auth-Key {DEEPL_APY_KEY}'
}
data = {
'name': 'Test Glossary',
'source_lang': 'en',
'target_lang': 'de',
'entries': ['Yes,ja', 'Bye', 'Tschuess'],
'entries_format': 'csv'
}
response = requests.post(url, headers=headers, data=data)
print(response)
if response.status_code == 201:
glossary_id = response.json()['glossary_id']
return glossary_id
else:
print('Error creating glossary:', response.json()['message'])
</code></pre>
<p>That function creates a glossary but <strong>only with one entry</strong>.</p>
<p>I clearly pass more than one entry here: <code>'entries': ['Yes,ja', 'Bye', 'Tschuess'],</code> but when checking the response, only the first one is in the glossary.</p>
<p>What am I missing? I couldn't find anything in the docs and support is not responding.</p>
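<p>A guess at the cause (hedged; worth checking against the DeepL docs): with form data, a Python list sends repeated <code>entries</code> fields, whereas the API expects <code>entries</code> to be a single string with one <code>source,target</code> pair per line. A sketch of building it:</p>

```python
# build the glossary entries as ONE CSV string, one source,target pair per line
pairs = [("Yes", "ja"), ("Bye", "Tschuess")]
entries = "\n".join(f"{src},{tgt}" for src, tgt in pairs)
print(entries)
```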
|
<python><deepl>
|
2023-03-16 20:17:12
| 2
| 2,404
|
Chris
|
75,761,216
| 2,815,264
|
How can I create an Avro schema from a python class?
|
<p>How can I transform my simple python class like the following into a avro schema?</p>
<pre><code>class Testo(SQLModel):
name: str
mea: int
</code></pre>
<p>This is the <code>Testo.schema()</code> output</p>
<pre><code>{
"title": "Testo",
"type": "object",
"properties": {
"name": {
"title": "Name",
"type": "string"
},
"mea": {
"title": "Mea",
"type": "integer"
}
},
"required": [
"name",
"mea"
]
}
</code></pre>
<p>from here I would like to create an Avro record. This can be converted online on <a href="https://konbert.com/convert/json/to/avro" rel="nofollow noreferrer">konbert.com</a> (select JSON to AVRO Schema) and it results in the Avro schema below. (all valid despite the name field which should be "Testo" instead of "Record".)</p>
<pre><code>{
"type": "record",
"name": "Record",
"fields": [
{
"name": "title",
"type": "string"
},
{
"name": "type",
"type": "string"
},
{
"name": "properties.name.title",
"type": "string"
},
{
"name": "properties.name.type",
"type": "string"
},
{
"name": "properties.mea.title",
"type": "string"
},
{
"name": "properties.mea.type",
"type": "string"
},
{
"name": "required",
"type": {
"type": "array",
"items": "string"
}
}
]
}
</code></pre>
<p>Anyhow, if they can do it, there certainly must be a way to convert it with current Python libraries. Which library can do a valid conversion (including complex Python models/classes)?</p>
<p>If you think this is the wrong approach, that is also welcome, provided you point out a better way to do this translation.</p>
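<p>For a flat model like this, a naive hand-rolled mapping from the <code>.schema()</code> output to an Avro record is short (a sketch only; the type map covers just the primitive cases and nested models would need recursion):</p>

```python
def to_avro_record(json_schema):
    """Map a flat JSON-schema dict (pydantic .schema() output) to an Avro record."""
    type_map = {"string": "string", "integer": "int",
                "number": "double", "boolean": "boolean"}
    return {
        "type": "record",
        "name": json_schema["title"],  # 'Testo', not the generic 'Record'
        "fields": [{"name": field, "type": type_map[spec["type"]]}
                   for field, spec in json_schema["properties"].items()],
    }

schema = {"title": "Testo",
          "properties": {"name": {"type": "string"}, "mea": {"type": "integer"}}}
print(to_avro_record(schema))
```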
|
<python><avro><fastavro>
|
2023-03-16 20:11:00
| 2
| 2,068
|
feder
|
75,761,188
| 10,065,878
|
Pandas groupby columns and multiply two other columns in the aggregate function
|
<p>I have a hopefully easy problem! I have a dataframe:</p>
<pre><code>df = pd.DataFrame({'Quantity': [2, 3, 4, 1, 2, 1, 4, 5],
'User': ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'C'],
'Price': [5, 3, 2, 6, 2, 3, 4, 5],
'Shop': ['X', 'X', 'X', 'Y', 'Z', 'Z', 'X', 'Y'],
'Day': ['M', 'T', 'W', 'W', 'M', 'T', 'M', 'W']
})
   Quantity User Price Shop Day
0 2 A 5 X M
1 3 A 3 X T
2 4 B 2 X W
3 1 B 6 Y W
4 2 B 2 Z M
5 1 C 3 Z T
6 4 C 4 X M
7 5 C 5 Y W
</code></pre>
<p>My trouble comes when I try and aggregate it by shop and day. I'm hoping for the users in a shop by day and the average spent in that shop on that day. So in SQL it would be: <code>AVG(Quantity*Price)</code></p>
<p>I have the first part:</p>
<pre><code>df.groupby(by=['Shop','Day']).agg({'User': 'count'})
</code></pre>
<p>But my only solution to the other aggregation is first create a column and then aggregate it.</p>
<pre><code>df['Spend'] = df['Price'] * df['Quantity']
df.groupby(by=['Shop','Day']).agg({'User': 'count' ,'Spend' :'mean' })
</code></pre>
<p>Is there a better method I am missing? Ideally I want the aggregation to happen alongside the <code>Count</code> aggregate without needing to create a new column.</p>
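<p>A sketch of keeping everything in one chain: <code>assign</code> creates the product column on the fly (without mutating <code>df</code>) and named aggregation handles both outputs:</p>

```python
import pandas as pd

df = pd.DataFrame({'Quantity': [2, 3, 4, 1, 2, 1, 4, 5],
                   'User': ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'C'],
                   'Price': [5, 3, 2, 6, 2, 3, 4, 5],
                   'Shop': ['X', 'X', 'X', 'Y', 'Z', 'Z', 'X', 'Y'],
                   'Day': ['M', 'T', 'W', 'W', 'M', 'T', 'M', 'W']})

# Spend exists only inside the chain; df itself is left untouched
out = (df.assign(Spend=df['Price'] * df['Quantity'])
         .groupby(['Shop', 'Day'])
         .agg(Users=('User', 'count'), AvgSpend=('Spend', 'mean')))
print(out)
```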
|
<python><pandas><dataframe><group-by>
|
2023-03-16 20:06:04
| 3
| 308
|
FiercestJim
|
75,761,177
| 13,039,962
|
Issues assigning colors to numeric values of columns in Excel in openpyxl
|
<p>I have this df:</p>
<pre><code> NOMBRE ESTACION DEPARTAMENTO ... APNoviembre APDiciembre
0 ARAMANGO AMAZONAS ... -15.463042 3.254973
1 BAGUA CHICA AMAZONAS ... -8.193005 7.855943
2 CHACHAPOYAS AMAZONAS ... -11.803714 -9.552846
3 CHIRIACO AMAZONAS ... NaN NaN
4 EL PINTOR AMAZONAS ... 8.99654 10.01199
.. ... ... ... ... ...
529 ITE TACNA ... 500.0 25.0
530 VILACOTA TACNA ... -14.666667 2.389078
531 LA CRUZ TUMBES ... -26.666667 78.125
532 LAS PALMERAS DE UCAYALI UCAYALI ... 8.700102 6.0981
533 SAN ALEJANDRO UCAYALI ... -23.723229 -24.59382
[534 rows x 20 columns]
</code></pre>
<p>I want to color only the numeric values in a determined range (columns from 9 to the final column) and export it to excel. So i did this code:</p>
<pre><code>#Export the df to excel
with pd.ExcelWriter("path/to/file/df.xlsx") as writer:
df.to_excel(writer,'ANOM', index=False)
import openpyxl
from openpyxl.styles import PatternFill
import pandas as pd
wb = openpyxl.load_workbook("path/to/file/df.xlsx")
ws = wb.active #Name of the working sheet
fill_cell1 = PatternFill(patternType='solid',
fgColor='C2523C')
fill_cell2 = PatternFill(patternType='solid',
fgColor='E8951A')
fill_cell3 = PatternFill(patternType='solid',
fgColor='FAEA05')
fill_cell4 = PatternFill(patternType='solid',
fgColor='FFFFFF')
fill_cell5 = PatternFill(patternType='solid',
fgColor='B8F500')
fill_cell6 = PatternFill(patternType='solid',
fgColor='400000')
fill_cell7 = PatternFill(patternType='solid',
fgColor='19B04D')
fill_cell8 = PatternFill(patternType='solid',
fgColor='198759')
fill_cell9 = PatternFill(patternType='solid',
fgColor='089EAD')
fill_cell10 = PatternFill(patternType='solid',
fgColor='2B61A8')
fill_cell11 = PatternFill(patternType='solid',
fgColor='0B2C7A')
fill_cell12 = PatternFill(patternType='solid',
fgColor='000000')
for row in ws.rows:
for i in range(8,8+int((ws.max_column-8)/2)):
if row[i].value > 800:
row[i].fill = fill_cell11
row[int(i+(ws.max_column-8)/2)].fill = fill_cell11
elif 400 <= row[i].value <= 800:
row[i].fill = fill_cell10
row[int(i+(ws.max_column-8)/2)].fill = fill_cell10
elif 200 <= row[i].value < 400:
row[i].fill = fill_cell9
row[int(i+(ws.max_column-8)/2)].fill = fill_cell9
elif 100 <= row[i].value < 200:
row[i].fill = fill_cell8
row[int(i+(ws.max_column-8)/2)].fill = fill_cell8
elif 60 <= row[i].value < 100:
row[i].fill = fill_cell7
row[int(i+(ws.max_column-8)/2)].fill = fill_cell7
elif 30 <= row[i].value < 60:
row[i].fill = fill_cell6
row[int(i+(ws.max_column-8)/2)].fill = fill_cell6
elif 15 <= row[i].value < 30:
row[i].fill = fill_cell5
row[int(i+(ws.max_column-8)/2)].fill = fill_cell5
elif -15 <= row[i].value < 15:
row[i].fill = fill_cell4
row[int(i+(ws.max_column-8)/2)].fill = fill_cell4
elif -30 <= row[i].value < -15:
row[i].fill = fill_cell3
row[int(i+(ws.max_column-8)/2)].fill = fill_cell3
elif -60 <= row[i].value < -30:
row[i].fill = fill_cell2
row[int(i+(ws.max_column-8)/2)].fill = fill_cell2
elif -100 <= row[i].value < -60:
row[i].fill = fill_cell1
row[int(i+(ws.max_column-8)/2)].fill = fill_cell1
elif row[i].value == "NaN" or row[i].value == "" or row[i].value == None:
row[i].fill = fill_cell12
row[int(i+(ws.max_column-8)/2)].fill = fill_cell12
wb.save("path/to/file/df_colored.xlsx")
</code></pre>
<p>but I got this error: <code>TypeError: '>' not supported between instances of 'str' and 'int'</code></p>
<p>I know the error occurs because openpyxl is reading <code>row[i].value</code> as a string header, not a numeric value, but I don't understand why. I changed the 8 to 9, 10 or even 11 in <code>range(8,8+int((ws.max_column-8)/2))</code> and it only works with 19, but then I don't get the Excel file with the colors. Can someone help me understand the rows in a worksheet and suggest a solution?
Thanks in advance.</p>
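<p>The usual explanation for this symptom (an assumption without the workbook at hand): <code>ws.rows</code> includes row 1, the header, whose cells hold the column-name strings, so the first comparison is string-vs-int. Skipping the header (<code>ws.iter_rows(min_row=2)</code> in openpyxl terms) and type-checking each value handles both that and the <code>NaN</code>/empty branch. A pure-Python sketch of the guard:</p>

```python
def pick_bucket(value):
    """Classify a cell value; anything non-numeric falls into 'missing'."""
    if not isinstance(value, (int, float)) or isinstance(value, bool):
        return 'missing'   # header strings, 'NaN', None, '' all land here
    if value > 800:
        return 'fill11'
    if 400 <= value <= 800:
        return 'fill10'
    return 'other'          # remaining thresholds would follow the same shape

print(pick_bucket('APNoviembre'), pick_bucket(None), pick_bucket(900), pick_bucket(500))
# missing missing fill11 fill10
```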
|
<python><openpyxl>
|
2023-03-16 20:04:29
| 1
| 523
|
Javier
|
75,761,128
| 4,470,365
|
What does a negative return value mean when calling to_sql() in Pandas?
|
<p>I'm sending various data frames to Microsoft SQL Server using the Pandas function <code>to_sql()</code> and a <code>mssql+pyodbc://</code> connection made with <code>sqlalchemy.create_engine</code>. Sometimes <code>to_sql()</code> returns the number of rows written, which is what I expect from the documentation on Returns:</p>
<blockquote>
<p>Number of rows affected by to_sql. None is returned if the callable passed into method does not return an integer number of rows.</p>
<p>The number of returned rows affected is the sum of the rowcount attribute of sqlite3.Cursor or SQLAlchemy connectable which may not reflect the exact number of written rows as stipulated in the sqlite3 or SQLAlchemy.</p>
</blockquote>
<p>But in some cases I'm seeing it return negative values like -1, 2, -11, -56. If I use <code>method="multi"</code> this behavior goes away. Here I'm writing a table with 325 records:</p>
<pre><code>>>> PLSUBMITTALTYPE.to_sql("PLSubmittalType", con=data_lake, if_exists="replace")
-1
>>> PLSUBMITTALTYPE.to_sql("PLSubmittalType", con=data_lake, if_exists="replace", method="multi", chunksize = 50)
325
</code></pre>
<p>What do those negative values mean? It appears to be succeeding in writing to the database in those cases.</p>
|
<python><pandas><pandas-to-sql>
|
2023-03-16 19:59:14
| 1
| 23,346
|
Sam Firke
|
75,760,906
| 19,325,656
|
Populate readonly nested serializer
|
<p>I'm using ModelViewSets and I have nested serializer for these models.</p>
<pre><code>class Profile(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
user = models.OneToOneField(User, on_delete=models.CASCADE)
place = models.OneToOneField(Place, on_delete=models.DO_NOTHING)
bio = models.TextField(null=True)
class User(AbstractBaseUser, PermissionsMixin):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
email = models.EmailField(_('email address'), unique=True)
first_name = models.CharField(max_length=150, blank=True)
is_staff = models.BooleanField(default=False)
is_active = models.BooleanField(default=False)
objects = CustomAccountManager()
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['first_name']
def __str__(self):
return self.email
</code></pre>
<p><strong>update</strong>
How can I automatically link a newly created profile (a 1:1 relation) to the User sending the request?</p>
<p>So, you register on a website; now you are a user, but you want a profile with some bio, so you set up your profile for that.</p>
<pre><code>class UserSerializer(serializers.ModelSerializer):
class Meta:
model = User
exclude = ('password', 'is_superuser', 'is_staff', 'groups', 'user_permissions')
class ProfileSerializer(WritableNestedModelSerializer, serializers.ModelSerializer):
user = UserSerializer(read_only=True)
place = PlaceSerializer(read_only=True)
class Meta:
model = Profile
fields = '__all__'
depth = 1
</code></pre>
<p><strong>viewsets</strong></p>
<pre><code>class ProfileViewSet(viewsets.ModelViewSet):
serializer_class = ProfileSerializer
get_queryset = Profile.objects.all()
def create(self, request):
serializer = ProfileSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
data = serializer.data
return Response(data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>As you can see I'm using 'drf-writable-nested' and it works okay but I can't figure it out how to automatically connect profile to user on new profile creation.</p>
|
<python><django><django-models><django-rest-framework><django-serializer>
|
2023-03-16 19:33:29
| 2
| 471
|
rafaelHTML
|
75,760,905
| 12,945,785
|
How to determine the frequency of data
|
<p>I have a DataFrame which represents a time series. I would like to find a way to determine the frequency of the data (daily, weekly, monthly…).
Is there a way to do it in pandas?</p>
<p>I mean I have a time series with index equal to</p>
<pre><code>DatetimeIndex(['2011-07-07', '2011-07-14', '2011-07-21', '2011-07-28',
'2011-08-04', '2011-08-11', '2011-08-18', '2011-08-25',
'2011-09-01', '2011-09-08',
...
'2023-01-05', '2023-01-12', '2023-01-19', '2023-01-26',
'2023-02-02', '2023-02-09', '2023-02-16', '2023-02-23',
'2023-03-02', '2023-03-09'],
dtype='datetime64[ns]', length=582, freq=None)
</code></pre>
<p>From my point of view it is weekly data, but I would like to determine it from code (it could also be daily, monthly, ...).</p>
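<p>pandas can usually infer this directly; a sketch with a short weekly index (the sample dates are Thursdays, hence the <code>'W-THU'</code> result):</p>

```python
import pandas as pd

idx = pd.DatetimeIndex(['2011-07-07', '2011-07-14', '2011-07-21', '2011-07-28'])
# infer_freq returns a frequency string, or None if no regular pattern fits
print(pd.infer_freq(idx))  # W-THU
```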
|
<python><pandas><dataframe>
|
2023-03-16 19:33:20
| 1
| 315
|
Jacques Tebeka
|
75,760,379
| 2,355,903
|
Logging Python Errors, warnings, etc
|
<p>I have been trying to write a program that will basically save all available information about the running of my program to a text file as it is being run in a batch environment where I can't see what errors/warnings/info are being generated. I have been banging my head against the wall for hours on this. I have been referring to this primarily: <a href="https://docs.python.org/2/howto/logging.html#logging-to-a-file" rel="nofollow noreferrer">https://docs.python.org/2/howto/logging.html#logging-to-a-file</a>
along with some stack overflow questions but nothing seems to solve my issue.</p>
<p>Below is what I have.</p>
<pre><code>import logging
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
logging.basicConfig(filename = "directory/logs.txt", level = logging.DEBUG, force = True)
import pandas as pd
#import other packages used in full program
#Code with error to test capture
pprint("test")
</code></pre>
<p>When I do this, it creates the file but does not capture anything about the error. What am I missing here?</p>
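<p>One thing <code>basicConfig</code> alone does not do is capture uncaught exceptions; hooking <code>sys.excepthook</code> is one way. A self-contained sketch (logging goes to an in-memory buffer here just to make it checkable; in the batch job it would stay the file handler):</p>

```python
import io
import logging
import sys

buf = io.StringIO()
logging.basicConfig(stream=buf, level=logging.DEBUG, force=True)  # force= needs 3.8+

def log_uncaught(exc_type, exc, tb):
    # route any uncaught exception, with its traceback, into the log
    logging.critical("Uncaught exception", exc_info=(exc_type, exc, tb))

sys.excepthook = log_uncaught

try:
    pprint("test")  # NameError, as in the question
except NameError:
    log_uncaught(*sys.exc_info())

print("NameError" in buf.getvalue())  # True
```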
|
<python><logging><python-logging>
|
2023-03-16 18:36:14
| 1
| 663
|
user2355903
|
75,760,365
| 607,846
|
Apply an update to a queryset involving a foreign key
|
<p>Lets say I have something like the code below, where I can nicely apply the update method to change the <code>engine</code> type of the cars contained in the query set <code>vintage_cars</code>. Is it possible to use update in a similar fashion for the code using the for loop, where a foreign key is involved?</p>
<pre class="lang-py prettyprint-override"><code>class Driver(Model):
name = CharField()
licence = CharField()
class Car(Model):
driver = models.ForeignKey(Driver)
status = CharField()
type = CharField()
engine = CharField()
vintage_cars = Car.objects.filter(type="vintage")
vintage_cars.update(engine="gas")
for c in vintage_cars:
driver = c.driver
if driver and driver.licence not in VALID_LICENCES:
c.driver = None
c.status = "IMPOUNDED"
d.save()
</code></pre>
<p>I'm thinking I need to apply a second filter involving this clause:</p>
<pre><code>if driver and driver.licence not in VALID_LICENCES:
</code></pre>
<p>to <code>vintage_cars</code>, but I'm not sure if that makes sense in terms of efficiency.</p>
|
<python><django><django-models>
|
2023-03-16 18:35:29
| 1
| 13,283
|
Baz
|
75,760,325
| 11,553,066
|
Why are there both tensor.dtype and tensor.type()?
|
<p>Quick question.
Why does a tensor have a type and a dtype attribute?</p>
<pre><code>t = torch.tensor([1,2,3,4])
print(t.dtype, t.type())
</code></pre>
<p>Is it possible to change one of them without changing the other? If not, why is there this redundancy? Does it serve any purpose?</p>
|
<python><pytorch>
|
2023-03-16 18:31:24
| 1
| 362
|
Still a learner
|
75,760,257
| 6,372,859
|
Create pandas column based on names of two others
|
<p>I have a long pandas dataframe with many columns, I want to take two of them and build a new one as follow</p>
<pre><code> A B
0 1 1
1 0 1
2 1 0
3 0 0
Aa Bb A_B
0 1 1 'Both'
1 0 1 'B'
2 1 0 'A'
3 0 0 'None'
</code></pre>
<p>The values of the new column are: 'Both' if both the values of Aa and Bb are 1; 'B' is the value of column Bb is 1 and Aa is 0, and so on. I tried converting to string columns Aa and Bb, then concatenating them and replace the corresponding values, which works, but: is there a more pythonic way to do this?</p>
<p>Thanks in advance!</p>
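<p>A common sketch for this is <code>numpy.select</code>, which maps an ordered list of boolean conditions to labels (first match wins):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Aa': [1, 0, 1, 0], 'Bb': [1, 1, 0, 0]})
conditions = [
    (df['Aa'] == 1) & (df['Bb'] == 1),  # both set
    df['Bb'] == 1,                      # only B (the 'Both' case already matched)
    df['Aa'] == 1,                      # only A
]
df['A_B'] = np.select(conditions, ['Both', 'B', 'A'], default='None')
print(df['A_B'].tolist())  # ['Both', 'B', 'A', 'None']
```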
|
<python><pandas><string>
|
2023-03-16 18:25:22
| 5
| 583
|
Ernesto Lopez Fune
|
75,760,127
| 1,623,731
|
Django HTML Select not loading correct option
|
<p>I'm having some trouble with my first Django project.
I have a ListView in which I have several comboboxes (select) to filter data in a table.
I'm using Foundation for my site.
The problem I'm having is that once I set the "filters" for the query in my comboboxes, the data displays fine, but after loading the table, one of the selects does not show the correct option (last lookup).</p>
<p>This is my ListView code:</p>
<pre><code>class ListSales(LoginRequiredMixin, ListView):
    template_name = "sales/list.html"
    context_object_name = 'sales'
    login_url = reverse_lazy('users_app:user-login')
    paginate_by = 5

    def get_queryset(self):
        client = self.request.GET.get("clientselect", "")
        payment_method = self.request.GET.get("paymentselect", "")
        date1 = self.request.GET.get("date1", '')
        date2 = self.request.GET.get("date2", '')
        product = self.request.GET.get("productselect", '')
        if date1 == '':
            date1 = datetime.date.today()
        if date2 == '':
            date2 = datetime.date.today()
        queryset = Sale.objects.get_sales_list(
            date1, date2, client, payment_method)
        qty_total_product = 0
        if product != '0' and product != '':
            print(Product.objects.get(id=product).full_name)
            print(queryset)
            for querysale in queryset:
                print(querysale)
                for query_sale_detail in querysale.sale_details():
                    print(query_sale_detail.product.full_name +
                          " " + str(query_sale_detail.count))
                    if str(query_sale_detail.product.id) == product:
                        qty_total_product += query_sale_detail.count
            print(qty_total_product)
        obj_sales_list_filter, created_obj_sales_list_filter = SalesListFilters.objects.get_or_create(
            id=1,
            defaults={
                'datefrom': date1,
                'dateTo': date2,
                'client': Client.objects.search_id(client),
                'payment_method': PaymentMethod.objects.search_id(payment_method)
            })
        if not created_obj_sales_list_filter:
            obj_sales_list_filter.datefrom = date1
            obj_sales_list_filter.dateTo = date2
            obj_sales_list_filter.client = Client.objects.search_id(client)
            obj_sales_list_filter.payment_method = PaymentMethod.objects.search_id(
                payment_method)
            obj_sales_list_filter.save()
        return queryset

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context["clients"] = Client.objects.all().order_by('full_name')
        context["payment_methods"] = PaymentMethod.objects.all()
        context["products"] = Product.objects.all().order_by('full_name')
        sales_list_filters = SalesListFilters.objects.all()
        if sales_list_filters is None or sales_list_filters == {} or not sales_list_filters:
            context["date1"] = str(datetime.date.today())
            context["date2"] = str(datetime.date.today())
        else:
            sales_list_filters = SalesListFilters.objects.get(id=1)
            context["date1"] = str(sales_list_filters.datefrom)
            context["date2"] = str(sales_list_filters.dateTo)
            if sales_list_filters.client is None:
                context["clientselect"] = 0
            else:
                context["clientselect"] = sales_list_filters.client.id
            if sales_list_filters.payment_method is None:
                context["paymentselect"] = 0
            else:
                context["paymentselect"] = sales_list_filters.payment_method.id
        current_sale_info = CurrentSaleInfo.objects.get(id=1)
        if current_sale_info is None or current_sale_info == {} or current_sale_info.invoice_number is None:
            context["current_sale_info"] = False
        else:
            context["current_sale_info"] = True
        product = self.request.GET.get("productselect", '')
        print("el producto seleccionado es:" + product + " " + str(type(product)))
        if product == '' or product == '0':
            print("Entro por defecto")
            context["product_select"] = 0
        else:
            print("Entro por valor")
            context["product_select"] = int(product)
        return context
</code></pre>
<p>I can see in the <code>**kwargs</code> that the value set for productselect arrives fine. However, when I set the context["product_select"] variable, the HTML always loads option 0.</p>
<p>This is my html:</p>
<pre><code>{% extends "panel.html" %}
{% load static %}
{% block panel-content %}
<div class="grid-x medium-10">
<h3 class="cell medium-12" style="text-align: center;">Ventas</h3>
<div class="cell medium-12">&nbsp </div>
<form class="cell medium-12" method="GET">{% csrf_token %}
<div class="input-group grid-x medium-12">
<div class="input-group cell medium-2 grid-x">
<label class="cell medium-12">Desde:</label>
<span class="input-group-label"><i class="fi-calendar"></i></span>
<input type="date" id="date1" name="date1" class="input-group-field" value="{{date1}}">
</div>
&nbsp&nbsp&nbsp
<div class="input-group cell medium-2 grid-x">
<label class="cell medium-12">Hasta:</label>
<span class="input-group-label"><i class="fi-calendar"></i></span>
<input type="date" id="date2" name="date2" class="input-group-field" value="{{date2}}">
</div>
&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp
<div class="input-group cell medium-2 grid-x">
<label class="cell medium-12">Cliente:</label>
<span class="input-group-label"><i class="fi-torso"></i></span>
<select id="clientselect" name="clientselect" class="input-group-field">
<option value="0" id="option0" name="option0" {% if clientselect is 0 %} selected {% endif %}>Todos</option>
<option value="-1" id="option-1" name="option-1" {% if clientselect is -1 %} selected {% endif %}>Sin Cliente</option>
{% for client in clients %}
<option value="{{client.pk}}" id="option{{client.pk}} " name="option{{client.pk}}" {% if clientselect is client.pk %} selected {% endif %}>{{client.full_name}}</option>
{% endfor %}
</select>
</div>
&nbsp&nbsp&nbsp
<div class="input-group cell medium-2 grid-x">
<label class="cell medium-12">Método de Pago:</label>
<span class="input-group-label"><i class="fi-dollar-bill"></i></span>
<select id="paymentselect" name="paymentselect" class="input-group-field">
<option value="0" id="option0" name="option0" {% if paymentselect is 0 %} selected {% endif %}>Todos</option>
{% for pm in payment_methods %}
<option value="{{pm.pk}}" id="option{{pm.pk}} " name="option{{pm.pk}}" {% if paymentselect is pm.pk %} selected {% endif %}>{{pm.full_name}}</option>
{% endfor %}
</select>
</div>
&nbsp&nbsp&nbsp
<div class="input-group cell medium-3 grid-x">
<label class="cell medium-12">Producto:</label>
<span class="input-group-label"><i class="fi-list"></i></span>
<select id="productselect" name="productselect" class="input-group-field">
<option value="0" id="option0" name="option0" {% if product_select is 0 %} selected {% endif %}>Todos</option>
{% for prod in products %}
<option value="{{prod.pk}}" id="option{{prod.pk}} " name="option{{prod.pk}}" {% if product_select is prod.pk %} selected {% endif %}>{{prod.full_name}}</option>
{% endfor %}
</select>
</div>
<div class="cell medium-1">
<button type="submit" class="button cell medium-4"><i class="fi-filter"></i>&nbsp Filtrar</button>
</div>
<div class="cell medium-1">
<h5>{{product_text}}</h5>
</div>
</div>
</form>
<div class="cell medium-12"> </div>
<table class="cell medium-12">
<thead>
<th>Fecha</th>
<th>Nro Factura</th>
<th>Cliente</th>
<th>Método de Pago</th>
<th>Monto</th>
<th>Acciones</th>
</thead>
<tbody>
{% for sale in sales %}
<tr>
<td>{{ sale.show_date }}</td>
<td>{{ sale.invoice_number }}</td>
<td>{{ sale.client.full_name }}</td>
<td>{{ sale.payment_method }}</td>
<td>${{ sale.amount_with_discount}}</td>
<td>
<div class="button-group">
<a href="#" class="button warning tiny" data-toggle="modalView{{sale.pk}}"><i class="fi-eye"></i></a>
</div>
</td>
</tr>
<div class="tiny reveal" id="modalView{{sale.pk}}" style="background-color:rgb(51,51,51);" data-reveal data-close-on-click="true" data-animation-in="spin-in" data-animation-out="spin-out">
<h4 class="cell" style="text-align: center;">Detalle de Venta</h4>
<br>
{% for sale_detail in sale.sale_details %}
<h5 style="text-align: center;">{{sale_detail.product.full_name}} X {{sale_detail.count}} : $ {{sale_detail.sale_detail_total}}</h5>
{% endfor %}
<h5 style="text-align: center;">--------------------------------------------------</h5>
<h5 style="text-align: center;">SUB TOTAL: $ {{sale.amount}}</h5>
<h5 style="text-align: center;">DESCUENTO: $ {{sale.discount}}</h5>
<h4 style="text-align: center;" class="detailtag">TOTAL: $ {{sale.amount_with_discount}}</h4>
<button class="close-button" data-close aria-label="Close modal" type="button">
<span aria-hidden="true">&times;</span>
</button>
</div>
{% endfor %}
</tbody>
</table>
{% if is_paginated %}
<ul class="pagination cell medium-12">
{% if page_obj.has_previous %}
<li><a href="?page={{ page_obj.previous_page_number }}&{{ view.urlencode_filter }}" style="color: wheat;">&laquo;</a></li>
{% else %}
<li class="disabled"><span style="color: wheat;">&laquo;</span></li>
{% endif %} {% for i in paginator.page_range %} {% if page_obj.number == i %}
<li class="active">
<span style="color: wheat;">{{ i }} <span class="sr-only">(actual)</span></span>
</li>
{% else %}
<li><a href="?page={{ i }}&{{ view.urlencode_filter }}" style="color: wheat;">{{ i }}</a></li>
{% endif %} {% endfor %} {% if page_obj.has_next %}
<li><a href="?page={{ page_obj.next_page_number }}&{{ view.urlencode_filter }}" style="color: wheat;">&raquo;</a></li>
{% else %}
<li class="disabled"><span style="color: wheat;">&raquo;</span></li>
{% endif %}
</ul>
{% endif %}
</div>
{% endblock panel-content %}
</code></pre>
<p>These pictures show that every select loads correctly except the one for products; it always loads the product "Todos", which is the first option:</p>
<p><a href="https://i.sstatic.net/A0THI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A0THI.jpg" alt="Before filter" /></a></p>
<p><a href="https://i.sstatic.net/bcfLc.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bcfLc.jpg" alt="After filter" /></a></p>
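<p>A possible culprit (an educated guess, not confirmed by the original post): the template comparisons use <code>{% if product_select is prod.pk %}</code>. In the Django template language, <code>is</code> performs an identity test, not an equality test, and two equal Python ints are generally not the same object, so the check can fail even when the values match; <code>==</code> is probably the operator intended. A quick demonstration of the difference in plain Python:</p>

```python
# Equality vs identity for ints: Django's template "is" operator maps to
# Python's identity test, which is unreliable for comparing numbers.
a = int("1000")   # e.g. a pk parsed from a GET parameter
b = 1000          # e.g. a model pk

print(a == b)   # True  -- equality, which is what the template logic needs
print(a is b)   # usually False in CPython: two distinct int objects
```

In the template this would mean writing <code>{% if product_select == prod.pk %}</code> (and likewise for the other selects).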
|
<python><django><django-views><django-templates><django-queryset>
|
2023-03-16 18:12:05
| 2
| 622
|
Guillermo Zooby
|
75,760,086
| 18,313,588
|
Compare list in each row of pandas dataframe to a specific list and output to a separate pandas dataframe column
|
<p>I have a list named <code>fruit_list</code></p>
<pre><code>[apple,orange,pineapple,banana,tomato]
</code></pre>
<p>I want to match this list with the pandas dataframe column named <code>fruits</code> and output the intersection.</p>
<p>And a pandas dataframe with column named <code>fruits</code></p>
<pre><code>fruits
['apple', 'mango']
['mango', 'cabbage', 'pineapple', 'carrot']
['egg', 'sandwich', 'orange', 'carrot', 'tomato']
</code></pre>
<p>Expected output</p>
<pre><code>output
['apple']
['pineapple']
['orange', 'tomato']
</code></pre>
<p>Any ideas on how to accomplish this efficiently? Thanks!</p>
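<p>A sketch of one way to do this (assuming each cell already holds a Python list): convert the reference list to a set for O(1) membership tests, then intersect per row while preserving each row's order of appearance:</p>

```python
import pandas as pd

fruit_list = ['apple', 'orange', 'pineapple', 'banana', 'tomato']
fruit_set = set(fruit_list)  # set lookups are O(1) per element

df = pd.DataFrame({
    "fruits": [
        ['apple', 'mango'],
        ['mango', 'cabbage', 'pineapple', 'carrot'],
        ['egg', 'sandwich', 'orange', 'carrot', 'tomato'],
    ]
})

# List comprehension keeps the original order within each row
df["output"] = df["fruits"].apply(lambda row: [f for f in row if f in fruit_set])
print(df["output"].tolist())  # [['apple'], ['pineapple'], ['orange', 'tomato']]
```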
|
<python><python-3.x><pandas><dataframe>
|
2023-03-16 18:07:25
| 3
| 493
|
nerd
|
75,760,004
| 2,896,976
|
Using a DRF JSONField as a Serializer
|
<p>I have a custom <code>JSONField</code> which I would like to use as a serializer. Serializers provide a bunch of extra functionality that you don't get with a Field. Serializers are Fields but not the other way around.</p>
<p>As an example, consider a field which has some complex dynamic data and needs to be validated manually:</p>
<pre class="lang-py prettyprint-override"><code>from rest_framework.serializers import JSONField, Serializer

class MetadataField(JSONField):
    def to_internal_value(self, raw_data):
        data = super().to_internal_value(raw_data)
        # lengthy validation logic which is calling other methods on the class
        return data

class ItemSerializer(Serializer):
    metadata = MetadataField()
</code></pre>
<p>To keep this field reusable and to prevent bloating the <code>ItemSerializer</code> class we move the logic out into its own <code>MetadataField</code>. However, as a consequence, we no longer have access to things like <code>self.instance</code>, nor can we directly instantiate this class like a serializer and pass it some data.</p>
<p>Generating a dynamic schema is not an option, and I would like if at all possible to keep the validation logic out of <code>ItemSerializer</code>.</p>
|
<python><django><validation><django-rest-framework>
|
2023-03-16 17:59:38
| 0
| 2,525
|
Jessie
|
75,759,971
| 10,192,593
|
Turn numpy array into an xarray in Python
|
<p>I have the following numpy array of shape (3,2,3)</p>
<pre><code>a = np.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]],[[13,14,15],[16,17,18]]])
a.shape
</code></pre>
<p>I would like to turn this into an xarray DataArray and give the dimensions names: income (values = 0, 1, 2), sex (values = 0, 1) and BMI (values = 'low', 'med', 'high').</p>
<p>How would I go about doing this?</p>
<p>Thx</p>
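<p>A sketch of one way to do this (assuming the <code>xarray</code> package is installed): wrap the array in an <code>xarray.DataArray</code>, naming each of the three dimensions and attaching the coordinate labels:</p>

```python
import numpy as np
import xarray as xr

a = np.array([[[1, 2, 3], [4, 5, 6]],
              [[7, 8, 9], [10, 11, 12]],
              [[13, 14, 15], [16, 17, 18]]])

# dims names the axes in order; coords attaches labels along each axis
da = xr.DataArray(
    a,
    dims=("income", "sex", "BMI"),
    coords={"income": [0, 1, 2], "sex": [0, 1], "BMI": ["low", "med", "high"]},
)

# Label-based selection now works instead of positional indexing
print(da.sel(income=1, sex=0, BMI="med").item())  # 8
```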
|
<python><numpy><python-xarray>
|
2023-03-16 17:55:52
| 1
| 564
|
Stata_user
|
75,759,892
| 871,617
|
AWS CDK Codepipeline build action with golang app with multiple commands targeting multiple Lambdas
|
<p>So my team and I have been happily pushing code to our pipeline for a while now, but today we've been getting errors because the asset the pipeline builds is too large to deploy to a Lambda (whose limit happens to be 250 MB).</p>
<p>The reason I think we're getting this error is because in the buildspec for the build action of the pipeline (written in Python as part of a CDK app), I run</p>
<p><code>f"go build -o ./build -ldflags=\"-X 'main.CommitID={commit} -s -w'\" ./..."</code> which successfully builds all the binaries to the build folder but then the artefact is zipped up and encrypted before moving onto the deploy stage and deployed to each lambda. The storage here is somewhat wasteful because it deploys all binaries to all lambdas and the handler just picks the right binary to run.</p>
<p>However, I only want a single binary file deployed to the lambda but the deploy stage is actually a cdk synth followed by a <code>CloudFormationCreateUpdateStackAction</code> and in order to pass the artefact to the lambda I use <code>Code.from_cfn_parameters</code> and pass the params in as overrides in the final deploy stage.</p>
<p>My question is, given the build pipeline creates individual binaries for each <code>cmd</code> in the build, how do I then extract the binary from the artefact to pass as the parameter value?</p>
<p>I'm currently passing the whole artefact as a parameter to the synth stage and I don't seem to be able to select a single binary from the artefact as the code property for the lambda.</p>
<p>[edit]
I've been using <a href="https://constructs.dev/packages/aws-cdk-lib/v/2.61.1?submodule=aws_lambda&lang=python" rel="nofollow noreferrer">this source</a> for my documentation, as well as scouring the internet for others who might have had similar issues, but I haven't found anything that works yet.</p>
|
<python><amazon-web-services><aws-lambda><aws-cdk><aws-codepipeline>
|
2023-03-16 17:48:24
| 1
| 2,796
|
Dave Mackintosh
|
75,759,891
| 528,369
|
Print Python pandas dataframe with aligned decimal points
|
<p>For a pandas dataframe <code>df</code> the output of</p>
<pre><code>print(df.round(decimals=3))
</code></pre>
<p>was</p>
<pre><code>                     mean        sd      skew  ...       min       max  count
est mean and sd   1.58895  0.494215  0.055769  ... -0.255506  3.210586   1000
est mean          1.59015  0.034379  0.092728  ...  1.485435  1.706782   1000
est sd           1.586303  0.492995  0.065619  ... -0.254994  3.336123   1000
</code></pre>
<p>How can I align the decimal points in the "mean" column?</p>
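<p>One way to get aligned decimal points (a sketch reproducing only two of the columns above) is to render with a fixed-width float format instead of <code>round</code>, so every float is padded to the same number of decimal places and the points line up:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {"mean": [1.58895, 1.59015, 1.586303],
     "sd": [0.494215, 0.034379, 0.492995]},
    index=["est mean and sd", "est mean", "est sd"],
)

# float_format is applied to every float cell; a fixed width of 8 with
# 3 decimals pads all numbers identically, aligning the decimal points
print(df.to_string(float_format=lambda x: f"{x:8.3f}"))
```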
|
<python><pandas><formatting>
|
2023-03-16 17:48:11
| 1
| 2,605
|
Fortranner
|
75,759,649
| 10,216,028
|
How to force python to write a file before exiting the program?
|
<p>I already checked <a href="https://stackoverflow.com/a/9824936/12469912">this answer</a> but it didn't work.
I am looking for a way to force Python to write a file (let's call it <code>file_A.py</code>) before exiting the program. I tried the suggested approach in the answer above, but it only worked when the destination file already existed and we appended more content. In my situation, the file does not exist and is supposed to be generated before the program exits normally, because the generated file has to be used in the following steps of the program.</p>
<p>What I've coded is as follows:</p>
<pre><code>with open('file_A.py', 'w') as f:
    f.write(content)
    f.flush()
    os.fsync(f.fileno())
</code></pre>
<p>But <code>file_A.py</code> was not generated before the program exited normally. How do I fix it?</p>
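<p>A hedged sketch of a common pattern: write to a temporary file in the same directory, flush and fsync it, then atomically rename it into place so later steps never observe a missing or half-written file. The helper name <code>write_durably</code> is made up for illustration:</p>

```python
import os
import tempfile

def write_durably(path, content):
    """Write content to path so it is fully on disk before we return."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
            f.flush()              # flush Python's userspace buffer
            os.fsync(f.fileno())   # ask the OS to write its cache to disk
        os.replace(tmp, path)      # atomic rename into the final name
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise

# Demo: write into a fresh temporary directory to avoid clutter
target = os.path.join(tempfile.mkdtemp(), "file_A.py")
write_durably(target, "print('hello')\n")
print(os.path.exists(target))  # True
```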
|
<python><python-3.x>
|
2023-03-16 17:24:57
| 3
| 455
|
Coder
|