Dataset schema (column name, type, and value range from the viewer):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, 15 to 150 chars
- QuestionBody: string, 40 to 40.3k chars
- Tags: string, 8 to 101 chars
- CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, 3 to 30 chars (nullable)
76,884,313
1,802,726
How to filter a many-to-many field (not primary key) in a Django queryset for serialization with Django REST framework
<p>Given two models with a many-to-many relationship between them.</p> <pre><code>class Collection(models.Model): title = models.CharField(max_length=200, unique=True, verbose_name=&quot;Title&quot;) class Content(models.Model): author = models.CharField(max_length=200, unique=True, verbose_name=&quot;Author&quot;) collection = models.ManyToManyField(Collection, verbose_name=&quot;Collection&quot;) public = models.BooleanField(default=True, verbose_name=&quot;Public&quot;) </code></pre> <p>How can I filter the <code>Collection</code> to only list <code>Content</code> where <code>public == True</code>?</p> <p><code>serializers.py</code>:</p> <pre><code>class ContentSerializer(serializers.ModelSerializer): class Meta: model = Content fields = [&quot;author&quot;] class CollectionSerializer(serializers.ModelSerializer): content_set = ContentSerializer(many=True, read_only=True) class Meta: model = Collection fields = [ &quot;title&quot;, &quot;content_set&quot;, ] </code></pre> <p><code>view.py</code>:</p> <pre><code>class CollectionViewSet(viewsets.ReadOnlyModelViewSet): queryset = Collection.objects.all() serializer_class = CollectionSerializer permission_classes = [permissions.AllowAny] def get_queryset(self): return Collection.objects.filter(content_set__enable=True) # Not working as expected </code></pre> <p>Desired output:</p> <pre><code>[ { &quot;title&quot;: &quot;Foo&quot;, &quot;content_set&quot;: [ { &quot;author&quot;: &quot;Jane Doe&quot; } ] } ] </code></pre>
<python><django><django-models><django-rest-framework>
2023-08-11 14:09:31
2
2,735
Raniere Silva
76,884,275
16,436,095
Can an object not have a method __dir__()?
<p>From <a href="https://docs.python.org/3/library/functions.html#dir" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>If the object has a method named <code>__dir__()</code>, this method will be called and must return the list of attributes.</p> </blockquote> <p>Since all classes are somehow descendants of the <code>object</code> class, which has the <code>__dir__()</code> method, how can any instance of any class not have this method?</p> <p>I know I can override this method, but I'm confused by the phrase. What exactly does this mean?</p>
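A quick empirical check (added here for illustration, not from the question): in practice every instance does have the method, because all classes inherit <code>__dir__</code> from <code>object</code>, and <code>dir()</code> looks the method up on the type and sorts whatever it returns. The docs phrase reads as defensive wording rather than a case you will normally hit:

```python
class Plain:
    pass

obj = Plain()

# __dir__ is inherited from object, so the lookup always succeeds.
assert hasattr(obj, "__dir__")

# dir() calls type(obj).__dir__(obj) and returns the sorted result.
assert set(dir(obj)) == set(obj.__dir__())
```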
<python>
2023-08-11 14:03:46
0
370
maskalev
76,884,096
17,160,160
Conditional Constraint Assignment Over Indexed Variables in Pyomo
<p>Below is an (over)simplified Pyomo model:</p> <pre><code>model.WEEKS = Set(initialize = [1,2,3], ordered = True) model.PRODS = Set(initialize = ['Q24','J24','F24'], ordered = True) model.volume = Var(model.WEEKS,model.PRODS, within = NonNegativeIntegers) </code></pre> <p>Which gives:</p> <pre><code>--- 3 Set Declarations PRODS : Size=1, Index=None, Ordered=Insertion Key : Dimen : Domain : Size : Members None : 1 : Any : 3 : {'Q24', 'J24', 'F24'} WEEKS : Size=1, Index=None, Ordered=Insertion Key : Dimen : Domain : Size : Members None : 1 : Any : 3 : {1, 2, 3} volume_index : Size=1, Index=None, Ordered=True Key : Dimen : Domain : Size : Members None : 2 : WEEKS*PRODS : 9 : {(1, 'Q24'), (1, 'J24'), (1, 'F24'), (2, 'Q24'), (2, 'J24'), (2, 'F24'), (3, 'Q24'), (3, 'J24'), (3, 'F24')} 1 Var Declarations volume : Size=9, Index=volume_index Key : Lower : Value : Upper : Fixed : Stale : Domain (1, 'F24') : 0 : None : None : False : True : NonNegativeIntegers (1, 'J24') : 0 : None : None : False : True : NonNegativeIntegers (1, 'Q24') : 0 : None : None : False : True : NonNegativeIntegers (2, 'F24') : 0 : None : None : False : True : NonNegativeIntegers (2, 'J24') : 0 : None : None : False : True : NonNegativeIntegers (2, 'Q24') : 0 : None : None : False : True : NonNegativeIntegers (3, 'F24') : 0 : None : None : False : True : NonNegativeIntegers (3, 'J24') : 0 : None : None : False : True : NonNegativeIntegers (3, 'Q24') : 0 : None : None : False : True : NonNegativeIntegers --- </code></pre> <p>I'm attempting to write a constraint that limits the sum volume per <code>PROD</code> over all <code>WEEKS</code> to i.e. 300. 
This is simple enough using:</p> <pre><code>def volMax_rule(model,j): return sum(model.volume[i,j] for i in model.WEEKS) &lt;= 300 model.volMax_calc = Constraint(model.PRODS, rule = volMax_rule) </code></pre> <p>Which creates:</p> <pre><code>--- 1 Constraint Declarations volMax_calc : Size=3, Index=PRODS, Active=True Key : Lower : Body : Upper : Active F24 : -Inf : volume[1,F24] + volume[2,F24] + volume[3,F24] : 300.0 : True J24 : -Inf : volume[1,J24] + volume[2,J24] + volume[3,J24] : 300.0 : True Q24 : -Inf : volume[1,Q24] + volume[2,Q24] + volume[3,Q24] : 300.0 : True --- </code></pre> <p>However, the challenge I am facing is that I don't want <code>Q24</code> to be summed up separately but rather summed within both <code>J24</code> and <code>F24</code>. As in:</p> <pre><code> Key : Lower : Body : Upper : Active F : -Inf : volume[1,F24] + volume[2,F24] + volume[3,F24] + volume[1,Q24] + volume[2,Q24] + volume[3,Q24]: 300.0 : True J : -Inf : volume[1,J24] + volume[2,J24] + volume[3,J24] + volume[1,Q24] + volume[2,Q24] + volume[3,Q24]: 300.0 : True --- </code></pre> <p>I'm unsure how to achieve this. My guess is to perhaps create an additional parameter containing <code>['JAN', 'FEB']</code> for each week and then create a conditional constraint to assign values? However, I'm pretty inexperienced and am probably off the mark. Some guidance would be appreciated please.</p> <p>Many thanks!</p>
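One common pattern for the question above (an editor's sketch, not from the post) is to index the constraint by the aggregate groups rather than by <code>PRODS</code>, keeping a plain dict that maps each group to the products it should sum. The sketch below shows the grouped-sum logic in pure Python so it runs without Pyomo or a solver; the group names and values are hypothetical:

```python
# Each aggregate key lists the PRODS whose weekly volumes it sums.
WEEKS = [1, 2, 3]
GROUPS = {"F": ["F24", "Q24"], "J": ["J24", "Q24"]}  # hypothetical grouping

# Stand-in for solved model.volume[w, p] values.
volume = {(w, p): 10 for w in WEEKS for p in ["Q24", "J24", "F24"]}

def group_sum(group):
    # This is exactly the body the Pyomo constraint would carry:
    # sum(model.volume[w, p] for w in model.WEEKS for p in GROUPS[group])
    return sum(volume[w, p] for w in WEEKS for p in GROUPS[group])

# In Pyomo this would become a constraint indexed over GROUPS:
#   model.volMax = Constraint(GROUPS.keys(), rule=lambda m, g: ... <= 300)
constraint_ok = {g: group_sum(g) <= 300 for g in GROUPS}
```

The dict replaces the "additional parameter" the asker guessed at: the constraint set shrinks from three keys to two, and <code>Q24</code> appears inside both bodies, matching the desired output.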
<python><pyomo>
2023-08-11 13:43:07
1
609
r0bt
76,884,047
2,722,968
How to type-hint a classmethod's `Self` return type
<p>Let's say I have a class that has class-/staticmethods which are akin to constructors, something to the tune of</p> <pre class="lang-py prettyprint-override"><code>class Foo: def __init__(self): ... # something @classmethod def from_bar(cls, bar): if bar.is_foobar(): return cls('foobar') else: return cls() </code></pre> <p>My question is how to correctly type-hint the <code>from_bar</code>-method. As far as I can see there is a dilemma here:</p> <ul> <li>Either <code>from_bar</code> is a <code>classmethod</code> that returns <code>typing.Self</code>; <code>def from_bar(cls, bar: Bar) -&gt; typing.Self</code>. However this is not actually correct, because a subclass <code>Foobar(Foo)</code> would return a <code>Foobar</code> (by means of <code>return cls(...)</code>) while hinting a <code>Foo</code> via <code>typing.Self</code>. This is a type-error.</li> <li>Or <code>from_bar</code> is a <code>staticmethod</code> that returns <code>'Foo'</code> via late binding; <code>def from_bar(bar: Bar) -&gt; 'Foo'</code>. This however is not possible, because a <code>staticmethod</code> defined on <code>Foo</code> can't refer to <code>Foo</code> (by means of <code>return Foo(...)</code>) as the class is currently being defined - unlike the type hint, there is no late binding of identifiers.</li> </ul> <p>Am I overcomplicating things?</p>
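An editorial note on the dilemma above: as far as the typing spec goes, <code>typing.Self</code> binds to the class the method is *called on*, so <code>Foobar.from_bar(...)</code> is inferred as <code>Foobar</code>, and the first option is in fact correct on Python 3.11+. On older versions the same effect comes from a <code>TypeVar</code> bound to the base class, sketched here:

```python
# Pre-3.11 sketch: a bound TypeVar makes the classmethod's return type
# track the subclass it is called on (typing.Self does this implicitly).
from typing import TypeVar

T = TypeVar("T", bound="Foo")

class Foo:
    def __init__(self, tag: str = "") -> None:
        self.tag = tag

    @classmethod
    def from_bar(cls: type[T], bar: object) -> T:
        return cls("foobar")

class Foobar(Foo):
    pass

# Runtime behaviour matches the annotation: the subclass comes back.
assert type(Foo.from_bar(None)) is Foo
assert type(Foobar.from_bar(None)) is Foobar
```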
<python>
2023-08-11 13:37:51
0
17,346
user2722968
76,884,016
14,707,253
Snowpark - Create multiple sessions in Snowpark (Snowflake)
<p>I'm currently learning how to use snowpark and I wanted to know if this was possible to create multiple sessions.</p> <p>I have a preconfigured session with the handler like this :</p> <p><a href="https://i.sstatic.net/jd8WF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jd8WF.png" alt="enter image description here" /></a></p> <p>And I tried to create another session using the Session.builder of the snowpark package like this :</p> <pre><code>class connection(): def __init__(self, user, password, account, role, warehouse, database, schema): self.connection_parameters = { &quot;user&quot; : user, &quot;password&quot; : password, &quot;account&quot;: account, &quot;role&quot;: role, &quot;warehouse&quot;: warehouse, &quot;database&quot;: database, &quot;schema&quot;: schema } self.session = Session.builder.configs(self.connection_parameters).create() def get_table(self, tablename): df = self.session(tablename) return df </code></pre> <p>And after running the worksheet (with the filled connection_parameters) I got this error :</p> <pre><code>Traceback (most recent call last): Worksheet, line 28, in main Worksheet, line 19, in __init__ File &quot;snowflake/snowpark/session.py&quot;, line 232, in create return self._create_internal(conn=None) File &quot;snowflake/snowpark/session.py&quot;, line 238, in _create_internal ServerConnection({}, conn) if conn else ServerConnection(self._options) File &quot;snowflake/snowpark/_internal/server_connection.py&quot;, line 138, in __init__ self._conn = conn if conn else connect(**self._lower_case_parameters) File &quot;snowflake/connector/__init__.py&quot;, line 50, in Connect return StoredProcConnection(**kwargs) File &quot;snowflake/connector/connection.py&quot;, line 178, in __init__ raise ProgrammingError( snowflake.connector.errors.ProgrammingError: Connection was already created. We don't allow user to create their own connection inside a stored procedure. 
Please use the connection in the provided session from the handler. </code></pre> <p>So I wanted to know whether it is possible to create multiple sessions in Snowpark, or whether this fails because I am running in the Snowflake App environment rather than a local one (Jupyter, conda, ...). If it is not possible there, how can I create multiple sessions/connections?</p>
<python><snowflake-cloud-data-platform>
2023-08-11 13:33:52
1
554
Emile Dadou
76,883,750
1,583,566
Generating numbers in log-normal distribution not giving right parameters in return?
<p>So I am trying to implement a function to generate random numbers following a log-normal distribution myself. (Although I am testing it in numpy, I need it somewhere else where an out-of-the-box function is unavailable.)</p> <p>Using the Box-Muller transform to generate a normally distributed random number and taking the exponential of it (suggested in <a href="https://stats.stackexchange.com/questions/481638/generating-random-numbers-that-are-log-normally-distributed">this question</a>), and referring to the <a href="https://en.wikipedia.org/wiki/Log-normal_distribution" rel="nofollow noreferrer">Wikipedia page</a>, I came up with this code:</p> <pre><code> import matplotlib.pyplot as plt import numpy as np rng = np.random.default_rng() def f(): mean = 100 sigma = 50 u = np.log((mean * mean) / np.sqrt(mean * mean + sigma * sigma)) d = np.log(1 + (sigma * sigma) / (mean * mean)) u1 = 0 while u1 &lt;= 0.0000001: u1 = rng.random() u2 = rng.random() mag = d * np.sqrt(-2 * np.log(u1)) z0 = mag * np.cos(2 * np.pi * u2) + u z1 = mag * np.sin(2 * np.pi * u2) + u return (np.exp(z0), np.exp(z1)) iteration = 100000 data = [] for i in range(0, iteration): x = f() data.append(x[0]) data.append(x[1]) fig, ax = plt.subplots() ax.hist(data, bins=100) print('mean: ' + str(np.mean(data))) print('median: ' + str(np.median(data))) print('stdev: ' + str(np.std(data))) </code></pre> <p>However, the generated values do not have the right mean and stdev, even though the shape seems vaguely correct. Am I missing anything? Or is it just some floating-point shenanigans?</p> <p><a href="https://i.sstatic.net/a86Eo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a86Eo.png" alt="matplotlib histogram plot" /></a></p>
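If I am reading the posted code right, the likely bug is that <code>d = log(1 + sigma^2/mean^2)</code> is the *variance* of the underlying normal, but it is used directly as the Box-Muller scale; the transform needs the standard deviation, <code>sqrt(d)</code>. A stdlib-only sketch with that one-line fix:

```python
import math
import random

def lognormal_pair(mean, sigma, rng=random):
    # Parameters of the underlying normal distribution (per Wikipedia):
    mu = math.log(mean * mean / math.sqrt(mean * mean + sigma * sigma))
    var = math.log(1 + (sigma * sigma) / (mean * mean))
    s = math.sqrt(var)  # Box-Muller scales by the *standard deviation*

    u1 = max(rng.random(), 1e-12)  # avoid log(0)
    u2 = rng.random()
    mag = s * math.sqrt(-2.0 * math.log(u1))
    z0 = mag * math.cos(2.0 * math.pi * u2) + mu
    z1 = mag * math.sin(2.0 * math.pi * u2) + mu
    return math.exp(z0), math.exp(z1)

random.seed(42)
data = [v for _ in range(100_000) for v in lognormal_pair(100, 50)]
sample_mean = sum(data) / len(data)
```

With the square root in place, the sample mean lands near the requested 100 instead of drifting.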
<python><numpy><random>
2023-08-11 12:58:21
1
4,616
luiges90
76,883,679
9,359,102
Convert a Named tuple into a dictionary in Python 3.8
<p>I'm using a package that downloads favicons from websites.</p> <p>It has 4 components (Not key-value pairs) and its corresponding values for each element in the list.</p> <pre><code>url='https://www.python.org/', width=200, height=200, format='png' </code></pre> <p>My purpose is to convert each element in the list(containing the 4 components) same into a key:value pair as in a dictionary in this way:</p> <pre><code>'url':'https://www.python.org/', 'width':200, 'height':200, 'format':'png', </code></pre> <p>My following attempt didnt yield the correct result.</p> <p>Here is my attempt :</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>import favicon user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36' headers = {'User-Agent': user_agent} url = 'https://www.python.org/' icons = favicon.get(url, headers=headers, timeout=2) print(f"Before:{icons}") my_dict = {} for index,element in enumerate(icons): my_dict[index] = element print(f"After:{my_dict}") </code></pre> </div> </div> </p> <p>The result of</p> <blockquote> <p>print(f&quot;Before:{icons}) is</p> </blockquote> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code> [Icon(url='https://www.python.org/static/opengraph-icon-200x200.png', width=200, height=200, format='png'), Icon(url='https://www.python.org/static/metro-icon-144x144-precomposed.png', width=144, height=144, format='png'), Icon(url='https://www.python.org/static/apple-touch-icon-144x144-precomposed.png', width=144, height=144, format='png'), Icon(url='https://www.python.org/static/apple-touch-icon-114x114-precomposed.png', width=114, height=114, format='png'), 
Icon(url='https://www.python.org/static/apple-touch-icon-72x72-precomposed.png', width=72, height=72, format='png'), Icon(url='https://www.python.org/favicon.ico', width=0, height=0, format='ico'), Icon(url='https://www.python.org/static/apple-touch-icon-precomposed.png', width=0, height=0, format='png'), Icon(url='https://www.python.org/static/favicon.ico', width=0, height=0, format='ico')]</code></pre> </div> </div> </p> <p>And the result after enumerating through it and building a dictionary:</p> <blockquote> <p>print(f&quot;After:{my_dict}&quot;) is</p> </blockquote> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>After:{0: Icon(url='https://www.python.org/static/opengraph-icon-200x200.png', width=200, height=200, format='png'), 1: Icon(url='https://www.python.org/static/metro-icon-144x144-precomposed.png', width=144, height=144, format='png'), 2: Icon(url='https://www.python.org/static/apple-touch-icon-144x144-precomposed.png', width=144, height=144, format='png'), 3: Icon(url='https://www.python.org/static/apple-touch-icon-114x114-precomposed.png', width=114, height=114, format='png'), 4: Icon(url='https://www.python.org/static/apple-touch-icon-72x72-precomposed.png', width=72, height=72, format='png'), 5: Icon(url='https://www.python.org/static/favicon.ico', width=0, height=0, format='ico'), 6: Icon(url='https://www.python.org/static/apple-touch-icon-precomposed.png', width=0, height=0, format='png'), 7: Icon(url='https://www.python.org/favicon.ico', width=0, height=0, format='ico')}</code></pre> </div> </div> </p> <p>How I Want it to look as for only the 1st object :</p> <pre><code>icon :{ &quot;url&quot;: &quot;python.org/static/opengraph-icon-200x200.png&quot;, &quot;width&quot;: 200, &quot;height&quot;: 200, &quot;format&quot;: &quot;png&quot;, } </code></pre> <p>If possible, I would be happy to see it for all 
the objects too, in the format listed above.</p> <p>Kindly assist!</p>
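Judging by the repr in the question, the package's <code>Icon</code> objects look like named tuples, which carry a built-in <code>_asdict()</code> method for exactly this conversion. A hedged sketch with a stand-in <code>Icon</code> type (the real class may differ):

```python
from collections import namedtuple

# Stand-in for the package's Icon type; its repr suggests this shape.
Icon = namedtuple("Icon", ["url", "width", "height", "format"])

icons = [
    Icon("https://www.python.org/static/opengraph-icon-200x200.png", 200, 200, "png"),
    Icon("https://www.python.org/favicon.ico", 0, 0, "ico"),
]

# One dict per icon, keyed by field name rather than by list index.
dicts = [icon._asdict() for icon in icons]
```

If the real objects are named tuples, `icon._asdict()` replaces the `enumerate` loop entirely; if they are not, `dict(zip(("url", "width", "height", "format"), icon))` achieves the same mapping.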
<python><python-3.x><list><dictionary>
2023-08-11 12:48:02
2
489
Earthling
76,883,655
6,293,961
How to update fill_between?
<p>In <code>matplotlib</code>, the data of <code>plot</code> can be updated by using <code>set_xdata</code> and <code>set_ydata</code>:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt line, = plt.plot([1, 2, 3], [4, 5, 6]) line.set_ydata([6, 5, 4]) plt.show() </code></pre> <p>How is the data of <code>fill_between</code> updated? The closest method I could find is <code>set_offsets</code>.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt fill = plt.fill_between([1, 2, 3], [5, 5, 5], [4, 5, 6]) fill.set_offsets([[3, 4], [4, 5], [5, 6]]) plt.show() </code></pre> <p>I tried to change <code>y1</code> from <code>[5, 5, 5]</code> to <code>[3, 4, 5]</code> in the above code, but it did not work.</p>
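For context on the question above: <code>fill_between</code> returns a <code>PolyCollection</code>, which has no <code>set_ydata</code> equivalent, so the usual options are to call <code>remove()</code> on it and draw a fresh <code>fill_between</code>, or to rebuild the polygon vertices and hand them to the collection (via <code>set_verts</code>, if I recall the API correctly). The filled region is a closed ring: the <code>(x, y1)</code> points forward, then the <code>(x, y2)</code> points in reverse. A matplotlib-free sketch of that vertex layout (function name is illustrative, not matplotlib API):

```python
# Build the closed vertex ring fill_between shades between y1 and y2:
# forward along (x, y1), then back along (x, y2) reversed.
def fill_vertices(x, y1, y2):
    return list(zip(x, y1)) + list(zip(reversed(x), reversed(y2)))

verts = fill_vertices([1, 2, 3], [5, 5, 5], [4, 5, 6])
```

With matplotlib available, `fill.set_verts([verts])` (or remove-and-redraw) would then update the artist in place.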
<python><matplotlib>
2023-08-11 12:44:04
2
835
W. Zhu
76,883,555
12,390,973
How to apply a constraint so that a generator runs for at least 4 intervals?
<p>I was writing an optimization problem statement using Pyomo. It is a general problem statement:</p> <ul> <li>There is a demand profile of <strong>96 blocks</strong>.</li> <li>There are 2 main generators of <strong>700kW</strong> <strong>(gen 1 and gen 2)</strong> and one backup generator of <strong>1000kW</strong>.</li> <li>Two separate price lists <strong>price 1 and price 2</strong> for generators and the backup generator is the costliest one.</li> <li>Generators are exclusive in nature which means only 1 generator ll be active at a time and the rest of the energy ll come from a backup generator.</li> <li>Last and the most important thing is if the generator is active then it should remain active for the <strong>next 4 blocks</strong>. This is where I am facing the issue.</li> </ul> <p>I have created a concrete model for this in Pyomo. Here is the code for it.</p> <pre><code>import numpy as np from pyomo.environ import * from pyomo.opt import SolverFactory import pandas as pd np.random.seed(96) prices = { 'price 1': np.random.randint(1,20,96), 'price 2': np.random.randint(1,20,96), } load_profile = np.random.randint(500,1000,96) generator_capacities = { 'gen 1': 700, 'gen 2': 700, 'backup gen': 1000, } backup_cost = 10000 model = ConcreteModel() # Define decision variables model.m_index = Set(initialize=list(range(len(load_profile)))) model.gen1_output = Var(model.m_index, domain=NonNegativeReals) model.gen2_output = Var(model.m_index, domain=NonNegativeReals) model.backup_gen_output = Var(model.m_index, domain=NonNegativeReals) # Define binary variables for gen1 and gen2 model.gen1_active = Var(model.m_index, within=Binary, initialize=1) model.gen2_active = Var(model.m_index, within=Binary, initialize=0) model.gen1_continuous_duration = Var(model.m_index, within=Binary) model.gen2_continuous_duration = Var(model.m_index, within=Binary) model.load_data = Param(model.m_index, initialize=dict(zip(model.m_index, load_profile))) def cost(model): gen1_cost = 
sum(model.gen1_output[m] * prices['price 1'][m] for m in model.m_index) gen2_cost = sum(model.gen2_output[m] * prices['price 2'][m] for m in model.m_index) lost_load = sum(model.backup_gen_output[m] for m in model.m_index) total_cost = gen1_cost + gen2_cost + lost_load * backup_cost return total_cost model.obj = Objective(rule=cost, sense=minimize) def load_constraint_rule(model, m): return ( model.gen1_output[m] + model.gen2_output[m] + model.backup_gen_output[m] == load_profile[m] ) model.load_constraint = Constraint(model.m_index, rule=load_constraint_rule) def gen1_max(model, m): eq = model.gen1_output[m] &lt;= 700 * model.gen1_active[m] return eq model.gen1_max = Constraint(model.m_index, rule=gen1_max) def gen2_max(model, m): eq = model.gen2_output[m] &lt;= 700 * model.gen2_active[m] return eq model.gen2_max = Constraint(model.m_index, rule=gen2_max) def backup_max(model, m): eq = model.backup_gen_output[m] &lt;= 1000 return eq model.backup_max = Constraint(model.m_index, rule=backup_max) def exclusive_gen_operation_rule(model, m): return model.gen1_active[m] + model.gen2_active[m] &lt;= 1 model.exclusive_gen_operation = Constraint(model.m_index, rule=exclusive_gen_operation_rule) def gen1_continuous(model, m): interval = 4 if m == 0: return Constraint.Skip elif m == 1: if (model.gen1_active[m-1] == 1): if m &lt;= 96 - interval: eq = sum([model.gen1_active[i] for i in range(m, m + interval)]) == interval return eq else: return Constraint.Skip else: return Constraint.Skip elif m &gt;= 2: if (model.gen1_active[m-1] == 1 and model.gen1_active[m-2] == 0): if m &lt;= 96 - interval: eq = sum([model.gen1_active[i] for i in range(m, m + interval)]) == interval return eq else: return Constraint.Skip else: return Constraint.Skip model.gen1_continuous = Constraint(model.m_index, rule=gen1_continuous) def gen2_continuous(model, m): interval = 4 if m == 0: return Constraint.Skip elif m == 1: if (model.gen2_active[m-1] == 1): if m &lt;= 96 - interval: eq = 
sum([model.gen2_active[i] for i in range(m, m + interval)]) == interval return eq else: return Constraint.Skip else: return Constraint.Skip elif m &gt;= 2: if (model.gen2_active[m-1] == 1 and model.gen2_active[m-2] == 0): if m &lt;= 96 - interval: eq = sum([model.gen2_active[i] for i in range(m, m + interval)]) == interval return eq else: return Constraint.Skip else: return Constraint.Skip model.gen2_continuous = Constraint(model.m_index, rule=gen2_continuous) # def duration_constraint1_rule(model, m): # T = 4 # Desired minimum duration # if m &lt;= 96 - T: # return sum(model.gen1_active[i] for i in range(m, m + T)) == T # return Constraint.Skip # # model.duration_constraint1 = Constraint(model.m_index, rule=duration_constraint1_rule) # # def duration_constraint2_rule(model, m): # T = 4 # Desired minimum duration # if m &lt;= 96 - T: # return sum(model.gen2_active[i] for i in range(m, m + T)) == T # return Constraint.Skip # # model.duration_constraint2 = Constraint(model.m_index, rule=duration_constraint2_rule) # # # def continuous_sync_rule(model, m): # return model.gen1_active[m] == model.gen1_continuous_duration[m] # # model.continuous_sync1 = Constraint(model.m_index, rule=continuous_sync_rule) # # def continuous_sync_rule(model, m): # return model.gen2_active[m] == model.gen2_continuous_duration[m] # # model.continuous_sync2 = Constraint(model.m_index, rule=continuous_sync_rule) opt = SolverFactory('gurobi') # Use a solver of your choice results = opt.solve(model) if results.solver.termination_condition == TerminationCondition.optimal: print(&quot;Optimal solution found&quot;) else: print(&quot;Solver did not find an optimal solution&quot;) output={} for gen in ['gen1', 'gen2', 'backup_gen']: values = [model.__getattribute__(gen + '_output')[hour].value for hour in model.m_index] output[gen] = values # print(f&quot;{gen} output: {values}&quot;) for gen in ['gen1', 'gen2']: values = [model.__getattribute__(gen + '_active')[hour].value for hour in model.m_index] 
output[gen + ' switch'] = values # print(f&quot;{gen} active:&quot;, values) output['price 1'] = prices['price 1'] output['price 2'] = prices['price 2'] output['Load profile'] = load_profile pd.DataFrame(output).to_excel('gen switches.xlsx') # print(prices) </code></pre> <p><strong>gen1_continuous</strong> and <strong>gen2_continuous</strong> are the constraints where I am trying to apply the condition that if gen1 is active then it should remain active for the <strong>next 4 blocks</strong>, but it is not working properly. Can anyone please help? This is the output from the model (since I am using random data, you might get a different output): <a href="https://i.sstatic.net/DJ76V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DJ76V.png" alt="enter image description here" /></a></p> <p>However, the expected output in <strong>gen1 switch</strong> and <strong>gen2 switch</strong> should be like this, based on the prices shown in the above image. <a href="https://i.sstatic.net/tma2S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tma2S.png" alt="enter image description here" /></a></p>
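An editorial note on why the posted <code>gen1_continuous</code> rule misbehaves: tests like <code>if model.gen1_active[m-1] == 1:</code> are evaluated once at model-build time, when the binaries only hold their initial values, so the solver never sees the intended implication. The standard linear minimum-uptime formulation makes it algebraic instead: <code>active[t] &gt;= active[m] - active[m-1]</code> for the <code>T</code> blocks after a switch-on. A solver-free checker sketching that form:

```python
# Pure-Python check of the linear minimum-uptime constraints
#   active[t] >= active[m] - active[m-1]   for t = m .. m+T-1.
# (active[m] - active[m-1] equals 1 exactly when the unit switches on,
# so the inequality only bites in the blocks after a start-up.)
def satisfies_min_uptime(active, T=4):
    n = len(active)
    for m in range(1, n):
        start = active[m] - active[m - 1]
        for t in range(m, min(m + T, n)):
            if active[t] < start:
                return False
    return True
```

In Pyomo the same inequality would be written as a constraint indexed over `(m, t)` pairs, with the binaries appearing only inside the expression, never inside Python `if` tests.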
<python><pandas><numpy><pyomo>
2023-08-11 12:29:00
2
845
Vesper
76,883,509
20,999,380
paste end of variable name based on i in for loop
<p>I am trying to save each variable save_i (up to length(counter)) as a row in a csv file. Each save variable is defined in a while loop so I have save_1:save_i where i= length(counter). A portion of the code is below:</p> <pre><code># Packages import msvcrt import time import sounddevice as sd import soundfile as sf import csv # Settings PLAY_TIME = 5 # in seconds # Sound sound_file_1 = 'C:/Users/am650/Desktop/Audio/aaa_1.mp3' sound_file_2 = 'C:/Users/am650/Desktop/Audio/aaa_2.mp3' blast_file_1, fs1 = sf.read(sound_file_1, dtype='float32') blast_file_2, fs2 = sf.read(sound_file_2, dtype='float32') # Set Up time_start = time.perf_counter() time_stop = time.perf_counter() time_elapsed = time_stop - time_start counter = 0 # Basic Logic while time_elapsed &lt; PLAY_TIME: blast = msvcrt.getch() if blast == b'1': blast = 1 time_in = time.perf_counter() time_to_button_press = time_in - time_start sd.play(blast_file_1, device=4) status = sd.wait() time_in = time.perf_counter() time_to_blast_end = time_in - time_start counter += 1 save = &quot;save_&quot;+str(counter) globals()[save] = (counter, blast, time_to_button_press, time_to_blast_end) print(eval(&quot;save_&quot;+str(counter))) if time_to_blast_end &gt; PLAY_TIME: break elif blast == b'2': blast = 2 time_in = time.perf_counter() time_to_button_press = time_in - time_start sd.play(blast_file_2, device=4) status = sd.wait() time_in = time.perf_counter() time_to_blast_end = time_in - time_start counter += 1 save = &quot;save_&quot;+str(counter) globals()[save] = (counter, blast, time_to_button_press, time_to_blast_end) print(eval(&quot;save_&quot;+str(counter))) if time_to_blast_end &gt; PLAY_TIME: break else: print(&quot;Invalid Entry&quot;) with open('C:/Users/am650/Desktop/testdata.csv', 'w', newline='') as csvfile: datawriter = csv.writer(csvfile, delimiter=' ') for i in range(counter): datawriter.writerow(eval(&quot;save_&quot;+str(i))) </code></pre> <p>I have tried several variations of that last line of code 
(<code>datawriter.writerow(eval(&quot;save_&quot;+str(i)))</code>), but none of my attempts have worked as expected. The above returns an empty excel sheet. <code>datawriter.writerow(&quot;save_&quot;+str(i))</code> returns &quot;save_1&quot;, &quot;save_2&quot;, etc. up to length(counter) as their own rows instead of the data saved in those variables. <code>datawriter.writerow(&quot;save_&quot;+str(counter))</code> gives me the last round of data repeated for i rows (which tells me that the for loop is working okay); additionally, this gives me the last save variable with all components (counter, blast, time_to_button_press, and time_to_blast_end) in one column separated by spaces rather than each component in its own column.</p> <p>I expect the code to give me an excel sheet that looks something like:</p> <p>r1: 1, 4, 0.5, 1</p> <p>r2: 2, 2, 1.7, 2.3</p> <p>r3: 3, 5, 4, 4.5</p> <p>etc. where each row has four columns and contains the data from save_1:save_i</p> <p>Any help would be great :) Please forgive my code's inelegance, I almost exclusively work in R. I have tried to use a bunch of the already answered similar questions and they have gotten me closer to my goal, but not all the way.</p>
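Two editorial observations on the code above: `range(counter)` starts at 0 while the variables start at `save_1`, and `delimiter=' '` is why all four components land space-separated in one column. The idiomatic fix is to skip `globals()`/`eval` entirely and collect each tuple in a list, then write the list in one call. A self-contained sketch with the question's sample values:

```python
import csv
import io

rows = []  # replaces save_1, save_2, ... built via globals()

# Inside the while loop, instead of globals()["save_" + str(counter)] = (...):
rows.append((1, 4, 0.5, 1.0))  # (counter, blast, time_to_button_press, time_to_blast_end)
rows.append((2, 2, 1.7, 2.3))

buf = io.StringIO()        # stands in for the opened csv file
writer = csv.writer(buf)   # default delimiter "," puts one value per column
writer.writerows(rows)     # one row per tuple, no eval needed
```

With a real file, `open(path, 'w', newline='')` replaces the `StringIO`, and each component gets its own spreadsheet column.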
<python><csv><for-loop><while-loop>
2023-08-11 12:20:16
1
345
grace.cutler
76,883,274
6,289,601
Can I get LIME to work with my BERT regression model?
<p>I am getting a somewhat inscrutable error every time I try to run the <code>LIME</code> text explainer on my <code>BERT</code> regression model. Basically, the <code>BERT</code> model produces a numerical prediction just fine for any text I supply it with, but when the <code>LIME</code> text explainer uses the model it generates this error:</p> <pre><code>ValueError: only one element tensors can be converted to Python scalars </code></pre> <p>I import my libraries and model as follows:</p> <pre><code>import lime import torch from lime.lime_text import LimeTextExplainer explainer = LimeTextExplainer() from transformers import AutoModelForSequenceClassification, AutoTokenizer # Load the saved BERT model and tokenizer loaded_model = AutoModelForSequenceClassification.from_pretrained(&quot;/path/to/model&quot;) tokenizer = AutoTokenizer.from_pretrained(&quot;bert-base-uncased&quot;) </code></pre> <p>I then define a <code>prediction</code> function for my <code>BERT</code> model:</p> <pre><code>def predict(text): inputs = tokenizer(text, padding=True, truncation=True, return_tensors=&quot;pt&quot;, max_length=128) loaded_model.eval() with torch.no_grad(): outputs = loaded_model(**inputs) predicted_value = outputs.logits.item() return predicted_value </code></pre> <p>I test it and it works fine:</p> <pre><code>text_to_interpret = &quot;We're flying high, watching the world pass us by.&quot; predict(text_to_interpret) &gt;0.01548099610954523 </code></pre> <p>However, when I call <code>explain_instance</code>, I get the error below:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) /var/folders/zp/g2cg0s7d3vn0y092tw5x90rc0000gn/T/ipykernel_43525/1810001145.py in &lt;module&gt; ----&gt; 1 explanation = explainer.explain_instance(text_to_interpret, predict) ~/opt/anaconda3/lib/python3.9/site-packages/lime/lime_text.py in explain_instance(self, text_instance, 
classifier_fn, labels, top_labels, num_features, num_samples, distance_metric, model_regressor) 411 mask_string=self.mask_string)) 412 domain_mapper = TextDomainMapper(indexed_string) --&gt; 413 data, yss, distances = self.__data_labels_distances( 414 indexed_string, classifier_fn, num_samples, 415 distance_metric=distance_metric) ~/opt/anaconda3/lib/python3.9/site-packages/lime/lime_text.py in __data_labels_distances(self, indexed_string, classifier_fn, num_samples, distance_metric) 480 data[i, inactive] = 0 481 inverse_data.append(indexed_string.inverse_removing(inactive)) --&gt; 482 labels = classifier_fn(inverse_data) 483 distances = distance_fn(sp.sparse.csr_matrix(data)) 484 return data, labels, distances /var/folders/zp/g2cg0s7d3vn0y092tw5x90rc0000gn/T/ipykernel_43525/1417496559.py in predict(text) 4 with torch.no_grad(): 5 outputs = loaded_model(**inputs) ----&gt; 6 predicted_value = outputs.logits.item() 7 return predicted_value 8 ValueError: only one element tensors can be converted to Python scalars </code></pre> <p>I don't know whether this is an issue with <code>LIME</code> or with my <code>BERT</code> model. The model seems to work just fine on its own, so that makes me think it's <code>LIME</code>. But then the error is clearly a <code>Torch</code> tensor that can't be converted to a scalar.</p> <p>I've tried wrapping the <code>predict</code> function in a <code>try except</code>, and converting dud outputs into <code>np.nan</code> values, but that's giving me other errors.</p> <p>Can anyone help with this?</p>
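The traceback above actually points at the cause: LIME calls <code>classifier_fn(inverse_data)</code> with a *list* of perturbed texts, so <code>outputs.logits</code> holds a whole batch and <code>.item()</code> (which requires a one-element tensor) raises. Returning one value per input fixes it. Since BERT cannot run here, the sketch uses a stand-in scorer, with the shape of the real fix noted in comments:

```python
# classifier_fn receives a list of perturbed strings and must return
# one prediction per string (a 1-D array-like), never a scalar.
def predict(texts, score=len):        # `score` stands in for the BERT forward pass
    if isinstance(texts, str):        # tolerate a single string as well
        texts = [texts]
    # With the real model the line below would be:
    #   return outputs.logits.squeeze(-1).tolist()   # instead of .item()
    return [float(score(t)) for t in texts]

preds = predict(["short", "a bit longer"])
```

For a regression model, LIME also expects `LimeTextExplainer(mode="regression")` (or `class_names` for classification); the default treats the output as class probabilities.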
<python><pytorch><regression><bert-language-model><lime>
2023-08-11 11:46:54
0
1,205
Lodore66
76,883,258
17,471,060
Pythonic way to update a column of a Polars data frame based on matching condition from another column
<p>In Polars, what is an one-liner way to update items of a column based on matching condition from another column, maybe by applying lambda?</p> <p>For example, I would like to multiply items in <code>col1</code> with <code>1000</code> if items in <code>col2</code> are equal to <code>'a'</code>. Here's a crude way.</p> <pre><code>import polars as pl df = pl.DataFrame({ 'col1':[1,2,3,4], 'col2':['a', 'a', 'b', 'b'], 'col3':[10.9, 12.0, 33.3, 34.4] }) y_updated = [] for i in range(df.shape[0]): row = df[i] if row['col2'][0]=='a': y_updated.append(row['col1'][0]*1e3) else: y_updated.append(row['col1'][0]) df = df.with_columns(pl.Series(y_updated).alias('col1')) print(df) </code></pre> <p>Outputs -</p> <pre><code>shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col2 ┆ col3 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ str ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ══════β•ͺ══════║ β”‚ 1000.0 ┆ a ┆ 10.9 β”‚ β”‚ 2000.0 ┆ a ┆ 12.0 β”‚ β”‚ 3.0 ┆ b ┆ 33.3 β”‚ β”‚ 4.0 ┆ b ┆ 34.4 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ </code></pre>
<python><dataframe><python-polars>
2023-08-11 11:45:02
1
344
beta green
76,883,181
2,725,810
Running sentence transformers at PythonAnywhere
<p>I am trying to run a HuggingFace model for computing vector embeddings as explained <a href="https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2" rel="nofollow noreferrer">here</a> at PythonAnywhere (it worked just fine locally on my laptop under Ubuntu under WSL2).</p> <p>The installation went fine:</p> <pre class="lang-bash prettyprint-override"><code>pip install -U sentence-transformers </code></pre> <p>However, when I run the following code:</p> <pre class="lang-py prettyprint-override"><code>from sentence_transformers import SentenceTransformer import time def ms_now(): return int(time.time_ns() / 1000000) class Timer(): def __init__(self): self.start = ms_now() def stop(self): return ms_now() - self.start sentences = [&quot;This is an example sentence each sentence is converted&quot;] * 10 timer = Timer() model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') print(&quot;Model initialized&quot;, timer.stop()) for _ in range(10): timer = Timer() embeddings = model.encode(sentences) print(timer.stop()) </code></pre> <p>I get the error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/DrMeir/test/test.py&quot;, line 17, in &lt;module&gt; model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') File &quot;/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/SentenceTransformer.py&quot;, line 95, in __init__ modules = self._load_sbert_model(model_path) File &quot;/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/SentenceTransformer.py&quot;, line 840, in _load_sbert_model module = module_class.load(os.path.join(model_path, module_config['path'])) File &quot;/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py&quot;, line 137, in load return Transformer(model_name_or_path=input_path, **config) File &quot;/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py&quot;, line 29, in __init__ 
self._load_model(model_name_or_path, config, cache_dir) File &quot;/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py&quot;, line 49, in _load_model self.auto_model = AutoModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir) File &quot;/home/DrMeir/.local/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py&quot;, line 493, in from_pretrained return model_class.from_pretrained( File &quot;/home/DrMeir/.local/lib/python3.9/site-packages/transformers/modeling_utils.py&quot;, line 2903, in from_pretrained ) = cls._load_pretrained_model( File &quot;/home/DrMeir/.local/lib/python3.9/site-packages/transformers/modeling_utils.py&quot;, line 3061, in _load_pretrained_model id_tensor = id_tensor_storage(tensor) if tensor.device != torch.device(&quot;meta&quot;) else id(tensor) RuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, xla, vulkan device type at start of device string: meta </code></pre> <p>They have <code>torch</code> 1.8.1+cpu at PythonAnywhere. On my laptop, it's 2.0.1.</p> <p>What is the reason for the error and how can I get this to work?</p>
<python><pytorch><huggingface-transformers><pythonanywhere><sentence-transformers>
2023-08-11 11:34:09
1
8,211
AlwaysLearning
76,883,144
12,695,210
Typing Tuple when using as default argument
<p>I have a case where I would like users of our package to be able to select from a list of options, for example</p> <pre><code>run_package(some_settings=(&quot;setting_1&quot;, &quot;setting_2&quot;, &quot;setting_3&quot;)) </code></pre> <p>I have used a Tuple as the default argument on this function</p> <pre><code>def run_package(some_settings=(&quot;setting_1&quot;,)): ... </code></pre> <p>to avoid the issue of having a mutable list as a default argument. I prefer this form to the alternative:</p> <pre><code>def run_package(some_setting=None): if some_setting is None: # set default some_setting = [&quot;setting_1&quot;] </code></pre> <p>as it is more explicit.</p> <p>However, now I want to type this argument, e.g.</p> <pre><code>def run_package(some_settings: Tuple[Union[Literal[&quot;setting_1&quot;], Literal[&quot;setting_2&quot;], Literal[&quot;setting_3&quot;]]] = (&quot;setting_1&quot;,)): ... </code></pre> <p>I am running into problems because, if I understand correctly, the <code>Tuple</code> type must be of fixed length as discussed <a href="https://stackoverflow.com/questions/4210981/python-variable-length-tuples">here</a>.</p> <p>This makes sense as <code>Tuple</code> is immutable. I guess this precludes its use as a default argument for a function as I have done? Are there any workarounds except for using <code>None</code> as the default?</p> <p>Many Thanks</p>
<python><function><tuples><default><typing>
2023-08-11 11:28:46
0
695
Joseph
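For the typing question above, `Tuple[X, ...]` (with a literal ellipsis) is the documented variable-length homogeneous tuple form, so a one-element default tuple type-checks against it. A small runnable sketch:

```python
from typing import Literal, Tuple

# Literal accepts several values directly; no Union of Literals needed.
Setting = Literal["setting_1", "setting_2", "setting_3"]

# Tuple[Setting, ...] means "any number of Setting values", so the
# one-element default is fine.
def run_package(some_settings: Tuple[Setting, ...] = ("setting_1",)) -> Tuple[Setting, ...]:
    return some_settings

print(run_package())                            # ('setting_1',)
print(run_package(("setting_1", "setting_3")))  # ('setting_1', 'setting_3')
```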
76,882,997
6,213,939
Getting Unauthenticated warnings despite using permissions.AllowAny
<p>This is my APIView:</p> <pre><code>class VerifyEmail(APIView): serializer_class = EmailVerificationSerializer token_param_config = openapi.Parameter( 'token', in_=openapi.IN_QUERY, description='Description', type=openapi.TYPE_STRING ) @permission_classes([permissions.AllowAny]) @swagger_auto_schema(manual_parameters=[token_param_config]) def get(self, request): token = request.GET.get('token') try: payload = get_payload(request) user = User.objects.get(id=payload['user_id']) if not user.is_verified: user.is_verified = True user.save() return Response({'email': 'Successfully activated'}, status=status.HTTP_200_OK) except jwt.ExpiredSignatureError as identifier: return Response({'error': 'Activation Expired'}, status=status.HTTP_400_BAD_REQUEST) except jwt.exceptions.DecodeError as identifier: return Response({'error': 'Invalid token'}, status=status.HTTP_400_BAD_REQUEST) </code></pre> <p>It is asking for authentication despite me mentioning <code>AllowAny</code>. I don't want this apiview to require authentication. The complete code is hosted <a href="https://github.com/eadaradhiraj/DjangoJWT" rel="nofollow noreferrer">here</a></p>
<python><django><django-rest-framework-simplejwt><drf-yasg>
2023-08-11 11:06:11
1
943
Echchama Nayak
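A likely cause for the question above: the `@permission_classes` decorator only applies to function-based views (`@api_view`); on an `APIView` method it is silently ignored. A hedged sketch of the class-attribute form (framework configuration, untested here):

```python
from rest_framework import permissions
from rest_framework.views import APIView

class VerifyEmail(APIView):
    # For class-based views, permissions are declared as a class attribute;
    # a @permission_classes decorator on get() has no effect.
    permission_classes = [permissions.AllowAny]

    def get(self, request):
        ...  # token-verification logic as in the question
```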
76,882,958
11,942,410
Storing data from API in JSON format in incremental manner
<p>I am trying to request this API and store data in incremental manner in datalake.</p> <p>This is the API I am trying to source : <a href="https://docs.roambee.com/#tag/Bee/operation/Get%20Bee%20Messages" rel="nofollow noreferrer">Get Bee messages</a></p> <p>Code:</p> <pre><code>import requests import json import os url = &quot;https://api.roambee.com/bee/bee_messages&quot; dir_path = &quot;/dbfs/jhgfd/adata/dfs/roambee/raw/api_responses_get_bee_messages/&quot; headers = { &quot;apikey&quot;: &quot;rtyrewrERTYUIOLIKJHGFRDSASDERFTGYHUJKILJHGFtyujhtrewsadfgh&quot;, &quot;Content-Type&quot;: &quot;application/json&quot; } bid_list = [&quot;3456,12345,12345,2345,2345,2345,2345,2345,345,3456&quot;] for bid in bid_list: params = { &quot;bid&quot;: bid, &quot;start&quot;: &quot;1690848000000&quot;, &quot;days&quot;: 10 } response = requests.get(url, headers=headers, params=params) bee_data = response.json() response_file_path = os.path.join(dir_path, f&quot;response_data_{bid}.json&quot;) if os.path.exists(response_file_path): # If the file exists, load existing data and append new data with open(response_file_path, &quot;r&quot;) as response_file: existing_data = json.load(response_file) existing_data.extend(bee_data) bee_data = existing_data if not os.path.exists(dir_path): os.makedirs(dir_path) with open(response_file_path, &quot;w&quot;) as response_file: json.dump(bee_data, response_file) print(f&quot;Response data for BID {bid} saved in:&quot;, response_file_path) delta_table_path = &quot;/mnt/analyticsdata/rms/roambee/raw/api_responses_get_bee_messages/response_data_868199058105959.json&quot; first_bee_test = spark.read.format(&quot;json&quot;).load(delta_table_path) display(first_bee_test) </code></pre> <p>this code returns a data like this <a href="https://docs.google.com/spreadsheets/d/1oZ-VOa9iomFGisMHUbZhLmq3mc2VPO_35vV3vtXmLaw/edit?usp=sharing" rel="nofollow noreferrer">google sheet link</a> date: <code>10,9,8,7,6,5,4,3,2,1,10,11</code> date, ideally it 
should be date: <code>11,10,9,8,7,6,5,4,3,2,1</code>. Any idea how to get the data in the required format? The goal is to refresh the code above every day and append the latest 10 days of data to the existing file.</p> <p>I have converted the epoch time to a human-readable date.</p>
<python><json><python-requests><databricks>
2023-08-11 11:00:58
1
326
vish
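For the append-and-sort problem above, one approach is to merge the existing and newly fetched message lists, drop duplicates, and keep the result sorted newest-first before writing it back. A small sketch in plain Python; the `time` field name is an assumption standing in for whatever epoch-millisecond field the real API response carries:

```python
import json

def merge_messages(existing, new, key="time"):
    """Merge two lists of message dicts, drop exact duplicates, and keep
    the result sorted newest-first by the epoch `key` field.
    (`time` is an assumed field name for illustration.)"""
    seen = set()
    merged = []
    for msg in existing + new:
        marker = json.dumps(msg, sort_keys=True)  # whole-record dedupe key
        if marker not in seen:
            seen.add(marker)
            merged.append(msg)
    return sorted(merged, key=lambda m: m[key], reverse=True)

old = [{"time": 3}, {"time": 2}, {"time": 1}]
new = [{"time": 4}, {"time": 3}]  # one overlapping day
print(merge_messages(old, new))   # newest-first, no duplicates
```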
76,882,941
14,114,654
Combine columns ignoring nan
<p>I want to create a concatenated column, separated by commas, in the most efficient and generalisable way:</p> <pre><code>df fruit colour 0 apple green 1 NaN orange 2 NaN NaN listy = [&quot;fruit&quot;, &quot;colour&quot;] df[listy].apply(lambda x: &quot;,&quot;.join(x.dropna()), axis=1) TypeError: sequence item 3: expected str instance, float found </code></pre> <p>Expected output:</p> <pre><code>0 apple, green 1 orange 2 NaN </code></pre>
<python><pandas><string-concatenation>
2023-08-11 10:59:28
4
1,309
asd
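The question above is close: the `TypeError` comes from non-string (float) values surviving the join, and all-NaN rows need to collapse back to NaN. A sketch that casts to string after `dropna()` and maps an empty join back to NaN:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"fruit": ["apple", np.nan, np.nan],
                   "colour": ["green", "orange", np.nan]})
listy = ["fruit", "colour"]

# dropna() removes per-row NaNs; astype(str) guards against float values;
# an all-NaN row joins to '' which `or np.nan` turns back into NaN.
joined = df[listy].apply(
    lambda x: ",".join(x.dropna().astype(str)) or np.nan, axis=1
)
print(joined.tolist())  # ['apple,green', 'orange', nan]
```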
76,882,912
4,489,998
Using np.isclose when lambdifying sympy equality conditions in Piecewise
<p>Consider the following piecewise expression:</p> <pre><code>from sympy import Piecewise, Eq from sympy.abc import x expr = Piecewise( (1, Eq(x, 0.0)), (x, True) ) </code></pre> <p>Lambdifying this expression with numpy gives the following result:</p> <pre><code>f = lambdify(x, expr, modules=&quot;numpy&quot;) print(''.join(inspect.findsource(f)[0])) # def _lambdifygenerated(x): # return select([equal(x, 0),True], [1,x], default=nan) </code></pre> <p>However, this function is designed to be used with a numerical solver, so I have small floating point differences that can give wrong results in the piecewise condition. <strong>Is it possible to make <code>lambdify</code> use <code>np.isclose</code> instead of <code>np.equal</code> ?</strong></p> <p>I've tried adding it to <code>lambdify</code>'s <code>modules</code> argument, but I don't know what to put as a key as I'm not replacing a function but a relation. This for example doesn't work: <code>modules={&quot;==&quot;: np.isclose}</code>.</p> <p>I've also tried creating an <code>implemented_function</code>:</p> <pre><code>my_eq = implemented_function('my_eq', lambda x, y: np.isclose(x, y)) expr = Piecewise( (1, my_eq(x, 0.0)), (x, True) ) </code></pre> <p>but this gives a <code>TypeError: Second argument must be a Boolean, not my_eq</code> when lambdifying, which is understandable if Piecewise expects a boolean relation.</p>
<python><sympy><symbolic-math>
2023-08-11 10:56:22
2
2,185
TrakJohnson
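One workaround for the question above is to skip patching `lambdify` and build the numeric function directly, mirroring the `select([equal(x, 0), True], [1, x], default=nan)` code lambdify generates, but with a tolerance-aware condition:

```python
import numpy as np

# Hand-written equivalent of the lambdified Piecewise, with np.isclose in
# place of np.equal. `tol` is an illustrative tolerance choice.
def f(x, tol=1e-8):
    x = np.asarray(x, dtype=float)
    return np.select(
        [np.isclose(x, 0.0, atol=tol), np.full(x.shape, True)],
        [np.ones_like(x), x],
        default=np.nan,
    )

print(f(np.array([0.0, 1e-12, 2.0])))  # [1. 1. 2.]
```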
76,882,797
22,221,987
QTableWidget::section:first doesn't work when only one column exists
<p>I've tried to create a table header with a minimalistic design. To do this, I hide all borders except the bottom and left border of each header item. To avoid the first section's left border stacking with the table border, I added a <code>QHeaderView::section:first</code> rule. Here is the code:</p> <pre><code>QTableWidget::item { border: none; padding-left: 5px; padding-right: 5px; } QTableView { border: 2px solid #CCCCCC; border-radius: 7px; color: #3B3B3B; background: transparent; selection-background-color: #D3E3ED; selection-color: #3B3B3B; margin: 1px; alternate-background-color: #EBEBEB } QHeaderView { font-family:'Segoe UI'; font-size:12pt bold; background: #EBEBEB; border-top-left-radius: 5px; border-top-right-radius: 5px; } QHeaderView::section { font-weight: bold; border: none; background: transparent; padding: 2px; padding-left: 7px; padding-right: 7px; border-bottom: 1px solid #CCCCCC; border-left: 1px solid #CCCCCC; } QHeaderView::section:first { border-left: 0px; } QTableView::corner { background-color: transparent; } </code></pre> <p>Everything works well except when the table has only one column (which is then the first, the last, and the only column in the table). Then it looks like <code>section:first</code> stops working and the section appears with a left border.</p> <p><a href="https://i.sstatic.net/esAQB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/esAQB.png" alt="" /></a></p> <p>But when I add two or more columns, everything normalizes again.</p> <p><a href="https://i.sstatic.net/6EKmj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6EKmj.png" alt="" /></a></p> <p>What's the source of this behaviour?</p> <p>Is it possible to fix this with pure CSS, without calling setStyleSheet again in code every time the number of columns changes?</p>
<python><css><python-3.x><pyqt><pyside>
2023-08-11 10:40:16
1
309
Mika
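A possible pure-stylesheet answer to the question above: the Qt Style Sheets reference documents an `:only-one` pseudo-state for an item that is the only one in its list, which is exactly the single-column case where `:first` appears not to match. A hedged sketch (worth verifying against the Qt version in use):

```css
/* With a single column the section is first AND last; Qt's :only-one
   pseudo-state targets that case explicitly. */
QHeaderView::section:only-one {
    border-left: 0px;
}
```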
76,882,658
21,346,793
How to shuffle my questions in django quiz?
<p>I need to shuffle the questions in my Django quiz. Here is my view:</p>
<python><django><django-views>
2023-08-11 10:20:53
1
400
Ubuty_programmist_7
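For the quiz question above, the usual trick is to shuffle once per (user, test) with a seeded RNG, so the order is random but stable across requests: no repeats and no reordering between questions. A small sketch (the seeding scheme is an illustrative choice); in the view, the result would be indexed with `current_question_index` instead of `questions[current_question_index]`:

```python
import random

def shuffled_question_ids(question_ids, user_id, test_id):
    """Return question ids in an order that is random but deterministic
    for a given (user, test) pair, so repeated requests see the same
    sequence with every question exactly once."""
    ids = list(question_ids)
    random.Random(f"{user_id}:{test_id}").shuffle(ids)
    return ids

order1 = shuffled_question_ids([1, 2, 3, 4, 5], user_id=7, test_id=42)
order2 = shuffled_question_ids([1, 2, 3, 4, 5], user_id=7, test_id=42)
print(order1 == order2)   # True: stable for the same user/test
print(sorted(order1))     # [1, 2, 3, 4, 5]: every question exactly once
```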
76,882,620
886,357
Unable to plot on the secondary y-axis after using invert_xaxis() using Matplotlib
<p>I am trying to have a scatter chart and a bar chart on the same plot.</p> <p>I am able to draw them both but I am unable to position the charts properly.</p> <p>Here is the code that I will create the scatter plot</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt rawdata = {'levels' : [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100], 'series1': [0, 0, 0, 0, 0, 0, 0, 0, 0.543, 0.894, 1.673, 1.943, 2.2345, 2.893, 3.578, 4.875, 5.432, 6.904, 7.8903, 8.940, 10.423], 'series2': [0, 0, 0, 0, 0, 0, 0, 0, 0.435, 0.784, 1.563, 1.934, 2.1345, 2.645, 3.456, 4.653, 5.324, 6.435, 7.5432, 8.843, 10.432], 'bar1': [0.32, 0.32, 0.453, 0.5342, 0.5432, 0.32, 0.54, 0.534, 1.9423, 0.43, 0.543, 0.32, 0.42, 0.8, 0.73, 0.54, 0.73, 0.534, 0.532, 0.8, 0.6], 'bar2': [0.00, 1.00, 0.00, 0.00, 1.00, 0.00, 1.00, 1.00, 0.00, 1.00, 1.00, 0.00, 0.00, 0.00, 0.00, 1.00, 1.00, 0.00, 0.00, 0.00, 1.00]} data = pd.DataFrame(data=rawdata) fig, ax1 = plt.subplots(figsize=(10,6)) ax1.scatter(data[&quot;series1&quot;], data[&quot;levels&quot;], s=10, c='r') ax1.plot(data['series1'], data['levels'], linewidth=1, c='r') ax1.scatter(data['series2'], data['levels'], s=10,c='b') ax1.plot (data[&quot;series2&quot;], data['levels'], linewidth=1, c='b') plt.show() </code></pre> <p><a href="https://i.sstatic.net/YQr61.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YQr61.png" alt="scatter plot" /></a></p> <p>Now I want to add some horizontal bar charts to the other axis and I did them adding the code,</p> <pre><code>ax2 = ax1.twinx() ax2.barh(data['levels'], data['bar1']) </code></pre> <p><a href="https://i.sstatic.net/J3sz3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J3sz3.png" alt="scatter plot with bars" /></a></p> <p>but I want the bars on the other Yaxis.</p> <p>So tried to use the <code>invert_xaxis()</code> option</p> <pre><code>ax2.invert_xaxis() </code></pre> <p><a href="https://i.sstatic.net/af5rq.png" rel="nofollow 
noreferrer"><img src="https://i.sstatic.net/af5rq.png" alt="inverted x axis" /></a></p> <p>But even the scatter plot gets inverted. Is there a way to just invert the bar chart? so that it is against the second y axis.</p>
<python><matplotlib>
2023-08-11 10:15:38
1
3,593
Morpheus
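The inversion in the question above leaks because `twinx()` axes *share* the x-axis. One possible fix is `twiny()`, which shares only the y-axis, so the bars get an independent (top) x-axis that can be inverted on its own. A minimal headless sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax1 = plt.subplots()
ax1.plot([0, 1, 2], [0, 10, 20], c="r")  # scatter/line data on the bottom x-axis

ax2 = ax1.twiny()                        # shares y, independent top x-axis
ax2.barh([0, 10, 20], [0.3, 0.5, 0.2], height=2, alpha=0.3)
ax2.invert_xaxis()                       # bars now grow from the right edge

print(ax1.xaxis_inverted(), ax2.xaxis_inverted())  # False True
```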
76,882,555
779,158
Could not load Llama model from path: ./Models/llama-7b.ggmlv3.q2_K.bin. Received error Llama.__init__() got an unexpected keyword argument 'input'
<pre><code>from langchain.llms import LlamaCpp from langchain import PromptTemplate, LLMChain from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = &quot;&quot;&quot;Question: {question} Answer: Let's work this out in a step by step way to be sure we have the right answer.&quot;&quot;&quot; prompt = PromptTemplate(template=template, input_variables=[&quot;question&quot;]) callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) llm = LlamaCpp( model_path=&quot;./Models/llama-7b.ggmlv3.q2_K.bin&quot;, input={&quot;temperature&quot;: 0.75, &quot;max_length&quot;: 2000, &quot;top_p&quot;: 1}, callback_manager=callback_manager, verbose=True, ) llm_chain = LLMChain(prompt=prompt, llm=llm) </code></pre> <p><a href="https://i.sstatic.net/Nu2v8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nu2v8.png" alt="current folder structure " /></a></p> <pre><code>(llm) C:\llm&gt;python app1.py C:\llm\lib\site-packages\langchain\utils\utils.py:155: UserWarning: WARNING! input is not default parameter. input was transferred to model_kwargs. Please confirm that input is what you intended. warnings.warn( Exception ignored in: &lt;function Llama.__del__ at 0x000001923B3AE680&gt; Traceback (most recent call last): File &quot;C:\llm\lib\site-packages\llama_cpp\llama.py&quot;, line 1507, in __del__ if self.model is not None: AttributeError: 'Llama' object has no attribute 'model' Traceback (most recent call last): File &quot;C:\llm\app1.py&quot;, line 14, in &lt;module&gt; llm = LlamaCpp( File &quot;C:\llm\lib\site-packages\langchain\load\serializable.py&quot;, line 74, in __init__ super().__init__(**kwargs) File &quot;pydantic\main.py&quot;, line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp __root__ Could not load Llama model from path: ./Models/llama-7b.ggmlv3.q2_K.bin. 
Received error Llama.__init__() got an unexpected keyword argument 'input' (type=value_error) </code></pre>
<python><py-langchain><llamacpp>
2023-08-11 10:06:23
3
10,787
rahularyansharma
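A hedged reading of the error above: the unknown `input=` kwarg is moved into `model_kwargs` (per the warning) and forwarded to llama-cpp-python's `Llama.__init__`, which rejects it. LangChain's `LlamaCpp` exposes sampling settings as top-level fields instead; a sketch of the corrected constructor (untested here, `max_tokens` standing in for `max_length`, which has no direct equivalent):

```python
from langchain.llms import LlamaCpp

# Sampling settings are top-level LlamaCpp fields, not an `input=` dict.
llm = LlamaCpp(
    model_path="./Models/llama-7b.ggmlv3.q2_K.bin",
    temperature=0.75,
    max_tokens=2000,
    top_p=1,
    verbose=True,
)
```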
76,882,222
7,053,357
python test imports from tests/resources instead of src/resources
<p>My project structure is like this:</p> <pre><code>app/ src resources __init__.py (contains foo method) tests resources __init__.py sometest.py </code></pre> <p>When I run sometest.py, as part of the imports, it eventually gets to this line located in src/core/some_module.py:</p> <pre><code>from resources import foo </code></pre> <p>When I run the test with pytest I get:</p> <pre><code>ImportError: cannot import name 'foo' from 'resources' (app/tests/resources/__init__.py) </code></pre> <p>If I explicitly change the import statement to:</p> <pre><code>from src.resources import foo </code></pre> <p>then it works, but for some reason ONLY when I run it via PyCharm's UI, and if I try running:</p> <pre><code>poetry run pytest -ra --cov=src --cov-fail-under=80 --cov-config=.coveragerc </code></pre> <p>it fails with:</p> <pre><code>ModuleNotFoundError: No module named 'src' </code></pre> <p>Can anyone explain what's happening here, and how can I solve it so that the tests import the foo that's inside /app/src?</p>
<python><pytest><python-import>
2023-08-11 09:22:29
1
364
felisimo
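Two things seem to be going on in the question above: `tests/resources` shadows `src/resources` once both are importable as `resources` (so renaming one of them may also be needed), and PyCharm adds `src` as a source root while plain `poetry run pytest` does not. One hedged option with pytest 7+ is to put `src` on the import path in configuration rather than via `sys.path` hacks:

```ini
# pytest.ini (or the [tool.pytest.ini_options] table in pyproject.toml);
# the pythonpath option is built into pytest >= 7.
[pytest]
pythonpath = src
testpaths = tests
```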
76,882,111
1,722,380
Importing modules from other directory
<p>Is there any recommended/opinionated way of importing modules within the same project without hacks like <code>sys.path.append</code> or exporting <code>PYTHONPATH</code> with paths to neighbouring directories?</p> <p>I am trying to add some tests to my app, nothing extraordinary.</p> <pre><code>app/ - __init__.py - app_to_be_tested.py tests/ - test_app_to_be_tested.py </code></pre> <p>How am I supposed to do that in the most 'pythonic' way: use one of the hacks above, build a package for my own code that will never be imported by any other app, or restructure the directory topology?</p> <p>Any advice appreciated.</p> <p>edit</p> <p><code>app.py</code></p> <pre><code>def some_func(): return 'abcd' </code></pre> <p><code>test_app.py</code></p> <pre><code>from ..app import app def test_some_func(): assert app.some_func() == 'abcd' </code></pre> <p><code>ImportError: attempted relative import with no known parent package</code></p> <p>edit 2</p> <p>had a chance to get back to it</p> <pre><code>. β”œβ”€β”€ app β”‚   β”œβ”€β”€ __init__.py β”‚   β”œβ”€β”€ app.py β”‚   └── inner β”‚   └── __init__.py └── tests └── test_app.py </code></pre> <p>content of app/inner/__init__.py</p> <pre><code>def func_from_inner(): return 'from inner' </code></pre> <p>app/app.py</p> <pre><code>from inner import func_from_inner def some_func(): return func_from_inner() </code></pre> <p>running the code from the root dir results in ModuleNotFoundError (how come?)</p> <pre><code> python3 -m app.app ...
File &quot;/redacted/app/app.py&quot;, line 11, in &lt;module&gt; from inner import func_from_inner ModuleNotFoundError: No module named 'inner' </code></pre> <p>running it the same way from the app directory works</p> <p>when I change the import in app/app.py to</p> <pre><code>from .inner import func_from_inner </code></pre> <p>it works from the root dir</p> <p>sys.path does contain the root project dir, doesn't it mean it should import everything recursively?</p> <p>in addition to that, running python3 app/app.py results in</p> <pre><code>Traceback (most recent call last): File &quot;/redacted/app/app.py&quot;, line 11, in &lt;module&gt; from .inner import func_from_inner ImportError: attempted relative import with no known parent package </code></pre>
<python><import><module><modulenotfounderror>
2023-08-11 09:06:52
1
2,192
Łukasz
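On the edit-2 confusion above: `sys.path` containing the root dir makes the *package* `app` importable, not its submodules as top-level names, so bare `from inner import ...` only resolves when the `app` directory itself is on the path (i.e. when run from inside `app/`). From the root, the package-qualified `from app.inner import ...` (or the relative `from .inner import ...`) plus `python3 -m app.app` works. A self-contained sketch that rebuilds the layout in a temp dir to demonstrate:

```shell
# Recreate the edit-2 layout and run the module with `python -m` from the
# project root, using a package-qualified absolute import.
tmp=$(mktemp -d)
mkdir -p "$tmp/app/inner"
touch "$tmp/app/__init__.py"
printf 'def func_from_inner():\n    return "from inner"\n' > "$tmp/app/inner/__init__.py"
printf 'from app.inner import func_from_inner\nprint(func_from_inner())\n' > "$tmp/app/app.py"

# `-m` puts the current directory on sys.path, so `app` is importable.
out=$(cd "$tmp" && python3 -m app.app)
echo "$out"  # from inner
```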
76,882,047
10,232,932
Drop_duplicates in a dataframe and keep the one with a specific column value
<p>I have a dataframe df:</p> <pre><code>columnA columnB columnC columnD columnE A B 10 C C A B 10 D A B C 20 A A B A 20 D A B A 20 D C </code></pre> <p>I want to drop the duplicates if there are duplicate entries for <code>columnA, columnB, columnC</code>; in my case the duplicates are:</p> <pre><code>columnA columnB columnC columnD columnE A B 10 C C A B 10 D A B A 20 D A B A 20 D C </code></pre> <p>How can I keep the duplicate row where <code>columnE</code> is equal to <code>C</code>, so that the output for the full dataframe is:</p> <pre><code>columnA columnB columnC columnD columnE A B 10 C C B C 20 A A B A 20 D C </code></pre>
<python><pandas>
2023-08-11 08:57:56
1
6,338
PV8
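One way to answer the question above: sort so that `columnE == 'C'` rows come first within each key group, then `drop_duplicates(..., keep="first")`, then restore the original order. A sketch on the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    "columnA": ["A", "A", "B", "B", "B"],
    "columnB": ["B", "B", "C", "A", "A"],
    "columnC": [10, 10, 20, 20, 20],
    "columnD": ["C", "D", "A", "D", "D"],
    "columnE": ["C", "A", "A", "A", "C"],
})

# key=... maps 'C' to False so those rows sort first; keep="first" then
# prefers a 'C' row whenever the key group contains one.
out = (
    df.sort_values("columnE", key=lambda s: s.ne("C"))
      .drop_duplicates(["columnA", "columnB", "columnC"], keep="first")
      .sort_index()
)
print(out)
```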
76,882,001
2,590,119
Pipenv could not resolve dependencies, seemingly without indicating which dependencies couldn't be resolved
<p>I'm trying to install the dependencies for a project that's working on a colleague's machine.</p> <p>The Pipfile looks like:</p> <p><strong>Pipfile</strong></p> <pre><code>[[source]] url = &quot;https://pypi.org/simple&quot; verify_ssl = true name = &quot;pypi&quot; [packages] aio-pika = &quot;==7.2.0&quot; aiohttp = &quot;==3.8.1&quot; aiormq = &quot;==6.2.3&quot; aiosignal = &quot;==1.2.0&quot; async-timeout = &quot;==4.0.2&quot; attrs = &quot;==21.4.0&quot; beautifulsoup4 = &quot;==4.11.1&quot; bokeh = &quot;==2.4.2&quot; bs4 = &quot;==0.0.1&quot; charset-normalizer = &quot;==2.0.12&quot; click = &quot;==8.1.3&quot; cycler = &quot;==0.11.0&quot; et-xmlfile = &quot;==1.1.0&quot; fonttools = &quot;==4.33.3&quot; frozenlist = &quot;==1.3.0&quot; idna = &quot;==3.3&quot; jinja2 = &quot;==3.1.2&quot; joblib = &quot;==1.1.0&quot; kiwisolver = &quot;==1.4.2&quot; markupsafe = &quot;==2.1.1&quot; matplotlib = &quot;==3.5.2&quot; motor = &quot;==3.0.0&quot; multidict = &quot;==6.0.2&quot; nltk = &quot;==3.7&quot; numpy = &quot;==1.22.3&quot; openpyxl = &quot;==3.0.10&quot; pamqp = &quot;==3.1.0&quot; pandas = &quot;==1.4.2&quot; pillow = &quot;==9.1.0&quot; pymongo = &quot;==4.3.3&quot; pyparsing = &quot;==3.0.8&quot; python-dateutil = &quot;==2.8.2&quot; pytz = &quot;==2022.1&quot; pyyaml = &quot;==6.0&quot; regex = &quot;==2022.4.24&quot; scikit-learn = &quot;==1.0.2&quot; scipy = &quot;==1.8.0&quot; seaborn = &quot;==0.11.2&quot; six = &quot;==1.16.0&quot; sklearn = &quot;==0.0&quot; soupsieve = &quot;==2.3.2.post1&quot; threadpoolctl = &quot;==3.1.0&quot; tornado = &quot;==6.1&quot; tqdm = &quot;==4.64.0&quot; typing-extensions = &quot;==4.2.0&quot; xlsxwriter = &quot;==3.0.3&quot; yarl = &quot;==1.7.2&quot; jwt = &quot;*&quot; argon2-cffi = &quot;*&quot; requests = &quot;*&quot; [dev-packages] [requires] python_version = &quot;*&quot; </code></pre> <p>I try to install the dependencies with</p> <p><code>pipenv install</code></p> <p>On doing so, I get the following 
error:</p> <pre><code>[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies. You can use $ pipenv run pip install &lt;requirement_name&gt; to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv. Hint: try $ pipenv lock --pre if it is a pre-release dependency. ERROR: metadata generation failed </code></pre> <p>The stacktrace doesn't seem to indicate which package failed, and only points to lines from files within pipenv. I can include the entire stacktrace if necessary.</p> <p>I'm not sure which dependency to try, so I can' do <code>pipenv run pip install &lt;requirement_name&gt;</code>, so instead I run <code>pipenv graph</code>. Nothing results. So, I try to manually install every package with <code>pipenv run pip install &lt;all packages&gt;</code>. That seems to do something. I again run <code>pipenv graph</code></p> <pre><code>aio-pika==9.2.1 β”œβ”€β”€ aiormq [required: &gt;=6.7.7,&lt;6.8.0, installed: 6.7.7] β”‚ β”œβ”€β”€ pamqp [required: ==3.2.1, installed: 3.2.1] β”‚ └── yarl [required: Any, installed: 1.9.2] β”‚ β”œβ”€β”€ idna [required: &gt;=2.0, installed: 3.4] β”‚ └── multidict [required: &gt;=4.0, installed: 6.0.4] └── yarl [required: Any, installed: 1.9.2] β”œβ”€β”€ idna [required: &gt;=2.0, installed: 3.4] └── multidict [required: &gt;=4.0, installed: 6.0.4] aiohttp==3.8.5 β”œβ”€β”€ aiosignal [required: &gt;=1.1.2, installed: 1.3.1] β”‚ └── frozenlist [required: &gt;=1.1.0, installed: 1.4.0] β”œβ”€β”€ async-timeout [required: &gt;=4.0.0a3,&lt;5.0, installed: 4.0.3] β”œβ”€β”€ attrs [required: &gt;=17.3.0, installed: 23.1.0] β”œβ”€β”€ charset-normalizer [required: &gt;=2.0,&lt;4.0, installed: 3.2.0] β”œβ”€β”€ frozenlist [required: &gt;=1.1.1, installed: 1.4.0] β”œβ”€β”€ multidict [required: &gt;=4.5,&lt;7.0, installed: 6.0.4] └── yarl [required: &gt;=1.0,&lt;2.0, installed: 1.9.2] β”œβ”€β”€ idna [required: 
&gt;=2.0, installed: 3.4] └── multidict [required: &gt;=4.0, installed: 6.0.4] argon2-cffi==21.3.0 └── argon2-cffi-bindings [required: Any, installed: 21.2.0] └── cffi [required: &gt;=1.0.1, installed: 1.15.1] └── pycparser [required: Any, installed: 2.21] bokeh==3.2.1 β”œβ”€β”€ contourpy [required: &gt;=1, installed: 1.1.0] β”‚ └── numpy [required: &gt;=1.16, installed: 1.25.2] β”œβ”€β”€ Jinja2 [required: &gt;=2.9, installed: 3.1.2] β”‚ └── MarkupSafe [required: &gt;=2.0, installed: 2.1.3] β”œβ”€β”€ numpy [required: &gt;=1.16, installed: 1.25.2] β”œβ”€β”€ packaging [required: &gt;=16.8, installed: 23.1] β”œβ”€β”€ pandas [required: &gt;=1.2, installed: 2.0.3] β”‚ β”œβ”€β”€ numpy [required: &gt;=1.21.0, installed: 1.25.2] β”‚ β”œβ”€β”€ numpy [required: &gt;=1.23.2, installed: 1.25.2] β”‚ β”œβ”€β”€ python-dateutil [required: &gt;=2.8.2, installed: 2.8.2] β”‚ β”‚ └── six [required: &gt;=1.5, installed: 1.16.0] β”‚ β”œβ”€β”€ pytz [required: &gt;=2020.1, installed: 2023.3] β”‚ └── tzdata [required: &gt;=2022.1, installed: 2023.3] β”œβ”€β”€ pillow [required: &gt;=7.1.0, installed: 10.0.0] β”œβ”€β”€ PyYAML [required: &gt;=3.10, installed: 6.0.1] β”œβ”€β”€ tornado [required: &gt;=5.1, installed: 6.3.2] └── xyzservices [required: &gt;=2021.09.1, installed: 2023.7.0] bs4==0.0.1 └── beautifulsoup4 [required: Any, installed: 4.12.2] └── soupsieve [required: &gt;1.2, installed: 2.4.1] jwt==1.3.1 └── cryptography [required: &gt;=3.1,!=3.4.0, installed: 41.0.3] └── cffi [required: &gt;=1.12, installed: 1.15.1] └── pycparser [required: Any, installed: 2.21] motor==3.2.0 └── pymongo [required: &gt;=4.4,&lt;5, installed: 4.4.1] └── dnspython [required: &gt;=1.16.0,&lt;3.0.0, installed: 2.4.2] nltk==3.8.1 β”œβ”€β”€ click [required: Any, installed: 8.1.6] β”œβ”€β”€ joblib [required: Any, installed: 1.3.2] β”œβ”€β”€ regex [required: &gt;=2021.8.3, installed: 2023.8.8] └── tqdm [required: Any, installed: 4.66.1] openpyxl==3.1.2 └── et-xmlfile [required: Any, installed: 1.1.0] 
requests==2.31.0 β”œβ”€β”€ certifi [required: &gt;=2017.4.17, installed: 2023.7.22] β”œβ”€β”€ charset-normalizer [required: &gt;=2,&lt;4, installed: 3.2.0] β”œβ”€β”€ idna [required: &gt;=2.5,&lt;4, installed: 3.4] └── urllib3 [required: &gt;=1.21.1,&lt;3, installed: 2.0.4] scikit-learn==1.3.0 β”œβ”€β”€ joblib [required: &gt;=1.1.1, installed: 1.3.2] β”œβ”€β”€ numpy [required: &gt;=1.17.3, installed: 1.25.2] β”œβ”€β”€ scipy [required: &gt;=1.5.0, installed: 1.11.1] β”‚ └── numpy [required: &gt;=1.21.6,&lt;1.28.0, installed: 1.25.2] └── threadpoolctl [required: &gt;=2.0.0, installed: 3.2.0] seaborn==0.12.2 β”œβ”€β”€ matplotlib [required: &gt;=3.1,!=3.6.1, installed: 3.7.2] β”‚ β”œβ”€β”€ contourpy [required: &gt;=1.0.1, installed: 1.1.0] β”‚ β”‚ └── numpy [required: &gt;=1.16, installed: 1.25.2] β”‚ β”œβ”€β”€ cycler [required: &gt;=0.10, installed: 0.11.0] β”‚ β”œβ”€β”€ fonttools [required: &gt;=4.22.0, installed: 4.42.0] β”‚ β”œβ”€β”€ kiwisolver [required: &gt;=1.0.1, installed: 1.4.4] β”‚ β”œβ”€β”€ numpy [required: &gt;=1.20, installed: 1.25.2] β”‚ β”œβ”€β”€ packaging [required: &gt;=20.0, installed: 23.1] β”‚ β”œβ”€β”€ pillow [required: &gt;=6.2.0, installed: 10.0.0] β”‚ β”œβ”€β”€ pyparsing [required: &gt;=2.3.1,&lt;3.1, installed: 3.0.9] β”‚ └── python-dateutil [required: &gt;=2.7, installed: 2.8.2] β”‚ └── six [required: &gt;=1.5, installed: 1.16.0] β”œβ”€β”€ numpy [required: &gt;=1.17,!=1.24.0, installed: 1.25.2] └── pandas [required: &gt;=0.25, installed: 2.0.3] β”œβ”€β”€ numpy [required: &gt;=1.21.0, installed: 1.25.2] β”œβ”€β”€ numpy [required: &gt;=1.23.2, installed: 1.25.2] β”œβ”€β”€ python-dateutil [required: &gt;=2.8.2, installed: 2.8.2] β”‚ └── six [required: &gt;=1.5, installed: 1.16.0] β”œβ”€β”€ pytz [required: &gt;=2020.1, installed: 2023.3] └── tzdata [required: &gt;=2022.1, installed: 2023.3] sklearn==0.0.post7 typing-extensions==4.7.1 XlsxWriter==3.1.2 </code></pre> <p>I search online for automatic dependency issue checkers for a pipenv graph 
output, but can't find one, so manually search eat package subdependency for any conflicts. By my eye, there aren't any. Not being entirely clear on what to do next, I try <code>pipenv lock</code>, because I believe that should generate a pipenv lockfile based on the actual packages installed in the VM.</p> <p>I get the same error as before.</p> <p>I ask my colleague to run <code>pipenv graph</code> and send me the output. He does:</p> <pre><code>aio-pika==7.2.0 - aiormq [required: ~=6.2.3, installed: 6.2.3] - pamqp [required: ==3.1.0, installed: 3.1.0] - yarl [required: Any, installed: 1.7.2] - idna [required: &gt;=2.0, installed: 3.3] - multidict [required: &gt;=4.0, installed: 6.0.2] - yarl [required: Any, installed: 1.7.2] - idna [required: &gt;=2.0, installed: 3.3] - multidict [required: &gt;=4.0, installed: 6.0.2] aiohttp==3.8.1 - aiosignal [required: &gt;=1.1.2, installed: 1.2.0] - frozenlist [required: &gt;=1.1.0, installed: 1.3.0] - async-timeout [required: &gt;=4.0.0a3,&lt;5.0, installed: 4.0.2] - attrs [required: &gt;=17.3.0, installed: 21.4.0] - charset-normalizer [required: &gt;=2.0,&lt;3.0, installed: 2.0.12] - frozenlist [required: &gt;=1.1.1, installed: 1.3.0] - multidict [required: &gt;=4.5,&lt;7.0, installed: 6.0.2] - yarl [required: &gt;=1.0,&lt;2.0, installed: 1.7.2] - idna [required: &gt;=2.0, installed: 3.3] - multidict [required: &gt;=4.0, installed: 6.0.2] argon2-cffi==21.3.0 - argon2-cffi-bindings [required: Any, installed: 21.2.0] - cffi [required: &gt;=1.0.1, installed: 1.15.1] - pycparser [required: Any, installed: 2.21] bokeh==2.4.2 - Jinja2 [required: &gt;=2.9, installed: 3.1.2] - MarkupSafe [required: &gt;=2.0, installed: 2.1.1] - numpy [required: &gt;=1.11.3, installed: 1.22.3] - packaging [required: &gt;=16.8, installed: 23.0] - pillow [required: &gt;=7.1.0, installed: 9.1.0] - PyYAML [required: &gt;=3.10, installed: 6.0] - tornado [required: &gt;=5.1, installed: 6.1] - typing-extensions [required: &gt;=3.10.0, installed: 4.2.0] 
bs4==0.0.1 - beautifulsoup4 [required: Any, installed: 4.11.1] - soupsieve [required: &gt;1.2, installed: 2.3.2.post1] jwt==1.3.1 - cryptography [required: !=3.4.0,&gt;=3.1, installed: 40.0.1] - cffi [required: &gt;=1.12, installed: 1.15.1] - pycparser [required: Any, installed: 2.21] motor==3.0.0 - pymongo [required: &gt;=4.1,&lt;5, installed: 4.3.3] - dnspython [required: &lt;3.0.0,&gt;=1.16.0, installed: 2.3.0] nltk==3.7 - click [required: Any, installed: 8.1.3] - joblib [required: Any, installed: 1.1.0] - regex [required: &gt;=2021.8.3, installed: 2022.4.24] - tqdm [required: Any, installed: 4.64.0] openpyxl==3.0.10 - et-xmlfile [required: Any, installed: 1.1.0] pkg-resources==0.0.0 requests==2.28.2 - certifi [required: &gt;=2017.4.17, installed: 2022.12.7] - charset-normalizer [required: &gt;=2,&lt;4, installed: 2.0.12] - idna [required: &lt;4,&gt;=2.5, installed: 3.3] - urllib3 [required: &lt;1.27,&gt;=1.21.1, installed: 1.26.15] seaborn==0.11.2 - matplotlib [required: &gt;=2.2, installed: 3.5.2] - cycler [required: &gt;=0.10, installed: 0.11.0] - fonttools [required: &gt;=4.22.0, installed: 4.33.3] - kiwisolver [required: &gt;=1.0.1, installed: 1.4.2] - numpy [required: &gt;=1.17, installed: 1.22.3] - packaging [required: &gt;=20.0, installed: 23.0] - pillow [required: &gt;=6.2.0, installed: 9.1.0] - pyparsing [required: &gt;=2.2.1, installed: 3.0.8] - python-dateutil [required: &gt;=2.7, installed: 2.8.2] - six [required: &gt;=1.5, installed: 1.16.0] - numpy [required: &gt;=1.15, installed: 1.22.3] - pandas [required: &gt;=0.23, installed: 1.4.2] - numpy [required: &gt;=1.18.5, installed: 1.22.3] - python-dateutil [required: &gt;=2.8.1, installed: 2.8.2] - six [required: &gt;=1.5, installed: 1.16.0] - pytz [required: &gt;=2020.1, installed: 2022.1] - scipy [required: &gt;=1.0, installed: 1.8.0] - numpy [required: &lt;1.25.0,&gt;=1.17.3, installed: 1.22.3] sklearn==0.0 - scikit-learn [required: Any, installed: 1.0.2] - joblib [required: &gt;=0.11, installed: 
1.1.0] - numpy [required: &gt;=1.14.6, installed: 1.22.3] - scipy [required: &gt;=1.1.0, installed: 1.8.0] - numpy [required: &lt;1.25.0,&gt;=1.17.3, installed: 1.22.3] - threadpoolctl [required: &gt;=2.0.0, installed: 3.1.0] XlsxWriter==3.0.3 </code></pre> <p>I manually update the Pipfile to match the package versions he sent (changing jwt, argon2-cffi, and requests to hardcoded version numbers). I believe I clear the vm by doing <code>pipenv --rm</code>, then do <code>pipenv install</code>. I get the same error again.</p> <p><strong>How can I resolve a pipenv dependency issue when apparently no packages have dependency issues?</strong> Alternatively, are there dependency issues I'm just not seeing somehow?</p> <pre><code>python --version Python 3.11.3 pipenv --version pipenv, version 2023.7.23 </code></pre>
<python><python-3.x><pipenv><pipenv-install>
2023-08-11 08:48:47
1
2,209
Caleb Jay
76,881,945
11,197,796
Plotly - use trace marker labels as legend
<p>The geojson map was downloaded from here: <a href="https://github.com/codeforgermany/click_that_hood/blob/main/public/data/ireland-counties.geojson" rel="nofollow noreferrer">https://github.com/codeforgermany/click_that_hood/blob/main/public/data/ireland-counties.geojson</a></p> <p>I want to create a legend which consists of the labels from the marker tract overlay. This is the code so far:</p> <pre><code>import geojson import pandas as pd import plotly.graph_objects as go with open(&quot;data/ireland-with-counties_.geojson&quot;) as f: gj = geojson.load(f) # example df metadata = pd.DataFrame({'ID':['AH1','AT10B','BA1','BC1','BG18'], 'Site':['Athea','Athenry','Ballyhaise','Belcoo','Ballynagree'], 'Latitude':[52.457079,53.287512,54.050665,54.280660,51.988472], 'Longitude':[-9.309839,-8.768810,-7.324268,-7.684888,-8.926938]}) # generating random hex colours import random r = lambda: random.randint(0,255) hex_codes = [] for x in range(0,len(metadata['Site'])): hex_codes.append('#%02X%02X%02X' % (r(),r(),r())) colors = dict(zip(metadata['Site'].to_list(),hex_codes)) pts = [] for feature in gj['features']: if feature['geometry']['type'] == 'Polygon': pts.extend(feature['geometry']['coordinates'][0]) pts.append([None, None])#mark the end of a polygon elif feature['geometry']['type'] == 'MultiPolygon': for polyg in feature['geometry']['coordinates']: pts.extend(polyg[0]) pts.append([None, None])#end of polygon elif feature['geometry']['type'] == 'LineString': pts.extend(feature['geometry']['coordinates']) pts.append([None, None]) else: pass #else: raise ValueError(&quot;geometry type irrelevant for map&quot;) x, y = zip(*pts) fig = go.Figure() fig.add_scatter(x=x, y=y, mode='lines', line_color='#999999', line_width=1.5) fig.update_layout(width=600, height=800) fig.add_trace( go.Scatter( x=metadata['Longitude'], y=metadata['Latitude'], text = metadata['Site'], hoverinfo = 'text', mode = 'markers', name = &quot;location&quot;, marker=dict(size=8,
color='rgba(255,255,255,0)', line=dict(color=metadata['Site'].map(colors), colorscale='plasma', width=2) )) ) fig.show() </code></pre> <p><a href="https://i.sstatic.net/ZVOeA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZVOeA.png" alt="Current output" /></a></p> <p>Currently the labels only show on hover - how can I get the marker labels to be in the legend? E.g:</p> <p><a href="https://i.sstatic.net/mlRZr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mlRZr.png" alt="enter image description here" /></a></p>
<python><plotly><geojson>
2023-08-11 08:41:17
1
440
skiventist
76,881,788
7,026,806
How do I replace a Python property without a setter for an instance?
<p>Given the existing source code (that I cannot modify), how do I dynamically replace the property at <em>instance</em> level (not class, because that breaks stateless assumptions everywhere)?</p> <p>The following naive approach raises <code>AttributeError: property 'my_property' of 'MyClass' object has no setter </code></p> <pre class="lang-py prettyprint-override"><code>from abc import abstractmethod, ABC class Parent(ABC): @property @abstractmethod def my_property(self) -&gt; int: ... @abstractmethod def my_method(self) -&gt; int: ... class MyClass(Parent): @property def my_property(self) -&gt; int: return 1 def my_method(self) -&gt; int: return 1 def new_property_impl(self): return 2 # Create an instance of MyClass obj = MyClass() # Dynamically replace the my_property method with new_property_impl obj.my_property = property(new_property_impl) # Expected result print(obj.my_property) # 2 </code></pre>
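Since properties live on the class rather than the instance, one workaround (a sketch, under the assumption that swapping the instance's class is acceptable) is to give just that one instance a throwaway subclass carrying the replacement property; other instances keep the original behaviour. The ABC machinery from the question is dropped here for brevity:

```python
class MyClass:
    @property
    def my_property(self) -> int:
        return 1

def new_property_impl(self) -> int:
    return 2

obj = MyClass()
# Build a one-off subclass whose property shadows the original,
# then point this single instance at it.
obj.__class__ = type(
    "PatchedMyClass",
    (obj.__class__,),
    {"my_property": property(new_property_impl)},
)

print(obj.my_property)        # 2
print(MyClass().my_property)  # 1 -- other instances are unaffected
```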
<python><properties>
2023-08-11 08:17:53
2
2,020
komodovaran_
76,881,655
3,423,825
How to get boolean value from a JSONfield in Django?
<p>When entering data in a <code>JSONField</code> from the Django admin, booleans are not allowed, only strings, floats, or integers. So how should I enter a boolean value so it can be retrieved later on?</p> <pre><code>obj.params = {&quot;fast&quot;: 3, &quot;slow&quot;: 6, &quot;shift&quot;: &quot;True&quot;, &quot;signal&quot;: 2} obj.save() &gt; obj.params['shift'] 'True' </code></pre> <p>I could do:</p> <pre><code>params = {k: True if v == 'True' else v for k, v in obj.params.items()} params = {k: False if v == 'False' else v for k, v in obj.params.items()} </code></pre> <p>But isn't there a simpler way to store and fetch booleans from a <code>JSONField</code>?</p>
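Two hedged observations. First, JSON itself has boolean literals, so entering lowercase `true` (valid JSON) in the admin's JSONField widget may already store a real boolean, which is worth trying. Second, if the values must stay as the strings `'True'`/`'False'`, the two comprehensions can collapse into one lookup-based pass; `coerce_bools` below is a hypothetical helper name, not a Django API:

```python
def coerce_bools(params: dict) -> dict:
    """Turn the strings 'True'/'False' back into real booleans in one pass."""
    mapping = {"True": True, "False": False}
    return {
        k: mapping.get(v, v) if isinstance(v, str) else v
        for k, v in params.items()
    }

params = coerce_bools({"fast": 3, "slow": 6, "shift": "True", "signal": 2})
print(params["shift"])  # True
```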
<python><django>
2023-08-11 07:56:13
2
1,948
Florent
76,881,483
12,164,800
What's the correct way to type hint an empty list as a literal in python?
<p>I have a function that always returns an empty list (it's a long story why). I could type hint as usual with just &quot;list&quot;, but it would be useful to indicate that the list is always going to be the same.</p> <p>My first thought was to use <code>Literal</code> like this:</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal def get_empty_list() -&gt; Literal[[]]: return [] </code></pre> <p>Mypy flags this as an invalid type. Is there a correct way to type hint an <em>always empty list</em>? (Obviously, I could type hint just a list, but this is less helpful.)</p> <p>To be explicit, this is a list that is <strong>always empty</strong> and doesn't expect to have any elements. (As separate, for example, from a list that is currently empty, but might have elements of some type added later on.)</p>
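One candidate worth checking against your mypy version (offered as a suggestion, not a confirmed mypy guarantee): annotate with the bottom type, `list[Never]`, since a list of `Never` can never hold an element. A sketch:

```python
from __future__ import annotations  # keeps the annotation unevaluated at runtime

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # `Never` lives in `typing` from Python 3.11; older versions can use
    # `typing_extensions.Never` or fall back to `typing.NoReturn`.
    from typing import Never

def get_empty_list() -> list[Never]:
    return []

print(get_empty_list())  # []
```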
<python><python-typing>
2023-08-11 07:26:47
2
457
houseofleft
76,881,196
1,664,664
Where is the best place to put external service client in Django?
<p>Is there any best practice for where to put external service client code in Django?</p> <p>I want the client to be initialized only once and to be usable globally in Django.</p> <p>For example, I want to integrate ClickHouse and Meilisearch into Django; where can I put the clients?</p>
<python><django><client>
2023-08-11 06:33:52
0
1,569
Kevin
76,881,106
8,353,711
How to fix - UserWarning: Pydantic serializer warnings in Pydantic V2?
<p>While using the <code>datamodel-codegen</code> command with <code>JSON</code> data as the input type to generate a <code>Pydantic</code> schema as output, I was seeing warnings.</p> <p>What is the meaning of these warnings and how do I fix them? What kind of issues can they create (and why is this a warning)?</p> <p><strong>UserWarning:</strong></p> <pre><code>Expected `Union[list[definition-ref], definition-ref, bool]` but got `JsonSchemaObject` - serialized value may not be as expected Expected `Union[definition-ref, bool]` but got `JsonSchemaObject` - serialized value may not be as expected Expected `Union[definition-ref, bool]` but got `JsonSchemaObject` - serialized value may not be as expected Expected `Union[definition-ref, bool]` but got `JsonSchemaObject` - serialized value may not be as expected return self.__pydantic_serializer__.to_python( </code></pre> <p>To reproduce:</p> <p><strong>pets.json</strong></p> <pre class="lang-json prettyprint-override"><code>{ &quot;pets&quot;: [ { &quot;name&quot;: &quot;dog&quot;, &quot;age&quot;: 2 }, { &quot;name&quot;: &quot;cat&quot;, &quot;age&quot;: 1 }, { &quot;name&quot;: &quot;snake&quot;, &quot;age&quot;: 3, &quot;nickname&quot;: &quot;python&quot; } ], &quot;status&quot;: 200 } </code></pre> <p><strong>Command:</strong></p> <pre class="lang-bash prettyprint-override"><code>datamodel-codegen --input pets.json --input-file-type json --output model.py </code></pre> <p><strong>Versions:</strong></p> <pre><code>python - 3.11.4 pydantic==2.1.1 datamodel-code-generator==0.21.4 genson==1.2.2 </code></pre>
<python><code-generation><pydantic><genson>
2023-08-11 06:16:30
3
5,588
shaik moeed
76,880,952
2,739,700
python blur detection using opencv is not working
<p>I am using the code below with the attached blurred image, but it does not detect that the image is blurred.</p> <pre><code>import cv2 import numpy as np def detect_blur_fft(image, threshold=100): # Convert image to grayscale gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Compute the FFT fft = np.fft.fftshift(np.fft.fft2(gray)) # Calculate the magnitude spectrum magnitude_spectrum = np.abs(fft) # Calculate the mean magnitude mean_magnitude = np.mean(magnitude_spectrum) # If the mean magnitude is below the threshold, image is considered blurred if mean_magnitude &lt; threshold: return True else: return False # For images image_path = '/Users/tamcv/Downloads/blurred-background-vintage-filter-customer-coffee-shop-blur-background-with-bokeh.jpg' image = cv2.imread(image_path) if detect_blur_fft(image): print(&quot;Image is blurred.&quot;) else: print(&quot;Image is not blurred.&quot;) </code></pre> <p>Expected output: Image is blurred.</p> <p>Actual output: Image is not blurred.</p> <p>Ask: If there is any issue with the code, please help me fix it; or is there better tooling to detect blur in images and videos?</p>
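A different heuristic than the question's mean-FFT-magnitude check (swapped in deliberately, since it is the more widely used one) is the variance of the Laplacian: sharp images have strong second derivatives at edges, blurred ones do not. With OpenCV this is usually `cv2.Laplacian(gray, cv2.CV_64F).var()` compared against a threshold tuned per dataset; here is a NumPy-only sketch of the same idea:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian response; low values suggest blur."""
    g = np.asarray(gray, dtype=float)
    # Laplacian via array slicing: sum of 4 neighbours minus 4x the centre
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return float(lap.var())

# Sanity check: a checkerboard (hard edges) vs. a flat grey image
sharp = (np.indices((64, 64)).sum(axis=0) % 2) * 255
blurry = np.full((64, 64), 128)
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

The absolute threshold separating "blurred" from "sharp" depends on image content and resolution, so it would need tuning on your own data.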
<python><numpy><opencv>
2023-08-11 05:44:25
1
404
GoneCase123
76,880,837
7,959,614
How do I communicate using gql with Apollo websocket protocol
<p>I want to connect to the websocket. When I inspect the traffic between the client and the server I see that the first message is the handshake:</p> <pre><code>{&quot;type&quot;: &quot;connection_init&quot;, &quot;payload&quot;: { &quot;accept-language&quot;: &quot;en&quot;, &quot;ec-version&quot;: &quot;5.1.88&quot;, &quot;referrer&quot;: URL, } } </code></pre> <p>Based on the format (the keys of the dict) I conclude that the websocket uses an Apollo websocket transport protocol.</p> <p>Next, I am following the websocket-<a href="https://gql.readthedocs.io/en/latest/transports/websockets.html" rel="nofollow noreferrer">example</a> of <code>gql</code>'s documentation.</p> <pre><code>import asyncio import logging from gql import gql, Client from gql.transport.websockets import WebsocketsTransport logging.basicConfig(level=logging.INFO) async def main(): transport = WebsocketsTransport(url=URL, init_payload={'accept-language': 'en', 'ec-version': '5.1.88'}) async with Client(transport=transport, fetch_schema_from_transport=False) as session: # do something asyncio.run(main()) </code></pre> <p>After reading more about the protocol <a href="https://github.com/apollographql/subscriptions-transport-ws/blob/master/PROTOCOL.md" rel="nofollow noreferrer">here</a> I still don't understand how I can send messages to the server within my <code>Python</code> script. How do I send the message below to the websocket?</p> <pre><code>{ &quot;id&quot;: &quot;1073897396&quot;, &quot;type&quot;: &quot;start&quot;, &quot;operationName&quot;: operation, &quot;eventId&quot;: 488 } </code></pre>
<python><apollo><gql>
2023-08-11 05:14:35
1
406
HJA24
76,880,780
9,525,238
pyqtgraph/numpy ScatterPlotItem big matrix ignoring masked (np.nan) values
<p>Basically I have a huge matrix that I want to plot as ScatterPlotItems. I am masking it in different ways and replacing masked values with np.nan.</p> <p>It looks like this:</p> <p><a href="https://i.sstatic.net/SVI5q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SVI5q.png" alt="enter image description here" /></a></p> <p>Is there a fast way to plot this with pyqtgraph.ScatterPlotItem?</p> <p>Or maybe a NumPy magical way to create a [[x,y][x,y]...] array from this.</p> <p>ex: [[1, 2.626][1, 2.6744][2, 6744][3, 1.058]... etc]</p> <p>The first dimension would be the x-axis and the 2nd dimension would be the y-axis.</p>
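A NumPy-only sketch of that "magical way", assuming (per the question's last sentence) that the row index supplies the x value and the surviving cell value supplies y:

```python
import numpy as np

# Small stand-in for the huge masked matrix
m = np.array([
    [2.626,  np.nan, 2.6744],
    [np.nan, 6.744,  np.nan],
    [1.058,  np.nan, np.nan],
])

rows, cols = np.nonzero(~np.isnan(m))          # indices of unmasked cells
pts = np.column_stack([rows, m[rows, cols]])   # [[x, y], ...] pairs
# pts pairs each surviving value with its row index:
# (0, 2.626), (0, 2.6744), (1, 6.744), (2, 1.058)
```

The two columns can then feed `ScatterPlotItem(x=pts[:, 0], y=pts[:, 1])` directly, avoiding any per-point Python loop.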
<python><numpy><pyqtgraph>
2023-08-11 04:56:31
1
413
Andrei M.
76,880,750
10,970,202
How to check if my tensorflow is compiled with AVX
<p>There are many posts that explain what AVX is and how to compile TensorFlow with it, however I couldn't find any that explain how to check an already installed TensorFlow.</p> <p>I cannot uninstall and then re-install with AVX due to a permission issue.</p> <p>I've been training on different servers; some output the following warning message <code>This Tensorflow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:AVX2 FMA</code> while some don't, but I'm not sure that is a reliable way of concluding whether the installed TensorFlow supports AVX.</p> <p>versions: python3.7, tensorflow2.9.3</p>
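I'm not aware of a supported TensorFlow API that reports the instruction sets a binary was compiled with, so treat this as a partial, assumption-laden sketch: the oneDNN startup message quoted above is itself the usual build-time signal (keeping `TF_CPP_MIN_LOG_LEVEL=0` in the environment leaves such logs visible), while on Linux you can at least confirm the CPU itself offers AVX by reading `/proc/cpuinfo`:

```python
from pathlib import Path

def cpu_supports_avx():
    """Return True/False from /proc/cpuinfo flags, or None when off-Linux."""
    cpuinfo = Path("/proc/cpuinfo")
    if not cpuinfo.exists():
        return None  # e.g. macOS/Windows: no procfs available
    for line in cpuinfo.read_text().splitlines():
        if line.startswith("flags"):
            # flags are whitespace-separated tokens; "avx" and "avx2" differ
            return "avx" in line.split()
    return False

print(cpu_supports_avx())
```

This only tells you the hardware capability, not what the wheel was built with, so combine it with the startup log line when deciding.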
<python><tensorflow>
2023-08-11 04:50:31
1
5,008
haneulkim
76,880,512
3,821,009
polars.when(cond).then().otherwise() evaluates .then() when cond is false
<p>Say I have this:</p> <pre><code>u = 2 df = (polars.DataFrame(dict( j=numpy.random.randint(10, 99, 10) )) .with_row_count() .with_columns(k=(polars.col('j') % 10) == u) .with_columns(l=polars.col('k').any()) ) print(df) print(df .with_columns(i=polars .when(polars.col('k').any()) .then(polars.col('row_nr') &gt; polars.col('row_nr').where(polars.col('k')).first()) .otherwise(None) ) ) </code></pre> <p>This produces this:</p> <pre><code> row_nr (u32) j (i64) k (bool) l (bool) 0 47 false true 1 22 true true 2 82 true true 3 19 false true 4 85 false true 5 15 false true 6 89 false true 7 74 false true 8 26 false true 9 11 false true shape: (10, 4) row_nr (u32) j (i64) k (bool) l (bool) i (bool) 0 47 false true false 1 22 true true false 2 82 true true true 3 19 false true true 4 85 false true true 5 15 false true true 6 89 false true true 7 74 false true true 8 26 false true true 9 11 false true true shape: (10, 5) </code></pre> <p>However, when no conditions match - e.g. in the above with <code>u = 0</code>:</p> <pre><code> row_nr (u32) j (i64) k (bool) l (bool) 0 47 false false 1 22 false false 2 82 false false 3 19 false false 4 85 false false 5 15 false false 6 89 false false 7 74 false false 8 26 false false 9 11 false false shape: (10, 4) </code></pre> <p>I get this exception:</p> <pre><code>exceptions.ComputeError: cannot evaluate two series of different lengths (10 and 0) Error originated in expression: '[(col(&quot;row_nr&quot;)) &gt; (col(&quot;row_nr&quot;).filter(col(&quot;k&quot;)).first())]' </code></pre> <p>I know I can check this beforehand and then do something else, but I was wondering:</p> <ul> <li>Why doesn't <code>polars.when().then().otherwise()</code> work in this case, given that <code>.then()</code> should <em>not</em> even be evaluated in this case (since <code>.when(polars.col('k').any())</code> is <code>false</code>)?</li> <li>Is there a way to do this within one expression (without going &quot;outside&quot; of the expression, i.e. 
reaching for pure python <code>if</code>/<code>else</code>, using <code>pipe</code> and such)?</li> </ul>
<python><dataframe><python-polars>
2023-08-11 03:30:02
1
4,641
levant pied
76,880,332
1,903,852
dataframe concatenation retain first value for duplicates
<p>I have 2 dataframes (say <code>d1</code> and <code>d2</code>) both with columns <code>c1</code> and <code>c2</code>. I'd like to concatenate them. However, for all values that occur in column <code>c1</code> in both tables I'd like to retain only the row from <code>d1</code>.</p> <p>Example:</p> <pre><code>df1 = pd.DataFrame({&quot;Customer&quot;:[&quot;Alice&quot;, &quot;Bob&quot;, &quot;John&quot;], &quot;Status&quot;:[&quot;closed&quot;,&quot;in-progress&quot;,&quot;closed&quot;]}) df2 = pd.DataFrame({&quot;Customer&quot;:[&quot;Alice&quot;, &quot;Lara&quot;, &quot;Santa&quot;], &quot;Status&quot;:[&quot;in-progress&quot;,&quot;in-progress&quot;,&quot;closed&quot;]}) desired_result = pd.DataFrame({&quot;Customer&quot;:[&quot;Alice&quot;, &quot;Bob&quot;, &quot;John&quot;, &quot;Lara&quot;, &quot;Santa&quot;], &quot;Status&quot;:[&quot;closed&quot;,&quot;in-progress&quot;,&quot;closed&quot;,&quot;in-progress&quot;,&quot;closed&quot;]}) </code></pre> <p>d1:</p> <pre><code> Customer Status 0 Alice closed 1 Bob in-progress 2 John closed </code></pre> <p>d2:</p> <pre><code> Customer Status 0 Alice in-progress 1 Lara in-progress 2 Santa closed </code></pre> <p>desired_result:</p> <pre><code> Customer Status 0 Alice closed 1 Bob in-progress 2 John closed 3 Lara in-progress 4 Santa closed </code></pre> <p>Notice Customer Alice. She occurs in both d1.Customer and d2.Customer, so only the corresponding row from d1 needs to be retained. All other customers in d1 and d2 are unique so their corresponding rows end up in the final table. How can I accomplish this?</p>
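A sketch of one way to do this, reproducing the question's frames: concatenate with `df1` first, then drop duplicate `Customer` rows keeping the first occurrence, so that `df2`'s Alice row is the one discarded:

```python
import pandas as pd

df1 = pd.DataFrame({"Customer": ["Alice", "Bob", "John"],
                    "Status": ["closed", "in-progress", "closed"]})
df2 = pd.DataFrame({"Customer": ["Alice", "Lara", "Santa"],
                    "Status": ["in-progress", "in-progress", "closed"]})

# df1 rows come first in the concat, so keep="first" retains them
result = (pd.concat([df1, df2])
            .drop_duplicates(subset="Customer", keep="first")
            .reset_index(drop=True))
print(result)
```

Because `pd.concat` preserves input order, the result matches the desired output above, with Alice's status taken from `df1` ("closed").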
<python><python-3.x><pandas><dataframe><concatenation>
2023-08-11 02:20:39
1
2,431
Joris Kinable
76,880,298
11,974,969
Shell script β€œdvc pull” not working at Streamlit server
<p>In my Streamlit app.py file, I used the code <code>os.system(&quot;dvc pull&quot;)</code> to load a .csv data file (<code>labeled_projects.csv</code>) from my Google service account (Google Drive), and it has been working well since I deployed it a few months ago. The code itself is loaded from my GitHub account.</p> <p>But it appears that the code suddenly stopped working and I got the error message <code>FileNotFoundError: [Errno 2] No such file or directory: '/mount/src/mlops/data/labeled_projects.csv'</code>.</p> <p>The Streamlit server provides no error message regarding the execution of <code>os.system(&quot;dvc pull&quot;)</code>.</p> <p>Attempting to replace <code>os.system(&quot;dvc pull&quot;)</code> by using the <code>tempfile</code> package to create a .sh file and executing it using the <code>subprocess</code> package does not help. I got the same <code>FileNotFoundError</code> message with no error message about <code>dvc pull</code>.</p> <p>Also, executing the command <code>find . -name 'labeled_projects.csv'</code> on the Streamlit server returned no matches, which seems to indicate that the file is not downloaded.</p> <p>The code <code>dvc pull</code> in the Streamlit app.py file works fine if executed locally.</p> <p>Thanks for your help!</p>
<python><bash><streamlit><dvc>
2023-08-11 02:08:29
2
779
Tony Peng
76,880,224
959,788
Error using DocArrayInMemorySearch in Langchain: Could not import docarray python package
<p>Here is the full code. It runs perfectly fine on <a href="https://learn.deeplearning.ai/" rel="noreferrer">https://learn.deeplearning.ai/</a> notebook. But when I run it on my local machine, I get an error about</p> <blockquote> <p>ImportError: Could not import docarray python package</p> </blockquote> <p>I have tried reinstalling/force installing langchain and lanchain[docarray] (both pip and pip3). I use mini conda virtual environment. python version 3.11.4</p> <pre><code>from langchain.vectorstores import DocArrayInMemorySearch from langchain.schema import Document from langchain.indexes import VectorstoreIndexCreator import openai import os os.environ['OPENAI_API_KEY'] = &quot;xxxxxx&quot; #not needed in DLAI docs = [ Document( page_content=&quot;&quot;&quot;[{&quot;API_Name&quot;:&quot;get_invoice_transactions&quot;,&quot;API_Description&quot;:&quot;This API when called will provide the list of transactions&quot;,&quot;API_Inputs&quot;:[],&quot;API_Outputs&quot;:[]}]&quot;&quot;&quot; ), Document( page_content=&quot;&quot;&quot;[{&quot;API_Name&quot;:&quot;get_invoice_summary_year&quot;,&quot;API_Description&quot;:&quot;this api summarizes the invoices by vendor, product and year&quot;,&quot;API_Inputs&quot;:[{&quot;API_Input&quot;:&quot;Year&quot;,&quot;API_Input_Type&quot;:&quot;Text&quot;}],&quot;API_Outputs&quot;:[{&quot;API_Output&quot;:&quot;Purchase Volume&quot;,&quot;API_Output_Type&quot;:&quot;Float&quot;},{&quot;API_Output&quot;:&quot;Vendor Name&quot;,&quot;API_Output_Type&quot;:&quot;Text&quot;},{&quot;API_Output&quot;:&quot;Year&quot;,&quot;API_Output_Type&quot;:&quot;Text&quot;},{&quot;API_Output&quot;:&quot;Item&quot;,&quot;API_Output_Type&quot;:&quot;Text&quot;}]}]&quot;&quot;&quot; ), Document( page_content=&quot;&quot;&quot;[{&quot;API_Name&quot;:&quot;loan_payment&quot;,&quot;API_Description&quot;:&quot;This API calculates the monthly payment for a 
loan&quot;,&quot;API_Inputs&quot;:[{&quot;API_Input&quot;:&quot;Loan_Amount&quot;,&quot;API_Input_Type&quot;:&quot;Float&quot;},{&quot;API_Input&quot;:&quot;Interest_Rate&quot;,&quot;API_Input_Type&quot;:&quot;Float&quot;},{&quot;API_Input&quot;:&quot;Loan_Term&quot;,&quot;API_Input_Type&quot;:&quot;Integer&quot;}],&quot;API_Outputs&quot;:[{&quot;API_Output&quot;:&quot;Monthly_Payment&quot;,&quot;API_Output_Type&quot;:&quot;Float&quot;},{&quot;API_Output&quot;:&quot;Total_Interest&quot;,&quot;API_Output_Type&quot;:&quot;Float&quot;}]}]&quot;&quot;&quot; ), Document( page_content=&quot;&quot;&quot;[{&quot;API_Name&quot;:&quot;image_processing&quot;,&quot;API_Description&quot;:&quot;This API processes an image and applies specified filters&quot;,&quot;API_Inputs&quot;:[{&quot;API_Input&quot;:&quot;Image_URL&quot;,&quot;API_Input_Type&quot;:&quot;URL&quot;},{&quot;API_Input&quot;:&quot;Filters&quot;,&quot;API_Input_Type&quot;:&quot;List&quot;}],&quot;API_Outputs&quot;:[{&quot;API_Output&quot;:&quot;Processed_Image_URL&quot;,&quot;API_Output_Type&quot;:&quot;URL&quot;}]}]&quot;&quot;&quot; ), Document( page_content=&quot;&quot;&quot;[{&quot;API_Name&quot;:&quot;movies_catalog&quot;,&quot;API_Description&quot;:&quot;This API provides a catalog of movies based on user preferences&quot;,&quot;API_Inputs&quot;:[{&quot;API_Input&quot;:&quot;Genre&quot;,&quot;API_Input_Type&quot;:&quot;Text&quot;},{&quot;API_Input&quot;:&quot;Release_Year&quot;,&quot;API_Input_Type&quot;:&quot;Integer&quot;}],&quot;API_Outputs&quot;:[{&quot;API_Output&quot;:&quot;Movie_Title&quot;,&quot;API_Output_Type&quot;:&quot;Text&quot;},{&quot;API_Output&quot;:&quot;Genre&quot;,&quot;API_Output_Type&quot;:&quot;Text&quot;},{&quot;API_Output&quot;:&quot;Release_Year&quot;,&quot;API_Output_Type&quot;:&quot;Integer&quot;},{&quot;API_Output&quot;:&quot;Rating&quot;,&quot;API_Output_Type&quot;:&quot;Float&quot;}]}]&quot;&quot;&quot; ), # Add more documents here ] index = VectorstoreIndexCreator( 
vectorstore_cls=DocArrayInMemorySearch ).from_documents(docs) api_desc = &quot;do analytics about movies&quot; query = f&quot;Search for related APIs based on following API Description: {api_desc}\ Return list of API page_contents as JSON objects.&quot; print(index.query(query)) </code></pre> <p>Here is the error:</p> <pre><code>(streamlit) C02Z8202LVDQ:sage_response praneeth.gadam$ /Users/praneeth.gadam/opt/miniconda3/envs/streamlit/bin/python /Users/praneeth.gadam/sage_response/docsearch_copy.py Traceback (most recent call last): File &quot;/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/base.py&quot;, line 19, in _check_docarray_import import docarray ModuleNotFoundError: No module named 'docarray' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/Users/praneeth.gadam/sage_response/docsearch_copy.py&quot;, line 30, in &lt;module&gt; ).from_documents(docs) ^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/indexes/vectorstore.py&quot;, line 88, in from_documents vectorstore = self.vectorstore_cls.from_documents( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/base.py&quot;, line 420, in from_documents return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/in_memory.py&quot;, line 67, in from_texts store = cls.from_params(embedding, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/in_memory.py&quot;, line 38, in from_params 
_check_docarray_import() File &quot;/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/base.py&quot;, line 29, in _check_docarray_import raise ImportError( ImportError: Could not import docarray python package. Please install it with `pip install &quot;langchain[docarray]&quot;`. </code></pre>
<python><openai-api><python-packaging><langchain>
2023-08-11 01:41:08
5
3,044
Gadam
76,880,185
6,657,314
cannot static file through static url using django.test.client
<h1>1.env</h1> <ul> <li>python : 3.8.14</li> <li>django version : '4.2.4'</li> </ul> <h1>2.Purpose</h1> <ul> <li>Make sure that the static file is saved</li> <li>Make sure that the static file can be accessed from the web browser</li> </ul> <h1>3.Issue</h1> <p>The problem is that after running the <code>django server</code>, the <code>static file</code> can be accessed through its <code>url</code>, but not in the <code>unittest environment</code>.</p> <p>For example, suppose a file is stored in the following path</p> <pre><code>'MAPPER #&lt;- project dir |-datasets #&lt;-app name |-static |-datasets |-images |-Recursion Cellular Image Classification.jpeg </code></pre> <h2>3.1 Access the browser after running the server</h2> <ul> <li>I enter the following path into the browser and confirm that it works well. http://localhost:8000/static/datasets/static/datasets/images/Recursion Cellular Image Classification.jpeg</li> </ul> <h2>3.2 unittest</h2> <ul> <li>I get a 404 status code and cannot get the image file</li> </ul> <pre class="lang-py prettyprint-override"><code>from django.test import Client client = Client() res_redirect = self.client.get(res.url) </code></pre> <h1>4. try</h1> <p>I tried using <code>requests.get</code>, but it didn't work: <code>requests.get(http://localhost:8000/static/datasets/static/datasets/images/Recursion Cellular Image Classification.jpeg)</code></p> <h1>5. Question</h1> <p>So this is my question: how can I access the file through its URL in the <code>unittest</code> environment?</p>
<python><django><python-unittest><django-unittest>
2023-08-11 01:23:38
1
625
Soulduck
76,879,889
213,525
Conda package not found / How to install conda packages on Apple M1/M2 chips which don't have ARM builds?
<p>Let's say I want to install <code>pybox2d</code> (but this applies to other packages as well), and I can see on the <a href="https://anaconda.org/conda-forge/pybox2d" rel="nofollow noreferrer">Anaconda website</a> that this package obviously exists, but it cannot be found when trying to install it on my new Macbook (one of the ones with the new Apple M1 or M2 CPUs). What should I do?</p> <pre><code>conda search pybox2d -c conda-forge Loading channels: done No match found for: pybox2d. Search: *pybox2d* PackagesNotFoundError: The following packages are not available from current channels: - pybox2d Current channels: - https://conda.anaconda.org/conda-forge/osx-arm64 - https://conda.anaconda.org/conda-forge/noarch - https://repo.anaconda.com/pkgs/main/osx-arm64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/osx-arm64 - https://repo.anaconda.com/pkgs/r/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. </code></pre> <p>Note: There are other Stack Overflow questions which relate to this but don't quite answer the question as I have asked it here, so I wanted to make it more direct.</p> <ul> <li><a href="https://stackoverflow.com/q/65415996">How to specify the architecture or platform for a new conda environment? (Apple Silicon)</a></li> <li><a href="https://stackoverflow.com/q/71515117">How to set up a conda osx-64 environment on ARM mac?</a></li> <li><a href="https://stackoverflow.com/q/70205633">Cannot install Python 3.7 on osx-arm64</a></li> </ul>
<python><anaconda><conda><apple-m1>
2023-08-10 23:28:34
1
19,835
Neil Traft
76,879,872
1,601,580
How to use huggingface HF trainer train with custom collate function?
<p>I have a custom data set with custom table entries and wanted to handle it with a custom collate function. But it didn't work when I passed a collate function I wrote (one that DOES work on an individual dataloader, e.g., see <a href="https://stackoverflow.com/questions/76872115/how-does-one-create-a-pytorch-data-loader-with-a-custom-hugging-face-data-set-wi">How does one create a pytorch data loader with a custom hugging face data set without having errors?</a> or <a href="https://stackoverflow.com/questions/76878387/how-does-one-create-a-pytoch-data-loader-using-an-interleaved-hugging-face-datas?noredirect=1&amp;lq=1">How does one create a pytoch data loader using an interleaved hugging face dataset?</a>). It just doesn't work with the HF trainer.</p> <p>Code</p> <pre><code>from pathlib import Path # token = open(Path('~/data/hf_token.txt').expanduser()).read().strip() token = None batch_size = 8 # -- AF now from datasets import load_dataset import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained(&quot;gpt2&quot;) if tokenizer.pad_token_id is None: tokenizer.pad_token = tokenizer.eos_token model = GPT2LMHeadModel.from_pretrained(&quot;gpt2&quot;) device = torch.device(f&quot;cuda:{0}&quot; if torch.cuda.is_available() else &quot;cpu&quot;) model = model.to(device) # -- Get batch from dataset from datasets import load_dataset # path, name = 'brando/debug1_af', 'debug1_af' path, name = 'brando/debug0_af', 'debug0_af' # train_dataset = load_dataset(path, name, streaming=True, split=&quot;train&quot;, token=token).with_format(type=&quot;torch&quot;) # eval_dataset = load_dataset(path, name, streaming=True, split=&quot;test&quot;, token=token).with_format(type=&quot;torch&quot;) # batch = dataset.take(1) # column_names = next(iterbatch).keys() # print(f'{column_names=}') # -- Compute max steps (I think we should try to do this for real experiments such that the number of tokens is the same in all training runs for fair
experiments, todo: ask Sudharsan or online, for now just make streaming=False) train_dataset = load_dataset(path, name, streaming=False, split=&quot;train&quot;, token=token).with_format(type=&quot;torch&quot;) # hack to get dataset size eval_dataset = load_dataset(path, name, streaming=False, split=&quot;test&quot;, token=token).with_format(type=&quot;torch&quot;) # hack to get dataset size print(f'{len(train_dataset)=}') print(f'{len(eval_dataset)=}') per_device_train_batch_size = batch_size num_epochs = 1 max_steps = (len(train_dataset) // per_device_train_batch_size) * num_epochs print(f'{max_steps=}') # -- Get trainer def collate_tokenize(data): text_batch = [f'informal statement {example[&quot;generated informal statement&quot;]} formal statement {example[&quot;formal statement&quot;]}' for example in data] tokenized = tokenizer(text_batch, padding='longest', max_length=128, truncation=True, return_tensors='pt') return tokenized from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir=Path('./results').expanduser(), # output directory max_steps=max_steps, # max_steps per_device_train_batch_size=per_device_train_batch_size, # batch size per device during training per_device_eval_batch_size=batch_size, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir=Path('./logs').expanduser(), # directory for storing logs logging_steps=10, report_to='none', ) trainer = Trainer( model=model, # the instantiated πŸ€— Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset, # evaluation dataset data_collator = collate_tokenize, ) trainer.train() print('Done!\a') </code></pre> <p>error:</p> <pre><code>len(train_dataset)=14 len(eval_dataset)=13 max_steps=1 /usr/local/lib/python3.10/dist-packages/transformers/optimization.py:411: 
FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( --------------------------------------------------------------------------- IndexError Traceback (most recent call last) &lt;ipython-input-2-4403554fc52d&gt; in &lt;cell line: 63&gt;() 61 data_collator = collate_tokenize, 62 ) ---&gt; 63 trainer.train() 64 print('Done!\a') 11 frames /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py in _check_valid_index_key(key, size) 524 if isinstance(key, int): 525 if (key &lt; 0 and key + size &lt; 0) or (key &gt;= size): --&gt; 526 raise IndexError(f&quot;Invalid key: {key} is out of bounds for size {size}&quot;) 527 return 528 elif isinstance(key, slice): IndexError: Invalid key: 12 is out of bounds for size 0 </code></pre> <p>why? How to fix?</p> <ul> <li>colab: <a href="https://colab.research.google.com/drive/1io951Ex17-6OUaogCo7OiR-eXga_oUOH?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1io951Ex17-6OUaogCo7OiR-eXga_oUOH?usp=sharing</a></li> <li>hf discuss: <a href="https://discuss.huggingface.co/t/how-to-use-huggingface-hf-trainer-train-with-custom-collate-function/50347" rel="nofollow noreferrer">https://discuss.huggingface.co/t/how-to-use-huggingface-hf-trainer-train-with-custom-collate-function/50347</a></li> </ul>
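A likely cause of the <code>Invalid key: 12 is out of bounds for size 0</code> error is that <code>Trainer</code> defaults to <code>remove_unused_columns=True</code>: columns whose names don't match the model's <code>forward</code> signature (here, the raw text columns the collator needs) get dropped before the collator ever runs, leaving an effectively empty dataset. The usual fix is passing <code>remove_unused_columns=False</code> in <code>TrainingArguments</code>. The collator's string-building logic itself can be sanity-checked in isolation; in the sketch below, <code>DummyTokenizer</code> is a stand-in for illustration only, not part of <code>transformers</code>:

```python
# Sketch: verify the collate function's string-building logic in isolation.
# DummyTokenizer is a hypothetical stand-in for GPT2Tokenizer.

def collate_tokenize(data, tokenizer):
    text_batch = [
        f'informal statement {ex["generated informal statement"]} '
        f'formal statement {ex["formal statement"]}'
        for ex in data
    ]
    return tokenizer(text_batch)

class DummyTokenizer:
    def __call__(self, texts):
        # A real tokenizer would return padded token-id tensors; we echo strings.
        return {"input_texts": texts}

examples = [
    {"generated informal statement": "1+1=2", "formal statement": "theorem t : 1+1=2"},
]
batch = collate_tokenize(examples, DummyTokenizer())
print(batch["input_texts"][0])
```

With the collator verified, the one-line change to try in the original script (assuming a recent <code>transformers</code> version) is <code>TrainingArguments(..., remove_unused_columns=False)</code>, so the batch dicts reach <code>collate_tokenize</code> with their original keys intact.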
<python><huggingface-transformers><huggingface><huggingface-datasets><huggingface-trainer>
2023-08-10 23:22:56
2
6,126
Charlie Parker
76,879,834
11,036,109
Find a variation of an image inside another image
<p>I'm working with Python and cv2 to try and build a script to find an image inside another image. This is a simplified version of the script I'm working on.</p> <p>It works fine if the EXACT image used as reference is inside the other image I'm searching inside of.</p> <pre><code>import cv2 def checkimages(img, template): result = cv2.matchTemplate(img, template, cv2.TM_SQDIFF) min_val = cv2.minMaxLoc(result)[0] thr = 10000 return min_val &lt;= thr template = cv2.imread('logo3.png') images = ['withlogo.png','withlogo2.png', 'nologo.png'] for image in images: print('-------------------------------------') if checkimages(cv2.imread(image), template): print('{}: {}'.format(image, 'Logo found.')) else: print('{}: {}'.format(image, 'No Logo.')) </code></pre> <p>I'm using TM_SQDIFF of the matchTemplate function to try and get an approximate value and see how likely it is the template image is inside of the other.</p> <p>As I said, it works fine if the EXACT template image is inside of the other image. But as soon as the template image looks a bit different inside of the other image, it may as well not exist, as the result value of the matchTemplate function jumps from 0.0 to 304025248.0. And even more, if I try to find the template in another image that doesn't contain any traces of the template, I also get a massive number, but curiously it's a lot lower than the previous one that does have a variation of it (104060152.0 for the nologo.png image).</p> <p>Now, granted, this is the first time I work with cv2, and I may be asking too much of it to find the P logo in other images, without any preprocessing.
Do you guys have any recommendations of what I could add or change to make this a bit better?</p> <p>I realize this won't be foolproof for all images, but I need to get at least an approximate value when the template is inside another image.</p> <p><strong>------- EDIT 1 - Aug 14 2023</strong></p> <p>I just made a test script with what @fmw42 commented.</p> <p>This is the code. Sorry, it's kind of dirty and unoptimized, but what I'm trying to do, using @fmw42's advice, is to mask the template and search for it in the main picture. If it's not found I shrink the main pic and check if it can be found in the smaller picture.</p> <pre><code>import cv2 import numpy as np confidence_threshold = 0.92 subimage_found = False def image_resize(image, width = None, height = None, inter = cv2.INTER_AREA): dim = None if width is None and height is None: return image if width is None: r = height / float(h) dim = (int(w * r), int(height)) else: r = width / float(w) dim = (int(width), int(h * r)) print(f'resize dim: {dim}') resized = cv2.resize(image, dim, interpolation = inter) return resized # read image img = cv2.imread('withlogo2.png') h, w = img.shape[:2] # read template with alpha channel template_with_alpha = cv2.imread('logochico.png', cv2.IMREAD_UNCHANGED) hh, ww = template_with_alpha.shape[:2] # extract base template image and alpha channel and make alpha 3 channels template = template_with_alpha[:,:,0:3] alpha = template_with_alpha[:,:,2] alpha = cv2.merge([alpha,alpha,alpha]) # do masked template matching and save correlation image correlation = cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED, mask=alpha) # get best match min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(correlation) max_val_corr = '{:.6f}'.format(max_val) print(&quot;correlation score: &quot; + max_val_corr) print(&quot;match location:&quot;, max_loc) max_val_corr = float(max_val_corr) min_size_reached = False while not subimage_found and not min_size_reached: if max_val_corr &gt; confidence_threshold:
print(&quot;Subimage found&quot;) subimage_found = True else: print(&quot;Subimage not found. Resizing...&quot;) for i_ratio in np.arange(0.95, 0.5, -0.05): new_height = h*i_ratio print(f&quot;Original height: {h}&quot;) print(f&quot;New height: {new_height}&quot;) resized_image = image_resize(img, height = new_height) cv2.imshow('resized_image'+str(i_ratio),resized_image) cv2.waitKey(0) correlation = cv2.matchTemplate(resized_image, template, cv2.TM_CCORR_NORMED, mask=alpha) min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(correlation) max_val_corr = '{:.6f}'.format(max_val) max_val_corr = float(max_val_corr) print(&quot;--------------&quot;) print(f'Ratio: {i_ratio}') print(f&quot;correlation score: {max_val_corr}&quot;) print(f&quot;match location: {max_loc}&quot;) if max_val_corr &gt; confidence_threshold: print(&quot;Subimage found&quot;) subimage_found = True break print(&quot;Reached min size, no subimage found.&quot;) min_size_reached = True # draw match result = img.copy() if subimage_found: cv2.rectangle(result, (max_loc), ( max_loc[0]+ww, max_loc[1]+hh), (255,0,255), 1) cv2.imshow('template',template) cv2.imshow('alpha',alpha) cv2.imshow('result',result) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre> <p>This kinda works.</p> <p>But not well... the picture is &quot;successfully&quot; found in moto.png, and &quot;successfully&quot; not found in horizon.png... but it isn't found in withlogo2.png either. And the correlation score for horizon.png, even though the image is not found, is extremely high, very close to being a match. For that matter, the correlation score is VERY high with almost all pictures I've used to search for the logo. That's why the threshold is so high.</p> <p>I'm guessing I need to do further preprocessing. Any further advice would be greatly appreciated.</p> <p>Expected Result: the template should only be found when comparing to withlogo2.png. It should return a high correlation score there.
It shouldn't be found when comparing to horizon.png.</p> <p>Actual Result: the template isn't found in either withlogo2.png or horizon.png, as the threshold is set quite high. If I lower the threshold a bit, the template is found in both. For some reason the correlation score is higher when comparing to horizon.png than when comparing to withlogo2.png.</p> <p><strong>------- EDIT 3 Aug 17th</strong></p> <p>Sorry, this is the template and the mask that is created (I was having issues with the first template as it did not have a transparency channel):</p> <p>base <a href="https://i.sstatic.net/rAm9T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rAm9T.png" alt="enter image description here" /></a></p> <p>template <a href="https://i.sstatic.net/phe5I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/phe5I.png" alt="enter image description here" /></a></p> <p>alpha <a href="https://i.sstatic.net/yHYul.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yHYul.png" alt="enter image description here" /></a></p> <p>And these are the results (pink squares)...</p> <p><a href="https://i.sstatic.net/zuk3Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zuk3Z.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/YtHF2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YtHF2.jpg" alt="enter image description here" /></a> <a href="https://i.sstatic.net/VUuDw.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VUuDw.jpg" alt="enter image description here" /></a></p> <p>As you can see the template is &quot;found&quot; in all 3 images; for all 3 I get a correlation ratio of more than 0.94... but it's all pretty much nonsense... not sure how to correct this.</p>
<python><opencv>
2023-08-10 23:12:55
0
411
Alain
76,879,804
1,194,761
Replace white space in column names with underscores in the DuckDB Python client API
<p>I have a DuckDB table whose column names have white spaces, and I'd like to just specify a blanket rule that says &quot;for all columns with spaces, replace it with an underscore&quot;. I know how to do this by converting the table to a Polars DataFrame, but can I do it without the conversion, preferably using DuckDB's relational API?</p> <h4>Sample data</h4> <p><code>test.csv</code></p> <pre><code>&quot;id&quot;,&quot;company name&quot; 1,&quot;Walmart&quot; 2,&quot;Amazon&quot; 3,&quot;Apple&quot; </code></pre> <pre><code>df = duckdb.sql(&quot;SELECT * FROM test.csv&quot;) </code></pre> <p>This yields the <code>company name</code> column name with spaces.</p> <pre><code>β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id β”‚ company name β”‚ β”‚ int64 β”‚ varchar β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ 1 β”‚ Walmart β”‚ β”‚ 2 β”‚ Amazon β”‚ β”‚ 3 β”‚ Apple β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <h4>Polars works</h4> <pre class="lang-py prettyprint-override"><code>df = duckdb.sql(&quot;SELECT * FROM 'test.csv'&quot;).pl() # Replace spaces with underscores in column names using Polars df.columns = list(map(lambda x: x.replace(&quot; &quot;, &quot;_&quot;), df.columns)) df_new = duckdb.sql(&quot;SELECT * from df&quot;) </code></pre> <pre><code>β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id β”‚ company_name β”‚ β”‚ int64 β”‚ varchar β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ 1 β”‚ Walmart β”‚ β”‚ 2 β”‚ Amazon β”‚ β”‚ 3 β”‚ Apple β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <h4>What I want to do</h4> <p>I'm unable to find any examples on how to do this directly, <em>without conversion to polars</em> in DuckDB's Python client API. 
The <a href="https://duckdb.org/docs/sql/statements/alter_table.html" rel="nofollow noreferrer">docs</a> show how to use <code>ALTER TABLE</code> via raw SQL, but my goal is to do this via Python.</p>
<python><sql><duckdb>
2023-08-10 23:04:47
2
2,864
prrao
76,879,741
3,462,509
Solving a Modified Subset Sum - Search Algorithms
<p>I have a problem which is similar to a subset sum but with a few modifications:</p> <p><strong>Subset Sum:</strong> Find all ways the following list can sum to 16:</p> <pre><code>[2,9,40,2,404,12....] </code></pre> <p><strong>My Problem</strong>: Find all ways you can select <code>n</code> items (integer, discrete), each of which has 3 properties - <code>length</code>, <code>width</code>, and <code>height</code> - such that the combined total of each property is as close to a specific target as possible.</p> <hr> <p><strong>Example:</strong></p> <pre><code>items = [(92,10,15),(8,18,34),(29,50,110) ...] l,w,h = [150,200,180] n = 3 </code></pre> <p>We select 3 items such that the sum of the length, width, and height across all 3 items is as close as possible to 150, 200, and 180. This <code>error</code> can be formulated as a simple distance metric:</p> <pre><code>error = sum(abs(x-y) for x,y in zip(targets,actuals)) </code></pre> <p>This is a contrived example but the general problem I'm solving is the same, just with more variables and constraints.</p> <p>One obvious solution is just an exhaustive graph search of some sort, but I expect there are superior approaches, perhaps in the constraint programming domain.</p> <p><strong>Note</strong>: The solution should execute in a few seconds. Finding an approximate solution quickly with low error is more important than finding the global optimum. It doesn't matter if we undershoot or overshoot the targets, which is why the error uses absolute distance.</p> <p>What is a &quot;low error&quot;? I'm able to get within ~5% of the targets when selecting from 1M items, in &lt; 3 seconds. I would prefer to get that error down if possible, but 1%-5% is acceptable.</p> <p>Please take into consideration my prior attempts <a href="https://stackoverflow.com/a/76892037/3462509">outlined below</a> before recommending an approach.</p>
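Since an approximate answer within a time budget is acceptable, one useful baseline is a random-swap local search: start from a random selection of <code>n</code> items and repeatedly try replacing one selected item with a random candidate, keeping the swap only when the distance metric improves. This is a sketch of the idea, not a tuned solver; iteration counts and data sizes are illustrative:

```python
import random

def error(selection, targets):
    # Sum of |target - actual| over the length/width/height totals.
    actuals = [sum(item[i] for item in selection) for i in range(3)]
    return sum(abs(t - a) for t, a in zip(targets, actuals))

def greedy_swap(items, targets, n, iters=2000, seed=0):
    rng = random.Random(seed)
    best = rng.sample(items, n)
    best_err = error(best, targets)
    for _ in range(iters):
        i = rng.randrange(n)       # position to replace
        cand = rng.choice(items)   # random replacement candidate
        if cand in best:
            continue
        trial = best[:i] + [cand] + best[i + 1:]
        e = error(trial, targets)
        if e < best_err:           # keep only improving swaps
            best, best_err = trial, e
    return best, best_err

data_rng = random.Random(42)
items = [tuple(data_rng.randint(1, 120) for _ in range(3)) for _ in range(5000)]
selection, err = greedy_swap(items, targets=(150, 200, 180), n=3)
initial = error(random.Random(0).sample(items, 3), (150, 200, 180))
print(err, "<=", initial)
```

Random restarts (rerunning with different seeds and keeping the best result) usually tighten the error further for a few extra milliseconds, and the same neighborhood move plugs directly into simulated annealing if pure greed gets stuck in local minima.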
<python><algorithm><optimization><combinatorics><constraint-programming>
2023-08-10 22:44:52
2
2,792
Solaxun
76,879,698
2,846,923
Pytorch automatic mixed precision - cast whole code block to float32
<p>I have a complex model that I would like to train in mixed precision. To do this, I use the <a href="https://pytorch.org/docs/stable/amp.html" rel="nofollow noreferrer">torch.amp package</a>. I can enable AMP for the whole model using <code>with torch.cuda.amp.autocast(enabled=enable_amp, dtype=torch.float16):</code>. However, the model training is not stable, so I would like to force certain areas of the model to float32.</p> <p>Here's what I've tried or considered:</p> <p>There are two officially endorsed solutions I'm aware of: disable AMP for a block and cast all input tensors at the start of the block, or use <code>custom_fwd</code> as described in <a href="https://stackoverflow.com/a/73529099/2846923">this answer</a>. However, these both have issues. The first requires manually casting each input tensor to float32. The second requires adding the <code>custom_fwd</code> decorator to a forward function, so I need to either add that to each module individually or make a new container module that holds the other modules. Neither of those solutions works well for me since I want to test enabling and disabling float16 for many different parts of my model, so I would need to be constantly adding and removing code to cast dozens of tensors and/or modules.</p> <p>What I want is the ability to cast to float32 for an entire block of code like <code>with [cast everything to float32]:</code>, but I don't know a way to do that reliably. <code>with torch.cuda.amp.autocast(enabled=False):</code> doesn't cast float16 tensors to float32, it only disables casting float32 tensors to float16. <code>with torch.cuda.amp.autocast(dtype=torch.float32):</code> appears to maybe work, but that's not officially documented usage, and based on the docs I'm not confident that it will reliably work in the future. 
This model will continue to be used/updated for years by a team of people, so I don't want to risk it breaking in the future if a pytorch update changes undocumented functionality.</p> <p>Does anyone know of a way to reliably cast everything in a whole block of code to float32?</p>
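One pattern that keeps call sites small is a tiny helper context manager wrapping only the officially documented pieces - <code>autocast(enabled=False)</code> plus explicit input casting - so blocks can be toggled without decorating modules. Whether plain <code>autocast(enabled=False)</code> leaves incoming float16 tensors alone is exactly the concern raised above, so the helper also hands back a cast function for the block's inputs. This is a sketch, not an endorsed API; it uses CPU autocast with bfloat16 so it runs anywhere, and on GPU you would pass <code>"cuda"</code>:

```python
import contextlib

import torch

@contextlib.contextmanager
def force_float32(device_type="cpu"):
    """Disable autocast for a block and provide a caster for its inputs.

    Relies only on documented behavior: autocast(enabled=False) stops new
    downcasts, and the returned function upcasts floating tensors that were
    already produced in half precision.
    """
    def cast(*tensors):
        out = tuple(t.float() if t.is_floating_point() else t for t in tensors)
        return out if len(out) > 1 else out[0]
    with torch.autocast(device_type=device_type, enabled=False):
        yield cast

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    a = torch.randn(4, 4)
    b = torch.randn(4, 4)
    low = a @ b                   # runs in bfloat16 under autocast
    with force_float32("cpu") as cast:
        a32, b32 = cast(low, b)   # upcast inputs produced in the amp region
        high = a32 @ b32          # runs in float32
print(low.dtype, high.dtype)
```

This keeps each protected region to two lines while using only the documented <code>enabled=False</code> semantics, sidestepping the undocumented <code>autocast(dtype=torch.float32)</code> form.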
<python><pytorch><casting><automatic-mixed-precision>
2023-08-10 22:34:48
0
11,162
The Guy with The Hat
76,879,593
7,452,220
Pycharm 2023.2 (CE) does not recognize packages installed with pip install -e when using a python virtualenv
<ul> <li>In a Linux Ubuntu shell</li> <li>In a python 3.9 <strong>virtualenv</strong> installed &quot;my_package&quot; with <code>pip install -e &lt;path_to_my_package&gt;</code></li> <li>In a <strong>module</strong> in the pycharm editor: <code>import &lt;my_package&gt;</code></li> <li><strong>PyCharm</strong> throws an error for &lt;my_package&gt; <code>reference not found</code></li> </ul> <p>The same installation on PyCharm worked perfectly for months before it suddenly started throwing the error.</p> <p>All these attempts <strong>failed</strong>.</p> <ul> <li>Restarting PyCharm, invalidating its caches, repairing the IDE</li> <li>Delete and reload the package with <code>pip uninstall &lt;my_package&gt;</code> and <code>pip install -e &lt;my_package&gt;</code></li> <li>Delete and reinstall it in the PyCharm Interpreter settings.</li> </ul> <p>This <strong>worked</strong> (but the package was not editable of course, so <strong>useless</strong> in this case):</p> <ul> <li>re-install the package in <strong>non</strong>-editable mode: <code>pip install &lt;my_package&gt;</code></li> <li>No PyCharm error.</li> </ul>
<python><pycharm><virtualenv>
2023-08-10 22:07:54
1
301
Gerard G
76,879,477
238,074
google-cloud-platform python client library - how do I get project information, specifically the integer project identifier
<p>Apparently for Secret Manager secrets, you must have a reference to the secret that includes an integer for your project instead of the project name. A path like: <code>/projects/&lt;proj_number&gt;/secrets/&lt;secret_name&gt;</code>, where the project number, in my case, is a 12-digit integer. How do I locate this project number via the Python Client Library?</p> <p>If possible, also sharing how I could find this info in the &quot;documentation&quot; would be wonderful, as I find the documentation I have to be severely lacking.</p>
<python><google-cloud-platform>
2023-08-10 21:43:32
1
2,922
Kevin Buchs
76,879,421
1,459,607
How can I disable expanding on a QTreeView for one row?
<p>I want to make it so that one row does not have the expanding caret, but can still be selected.</p> <p>I tried this:</p> <pre><code>import sys from PySide2.QtWidgets import QApplication, QMainWindow, QTreeView, QVBoxLayout, QWidget, QAbstractItemView, QSizePolicy, QStyledItemDelegate from PySide2.QtCore import Qt, QModelIndex, QAbstractItemModel class CustomItemDelegate(QStyledItemDelegate): def sizeHint(self, option, index): size = super().sizeHint(option, index) if index.row() == self.model.max_display_rows: size.setWidth(0) # Hide the expanding icon return size class CustomItemModel(QAbstractItemModel): def __init__(self, data, parent=None): super().__init__(parent) self.data = data self.max_display_rows = 5 # ... (rest of the model methods) class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle(&quot;Custom Item Model Example&quot;) self.setGeometry(100, 100, 400, 300) data = [&quot;Item {}&quot;.format(i) for i in range(1, 11)] self.model = CustomItemModel(data) self.treeView = QTreeView() self.treeView.setModel(self.model) self.treeView.setSelectionBehavior(QAbstractItemView.SelectRows) self.treeView.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding) delegate = CustomItemDelegate(self.treeView) self.treeView.setItemDelegate(delegate) layout = QVBoxLayout() layout.addWidget(self.treeView) central_widget = QWidget() central_widget.setLayout(layout) self.setCentralWidget(central_widget) def main(): app = QApplication(sys.argv) window = MainWindow() window.show() sys.exit(app.exec_()) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>It doesn't work.</p>
<python><pyside2>
2023-08-10 21:31:42
1
1,386
Ryan Glenn
76,879,007
454,671
Install the correct onnxruntime for chromadb with pip install
<p>I am trying to install <code>chromadb</code> on my Jupyter notebook (Anaconda) using:</p> <pre><code>pip install chromadb </code></pre> <p>I get error:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement onnxruntime&gt;=1.14.1 (from chromadb) (from versions: 1.2.0, 1.3.0, 1.4.0, 1.5.1, 1.5.2, 1.6.0, 1.7.0, 1.8.0, 1.8.1, 1.9.0, 1.10.0, 1.11.0, 1.11.1) ERROR: No matching distribution found for onnxruntime&gt;=1.14.1 (from chromadb) </code></pre> <p>Ok, so I run:</p> <pre><code>pip install onnxruntime </code></pre> <p>And it installed <code>onnxruntime</code> 1.11.1 But <code>chromadb</code> requires &gt;1=1.14.1</p> <p>I am assuming that the highest <code>onnxruntime</code> compatible with my OS (mac) is 1.11.1. Is there a way around it?</p>
<python><pip><openai-api><vector-database><chromadb>
2023-08-10 20:08:28
3
17,167
Victor
76,878,988
17,877,528
How to deploy a Python application that needs the JVM to Google Cloud
<p>It's my first time deploying an application with Python and Flask, so I'm not sure how to do it properly. I watched <a href="https://www.youtube.com/watch?v=3fsIcMgUOY8&amp;t=396s" rel="nofollow noreferrer">this video</a> and followed the exact steps, but I'm getting the below error. The application is pretty simple, but I'm using a package that uses the <code>JVM</code>. Locally it works fine, but once I deployed, I'm getting this error</p> <blockquote> <p>jpype._jvmfinder.JVMNotFoundException: No JVM shared library file (libjvm.so) found. Try setting up the JAVA_HOME environment variable properly.</p> </blockquote> <p>This route works fine</p> <pre><code>@app.route(&quot;/&quot;) def home(): return &quot;Hello, World!&quot; </code></pre> <p>But the route that uses that package throws the error.</p> <p>This is my <code>requirements.txt</code></p> <pre><code>Flask==2.2.2 autopep8==2.0.1 gunicorn==21.2.0 JPype1==1.4.1 konlpy==0.6.0 lxml==4.9.2 numpy==1.24.3 packaging==23.1 pycodestyle==2.10.0 </code></pre> <p>And at my <code>app.yaml</code>, I have only this</p> <pre><code>runtime: python38 </code></pre> <p>How should I deploy it on Google Cloud?</p> <p>Thanks in advance</p>
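The App Engine standard <code>python38</code> runtime does not ship a JVM and offers no way to install system packages, so JPype cannot find <code>libjvm.so</code> there. One common workaround is a custom container runtime (App Engine flexible, or Cloud Run) whose image installs a JRE and sets <code>JAVA_HOME</code>. The fragments below are a sketch; the JRE package name and the <code>JAVA_HOME</code> path are assumptions for a Debian-based image and should be verified inside the built container (e.g., with <code>ls /usr/lib/jvm</code>):

```yaml
# app.yaml - switch from the standard runtime to a custom flexible runtime
runtime: custom
env: flex
```

```dockerfile
# Dockerfile - Python base image plus a JRE for JPype/konlpy
FROM python:3.8-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends default-jre \
    && rm -rf /var/lib/apt/lists/*
# Assumed path of the default-java symlink on Debian; verify in your image.
ENV JAVA_HOME=/usr/lib/jvm/default-java
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD exec gunicorn -b :$PORT main:app
```

The <code>main:app</code> entrypoint assumes the Flask app object is named <code>app</code> in <code>main.py</code>; adjust to match the project layout.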
<python><flask><google-cloud-platform><jvm>
2023-08-10 20:05:13
0
774
JosΓ© Carlos
76,878,936
12,027,858
How to read Picklist values via Salesforce Metadata API & simple-salesforce
<p>I am using simple-salesforce to read metadata from some CustomObjects: <code>metadata = _sf.mdapi.CustomObject.read(sf_object_name)</code></p> <p>This generally works, EXCEPT for a number of Picklist fields: the options return as a named ValueSet instead of a list of options (see below).</p> <p><strong>How can I get the actual Picklist options? Do I need to call a different API method with the ValueSetName (ex. <code>Student_Challenge_Experience_Homesick</code>)?</strong></p> <pre><code> { 'fullName': 'Experiencing_Homesickness__c', ... ... 'type': 'Picklist', 'unique': None, 'valueSet': { 'controllingField': None, 'restricted': True, 'valueSetDefinition': None, 'valueSetName': 'Student_Challenge_Experience_Homesick', 'valueSettings': [ ] }, 'visibleLines': None, 'writeRequiresMasterRead': None }, </code></pre> <p>For completeness' sake: some fields DO return a list of picklist values with their label &amp; value:</p> <pre><code> 'valueSet': { 'controllingField': None, 'restricted': None, 'valueSetDefinition': { 'sorted': False, 'value': [ { 'fullName': '3', 'color': None, 'default': False, 'description': None, 'isActive': None, 'label': '3' }, { 'fullName': '4', 'color': None, 'default': False, 'description': None, 'isActive': None, 'label': '4' }, { 'fullName': '5+', 'color': None, 'default': False, 'description': None, 'isActive': None, 'label': '5+' } ] }, 'valueSetName': None, 'valueSettings': [ ] }, 'visibleLines': None, 'writeRequiresMasterRead': None }, </code></pre>
<python><salesforce><metadata><simple-salesforce><sfdc-metadata-api>
2023-08-10 19:55:19
1
600
ezeYaniv
76,878,713
6,461,882
tkinter extension was not compiled and GUI subsystem has been detected. Missing the Tk toolkit?
<p>I am trying to install (through <a href="https://github.com/pyenv/pyenv" rel="noreferrer">pyenv</a>) Python-3.11.4 under CentOS-7. It installs but without GUI. I get the following error message:</p> <pre class="lang-none prettyprint-override"><code>Installing Python-3.11.4... Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/.../pyenv/versions/3.11.4/lib/python3.11/tkinter/__init__.py&quot;, line 38, in &lt;module&gt; import _tkinter # If this fails your Python may not be configured for Tk ^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named '_tkinter' WARNING: The Python tkinter extension was not compiled and GUI subsystem has been detected. Missing the Tk toolkit? Installed Python-3.11.4 to /.../pyenv/versions/3.11.4 </code></pre> <p>While Python-3.9.16 installs successfully on the same machine. According to <a href="https://docs.python.org/3/whatsnew/3.11.html#build-changes" rel="noreferrer">Python 3.11 Build Changes</a>, the requirement is to have <em>&quot;Tcl/Tk version 8.5.12 or newer&quot;</em> installed. I have</p> <pre class="lang-none prettyprint-override"><code>$ rpm -q tk tk-devel tcl tcl-devel tk-8.5.13-6.el7.x86_64 tk-devel-8.5.13-6.el7.x86_64 tcl-8.5.13-8.el7.x86_64 tcl-devel-8.5.13-8.el7.x86_64 </code></pre> <p>The same page says <em>&quot;Tcl/Tk, and uuid flags are detected by pkg-config (when available). tkinter now requires a pkg-config command to detect development settings for Tcl/Tk headers and libraries.&quot;</em>, which is also installed:</p> <pre class="lang-none prettyprint-override"><code>$ rpm -q pkgconfig pkgconfig-0.27.1-4.el7.x86_64 </code></pre> <p>Could you please help me to understand what might be the reason of the failure to install <code>_tkinter</code>?</p> <p>Thank you very much for your help!</p>
<python><tkinter><build><tk-toolkit>
2023-08-10 19:13:16
2
2,855
S.V
76,878,696
2,200,963
How can AWS Copilot's manifest.image.context be set to the project repository's root directory?
<p>How can the AWS Copilot image.context manifest file parameter be set to the project’s root directory? I’m translating a Docker Compose file to Copilot manifest files but I’ve been unable to get Copilot to honor the image.context parameter using either relative or absolute paths.</p> <p><strong>Approximate project repository structure:</strong></p> <pre><code>- project_root - src - api.py - copilot - api - manifest.yaml - requirements.txt - requirements - base.in - base.txt - development.in - development.txt - production.in - production.txt - dockerfiles - api - Dockerfile - start_agent.sh - start_api.sh - docker-compose.yaml </code></pre> <p><strong>Service Dockerfile</strong></p> <pre><code>FROM python:3.11 RUN mkdir /app WORKDIR /app COPY ./requirements/ requirements COPY ./requirements.txt . RUN pip install -r requirements.txt COPY . /app COPY ./dockerfiles/services/api/supervisord.conf /etc/supervisor/conf.d/supervisord.conf COPY --chmod=### ./dockerfiles/services/api/start_agent.sh /app/start_agent.sh COPY --chmod=### ./dockerfiles/services/api/start_api.sh /app/start_api.sh ENTRYPOINT [&quot;/usr/bin/supervisord&quot;, &quot;-c&quot;, &quot;/etc/supervisor/conf.d/supervisord.conf&quot;] EXPOSE 8080 </code></pre> <p><strong>Service's Copilot manifest.yaml File</strong></p> <pre class="lang-yaml prettyprint-override"><code>name: api type: Backend Service image: build: ./dockerfiles/services/api/Dockerfile context: . port: 8080 ... exec: true network: connect: true </code></pre> <p><strong>Contexts Tried</strong></p> <ul> <li>&quot;.&quot;</li> <li>&quot;../..&quot;</li> <li>&quot;../../..&quot;</li> <li>&quot;/path/to/project/repo&quot;</li> </ul> <p><strong>Copilot Command Used</strong></p> <pre class="lang-bash prettyprint-override"><code># from project_root copilot svc deploy --name api --env staging </code></pre> <p>For each image.context I get the same error that none of the file source locations of any of the Dockerfile COPY commands could be found. 
The only two workarounds I see are to restructure the project repository following the documentation examples or build the service images in Docker, push to ECR, then use the image.location parameter instead.</p> <p><strong>Copilot Error Message</strong></p> <pre><code>[+] Building 0.5s (14/14) FINISHED =&gt; [internal] load build definition from Dockerfile =&gt; =&gt; transferring dockerfile: 37B =&gt; [internal] load .dockerignore =&gt; =&gt; transferring context: 2B =&gt; [internal] load metadata for docker.io/library/python:3.11 =&gt; [internal] load build context =&gt; =&gt; transferring context: 2B =&gt; [ 1/10] FROM docker.io/library/python:3.11 ... =&gt; ERROR [10/10] COPY requirements.txt . ------ &gt; [10/10] COPY requirements.txt .: ------ failed to compute cache key: &quot;/requirements.txt&quot; not found: not found </code></pre> <p>In addition to the different context values tried, I've also upgraded copilot from version 1.25.0 to 1.29.1 using the homebrew aws/tap/copilot-cli tap.</p>
<python><amazon-web-services><aws-copilot><aws-copilot-cli>
2023-08-10 19:09:34
2
764
datasmith
76,878,671
13,916,049
Use a list comprehension to concatenate each item in a list to a substring and store as a dictionary
<p>I want to use a list comprehension to concatenate the strings in <code>metadata.index.to_list()</code> to the <code>: set()</code> substring and store them as a dictionary. The length of the list is <strong>undefined</strong> and the following is just an example.</p> <p>My attempt:</p> <pre><code>uuids_to_zarr_files = dict([uuid + &quot;: set()&quot; for uuid in metadata.index.to_list()]) </code></pre> <p>Traceback:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [29], in &lt;cell line: 1&gt;() ----&gt; 1 uuids_to_zarr_files = dict([uuid + &quot;: set()&quot; for uuid in metadata.index.to_list()]) ValueError: dictionary update sequence element #0 has length 39; 2 is required </code></pre> <p>Input:</p> <p><code>metadata.index.to_list()</code></p> <pre><code>['07317dfade92994d6fbbe9faef1236f7', '91890e05edbd0a767092e10502b3ea99', '56b00177fa2bbd091a10dae8d57c9539'] </code></pre> <p>Expected output:</p> <pre><code>uuids_to_zarr_files = {'07317dfade92994d6fbbe9faef1236f7': set(), '91890e05edbd0a767092e10502b3ea99': set(), '56b00177fa2bbd091a10dae8d57c9539': set()} </code></pre>
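The traceback comes from handing <code>dict()</code> a list of plain 39-character strings (each 32-character uuid plus the 7 characters of <code>": set()"</code>) instead of key/value pairs. No string concatenation is needed: a dict comprehension builds the mapping directly.

```python
uuids = [
    "07317dfade92994d6fbbe9faef1236f7",
    "91890e05edbd0a767092e10502b3ea99",
    "56b00177fa2bbd091a10dae8d57c9539",
]  # stands in for metadata.index.to_list()

uuids_to_zarr_files = {uuid: set() for uuid in uuids}
print(uuids_to_zarr_files)

# Equivalent with dict() and 2-tuples, closest to the original attempt:
same = dict((uuid, set()) for uuid in uuids)
```

One caution: <code>dict.fromkeys(uuids, set())</code> looks equivalent but shares a single set object across all keys, so adding to one entry mutates every entry; the comprehension creates a fresh set per key.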
<python><dictionary><list-comprehension>
2023-08-10 19:04:52
2
1,545
Anon
76,878,564
11,937,086
Is there a way to multithread or batch REST API calls in Python?
<p>I've got a very long list of keys, and I am calling a REST API with each key to GET some metadata about it.</p> <p>The API can only accept one key at a time, but I wondered if there was a way I could batch or multi-thread the calls from my side?</p>
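One common pattern for this is `concurrent.futures.ThreadPoolExecutor`, which suits I/O-bound work like HTTP calls. This is only a sketch: `fetch_metadata`, its URL, and its return value are placeholders standing in for the real API call (e.g. a `requests.get`).

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_metadata(key):
    # Placeholder for the real call, e.g.:
    #   return requests.get(f"https://api.example.com/meta/{key}").json()
    return {"key": key, "length": len(key)}

def fetch_all(keys, max_workers=8):
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Submit one task per key; the pool runs up to max_workers at once.
        futures = {pool.submit(fetch_metadata, k): k for k in keys}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()  # re-raises worker errors
    return results

metadata = fetch_all(["alpha", "beta", "gamma"])
```

`max_workers` is the batching knob: raise it until the API starts rate-limiting, and consider catching exceptions around `fut.result()` so one failed key does not abort the whole run.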
<python><multithreading><rest><get>
2023-08-10 18:46:46
1
378
travelsandbooks
76,878,516
429,040
What is the proper, pythonic way to apply some properties of one nested dictionary to another?
<p>Say I have this nested dict:</p> <pre><code>[ { &quot;id&quot;: 1, &quot;name&quot;: &quot;Mammals&quot;, &quot;animals&quot;: [ { &quot;id&quot;: 1, &quot;name&quot;: &quot;Chinchilla&quot;, &quot;extinct&quot;: False }, { &quot;id&quot;: 2, &quot;name&quot;: &quot;Wooly Mammoth&quot;, &quot;extinct&quot;: True }, ... ] }, ... ] </code></pre> <p>But Oh No! The Chinchilla has gone extinct! Now I have another, smaller, dict of identical format which represents updates to the <code>extinct</code> value on one or more members:</p> <pre><code>[ { &quot;id&quot;: 1, &quot;name&quot;: &quot;Mammals&quot;, &quot;animals&quot;: [ { &quot;id&quot;: 1, &quot;name&quot;: &quot;Chinchilla&quot;, &quot;extinct&quot;: True }, ... ] }, ... ] </code></pre> <p>I could start nesting <code>for</code> loops, throw in an <code>if</code> or two to compare <code>id</code> values, and tackle this pretty easily - but that many layers of nesting just <em>feels</em> bad to do. Is there a better and/or more pythonic way to do this?</p>
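One way to avoid deeply nested loops is to index each level by `id` first, then apply patches with `dict.update`. A sketch under the assumption that every id in the update already exists in the original:

```python
animals_db = [
    {"id": 1, "name": "Mammals", "animals": [
        {"id": 1, "name": "Chinchilla", "extinct": False},
        {"id": 2, "name": "Wooly Mammoth", "extinct": True},
    ]},
]

updates = [
    {"id": 1, "name": "Mammals", "animals": [
        {"id": 1, "name": "Chinchilla", "extinct": True},
    ]},
]

def apply_updates(base, patches):
    # Build an id -> dict index per level so lookups replace inner loops.
    groups = {group["id"]: group for group in base}
    for group_patch in patches:
        animals = {a["id"]: a for a in groups[group_patch["id"]]["animals"]}
        for animal_patch in group_patch["animals"]:
            # update() merges the patch into the existing animal in place.
            animals[animal_patch["id"]].update(animal_patch)

apply_updates(animals_db, updates)
```

The indexing keeps the loop nesting shallow and makes each lookup O(1) instead of a scan.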
<python><dictionary>
2023-08-10 18:39:39
0
1,364
David Perry
76,878,461
629,186
Using Python Requests, how can I send a request using http only, NOT https
<p>Before you mark this as a duplicate of <a href="https://stackoverflow.com/q/57111689/629186">this question</a>, hear me out...</p> <p>I am making a <code>POST</code> request to <code>http://my-local-server:80/apistuff/user</code> to create a user. When I walk through with a debugger, right up until the actual <code>requests.post()</code>, that URL is http.</p> <p>But when I look at the <code>Response.url</code>, it says <code>https://my-local-server/apistuff/user</code>. Requests automatically made it an https URL. But there is a problem with this I'll explain in a moment.</p> <p>The answer to the other question was to add <code>, verify=False</code> as an argument to the <code>post</code> call. But that only says don't verify the certificates; it still changes the URL, and more to my point, it changes THE PORT.</p> <p>The app thinks it's making a call out on port 80, but it's really sent out on 443. The api does some stuff--calls an external file with parms--which then sends the reply to the configured url. The configuration says the app is running on port 80, which is important for other sections of the code. So the api tries sending back the response from port 443 (because that's where it came from) to port 80 (because that is where it's directed to).</p> <p>From this, I've gotten 308 (redirect) errors, 400 (bad request) errors, and 500 (server bad) errors depending on setup.</p> <p>So verifying certs is not the problem. The problem is that it changed the url/port so it can no longer communicate back.</p> <p>So the question remains: <strong>how do you send something through requests so it STAYS an http request?</strong></p>
<python><python-requests>
2023-08-10 18:31:48
0
1,817
MivaScott
76,878,419
404,264
How do I set up and verify Python mock for chained calls?
<p>Say I want to test my calls to <code>Greeter</code>, which depends on a 3rd party class <code>Foo</code>, which in turn depends on another class <code>Bar</code>. I need to mock <code>Foo</code>, but how do I set up and verify the chained call <code>self.foo.get_bar().format(name)</code>?</p> <p>greeter.py:</p> <pre><code>class Bar: def format(self, name): return name.upper() class Foo: def __init__(self): self.bar = Bar() def get_bar(self): return self.bar class Greeter: def __init__(self, foo): self.foo = foo def hi(self, name): return f'Hi {self.foo.get_bar().format(name)}' </code></pre> <p><strong>PS:</strong> this question is purely about the use of Python mock, not about the best practice of writing Python code and tests, so I won't refactor the code.</p>
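With `unittest.mock`, a chained call is configured through `return_value`: setting `mock_foo.get_bar.return_value.format.return_value` makes `foo.get_bar().format(name)` yield a canned result, and each link of the chain can then be verified. A sketch, with `Greeter` inlined from greeter.py above so it is self-contained:

```python
from unittest.mock import MagicMock

class Greeter:
    def __init__(self, foo):
        self.foo = foo

    def hi(self, name):
        return f'Hi {self.foo.get_bar().format(name)}'

mock_foo = MagicMock()
# Stub the chain: foo.get_bar() returns a mock whose .format() returns 'WORLD'.
mock_foo.get_bar.return_value.format.return_value = 'WORLD'

result = Greeter(mock_foo).hi('world')

# Verify both links of the chain were exercised as expected.
mock_foo.get_bar.assert_called_once_with()
mock_foo.get_bar.return_value.format.assert_called_once_with('world')
```

`mock_foo.get_bar.return_value` is the same mock object that `get_bar()` returned inside `hi()`, which is why the assertion on `.format` works.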
<python><python-mock>
2023-08-10 18:26:22
1
26,824
Dagang Wei
76,878,387
1,601,580
How does one create a pytorch data loader using an interleaved hugging face dataset?
<p>When I interleave data sets, get a tokenized batch, feed the batch to the pytorch data loader, I get errors:</p> <pre><code># -*- coding: utf-8 -*- &quot;&quot;&quot;issues with dataloader and custom data sets Automatically generated by Colaboratory. Original file is located at https://colab.research.google.com/drive/1sbs95as_66mtK9VK_vbaE9gLE-Tjof1- &quot;&quot;&quot; !pip install datasets !pip install pytorch !pip install transformers token = None batch_size = 10 from datasets import load_dataset import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained(&quot;gpt2&quot;) if tokenizer.pad_token_id is None: tokenizer.pad_token = tokenizer.eos_token probe_network = GPT2LMHeadModel.from_pretrained(&quot;gpt2&quot;) device = torch.device(f&quot;cuda:{0}&quot; if torch.cuda.is_available() else &quot;cpu&quot;) probe_network = probe_network.to(device) # -- Get batch from dataset from datasets import load_dataset # path, name = 'brando/debug1_af', 'debug1_af' path, name = 'brando/debug0_af', 'debug0_af' remove_columns = [] dataset = load_dataset(path, name, streaming=True, split=&quot;train&quot;, token=token).with_format(&quot;torch&quot;) print(f'{dataset=}') batch = dataset.take(batch_size) # print(f'{next(iter(batch))=}') # - Prepare functions to tokenize batch def preprocess(examples): # gets the raw text batch according to the specific names in table in data set &amp; tokenize return tokenizer(examples[&quot;link&quot;], padding=&quot;max_length&quot;, max_length=128, truncation=True, return_tensors=&quot;pt&quot;) def map(batch): # apply preprocess to batch to all examples in batch represented as a dataset return batch.map(preprocess, batched=True, remove_columns=remove_columns) tokenized_batch = batch.map(preprocess, batched=True, remove_columns=remove_columns) tokenized_batch = map(batch) # print(f'{next(iter(tokenized_batch))=}') from torch.utils.data import Dataset, DataLoader, SequentialSampler dataset = 
tokenized_batch print(f'{type(dataset)=}') print(f'{dataset.__class__=}') print(f'{isinstance(dataset, Dataset)=}') # for i, d in enumerate(dataset): # assert isinstance(d, dict) # # dd = dataset[i] # # assert isinstance(dd, dict) loader_opts = {} classifier_opts = {} # data_loader = DataLoader(dataset, shuffle=False, batch_size=loader_opts.get('batch_size', 1), # num_workers=loader_opts.get('num_workers', 0), drop_last=False, sampler=SequentialSampler(range(512)) ) data_loader = DataLoader(dataset, shuffle=False, batch_size=loader_opts.get('batch_size', 1), num_workers=loader_opts.get('num_workers', 0), drop_last=False, sampler=None) print(f'{iter(data_loader)=}') print(f'{next(iter(data_loader))=}') print('Done\a') </code></pre> <p>with error:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py in collate(batch, collate_fn_map) 126 try: --&gt; 127 return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem}) 128 except TypeError: 9 frames TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found &lt;class 'NoneType'&gt; During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py in collate(batch, collate_fn_map) 148 return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed] 149 --&gt; 150 raise TypeError(default_collate_err_msg_format.format(elem_type)) 151 152 TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found &lt;class 'NoneType'&gt; </code></pre> <p>why? And why doesn't the single data set c4 and wiki-text give this error? 
Only interleaved data sets?</p> <p>Ideally I don't want to write my own collate_function.</p> <ul> <li>colab: <a href="https://colab.research.google.com/drive/1sbs95as_66mtK9VK_vbaE9gLE-Tjof1-?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1sbs95as_66mtK9VK_vbaE9gLE-Tjof1-?usp=sharing</a></li> <li>related: <a href="https://stackoverflow.com/questions/76872115/how-does-one-create-a-pytorch-data-loader-with-a-custom-hugging-face-data-set-wi">How does one create a pytorch data loader with a custom hugging face data set without having errors?</a></li> <li>hf discuss: <a href="https://discuss.huggingface.co/t/how-does-one-create-a-pytoch-data-loader-using-an-interleaved-hugging-face-dataset/50320" rel="nofollow noreferrer">https://discuss.huggingface.co/t/how-does-one-create-a-pytoch-data-loader-using-an-interleaved-hugging-face-dataset/50320</a></li> </ul>
<python><pytorch><huggingface><pytorch-dataloader><huggingface-datasets>
2023-08-10 18:21:06
1
6,126
Charlie Parker
76,878,346
509,977
ValueError: invalid syntax for integer with base 10 while using adafruit_requests in Circuit Python
<p>I am trying to build a simple notification application to detect when my letter box is opened and send a notification via pushbullet using Raspberry Pico W.</p> <p>This is the application:</p> <pre><code>import os import adafruit_requests import board import digitalio import time import adafruit_debouncer import wifi import ssl import os import ipaddress import wifi import socketpool quotes_url = &quot;https://www.adafruit.com/api/quotes.php&quot; def connect_to_wifi(): print(&quot;Connecting to WiFi&quot;) wifi.radio.connect(os.getenv('SSID'), os.getenv('PASSWORD')) print(&quot;Connected to WiFi&quot;) time.sleep(10) def send_notification(title, message): api_key = os.getenv('PUSHBULLET_API_KEY') url = &quot;https://api.pushbullet.com/v2/pushes&quot; headers = { &quot;Authorization&quot;: &quot;Bearer &quot; + api_key, &quot;Content-Type&quot;: &quot;application/json&quot; } data = { &quot;type&quot;: &quot;note&quot;, &quot;title&quot;: title, &quot;body&quot;: message } print(headers) pool = socketpool.SocketPool(wifi.radio) requests = adafruit_requests.Session(pool, ssl.create_default_context()) response = requests.get(quotes_url) print(response.content) response = requests.post(url, headers=headers, json=data) if response.status_code == 200: print(&quot;Notification sent successfully!&quot;) else: print(&quot;Failed to send notification.&quot;) def blink_led(): led = digitalio.DigitalInOut(board.LED) led.direction = digitalio.Direction.OUTPUT switch_pin = digitalio.DigitalInOut(board.GP21) switch_pin.direction = digitalio.Direction.INPUT switch_pin.pull = digitalio.Pull.UP switch_db = adafruit_debouncer.Debouncer(switch_pin) last_switch_state = False BLINK_ON_DURATION = 1000000000 # 1 second in nanoseconds BLINK_OFF_DURATION = 1000000000 # 1 second in nanoseconds LAST_BLINK_TIME = -1 while True: switch_db.update() if switch_db.fell: LAST_BLINK_TIME = -1 current_switch_state = switch_db.value if current_switch_state != last_switch_state: current_time = 
time.localtime() notification_time = &quot;{}/{}/{} {}:{}:{}&quot;.format(current_time.tm_mday, current_time.tm_mon, current_time.tm_year, current_time.tm_hour, current_time.tm_min, current_time.tm_sec) #send_notification(&quot;Mailbox notification&quot;, notification_time + &quot; - You (probably) got mail :)&quot;) send_notification(&quot;Mailbox notification&quot;, &quot;You (probably) got mail :)&quot;) last_switch_state = current_switch_state now = time.monotonic_ns() if not current_switch_state and now &gt;= LAST_BLINK_TIME + BLINK_OFF_DURATION: led.value = True LAST_BLINK_TIME = now if led.value and now &gt;= LAST_BLINK_TIME + BLINK_ON_DURATION: led.value = False LAST_BLINK_TIME = now print(&quot;LED status:&quot;, led.value) print(&quot;Reed switch status:&quot;, current_switch_state) time.sleep(0.01) connect_to_wifi() blink_led() </code></pre> <p>It used to work fine but now I get:</p> <pre class="lang-none prettyprint-override"><code>ValueError: invalid syntax for integer with base 10 </code></pre> <p>The first get request is just a test and works perfectly, the second one throws the error, here's a more complete log:</p> <pre class="lang-none prettyprint-override"><code>b'[{&quot;text&quot;:&quot;The most worth-while thing is to try to put happiness into the lives of others&quot;,&quot;author&quot;:&quot;Robert Baden-Powell&quot;}]' Traceback (most recent call last): File &quot;code.py&quot;, line 94, in &lt;module&gt; File &quot;code.py&quot;, line 76, in blink_led File &quot;code.py&quot;, line 41, in send_notification File &quot;adafruit_requests.py&quot;, line 715, in post File &quot;adafruit_requests.py&quot;, line 677, in request File &quot;adafruit_requests.py&quot;, line 183, in __init__ ValueError: invalid syntax for integer with base 10 </code></pre> <p>How can I solve the error?</p>
<python><adafruit-circuitpython><pushbullet>
2023-08-10 18:15:16
0
8,679
Pitto
76,878,134
12,708,740
Where to find spacy.py file to rename
<p>I am attempting to install <a href="https://spacy.io/usage/" rel="nofollow noreferrer">spacy</a> and have tried a number of methods using <code>pip</code>, <code>conda</code>, and installing directly from git. However, I am running into the same error:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[26], line 3 1 import spacy ----&gt; 3 nlp = spacy.load(&quot;en_core_web_sm&quot;) AttributeError: module 'spacy' has no attribute 'load' </code></pre> <p>From reading various articles online like <a href="https://stackoverflow.com/questions/67769003/attributeerror-module-spacy-has-no-attribute-load">this one</a>, I see that my error is likely due to a file called &quot;spacy.py&quot; that is causing a shadowing error. However, I can't find this file.</p> <p>Using <code>which python</code> in my dir shows me:</p> <pre><code>/Users/my_name/anaconda3/bin/python </code></pre> <p>Looking into my anaconda3/bin dir, I do see a Unix Executable file named &quot;spacy&quot; but renaming it has not fixed my error.</p>
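`importlib.util.find_spec` reports which file Python would import for a given name without actually importing it, which is a quick way to locate a shadowing `spacy.py`. A sketch (demonstrated with a stdlib module so it runs anywhere; for the question you would call `locate_module("spacy")` and check whether the path is inside site-packages):

```python
import importlib.util

def locate_module(name):
    """Return the file Python would import for `name`, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# If locate_module("spacy") prints a path OUTSIDE site-packages (e.g. a
# spacy.py in your working directory), that file is shadowing the library.
print(locate_module("json"))
```

Note the executable named "spacy" in anaconda3/bin is the library's command-line entry point, not the shadowing file; renaming it has no effect on imports.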
<python><nlp><spacy>
2023-08-10 17:40:45
1
675
psychcoder
76,878,031
4,138,044
How to capture the traceback for an error in python along with source code?
<p>When I run a vscode notebook cell or a python file with these contents:</p> <pre class="lang-py prettyprint-override"><code>def foo(): return 4/0 # This will cause a division by zero exception foo() </code></pre> <p>I get the following error traceback, which is very meaningful and tells me the line number along with the line of code that errors:</p> <pre><code>--------------------------------------------------------------------------- ZeroDivisionError Traceback (most recent call last) Cell In[12], line 4 1 def foo(): 2 return 4/0 # This will cause a division by zero exception ----&gt; 4 foo() Cell In[12], line 2, in foo() 1 def foo(): ----&gt; 2 return 4/0 ZeroDivisionError: division by zero </code></pre> <p>However when I run the program using an <code>exec</code> block, I am unable to see the source code. How do I capture this?</p> <pre class="lang-py prettyprint-override"><code>import traceback import sys code = &quot;&quot;&quot;def foo(): return 4/0 # This will cause a division by zero exception foo() &quot;&quot;&quot; try: exec(code) except Exception as e: traceback.print_exc() </code></pre> <p>gives:</p> <pre><code>Traceback (most recent call last): File &quot;/var/folders/fd/f7mj1ws56pq6xvyzj4zv1r_m0000gn/T/ipykernel_39670/712868849.py&quot;, line 11, in &lt;module&gt; exec(code) File &quot;&lt;string&gt;&quot;, line 4, in &lt;module&gt; File &quot;&lt;string&gt;&quot;, line 2, in foo ZeroDivisionError: division by zero </code></pre>
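The source lines are missing because `traceback` looks them up via `linecache`, and the default filename `"<string>"` has no backing file. One workaround sketch: compile the code under an arbitrary fake filename (here `"<snippet>"`, an assumption, any unique string works) and register the source in `linecache.cache` so the traceback frames can display it.

```python
import linecache
import traceback

code = """def foo():
    return 4/0  # This will cause a division by zero exception

foo()
"""

def run_with_source(src, filename="<snippet>"):
    # Register the source so linecache (and hence traceback) can find it.
    # Cache entry format: (size, mtime, lines, fullname); mtime=None means
    # linecache.checkcache() leaves the entry alone.
    linecache.cache[filename] = (
        len(src), None, src.splitlines(keepends=True), filename
    )
    try:
        exec(compile(src, filename, "exec"))
    except Exception:
        return traceback.format_exc()
    return None

tb = run_with_source(code)
print(tb)
```

The returned traceback now shows `return 4/0` under the fake filename, the same way the notebook's `Cell In[12]` frames do.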
<python><traceback>
2023-08-10 17:26:17
1
515
canonball
76,877,974
160,808
easiest way to stop threads in python created by Thread object
<p>Hi, I am working on forking the GAssistPi project since the developer is no longer actively working on it. There appears to be an issue with how he's handling threads. I am new to python so I probably can't explain it properly.</p> <p>The code base is huge. It contains many libraries and is written for Linux. However here is the code which only includes the bits I think are relevant:</p> <pre><code>class Myassistant(): def __init__(self): self.event = threading.Event self.t1 = Thread(target=self.picovoice_run) if GPIOcontrol: self.t2 = Thread(target=self.pushbutton) if configuration['MQTT']['MQTT_Control']=='Enabled': self.t3 = Thread(target=self.mqtt_start) if irreceiver!=None: self.t4 = Thread(target=self.ircommands) if configuration['ADAFRUIT_IO']['ADAFRUIT_IO_CONTROL']=='Enabled': self.t5 = Thread(target=self.adafruit_mqtt_start) </code></pre> <p>So my question is: how do I safely shut down these threads?</p> <p>Due to how the project is currently configured and coded, only the t2 thread is started. And evidently, from not being able to quit the python process by pressing ctrl c once, it's not being shut down safely.</p> <p>I tried following this tutorial <a href="https://superfastpython.com/stop-a-thread-in-python/" rel="nofollow noreferrer">https://superfastpython.com/stop-a-thread-in-python/</a> However at first it gave an error saying ERROR : set() missing 1 required positional argument: 'self' when I called assistant.event.set(). The print statement proves there is an instance of the threading.Event class</p> <pre><code>if __name__ == '__main__': try: assistant = Myassistant() assistant.main() print(assistant.event) assistant.event.set() y = assistant.event.is_set() print(y) </code></pre>
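For what it's worth, the `set() missing 1 required positional argument: 'self'` error in the snippet comes from `self.event = threading.Event`, which stores the class itself rather than an instance; it needs parentheses: `threading.Event()`. The usual cooperative-shutdown pattern is then: each thread's loop checks the shared event, and the main thread sets it and joins. A minimal sketch:

```python
import threading
import time

stop_event = threading.Event()   # note the (): an instance, not the class

def worker(stop_event):
    while not stop_event.is_set():
        # ... do one unit of work here ...
        # wait() doubles as an interruptible sleep: it returns as soon as
        # the event is set instead of sleeping out the full interval.
        stop_event.wait(0.05)

t = threading.Thread(target=worker, args=(stop_event,), daemon=True)
t.start()
time.sleep(0.1)

stop_event.set()     # signal the thread to finish its loop
t.join(timeout=2)    # then wait for it to exit
```

Each of t1 through t5 would take the same event (or its own) and check it in its loop; after `set()`, a Ctrl-C can end the process cleanly because no thread is blocked forever.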
<python><multithreading>
2023-08-10 17:15:20
1
2,311
Ageis
76,877,970
2,702,600
Download temp file from website using python return 500
<p>I am trying to automatically download a file from a website using requests.</p> <p>When I go to this webpage <a href="https://statistikbanken.dk/statbank5a/default.asp?w=1448" rel="nofollow noreferrer">https://statistikbanken.dk/statbank5a/default.asp?w=1448</a> and open a table to download, I find that it opens this link</p> <p><a href="https://statistikbanken.dk/statbank5a/selectout/ready.asp?PLanguage=0&amp;pxfile=D:%5Cftproot%5CLocalUser%5Cstatbank%5Cstatbank5a%5CTemp%5C202381019834427555087BIL5.px&amp;outfile=D:%5Cftproot%5CLocalUser%5Cstatbank%5Cstatbank5a%5CTemp%5C202381019834427555087BIL5&amp;queryfile=202381019834427555087BIL5&amp;Pxsid=" rel="nofollow noreferrer">https://statistikbanken.dk/statbank5a/selectout/ready.asp?PLanguage=0&amp;pxfile=D:\ftproot\LocalUser\statbank\statbank5a\Temp\202381019834427555087BIL5.px&amp;outfile=D:\ftproot\LocalUser\statbank\statbank5a\Temp\202381019834427555087BIL5&amp;queryfile=202381019834427555087BIL5&amp;Pxsid=</a></p> <p>That creates the file. But when I try to make a POST or GET request to this link, I get return code 500.</p> <p>Any idea on how to fix this?</p>
<python><python-requests>
2023-08-10 17:14:35
0
411
Rud Faden
76,877,932
9,443,671
Using token_ids directly for response_template in DataCollatorForLM (not working)?
<p>I'm trying to use DataCollatorForCompletionOnlyLM in my SFT script. I'm using LLAMA2 as a base model and therefore I'm running into the same problem that is mentioned in the <a href="https://huggingface.co/docs/trl/main/en/sft_trainer" rel="nofollow noreferrer">SFT write-up from HF</a>. I've tried the proposed fix in the write-up, which is to encode the contextualized response key and then pass that directly in to the collator, however, I run into the following error:</p> <p><code>ValueError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]]</code></p> <p>Here is what I'm trying:</p> <pre><code>response_template_with_context=&quot;\n### Response:&quot; response_template_ids = tokenizer.encode(response_template_with_context, add_special_tokens=False) data_collator = DataCollatorForCompletionOnlyLM(response_template=response_template_ids, tokenizer=tokenizer, mlm=False, return_tensors=&quot;pt&quot;, pad_to_multiple_of=8) </code></pre> <p>and then configure my <code>TrainingArguments</code> and <code>SFTTrainer</code></p> <pre><code>trainer = SFTTrainer( model=model, train_dataset=train_dataset, eval_dataset=test_dataset, peft_config=peft_config, max_seq_length=script_args.seq_length, tokenizer=tokenizer, dataset_text_field=&quot;text&quot;, packing=False, data_collator=data_collator, # formatting_func=format_samples, args=args, ) </code></pre> <p>and run <code>trainer.train()</code> which yields the error. Any idea how I might be able to work around this?</p>
<python><pytorch><huggingface-transformers><huggingface><huggingface-datasets>
2023-08-10 17:09:38
2
687
skidjoe
76,877,887
14,040,092
str not compatible with StrOrLiteralStr
<p>Let's say I have this source code:</p> <pre><code>from string import Formatter from typing import Iterable, Tuple class MyFormatter(Formatter): def parse(self, s: str) -&gt; Iterable[Tuple[str, str, str, str]]: for text, field, spec, conversion in super().parse(s): yield &quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot; </code></pre> <p>mypy complains about it:</p> <p><code>error: Return type &quot;Iterable[tuple[str, str, str, str]]&quot; of &quot;parse&quot; incompatible with return type &quot;Iterable[tuple[StrOrLiteralStr, StrOrLiteralStr | None, StrOrLiteralStr | None, StrOrLiteralStr | None]]&quot; in supertype &quot;Formatter&quot; [override]</code></p> <p>I don't understand why <code>str</code> isn't acceptable for <code>StrOrLiteralStr</code>.</p> <p>What am I missing here and what do I do to fix it?</p>
<python><mypy><python-typing>
2023-08-10 17:02:29
0
311
solaluset
76,877,794
9,550,867
ValueError: Expected str or list, got <class 'streamlit.runtime.uploaded_file_manager.UploadedFile'>
<p>I was trying to create a simple <code>chat with csv</code> app with OpenAI and streamlit. Following is my code:</p> <pre><code>from langchain.agents import create_csv_agent from langchain.llms import OpenAI from dotenv import load_dotenv import os import streamlit as st def main(): load_dotenv() # Load the OpenAI API key from the environment variable if os.getenv(&quot;OPENAI_API_KEY&quot;) is None or os.getenv(&quot;OPENAI_API_KEY&quot;) == &quot;&quot;: print(&quot;OPENAI_API_KEY is not set&quot;) exit(1) else: print(&quot;OPENAI_API_KEY is set&quot;) st.set_page_config(page_title=&quot;Ask your CSV&quot;) st.header(&quot;Ask your CSV πŸ“ˆ&quot;) csv_file = st.file_uploader(&quot;Upload a CSV file&quot;, type=&quot;csv&quot;) if csv_file is not None: agent = create_csv_agent( OpenAI(temperature=0), csv_file, verbose=True) user_question = st.text_input(&quot;Ask a question about your CSV: &quot;) if user_question is not None and user_question != &quot;&quot;: with st.spinner(text=&quot;In progress...&quot;): st.write(agent.run(user_question)) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>The code shows me this error:</p> <pre><code>2023-08-10 22:40:36.259 Uncaught app exception Traceback (most recent call last): File &quot;C:\Users\raiya\OneDrive\Desktop\chatCSV\venv\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py&quot;, line 552, in _run_script exec(code, module.__dict__) File &quot;C:\Users\raiya\OneDrive\Desktop\chatCSV\main.py&quot;, line 33, in &lt;module&gt; main() File &quot;C:\Users\raiya\OneDrive\Desktop\chatCSV\main.py&quot;, line 24, in main agent = create_csv_agent( File &quot;C:\Users\raiya\OneDrive\Desktop\chatCSV\venv\lib\site-packages\langchain\agents\agent_toolkits\csv\base.py&quot;, line 32, in create_csv_agent raise ValueError(f&quot;Expected str or list, got {type(path)}&quot;) ValueError: Expected str or list, got &lt;class 'streamlit.runtime.uploaded_file_manager.UploadedFile'&gt; </code></pre> <p>I suppose 
it has something to do with reading the file. Additionally to solve the issue I tried <a href="https://stackoverflow.com/questions/76374664/langchain-create-csv-agent-with-streamlit-file-uploader-raise-valueerrorfexpe">this solution</a> but it does not seem to work. Can anyone help me fix this error?</p>
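The error says `create_csv_agent` wants a path (`str`) or a list of paths, while `st.file_uploader` returns an in-memory `UploadedFile`. One workaround sketch (whether newer langchain versions also accept file-like objects may vary): persist the upload to a temporary `.csv` file and pass that path to `create_csv_agent`. The `FakeUpload` class below only stands in for Streamlit's object so the snippet is self-contained.

```python
import tempfile

def uploaded_file_to_path(uploaded_file):
    # Streamlit's UploadedFile exposes getvalue(); write the bytes to a real
    # file so APIs that demand a filesystem path can consume it.
    with tempfile.NamedTemporaryFile(mode="wb", suffix=".csv",
                                     delete=False) as tmp:
        tmp.write(uploaded_file.getvalue())
        return tmp.name

class FakeUpload:  # stand-in for streamlit.runtime.uploaded_file_manager.UploadedFile
    def getvalue(self):
        return b"a,b\n1,2\n"

path = uploaded_file_to_path(FakeUpload())
# In the app, the call would then become:
#   agent = create_csv_agent(OpenAI(temperature=0), path, verbose=True)
```

`delete=False` keeps the file around after the `with` block; in a long-running app you would want to remove it once the agent is built.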
<python><streamlit><openai-api><langchain>
2023-08-10 16:49:22
1
1,195
raiyan22
76,877,751
17,160,160
Pyomo Decrement by Decision Variable Values in Constraint Formula
<p>Below is an example Pyomo model that seeks to maximise the sum of bounded decision variables A, B for a set VALS 1, 2, 3 with the constraint that the sum cannot be greater than a defined maxNum parameter of 10000.</p> <p>However, I would like to add an additional single decision variable, C that takes a defined parameter i.e. <code>startNum := 500</code> and is decremented by the sum of values assigned to A for each element in the set.</p> <p>eg. given the following<br /> A1 50<br /> A2 35<br /> A3 60</p> <p>C = 355 (500 - (50 + 35 + 60))</p> <p>I'm brand new to Pyomo and have no idea how to write the constraint function for C. I'd really appreciate some advice!</p> <p>Many thanks...</p> <p>A simple .dat file:</p> <pre><code>set VALS := 1 2 3; param maxNum := 10000; </code></pre> <p>basic model formulation:</p> <pre><code># define sets model.VALS = Set() # parameters model.maxNum = Param() # decision variables model.A = Var(model.VALS, within = NonNegativeIntegers, bounds = (0,100)) model.B = Var(model.VALS, within = NonNegativeIntegers, bounds = (100,200)) # constraints def maxNum_rule(model): return sum(model.A[t]+model.B[t] for t in model.VALS) &lt;= model.maxNum model.maxNumCalc = Constraint(rule = maxNum_rule) # objective function def objective_rule(model): return sum(model.A[t] + model.B[t] for t in model.VALS) model.maxNumObj = Objective(rule = objective_rule, sense = maximize) </code></pre>
<python><pyomo>
2023-08-10 16:43:25
0
609
r0bt
76,877,431
12,961,237
How to receive IAM credentials based on AWS Cognito User Pool Access token
<p>Let's say I have an access token of my user. Now I want to initiate a boto3 client with the IAM access rights of that user.</p> <p>I found out that for an identity pool I could call the get_credentials_for_identity endpoint. But what do I do for a user pool?</p>
<python><amazon-web-services><oauth-2.0>
2023-08-10 15:56:50
0
1,192
Sven
76,877,416
10,788,239
Why does a \ change what is in a string in python?
<p>I recently noticed that the symbol \ behaves differently in a python string than other characters.</p> <p>For example: <code>print('ab' + '\cde')</code> prints <code>ab\cde</code> as expected, but <code>print('a'+'\bcde')</code> only prints <code>cde</code>.</p> <p>I was wondering what the reason for this was. I am currently using Python 3.11, so it may be a glitch with the new version? That's the only thing I can think of so far.</p>
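This is not a glitch: `\b` inside a string literal is the recognized backspace escape (U+0008), while `\c` is not a recognized escape, so Python keeps that backslash literally. The `'a'` is not removed from the string; the terminal just renders the backspace by overwriting it. A sketch (`'\\cde'` is written with a doubled backslash here to avoid the invalid-escape warning, and has the same value as the question's `'\cde'`):

```python
s1 = 'ab' + '\\cde'   # same value as 'ab' + '\cde': \c is not an escape
s2 = 'a' + '\bcde'    # '\b' IS an escape: the backspace character \x08

# repr() shows what each string really contains, escapes and all.
print(repr(s1))
print(repr(s2))

# A raw string disables escape processing entirely:
s3 = 'a' + r'\bcde'
```

So `s2` still holds five characters; only the on-screen rendering makes the `'a'` disappear.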
<python><python-3.11>
2023-08-10 15:54:51
1
438
Arkleseisure
76,877,379
5,679,047
Conda environments are no longer isolated
<p>Anaconda has become completely broken on my Ubuntu server. In any conda environment, I can import all the packages that I installed in my base environment, at a minimum. For example I have a package <code>ctha_xml</code> that comes from a private GitHub repository belonging to my company. I can run:</p> <pre><code>conda create -n foo python=3.9 conda activate foo python &gt;&gt;&gt; import ctha_xml </code></pre> <p>and it works, because <code>ctha_xml</code> is installed in my base environment. This applies to all packages in my base environment, and all environments besides base.</p> <p>What can I do to fix this? Could this be something wrong with my environment variables or my <code>.bashrc</code>?</p> <p><code>os.environ</code> inside a conda environment called <code>text-generation-client</code> looks like this:</p> <pre><code>environ{'SHELL': '/bin/bash', 'CONDA_EXE': '/opt/conda/bin/conda', '_CE_M': '', 'PWD': '/home/holmes', 'LOGNAME': 'holmes', 'XDG_SESSION_TYPE': 'tty', 'CONDA_PREFIX': '/home/holmes/.conda/envs/text-generation-client', 'MOTD_SHOWN': 'pam', 'HOME': '/home/holmes', 'LANG': 'C.UTF-8', 'LS_COLORS':
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'CONDA_PROMPT_MODIFIER': '(text-generation-client) ', 'LC_TERMINAL': 'iTerm2', 'SSH_CONNECTION': '10.159.5.25 44390 10.60.244.9 22', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'XDG_SESSION_CLASS': 'user', 'TERM': 'xterm-256color', '_CE_CONDA': '', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'USER': 'holmes', 'CONDA_SHLVL': '2', 'LC_TERMINAL_VERSION': '3.4.19', 'SHLVL': '1', 'XDG_SESSION_ID': '2383', 'CONDA_PYTHON_EXE': '/opt/conda/bin/python', 'XDG_RUNTIME_DIR': '/run/user/1001', 'SSH_CLIENT': 
'10.159.5.25 44390 22', 'CONDA_DEFAULT_ENV': 'text-generation-client', 'XDG_DATA_DIRS': '/usr/local/share:/usr/share:/var/lib/snapd/desktop', 'PATH': '/home/holmes/.conda/envs/text-generation-client/bin:/opt/conda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/holmes/bin', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1001/bus', 'SSH_TTY': '/dev/pts/8', 'CONDA_PREFIX_1': '/opt/conda', '_': '/home/holmes/.conda/envs/text-generation-client/bin/ipython', 'KMP_DUPLICATE_LIB_OK': 'True', 'KMP_INIT_AT_FORK': 'FALSE'} </code></pre> <p><code>ctha_xml.__file__</code> indicates that it is being loaded from</p> <pre><code>/home/holmes/.local/lib/python3.10/site-packages/ctha_xml/__init__.py </code></pre>
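The path in `ctha_xml.__file__` (`~/.local/lib/python3.10/site-packages`) hints at a likely cause, offered here only as an assumption: packages installed with `pip install --user` land in the user site-packages, which any interpreter with a matching Python version sees on `sys.path`, so they leak across conda envs. A diagnostic sketch (`PYTHONNOUSERSITE=1`, or running `python -s`, disables user-site lookup):

```python
import site
import sys

# Where "user site" packages live; pip install --user puts packages here,
# outside any conda env's own site-packages.
print(site.getusersitepackages())
print(site.ENABLE_USER_SITE)

# Any sys.path entry under the user site is visible to EVERY environment
# whose Python X.Y version matches.
leaked = [p for p in sys.path if p.startswith(site.getusersitepackages())]
print(leaked)
```

If `leaked` is non-empty inside an activated env, uninstalling the user-site copies (or exporting `PYTHONNOUSERSITE=1`) should restore isolation.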
<python><anaconda>
2023-08-10 15:49:25
1
681
Zorgoth
76,877,317
16,703,301
Psycopg2 closed connections in fact just idle
<p>I use psycopg2 for manipulating data in a PostgreSQL db.</p> <pre><code>class Global_db_handler: def __init__(self, host, port, db, user, password, app_name): self.host = host self.port = port self.db = db self.user = user self.password = password self.appname = app_name if not self.connect_to_db(): raise Exception(&quot;Can't connect to the global db on initialization&quot;) def __del__(self): self.connection.close() def connect_to_db(self): try: self.connection.close() except: pass try: self.connection = psycopg2.connect(host = self.host, port = self.port, database = self.db, user = self.user, password = self.password, application_name = self.appname, connect_timeout=3, tcp_user_timeout = 3000) self.cursor = self.connection.cursor() return True except Exception as e: print(e) return False def foo1(self): for tries in range(2): try: if self.connect_to_db(): *do things* self.connection.close() return except psycopg2.OperationalError as e: print(e) raise Exception(&quot;Error in foo1&quot;) def foo2(self): for tries in range(2): try: if self.connect_to_db(): *do things* self.connection.commit() self.connection.close() return except psycopg2.OperationalError as e: print(e) raise Exception(&quot;Error in foo2&quot;) </code></pre> <p>As you can see - the connection is closed in every possible way. Every 30 minutes foo1 and foo2 are called. And every 30 minutes I see yet another idle connection in <code>SELECT * FROM pg_stat_activity;</code>. It is a problem because the db goes down when they accumulate to a big number.</p> <p>How can I close the connection properly?</p>
<python><postgresql><psycopg2>
2023-08-10 15:41:53
0
336
Dmitry
76,877,204
22,221,987
How to make QListView::item:first and QListView::item:last with round corners
<p>I've tried to make QMenu and QComboBox items' corners rounded, and I have a result. But I can't make only the first and last items rounded. It should look like this:</p> <p><a href="https://i.sstatic.net/xWEbr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xWEbr.png" alt="" /></a>.</p> <p>But with the code below I can only round all items. I've tried to call <code>QListView::item:first</code> and <code>QListView::item:last</code>, but it didn't help.</p> <pre><code>/* ======= QMenu ======= */
QMenu { background-color: #FFFFFF; border: 2px solid #CCCCCC; border-radius: 7px; padding: 0px; }
QMenu::item { padding: 5px; border-radius: 4px; }
QMenu::item:selected { background-color: #D3E3ED; color: #3B3B3B; }

/* ======= QComboBox ======= */
QComboBox { combobox-popup: 0; border: 2px solid #CCCCCC; border-radius: 7px; min-width: 6em; padding: 7px; }
QComboBox:on { border-bottom-left-radius: 0px; border-bottom-right-radius: 0px; background-color: #D3E3ED; }
QComboBox::drop-down { border: 0px; }
QComboBox::down-arrow { image: url(:/icons/arrow_down.svg); width: 10px; height: 10px; margin-right: 10px; }
QComboBox::down-arrow:on { image: url(:/icons/arrow_up.svg); }
QComboBox QListView { top: 2px; border: 2px solid #CCCCCC; border-bottom-left-radius: 7px; border-bottom-right-radius: 7px; background: #FFFFFF; padding: 0px; }
QListView::item:hover { background: #D3E3ED; color: #3B3B3B }
QListView::item { background: transparent; color: #3B3B3B; padding: 5px; outline: 0; border-radius: 4px; }
QListView { outline: 0; }
</code></pre> <p>Is it possible to do this with pure CSS?</p>
<python><python-3.x><qtstylesheets><pyside6><pyqt6>
2023-08-10 15:27:20
0
309
Mika
76,877,144
2,496,293
voxel51 / fiftyone: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory
<p>After installing and running <a href="https://docs.voxel51.com/" rel="nofollow noreferrer">fiftyone</a> in Python (on Ubuntu 22.04), you get:</p> <pre><code>error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory </code></pre>
<python><ubuntu-22.04><fiftyone>
2023-08-10 15:18:55
1
2,441
Sam De Meyer
76,877,069
4,259,243
pandas styler doesn't work on transposed df
<p>I just want my data to display with bars and run horizontally instead of vertically. But when I switch from df to df.T, all the style info goes away:</p> <p>Sample code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from IPython.display import HTML, display

input = &quot;Please go to the store and get some milk&quot;
words = input.split()
weights = [0.4,.9,0.15,0.0,.8,0.05,.93,0.05,.98]
df = pd.DataFrame({&quot;word&quot;:words, &quot;weight&quot;:weights})

display(HTML(df.style.format(precision=3).bar(vmin=0,vmax=1).hide().to_html()))
display(HTML(df.T.style.format(precision=3).bar(vmin=0,vmax=1).hide().to_html()))
</code></pre> <p>Output: <a href="https://i.sstatic.net/aegBa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aegBa.png" alt="sample output showing bars on vertical display but not horizontal" /></a> Notice how the second version doesn't display any colored bars, and displays the (unwanted) index numbers (now column names) on the top. I want the first version, just horizontally.</p> <p>I've been trying various combinations of supplying subsets and changing axes, using <code>.apply()</code> and/or <code>.applymap()</code> with some new styling function I define (e.g. iterating over rows), but everything I've tried either gives me error messages or doesn't have any effect.</p> <p>For example, the code</p> <pre><code>display(HTML(df.T.style.format(precision=3).bar(subset=['weight'],axis=1,vmin=0,vmax=1).hide().to_html()))
</code></pre> <p>yields the error message &quot;<code>KeyError: &quot;None of [Index(['weight'], dtype='object')] are in the [columns]&quot;</code>&quot; (and changing to axis=0 doesn't help.)</p> <p>Can anyone suggest a fix?</p> <p>Related posts (but it didn't help me): <a href="https://stackoverflow.com/questions/70250156/transpose-pandas-styler">Transpose pandas Styler</a></p>
<python><pandas><dataframe>
2023-08-10 15:10:35
1
1,542
sh37211
76,877,041
5,525,901
How does python3.11's StrEnum's MRO work differently for __str__ and __repr__?
<p>Python 3.11 introduced <code>StrEnum</code> and <code>IntEnum</code>, which inherit from <code>str</code> or <code>int</code> respectively, and also inherit from <code>ReprEnum</code>, which in turn inherits from <code>Enum</code>.</p> <p><code>ReprEnum</code>'s implementation is actually empty.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; print(inspect.getsource(ReprEnum))
class ReprEnum(Enum):
    &quot;&quot;&quot;
    Only changes the repr(), leaving str() and format() to the mixed-in type.
    &quot;&quot;&quot;
</code></pre> <p>If I create a <code>StrEnum</code> and check the MRO, I can see that <code>str</code> comes first.</p> <pre class="lang-py prettyprint-override"><code>class Strings(StrEnum):
    A = &quot;a&quot;
</code></pre> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; Strings.__mro__
(&lt;enum 'Strings'&gt;, &lt;enum 'StrEnum'&gt;, &lt;class 'str'&gt;, &lt;enum 'ReprEnum'&gt;, &lt;enum 'Enum'&gt;, &lt;class 'object'&gt;)
</code></pre> <p>Both <code>str</code> and <code>Enum</code> define a <code>__str__</code> and a <code>__repr__</code>.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; str.__repr__
&lt;slot wrapper '__repr__' of 'str' objects&gt;
&gt;&gt;&gt; str.__str__
&lt;slot wrapper '__str__' of 'str' objects&gt;
&gt;&gt;&gt; Enum.__repr__
&lt;function Enum.__repr__ at 0x7ffff69f72e0&gt;
&gt;&gt;&gt; Enum.__str__
&lt;function Enum.__str__ at 0x7ffff69f7380&gt;
</code></pre> <p>How then does <code>__repr__</code> get inherited from <code>Enum</code> and <code>__str__</code> get inherited from <code>str</code>?</p>
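One way to see that the MRO is not the whole story (my reading of CPython's `enum.py`, worth verifying against your interpreter): `EnumType`, the metaclass, copies the winning methods directly into the new class's namespace when it builds a `ReprEnum` subclass, so the normal MRO walk never has to choose between `str` and `Enum`:

```python
import sys
from enum import Enum

if sys.version_info >= (3, 11):       # StrEnum only exists on 3.11+
    from enum import StrEnum

    class Strings(StrEnum):
        A = "a"

    # The methods live on Strings itself, installed by the metaclass:
    print("__str__" in vars(Strings))
    print(Strings.__str__ is str.__str__)
    print(Strings.__repr__ is Enum.__repr__)
    print(str(Strings.A), repr(Strings.A))
```

So `__str__` is the mixed-in type's slot wrapper and `__repr__` is `Enum`'s function, both reachable without any MRO arbitration.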
<python><inheritance><enums><method-resolution-order>
2023-08-10 15:07:49
1
1,752
Abraham Murciano Benzadon
76,877,032
3,767,514
numpy built with locally built blis does not use multithreading
<p>I'm looking for help with an issue I'm having building Numpy against locally built blis for zen3.<br /> I've configured blis to enable threading using openmp. (it is installed and working on my machine, validates using a cpp code).</p> <p>After building blis and numpy against it, it seems like numpy is only using single thread.<br /> I'm not able to find out the reason.</p> <p>Here is the showconfig output:</p> <pre><code>configuration family: zen3 sub-configurations: zen3 requisite kernels sets: zen3 zen2 zen haswell kernel-to-config map: haswell:zen3 zen:zen3 zen2:zen3 zen3:zen3 ------------------------- BLIS version string: 0.9.0-118 .so major version: 4 .so minor.build vers: 0.0 install libdir: /home/me/blis/lib install includedir: /home/me/blis/include install sharedir: /home/me/blis/share debugging status: off enable AddressSanitizer? no enabled threading model(s): openmp single enable BLAS API? yes enable CBLAS API? yes build static library? yes build shared library? yes ARG_MAX hack enabled? no </code></pre> <p>I have set <code>OMP_NUM_THREADS=64</code> (using 64 threads cpu) and export <code>BLIS_THREAD_IMPL=openmp</code> just in case.</p> <p>Using pip's version of numpy, multhreading works (but it's configured for OpenBLAS). numpy's config shows it is configured to use blis:</p> <pre><code> libraries = ['blis', 'blis'] library_dirs = ['/home/me/blis/lib'] define_macros = [('HAVE_CBLAS', None)] include_dirs = ['/home/me/blis/include/blis'] language = c runtime_library_dirs = ['/home/or/blis/lib'] blas_opt_info: libraries = ['blis', 'blis'] library_dirs = ['/home/me/blis/lib'] define_macros = [('HAVE_CBLAS', None)] include_dirs = ['/home/me/blis/include/blis'] language = c runtime_library_dirs = ['/home/me/blis/lib'] </code></pre>
<python><numpy><blas><amd-processor>
2023-08-10 15:06:31
0
472
Crispy Holiday
76,877,029
9,381,966
How to use salesforce market cloud source in AppFlow operator in Airflow?
<p>I'm extracting data from Salesforce to an AWS S3 bucket using the Airflow AppflowRunOperator. To extract the data using Salesforce as the source, I just use:</p> <pre class="lang-py prettyprint-override"><code>task_campaign_dump = AppflowRunOperator(
    task_id=&quot;campaign_dump&quot;,
    source=&quot;salesforce&quot;,
    dag=dag,
    flow_name=flow_name)
</code></pre> <p>It works correctly. However, when I tried to run a flow that has &quot;Salesforce Marketing Cloud&quot; as the source, the above code didn't work. I figure I should change the source name, but the Airflow documentation says that only salesforce and zendesk are accepted as values for the source argument.</p> <p>Is there a way to run an AppFlow flow with a Salesforce Marketing Cloud source in Airflow?</p>
<python><amazon-web-services><salesforce><mwaa><amazon-appflow>
2023-08-10 15:06:19
0
1,590
Lucas
76,876,844
14,167,846
Color Regions in a Scatter Plot
<p>I recently found out that you can create color regions for scatter plots in Orange. I know Orange sits on top of Python, so I figured I'd be able to recreate this, but I'm having a hard time. I haven't figured out how to convert a pandas dataframe for Orange. More importantly, I'm working in a Spark environment, so if I could go from PySpark to Orange that would be better.</p> <p>I've set up a basic scatter plot in both seaborn and matplotlib to see if I could figure it out.</p> <pre><code>import seaborn as sns
import matplotlib.pyplot as plt

# Load the Iris dataset from Seaborn
iris = sns.load_dataset(&quot;iris&quot;)

# Create a scatter plot
sns.scatterplot(x=&quot;sepal_length&quot;, y=&quot;petal_width&quot;, hue=&quot;species&quot;, data=iris)

# Add labels and title
plt.xlabel(&quot;Sepal Length&quot;)
plt.ylabel(&quot;Petal Width&quot;)
plt.title(&quot;Scatter Plot of Sepal Length vs. Petal Width&quot;)

# Show the plot
plt.legend()
plt.show()
</code></pre> <p><a href="https://i.sstatic.net/om4pt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/om4pt.png" alt="enter image description here" /></a></p>
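Outside Orange, the shaded regions are usually a classifier's decision regions drawn with `contourf` behind the scatter. A self-contained sketch — synthetic clusters and a simple nearest-centroid rule stand in for real iris data and a real model:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Three synthetic clusters standing in for the iris species
centers = np.array([[5.0, 0.25], [5.9, 1.3], [6.6, 2.0]])
X = np.vstack([c + rng.normal(scale=[0.3, 0.15], size=(50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)

# Nearest-centroid "classifier": label every grid point by the closest class mean
means = np.array([X[y == k].mean(axis=0) for k in range(3)])
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - .5, X[:, 0].max() + .5, 200),
                     np.linspace(X[:, 1].min() - .5, X[:, 1].max() + .5, 200))
grid = np.c_[xx.ravel(), yy.ravel()]
Z = np.argmin(((grid[:, None, :] - means[None]) ** 2).sum(-1), axis=1).reshape(xx.shape)

fig, ax = plt.subplots()
ax.contourf(xx, yy, Z, levels=[-0.5, 0.5, 1.5, 2.5], alpha=0.25)  # colored regions
for k, name in enumerate(["setosa", "versicolor", "virginica"]):
    ax.scatter(*X[y == k].T, s=15, label=name)
ax.set_xlabel("Sepal Length")
ax.set_ylabel("Petal Width")
ax.legend()
fig.savefig("regions.png")
```

The same idea works with any fitted model (e.g. scikit-learn's `predict` on the grid); on PySpark data, collecting just the two plotted columns to the driver first is the usual route.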
<python><matplotlib><seaborn><scatter-plot><orange>
2023-08-10 14:45:37
2
545
pkpto39
76,876,814
4,575,197
How to drop (delete) consecutive values in a Dataframe
<p>I have a dataframe with a column that has 0 Values. I wish to find those 0 values and check if till the end they are 0, drop only those at the end and not in the middle.</p> <p>this is how the Data in <code>secondary_df</code> looks like:</p> <pre><code> DSCD date year month RI RIu RIu1 RIe 203 1316 1/29/2010 2010.0 1.0 66.39 66.30 6.21 6.39 \ 275 1316 1/29/2016 2016.0 1.0 66.97 166.84 6.89 6.32 131 1316 1/30/2004 2004.0 1.0 66.01 66.15 6.36 6.60 191 1316 1/30/2009 2009.0 1.0 66.36 6.54 685.25 6.71 263 1316 1/30/2015 2015.0 1.0 66.43 6.94 114.14 6.33 .. ... ... ... ... ... ... ... ... 250 1316 12/31/2013 2013.0 12.0 99.98 5.24 59.91 5.07 262 1316 12/31/2014 2014.0 12.0 99.33 54.14 54.64 55.96 274 1316 12/31/2015 2015.0 12.0 55.32 5.89 15.19 54.34 310 1316 12/31/2018 2018.0 12.0 55.56 55.23 5.40 5.49 322 1316 12/31/2019 2019.0 12.0 55.39 55.98 5.69 5.88 RIu Pct Return RIe_Pct_Return 203 -0.05 0.0255 \ 275 -0.0358 -0.059 131 0.058 0.05106 191 0.0055 0.0535 263 -0.035 0.053 .. ... ... 250 0.01092 -0.05 262 -0.001 0.02572 274 -0.003 -0.0512 310 -0.000 -0.05274 322 0.004 0.039 </code></pre> <p>This is what I got so far.</p> <pre><code>for DSCD in FirmReturnIndexValues['DSCD'].unique(): secondary_df=FirmReturnIndexValues[FirmReturnIndexValues['DSCD']==DSCD] t=secondary_df[(secondary_df['RIe Pct Return'].values == 0)].index.values.tolist() t.sort() if len(t)&gt;=1: print(np.diff((t))) </code></pre> <p>for example this part is t:</p> <pre><code>[69438, 69439, 69440, 69441, 69442, 69443, 69444, 69445, 69446, 69447, 69448, 69449, 69450, 69451, 69452, 69453, 69454, 69455, 69456, 69457, 69458, 69459, 69460, 69461, 69462, 69463, 69464, 69465, 69466, 69468, 69548, 69570, 69571, 69572, 69573, 69574, 69575, 69576, 69577, 69578, 69579, 69580, 69581, 69582, 69583, 69584, 69585, 69586, 69587, 69588, 69589, 69590, 69591, 69592, 69593, 69594, 69595, 69596, 69597, 69598, 69599, 69600, 69601, 69602, 69603, 69604, 69605, 69606, 69607, 69608, 69609, 69610, 69611, 69612, 69613, 69614, 
69615, 69616, 69617, 69618, 69619, 69620, 69621, 69622, 69623, 69624, 69625, 69626, 69627, 69628, 69629, 69630, 69631, 69632, 69633, 69634, 69635, 69636, 69637, 69638, 69639, 69640, 69641, 69642, 69643, 69644, 69645, 69646, 69647, 69648, 69649, 69650, 69651, 69652, 69653, 69654, 69655, 69656, 69657, 69658] </code></pre> <p>this is the Indexes that I get from my code and when I use the <code>np.diff()</code> method I get this values and the values I wish to drop (delete) are being <strong>bolded</strong>:</p> <blockquote> <p>[ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 80 22 1 <em><strong>1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]</strong></em></p> </blockquote> <p>so I have 2 Questions.</p> <ol> <li>how can I delete the bolded one's?</li> <li>the first for loop contains 8000 DSCD's is there anyway it can be more efficient?</li> </ol> <p>Another Example: list t:</p> <pre><code>[74536, 74537, 74538, 74540, 74542, 74543, 74544, 74545, 74547, 74551, 74554, 74555, 74559, 74560, 74561, 74562, 74563, 74566, 74567, 74568, 74569, 74571, 74572, 74573, 74574, 74575, 74578, 74579, 74580, 74582, 74584, 74585, 74586, 74587, 74588, 74589, 74590, 74591, 74592, 74595, 74596, 74597, 74598, 74599, 74601, 74602, 74603, 74604, 74605, 74606, 74607, 74608, 74609, 74610, 74612, 74613, 74614, 74615, 74616, 74617, 74618, 74619, 74620, 74621, 74622, 74623, 74624, 74625, 74626, 74627, 74628, 74629, 74630, 74631, 74632, 74633, 74634, 74635, 74636, 74637, 74638, 74639, 74640, 74641, 74642, 74643, 74644, 74645, 74646, 74647, 74648, 74649, 74650, 74651, 74652, 74653, 74654, 74655, 74656, 74657, 74658, 74659, 74660, 74661, 74662, 74663, 74664, 74665, 74666, 74667, 74668, 74669, 74670, 74671, 74672, 74673, 74674, 74675, 74676, 74677, 74678, 74679, 74680, 74681, 74682, 74683, 74684, 74685, 74686, 74687, 74688, 74689, 74690, 74691, 74692, 74693, 74694, 
74695, 74696, 74697, 74698, 74699, 74700, 74701, 74702, 74703, 74704, 74705, 74706, 74707, 74708, 74709, 74710, 74711, 74712, 74713, 74714, 74715, 74716, 74717, 74718, 74719, 74720, 74721, 74722, 74723, 74724, 74725, 74726, 74727, 74728, 74729, 74730, 74731, 74732, 74733, 74734, 74735, 74736, 74737, 74738, 74739, 74740, 74741, 74742, 74743, 74744, 74745, 74746, 74747, 74748, 74749, 74750, 74751, 74752, 74753, 74754, 74755, 74756, 74757, 74758, 74759, 74760, 74761, 74762, 74763, 74764, 74765, 74766, 74767, 74768, 74769, 74770, 74771, 74772, 74773, 74774, 74775, 74776, 74777, 74778, 74779, 74780, 74781, 74782, 74783, 74784, 74785, 74786, 74787, 74788, 74789, 74790, 74791, 74792, 74793, 74794, 74795, 74796, 74797, 74798, 74799, 74800, 74801, 74802, 74803, 74804, 74805, 74806, 74807, 74808, 74809, 74810, 74811, 74812, 74813, 74814, 74815, 74816, 74817, 74818, 74819, 74820, 74821, 74822, 74823, 74824, 74825, 74826, 74827, 74828, 74829, 74830, 74831, 74832, 74833, 74834, 74835, 74836, 74837, 74838, 74839, 74840, 74841, 74842] </code></pre> <p>result of <code>np.diff():</code></p> <blockquote> <p>[1 1 2 2 1 1 1 2 4 3 1 4 1 1 1 1 3 1 1 1 2 1 1 1 1 3 1 1 2 2 1 1 1 1 1 1 1 1 3 1 1 1 1 2 1 1 1 1 1 1 1 1 1 2 1 <strong>1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1</strong>]</p> </blockquote> <p>Hard one:</p> <pre><code>[6359, 6431, 6287, 6347, 6419, 6263, 6275, 6299, 6311, 6323, 6335, 6371, 6383, 6395, 6407, 6443, 6455, 6467, 6360, 6288, 6348, 6420, 6252, 6264, 6276, 6300, 6312, 6324, 6372, 6396, 6408, 6444, 6456, 6468, 6336, 6384, 6432, 6265, 6397, 6469, 6253, 6325, 6385, 
6457, 6277, 6289, 6301, 6313, 6337, 6349, 6361, 6373, 6409, 6421, 6433, 6445, 6314, 6446, 6302, 6374, 6434, 6254, 6266, 6278, 6290, 6326, 6338, 6350, 6362, 6386, 6398, 6410, 6422, 6458, 6470, 6423, 6279, 6339, 6411, 6255, 6267, 6291, 6303, 6315, 6327, 6363, 6375, 6387, 6399, 6435, 6447, 6459, 6471, 6268, 6400, 6472, 6256, 6328, 6388, 6460, 6280, 6292, 6304, 6316, 6340, 6351, 6364, 6376, 6412, 6424, 6436, 6448, 6305, 6377, 6437, 6293, 6365, 6257, 6269, 6281, 6317, 6329, 6341, 6389, 6401, 6413, 6425, 6449, 6461, 6473, 6282, 6342, 6414, 6270, 6402, 6474, 6258, 6294, 6306, 6318, 6330, 6353, 6366, 6378, 6390, 6426, 6438, 6450, 6462, 6259, 6331, 6391, 6463, 6319, 6451, 6271, 6283, 6295, 6307, 6343, 6355, 6367, 6379, 6403, 6415, 6427, 6439, 6475, 6296, 6368, 6356, 6428, 6260, 6272, 6284, 6308, 6320, 6332, 6344, 6380, 6392, 6404, 6416, 6440, 6452, 6464, 6476, 6285, 6345, 6417, 6273, 6405, 6477, 6261, 6297, 6309, 6321, 6333, 6357, 6369, 6381, 6393, 6429, 6441, 6453, 6465, 6322, 6454, 6310, 6382, 6442, 6262, 6274, 6286, 6298, 6334, 6346, 6358, 6370, 6394, 6406, 6418, 6430, 6466, 6478] </code></pre> <p>after sorting the previous list you'll get:</p> <blockquote> <p>1 <strong>1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1</strong>]</p> </blockquote> <p>as the same logic, the bold ones need to be removed</p>
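For the first question, one approach (a sketch on a toy column, not the asker's full frame) is to locate the last non-zero row and slice up to it, which drops only the trailing zeros and keeps the interior ones:

```python
import pandas as pd

def drop_trailing_zeros(df, col):
    """Drop rows at the *end* of df where col == 0, keeping interior zeros."""
    nonzero = df[col].ne(0)
    if not nonzero.any():
        return df.iloc[0:0]                # column is all zeros
    last_nonzero = nonzero[::-1].idxmax()  # label of the last non-zero row
    return df.loc[:last_nonzero]

demo = pd.DataFrame({"RIe Pct Return": [0, 0.5, 0, 0.2, 0, 0, 0]})
print(drop_trailing_zeros(demo, "RIe Pct Return"))
```

For the second question, `FirmReturnIndexValues.groupby('DSCD')` iterates each group once instead of re-filtering the whole frame for each of the 8000 DSCDs.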
<python><list><numpy><performance><diff>
2023-08-10 14:41:35
2
10,490
Mostafa Bouzari
76,876,744
9,064,356
How to use the argparse library to parse a given string instead of sys.argv?
<p>I have tried to run the code below:</p> <pre><code>import argparse

parser = argparse.ArgumentParser()
parser.add_argument(&quot;--target&quot;, required=True)
parsed_args, _ = parser.parse_known_args(args = [&quot;--target foobar&quot;])
print(parsed_args.target)
</code></pre> <p>but I get an error saying that <code>following arguments are required: --target</code>. I haven't passed any arguments when running the Python script; I just want to pass a string to the parser at runtime, but it still seems to expect arguments when running the file.</p>
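For reference, `parse_known_args` expects one list element per shell token, so the single string `"--target foobar"` is treated as one unrecognized token. A small sketch of both working forms:

```python
import argparse
import shlex

parser = argparse.ArgumentParser()
parser.add_argument("--target", required=True)

# One list element per token:
ns, extras = parser.parse_known_args(["--target", "foobar"])
print(ns.target)

# Or tokenize an arbitrary string the way a shell would:
ns2, _ = parser.parse_known_args(shlex.split("--target foobar"))
print(ns2.target)
```

`shlex.split` also handles quoting, so `--target "foo bar"` splits correctly.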
<python><string><overriding><argparse>
2023-08-10 14:32:51
1
961
hdw3
76,876,719
4,564,097
In VSCode, how to get on-hover documentation shown for class properties documented in the class docstring?
<p>The code base I'm working on has a convention for how to document class attributes.</p> <p>Here's how it looks:</p> <pre class="lang-py prettyprint-override"><code>@dataclass
class SessionMetadata:
    &quot;&quot;&quot;Metadata of a Session

    Attributes:
        created_at: Date of creation, in POSIX timestamp format.
        usage_count: The number of times the session was applied to the payload.
    &quot;&quot;&quot;

    created_at: int
    usage_count: int
</code></pre> <p>However, I do not get shown my documentation when hovering over the property:</p> <p><a href="https://i.sstatic.net/cEwGJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cEwGJ.png" alt="documentation not showing for the usage_count attribute" /></a></p> <p>In the past, I was using another notation:</p> <pre class="lang-py prettyprint-override"><code>@dataclass
class SessionMetadata:
    created_at: int
    &quot;&quot;&quot; Date of creation, in POSIX timestamp format. &quot;&quot;&quot;

    usage_count: int
    &quot;&quot;&quot; The number of times the session was applied to the payload. &quot;&quot;&quot;
</code></pre> <p>And I did get my documentation on hover:</p> <p><a href="https://i.sstatic.net/xrg1d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xrg1d.png" alt="documentation showing for the usage_count attribute" /></a></p> <hr /> <p>I tried updating Pylance. It didn't help.</p> <p>I tried looking into GitHub issues, but I couldn't really find anything that mentions the format I'm using.</p> <p>Does anyone have a similar docstring convention that shows documentation on hover in VSCode?</p>
<python><visual-studio-code><docstring><pylance>
2023-08-10 14:30:31
0
437
Camille LouΓ©doc
76,876,637
9,318,323
Fast way of calculating number of consecutive nan values in a column
<p>I want to transform my dataframe so that the new DataFrame is of the same shape where each entry represents the number of consecutive NaNs counted after its position as follows:</p> <p>IN:</p> <pre><code>         A      B
0   0.1880  0.345
1   0.2510  0.585
2      NaN    NaN
3      NaN    NaN
4      NaN  1.150
5   0.2300  1.210
6   0.1670  1.290
7   0.0835  1.400
8   0.0418    NaN
9   0.0209    NaN
10     NaN    NaN
11     NaN    NaN
12     NaN    NaN
</code></pre> <p>OUT:</p> <pre><code>    A  B
0   0  0
1   0  0
2   3  2
3   2  1
4   1  0
5   0  0
6   0  0
7   0  0
8   0  5
9   0  4
10  3  3
11  2  2
12  1  1
</code></pre> <p>Similar question that I was trying to modify - <a href="https://stackoverflow.com/questions/43517953/fast-way-to-get-the-number-of-nans-in-a-column-counted-from-the-last-valid-value">Fast way to get the number of NaNs in a column counted from the last valid value in a DataFrame</a></p>
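One vectorized sketch (my own approach, not taken from the linked answer): label each contiguous NaN run, then number each run's members counting down from the run length:

```python
import numpy as np
import pandas as pd

def nan_countdown(s):
    """For each NaN, how many consecutive NaNs remain from here on (inclusive); 0 elsewhere."""
    isna = s.isna()
    run_id = isna.ne(isna.shift()).cumsum()                       # label contiguous runs
    countdown = isna.groupby(run_id).cumcount(ascending=False) + 1  # n, n-1, ..., 1 per run
    return countdown.where(isna, 0)

df = pd.DataFrame({
    "A": [0.1880, 0.2510, np.nan, np.nan, np.nan, 0.2300, 0.1670,
          0.0835, 0.0418, 0.0209, np.nan, np.nan, np.nan],
    "B": [0.345, 0.585, np.nan, np.nan, 1.150, 1.210, 1.290,
          1.400, np.nan, np.nan, np.nan, np.nan, np.nan],
})
out = df.apply(nan_countdown)
print(out)
```

Since every step is a pandas groupby primitive, this avoids Python-level loops over the rows.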
<python><pandas><dataframe>
2023-08-10 14:22:14
3
354
Vitamin C
76,876,531
4,251,301
How to read data from Movesense directly without android adb and whiteboard cmd?
<p>I have been working with the Movesense sensor since last week. The documentation, however, is somehow a little dirty and it took a couple of hours to get started with the sensor to read the data. Now, I am trying to get the data from the sensor with the <a href="https://movesense.com/docs/esw/api_reference/" rel="nofollow noreferrer">API</a>. As per the documentation, there are two ways:</p> <ol> <li>using the <strong>whiteboard</strong> (with <code>wbcmd</code> which is <a href="https://movesense.com/docs/esw/tools/#wbcmd" rel="nofollow noreferrer"><strong>only be used with the programming jig</strong></a>).</li> <li>using the <strong>android</strong> (with <code>adb</code> but <a href="https://movesense.com/docs/esw/api_reference/#exchanging-data-with-the-sensor" rel="nofollow noreferrer"><strong>without programming jig</strong></a>)</li> </ol> <p>In both cases, there is an intermediate device (either a whiteboard or an android phone) between the sensor and the data collector e.g., the development machine. However, I am getting the sensor data from the sensor through <code>adb logcat</code> (where I need to install the <a href="https://bitbucket.org/movesense/movesense-mobile-lib/downloads/" rel="nofollow noreferrer"><code>sampleapp-debug-1-8.apk</code></a> into an android phone) which is (seems to me) not a convenient approach. 
Now, I am trying to develop an application/service that will collect the sensor data directly from the sensor over the BLE without any intermediate device (<strong>whiteboard</strong> / <strong>android</strong>).</p> <p><strong>Is there any way to read the data from movesense without whiteboard / android?</strong></p> <blockquote> <p><strong>Update 1:</strong></p> </blockquote> <p>When I run <a href="https://bitbucket.org/movesense/movesense-raspi-python3" rel="nofollow noreferrer">movesense-raspi-python3</a>, I am getting the following packet:</p> <pre><code>b'\x04&gt;)\x02\x01\x00\x00\xe7\xae6\xdc\x8c\x0c\x1d\x02\x01\x04\x11\x07A\x00tpc\x88z\xb5\xccI1\x82\x9005a\x07\xff\x9f\x00\x02\x04\x12\x01\xa3' </code></pre> <p><strong>What does it mean by the above packet? What should I do to get ECG packet?</strong></p> <blockquote> <p><strong>Update 2:</strong></p> </blockquote> <p>Petri presented in his <a href="https://www.movesense.com/wp-content/uploads/2021/04/2021-04-21-Movesense-sensor-programming.pdf" rel="nofollow noreferrer">presentation</a> (on page 17 in the <code>Alternative Communication Methods</code> section), it is possible to talk to the sensor and collect data directly without using Android and iOS. Then I run both the <code>python_client</code> and <code>web_client</code> from <a href="https://bitbucket.org/movesense/movesense-device-lib/src/master/samples/gatt_sensordata_app" rel="nofollow noreferrer">gatt_sensordata_app</a> but both are giving following errors:</p> <pre><code>// web_client error: BLE error: NotFoundError: No Services matching UUID 34802252-7185-4d5d-b431-630e7050e8f0 found in Device. select_sensor error: NotFoundError: No Services matching UUID 34802252-7185-4d5d-b431-630e7050e8f0 found in Device. // python_client error: bleak.exc.BleakError: Characteristic 34800002-7185-4d5d-b431-630e7050e8f0 not found! </code></pre> <p>I appreciate it if somebody helps me out in this matter. Thanks in advance.</p>
<python><android><movesense>
2023-08-10 14:10:16
0
852
jislam
76,876,526
6,068,731
Python - turn a string into a dictionary where keys are subheadings and values are links
<p>In the middle of some text I have the following.</p> <pre><code>Some random text before.
----CAPITAL WORDS:
first subheading
https://link1
https://link2
second subheading
https://link3
third subheading
https://link4
https://link5
https://link6
https://link7
----MORE CAPITAL WORDS:
Some random text after.
</code></pre> <p>I would like to extract the string between <code>----CAPITAL WORDS:</code> and <code>----MORE CAPITAL WORDS</code> and store it in a dictionary as follows</p> <pre><code>{
    'first subheading': [&quot;https://link1&quot;, &quot;https://link2&quot;],
    'second subheading': [&quot;https://link3&quot;],
    'third subheading': [&quot;https://link4&quot;, &quot;https://link5&quot;, &quot;https://link6&quot;, &quot;https://link7&quot;]
}
</code></pre> <h1>Attempt</h1> <pre><code>pattern = r&quot;CAPITAL WORDS:(.*?)(?:\n----MORE CAPITAL WORDS:|$)&quot;
matches = re.search(pattern, descriptions[0], re.DOTALL)
lines = matches.group(1).strip().split(&quot;\n&quot;)

link_dict = {}
for line in lines:
    if line:
        pass  # unsure how to continue
</code></pre>
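One way to continue the attempt (a sketch using the sample text inline rather than `descriptions[0]`): treat every non-URL line as the current subheading and append URLs to it:

```python
import re

text = """Some random text before.
----CAPITAL WORDS:
first subheading
https://link1
https://link2
second subheading
https://link3
third subheading
https://link4
https://link5
https://link6
https://link7
----MORE CAPITAL WORDS:
Some random text after."""

pattern = r"----CAPITAL WORDS:(.*?)(?:\n----MORE CAPITAL WORDS:|$)"
m = re.search(pattern, text, re.DOTALL)

link_dict = {}
current = None
for line in m.group(1).strip().splitlines():
    line = line.strip()
    if not line:
        continue
    if line.startswith("http"):
        link_dict[current].append(line)   # a URL belongs to the last subheading seen
    else:
        current = line                    # a non-URL line starts a new subheading
        link_dict[current] = []
print(link_dict)
```

The regex part stays exactly as in the attempt; only the line-by-line classification is new.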
<python><python-3.x><regex><string>
2023-08-10 14:09:41
3
728
Physics_Student
76,876,484
569,976
measuring image similarity with opencv
<p>At my company we print out documents, make changes to those documents, and scan them back in. Sometimes the scans are subtly rotated and I use OpenCV to align the two images. Here's the code I use to do this:</p> <pre><code>import sys import cv2 import numpy as np if len(sys.argv) != 4: print('USAGE') print(' python3 align.py ref.png new.png output.png') sys.exit() FLANN_INDEX_LSH = 6 def filter_matches(kp1, kp2, matches, ratio = 0.75): mkp1, mkp2 = [], [] for m in matches: if len(m) == 2 and m[0].distance &lt; m[1].distance * ratio: m = m[0] mkp1.append( kp1[m.queryIdx] ) mkp2.append( kp2[m.trainIdx] ) p1 = np.float32([kp.pt for kp in mkp1]) p2 = np.float32([kp.pt for kp in mkp2]) kp_pairs = zip(mkp1, mkp2) return p1, p2, list(kp_pairs) def alignImages(im1, im2): detector = cv2.AKAZE_create() flann_params= dict(algorithm = FLANN_INDEX_LSH, table_number = 6, # 12 key_size = 12, # 20 multi_probe_level = 1) #2 matcher = cv2.FlannBasedMatcher(flann_params, {}) kp1, desc1 = detector.detectAndCompute(im1, None) kp2, desc2 = detector.detectAndCompute(im2, None) raw_matches = matcher.knnMatch(desc1, trainDescriptors = desc2, k = 2) p1, p2, kp_pairs = filter_matches(kp1, kp2, raw_matches) if len(p1) &lt; 4: print('%d matches found, not enough for homography estimation' % len(p1)) sys.exit() H, matches = cv2.findHomography(p1, p2, cv2.RANSAC, 5.0) # the larger the number the better print(str(len(matches.ravel().tolist()))); height, width = im2.shape imResult = cv2.warpPerspective(im1, H, (width, height)) return imResult refFilename = sys.argv[1] imFilename = sys.argv[2] outFilename = sys.argv[3] imRef = cv2.imread(refFilename, cv2.IMREAD_GRAYSCALE) im = cv2.imread(imFilename, cv2.IMREAD_GRAYSCALE) imNew = alignImages(im, imRef) cv2.imwrite(outFilename, imNew) </code></pre> <p>The problem is that sometimes the documents are a complete mismatch. 
eg.</p> <p><a href="https://i.sstatic.net/dMnjn.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dMnjn.jpg" alt="enter image description here" /></a></p> <p>When I try to run the above script two completely mismatched pages I get something like this:</p> <p><a href="https://i.sstatic.net/w1DuA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w1DuA.jpg" alt="enter image description here" /></a></p> <p>My question is... is there a way I can detect these completely mismatched pages? Is there a way to score two different pages by similarity? I had that <code>print(str(len(matches.ravel().tolist())))</code> might do the trick (it's in the code) but sometimes two mismatched pages can have more matches then correctly matched pages, in my experience. If memory serves, one of the image alignment / object recognition methods kinda scores each match so maybe if I could do that with AKAZE I could score the pages by the ratio of number matches with a confidence &gt; 20% vs the number of matches with a confidence of &lt; 20%? But if that were a viable strategy how would I do that with AKAZE?</p>
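One hedged scoring signal: the second value returned by `cv2.findHomography` (named `matches` in the code above) is a RANSAC inlier mask, so the *fraction* of matches that survive RANSAC tends to be more robust than the raw match count. A numpy-only sketch:

```python
import numpy as np

def inlier_ratio(mask):
    """mask: the N x 1 uint8 array returned as cv2.findHomography's 2nd value."""
    m = np.asarray(mask).ravel()
    return float(m.sum()) / len(m) if len(m) else 0.0

good = np.array([[1], [1], [1], [0]], dtype=np.uint8)   # mostly consistent matches
bad = np.array([[1], [0], [0], [0]], dtype=np.uint8)    # mostly RANSAC outliers
print(inlier_ratio(good), inlier_ratio(bad))
```

A mismatched page pair tends to produce a low ratio even when it has many raw matches; the cutoff (say 0.2-0.4) is a tunable guess, not a universal constant.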
<python><numpy><opencv>
2023-08-10 14:03:48
0
16,931
neubert
76,876,281
6,068,731
Python - grab video description from every video in a Youtube Playlist
<p>There is a playlist on YouTube with almost 1000 videos. Using Python, I would like to go through each video and extract the video description (the bunch of text where a YouTuber would usually put links and resources) from all videos uploaded after a certain date. I have tried <code>pytube</code> with little success.</p> <h3>Minimal Working Example</h3> <pre class="lang-py prettyprint-override"><code>from pytube import Playlist
from datetime import datetime

# Grab playlist
BBC = Playlist(&quot;https://www.youtube.com/playlist?list=PLG8IrydigQfcRNrWVqNkeZiCJ_DWgXDVX&quot;)
start_date = datetime(year=2023, month=4, day=1)

# Generator pipeline
filter_date = (video for video in BBC.videos if video.upload_date &gt;= start_date)
grab_text = (video.vid_info['videoDetails']['shortDescription'] for video in filter_date)

# Run generators and grab descriptions
descriptions = list(grab_text)
</code></pre> <p>However I get a <code>too many requests</code> error. Is there any way to make this work? I am happy to do this in batches as well. I have tried doing it in batches with generators and lists alike and it did not work. I'm happy with a scraper solution too.</p>
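The `too many requests` error suggests throttling rather than a pytube bug, so pacing the per-video calls usually helps. A generic retry-with-backoff helper (a sketch, not pytube-specific) that can wrap each description fetch:

```python
import time

def with_retries(fetch, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call fetch(), backing off exponentially when it raises (e.g. on HTTP 429)."""
    for i in range(attempts):
        try:
            return fetch()
        except Exception:
            if i == attempts - 1:
                raise                        # out of retries
            sleep(base_delay * 2 ** i)       # wait 1s, 2s, 4s, ...

# Hypothetical usage with the question's pipeline:
# descriptions = [with_retries(lambda v=v: v.vid_info['videoDetails']['shortDescription'])
#                 for v in filter_date]
```

Adding a small unconditional `time.sleep` between videos on top of this lowers the chance of hitting the limit at all.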
<python><video><youtube><youtube-api><pytube>
2023-08-10 13:39:28
0
728
Physics_Student
76,876,280
12,304,000
split tabs string into different columns
<p>In my foundry environment, I have a pyspark dataset with only one column called &quot;data&quot;.</p> <p>Each row has a string that looks like a TSV. Each row has a value like this:</p> <pre><code>ott-akamai-logs-processor srv 2023-07-29 17:46:50.134 2023-07-29 17:46:49.358 unstruct 103b9271-777 ott node-3.13.1 ssc-2.8.2-kinesis snowplow-enrich-kinesis-3.7.0 3.65.234.x 12345679 DE HE Karachi 60313 50.1188 8.6843 Malta {&quot;schema&quot;:&quot;iglu:com.xxx/1-0-0&quot;,&quot;data&quot;:{&quot;schema&quot;:&quot;xxx/hls_manifest_requested/jsonschema/1-0-1&quot;,&quot;data&quot;:{&quot;channel&quot;:&quot;bildtv-broadcast&quot;,&quot;session_id&quot;:&quot;xxx&quot;,&quot;request_id&quot;:&quot;xxx&quot;,&quot;total_bytes&quot;:351,&quot;referrer&quot;:&quot;^&quot;,&quot;geo_country&quot;:&quot;DE&quot;,&quot;geo_state&quot;:&quot;Berlin&quot;,&quot;geo_city&quot;:&quot;-&quot;,&quot;variant_name&quot;:&quot;6.m3u8&quot;}}} snowplow-nodejs-tracker/3.13.1 Europe/Berlin 2023-07-29 17:46:49.281 {&quot;schema&quot;:&quot;xxx/contexts/jsonschema/1-0-1&quot;,&quot;data&quot;:[{&quot;schema&quot;:&quot;iglu:nl.basjes/yauaa_context/jsonschema/1-0-4&quot;,&quot;data&quot;:{&quot;deviceBrand&quot;:&quot;Unknown&quot;,&quot;deviceName&quot;:&quot;Unknown&quot;,&quot;operatingSystemVersionMajor&quot;:&quot;??&quot;,&quot;layoutEngineNameVersion&quot;:&quot;Unknown ??&quot;,&quot;operatingSystemNameVersion&quot;:&quot;Unknown ??&quot;,&quot;agentInformationEmail&quot;:&quot;Unknown&quot;,&quot;networkType&quot;:&quot;Unknown&quot;,&quot;webviewAppNameVersionMajor&quot;:&quot;Unknown ??&quot;,&quot;layoutEngineNameVersionMajor&quot;:&quot;Unknown ??&quot;,&quot;operatingSystemName&quot;:&quot;Unknown&quot;,&quot;agentVersionMajor&quot;:&quot;3&quot;,&quot;layoutEngineVersionMajor&quot;:&quot;??&quot;,&quot;webviewAppName&quot;:&quot;Unknown&quot;,&quot;deviceClass&quot;:&quot;Unknown&quot;,&quot;agentNameVersionMajor&quot;:&quot;Snowplow-Nodejs-Tracker 
3&quot;,&quot;operatingSystemNameVersionMajor&quot;:&quot;Unknown ??&quot;,&quot;webviewAppVersionMajor&quot;:&quot;??&quot;,&quot;operatingSystemClass&quot;:&quot;Unknown&quot;,&quot;webviewAppVersion&quot;:&quot;??&quot;,&quot;layoutEngineName&quot;:&quot;Unknown&quot;,&quot;agentName&quot;:&quot;Snowplow-Nodejs-Tracker&quot;,&quot;agentVersion&quot;:&quot;3.13.1&quot;,&quot;layoutEngineClass&quot;:&quot;Unknown&quot;,&quot;agentNameVersion&quot;:&quot;Snowplow-Nodejs-Tracker 3.13.1&quot;,&quot;operatingSystemVersion&quot;:&quot;??&quot;,&quot;agentClass&quot;:&quot;Special&quot;,&quot;layoutEngineVersion&quot;:&quot;??&quot;,&quot;agentInformationUrl&quot;:&quot;Unknown&quot;}},{&quot;schema&quot;:&quot;iglu:com.snowplowanalytics.snowplow/ua_parser_context/jsonschema/1-0-0&quot;,&quot;data&quot;:{&quot;useragentFamily&quot;:&quot;Other&quot;,&quot;useragentMajor&quot;:null,&quot;useragentMinor&quot;:null,&quot;useragentPatch&quot;:null,&quot;useragentVersion&quot;:&quot;Other&quot;,&quot;osFamily&quot;:&quot;Other&quot;,&quot;osMajor&quot;:null,&quot;osMinor&quot;:null,&quot;osPatch&quot;:null,&quot;osPatchMinor&quot;:null,&quot;osVersion&quot;:&quot;Other&quot;,&quot;deviceFamily&quot;:&quot;Other&quot;}}]} 2023-07-29 17:46:09.938 com.axelspringer.ott hls_manifest_requested jsonschema 1-0-1 2023-07-29 17:46:09.938 </code></pre> <p>Here, things are separated by tabs. For each tab separation, I want to put the values into different columns. How can I do so?</p> <pre><code>def unnamed_1(my_df): df = my_df return df </code></pre>
<python><pyspark><split><palantir-foundry><foundry-code-repositories>
2023-08-10 13:39:18
2
3,522
x89
76,876,146
5,623,899
Python 3.10+: deconstructing an over-engineered solution to better understand how metaclasses work with properties, static methods, and classmethods
<h1>TL;DR</h1> <p>This question examines an over-engineered example of python metaclasses and dataclasses to create a <code>LiteralEnum</code> (for validating a <em>string</em>ly-typed keyword argument) like <code>LinkageMethod</code> and a <code>KeywordsArgumentBaseClass</code> for making wrappers around SciPy methods like <code>SciPyLinkage</code>. The author would like to know how to best distinguish when something should be a <code>property</code>, <code>staticmethod</code>, <code>classmethod</code>, or <code>instance</code> method.</p> <p>As to why someone would do this?</p> <ul> <li>to override default keyword arguments of scipy methods</li> <li>to expose keyword arguments that might be hidden under <code>**kwargs</code> and which get passed to another method for better developer experience.</li> <li>to modify default behavior of scipy methods e.g. add some optional preprocessing / post processing and be able to distinguish which parameters belong to the method and which to the custom handling.</li> </ul> <h1>Disclaimer</h1> <p>Given the above explanation there is a lot of code and the M*.W.E. is not so minimal (as complexity is one of the key reasons to avoid metaclass usage especially in python which favors simplicity and readability)</p> <h1>Question(s)</h1> <h2>Newbie question</h2> <p>I am new to using metaclasses. Are the <code>LiteralEnum</code> classes at least &quot;pythonic&quot;?</p> <h2><code>staticmethod</code> vs <code>classmethod</code> vs <code>property</code> at the metaclass / class level?</h2> <p>The <code>KeywordArgumentsMeta</code> and <code>KeywordArgumentsMixin</code> classes setup some useful attributes for retrieving a dictionary of keyword arguments. 
With <code>KeywordArgumentsBaseClass</code> combining the <code>KeywordArgumentsMixin</code> and <code>ClassMethodSignatureMixin</code>.</p> <p>This is where I am conflicted:</p> <pre><code>@dataclass class BaseExample(KeywordArgumentsBaseClass): _: KW_ONLY strvar: str = 'default' intvar: int = 2 @dataclass class ChildExample(BaseExample): _: KW_ONLY thirdvar: str = 'three' fourth: int = 4 ChildExample.keywords &gt; ['thirdvar', 'fourth'] ChildExample.ikeywords &gt; ['strvar', 'intvar'] ChildExample.akeywords &gt; ['strvar', 'intvar', 'thirdvar', 'fourth'] ChildExample.defaults &gt; {'thirdvar': 'three', 'fourth': 4} ... ChildExample().kwargs &gt; {'thirdvar': 'three', 'fourth': 4} ... ChildExample().params(**{'thirdvar': 'new', 'banana': 3}) &gt; {'thirdvar': 'new', 'fourth': 4} </code></pre> <p>I am conflicted because I want to make a wrapper for SciPy Methods</p> <pre><code> @dataclass class SciPyMethod(KeywordArgumentsBaseClass): _: KW_ONLY @classmethod def get_method(cls): raise NotImplementedError @classmethod def call_scipy(cls, **kws): inst = cls() method = cls.get_method() params = inst.prepare_params(func = method, scope = locals(), **kws) result = method(**cls.kwargs) raise NotImplementedError def call_scipy(self, **kwargs): cls = type(self) method = type(self).get_method() params = self.prepare_params(func = method, scope = locals(), **kwargs) print(params) raise NotImplementedError result = method(**cls.kwargs) return result def __call__(self, x: NPArray, **kwargs) -&gt; NPArray: method = self.get_method() </code></pre> <p>but I need both classmethods and instance methods for this to work.</p> <p>Since there are classmethods for getting default params, instance methods for getting current params, and the <code>prepare_params</code> methods for getting params for a function signature how can I make <code>call_scipy</code> work with both as classmethod and instance method?</p> <p>How could this be simplified / make more pythonic?</p> <h3>Usefulness of 
<code>ClassMethodSignatureMixin</code></h3> <p>While <code>ClassMethodSignaturePriority</code> seems useful at first glance, I am not actually sure if it is useful at all. Consider:</p> <pre class="lang-py prettyprint-override"><code>class Example(ClassMethodSignatureMixin): _: KW_ONLY test_var: str = 'default' def foo(self, test_var: Optional[str] = None, **kwargs): params = self.prepare_params(func=self.foo, scope=locals(), **kwargs) print(params) return params </code></pre> <p>The <code>prepare_params</code> method, without knowing the function signature, can handle explicitly named keywords in the func, which might be defined or passed in via **kwargs.</p> <p>However, test_var must either be defined in the class, passed in as a (positional) keyword argument, or passed in via **kwargs. Python will naturally prevent <code>Example().foo(test_var='fine', **{'test_var': 'causes error'})</code>.</p> <p>The <code>prepare_params</code> method, on the other hand, is useful as it filters keyword arguments for the function signature only, using the local scope, which helps make sure that in the case of the <code>foo</code> method, the value of <code>test_var</code> gets put into params.</p> <p>Or, to restate more cleanly: 
Given a function with an unknown number of keyword arguments (like <code>test_var</code> in <code>foo</code>), <code>prepare_params</code> uses <code>locals()</code> and <code>**kwargs</code> to make sure there is a single dictionary to check for the values of the keyword arguments.</p> <h1>Code</h1> <h2>Imports</h2> <pre class="lang-py prettyprint-override"><code>import os, inspect import numpy as np, pandas as pd, scipy as sp from dataclasses import dataclass, KW_ONLY from enum import Enum, StrEnum, EnumMeta, auto from typing import Optional, Callable, List, Tuple, Any, Dict, Union, Literal </code></pre> <h2>LiteralEnum</h2> <h3>MetaClass</h3> <pre class="lang-py prettyprint-override"><code>class LiteralEnumMeta(EnumMeta): '''LiteralEnumMeta See Also: -------- - https://stackoverflow.com/questions/43730305/when-should-i-subclass-enummeta-instead-of-enum - https://peps.python.org/pep-3115/ - https://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/#class-attribute-lookup ''' @classmethod def __prepare__(metacls, name, bases, **kwargs): enum_dict = super().__prepare__(name, bases, **kwargs) #print('PREPARE: &lt;enum_dict&gt; = \t', enum_dict) # NOTE: this will through an error since we are using StrEnum # enum_dict['_default'] = None return enum_dict def __init__(cls, clsname, bases, clsdict, **kwargs): super().__init__(clsname, bases, clsdict, **kwargs) # print('INIT: &lt;clsdict&gt; = \t', clsname, clsdict) def __new__( metacls, cls, bases, clsdict, *, default: Optional[str] = None, elements: Optional[List[str]] = None ): # print('NEW: &lt;clsdict&gt; = \t', cls, clsdict) if elements is not None: for element in elements: clsdict[element.upper()] = auto() new_cls = super().__new__(metacls, cls, bases, clsdict) # NOTE: this will result in TypeError: cannot extend if default: setattr(new_cls, '_default', default) return new_cls @property def members(cls): # NOTE: could also use cls._member_names_ return [member.name for member in cls] @property def 
values(cls): return [member.value for member in cls] @property def items(cls): return list(zip(cls.members, cls.values)) </code></pre> <h3>LiteralEnum</h3> <pre class="lang-py prettyprint-override"><code>class LiteralEnum(StrEnum, metaclass=LiteralEnumMeta): @classmethod def _missing_(cls, value): for member in cls: if member.value.lower() == value.lower(): return member default = getattr(cls, cls._default, None) return default </code></pre> <h3>Decorators</h3> <pre class="lang-py prettyprint-override"><code>def enum_default(default: str = ''): def wrapper(cls): cls._default = default return cls return wrapper def enum_set_attr(name: str = 'attr', attr: str = 'data'): def wrapper(cls): setattr(cls, f'_{name}', attr) return cls return wrapper def set_method(method): def decorator(cls): cls.method = method return cls return decorator </code></pre> <h2>SciPy LiteralEnum Examples</h2> <h3>Linkage</h3> <pre class="lang-py prettyprint-override"><code>@enum_default('SINGLE') class LinkageMethod(LiteralEnum): ''' See Also -------- scipy.cluster.hierarchy.linkage : Performs hierarchical/agglomerative clustering on the condensed distance matrix y. https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html ''' SINGLE = auto() COMPLETE = auto() AVERAGE = auto() WEIGHTED = auto() CENTROID = auto() MEDIAN = auto() WARD = auto() </code></pre> <h3>PDistMetric</h3> <pre class="lang-py prettyprint-override"><code>@enum_default('EUCLIDEAN') class PDistMetric(LiteralEnum): ''' See Also -------- scipy.spatial.distance.pdist : Compute the pairwise distances between observations in n-dimensional space. 
https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html#scipy.spatial.distance.pdist ''' BRAYCURTIS = auto() CANBERRA = auto() CHEBYSHEV = auto() CITYBLOCK = auto() CORRELATION = auto() COSINE = auto() DICE = auto() EUCLIDEAN = auto() HAMMING = auto() JACCARD = auto() JENSENSHANNON = auto() KULCZYNSKI1 = auto() MAHALANOBIS = auto() MATCHING = auto() MINKOWSKI = auto() ROGERSTANIMOTO = auto() RUSSELLRAO = auto() SEUCLIDEAN = auto() SOKALMICHENER = auto() SOKALSNEATH = auto() SQEUCLIDEAN = auto() YULE = auto() </code></pre> <h3>ScoreMethod</h3> <pre class="lang-py prettyprint-override"><code>@enum_default('ZSCORE') class ScoreMethod(LiteralEnum): ''' See Also -------- scipy.stats.zscore : Compute the z-score. scipy.stats.gzscore : Compute the geometric standard score. ''' ZSCORE = auto() GZSCORE = auto() </code></pre> <h2>ClassMethodSignaturePriority</h2> <pre class="lang-py prettyprint-override"><code>@enum_default('OBJ') @enum_set_attr('attr', 'data') class ClassMethodSignaturePriority(LiteralEnum): OBJ = auto() ARG = auto() KWS = auto() def get(self, obj: object, attr: Optional[str] = None, arg: Optional[Any] = None, **kws) -&gt; Union[NPArray, DataFrame, Any]: match self: # try and get `attr` from `obj` defaulting back to `arg` case ClassMethodSignaturePriority.OBJ: val = getattr(obj, attr, arg) if val is None: return ClassMethodSignaturePriority('ARG').get(obj, attr, arg, **kws) # use `arg` as is unless it is None, then try and get `attr` from `obj` case ClassMethodSignaturePriority.ARG: val = arg if val is None: return ClassMethodSignaturePriority('KWS').get(obj, attr, arg, **kws) # use `kws` assuming `attr` is in `kwargs` falling back to arg then try and get `attr` from `obj` case ClassMethodSignaturePriority.KWS: val = kws.get(attr, arg) case _: pass if val is None: val = getattr(obj, attr, arg) if isinstance(val, (list, np.ndarray, )): val = np.asanyarray(val) return val @classmethod def prioritize(cls, obj: object, attr: str, 
arg: Optional[Any] = None, priority: Literal['obj', 'arg', 'kws'] = 'obj', **kws) -&gt; Union[NPArray, DataFrame, Any]: return cls(priority).get(obj, attr, arg, **kws) @classmethod def _pobj(cls, obj: object, attr: str, arg: Optional[Any] = None, **kws) -&gt; Union[NPArray, DataFrame, Any]: return cls.prioritize(obj, attr, arg, 'obj', **kws) @classmethod def _pargs(cls, obj: object, attr: str, arg: Optional[Any] = None, **kws) -&gt; Union[NPArray, DataFrame, Any]: return cls.prioritize(obj, attr, arg, 'args', **kws) @classmethod def _pkws(cls, obj: object, attr: str, arg: Optional[Any] = None, **kws) -&gt; Union[NPArray, DataFrame, Any]: return cls.prioritize(obj, attr, arg, 'kws', **kws) </code></pre> <h3>Mixin</h3> <pre class="lang-py prettyprint-override"><code>@dataclass class ClassMethodSignatureMixin: def get_val(self, attr: str, arg: Optional[Any] = None, prioritize: Union[Literal['obj', 'arg', 'kws'], ClassMethodSignaturePriority] = 'arg', **kws): # by default we will prioritize `arg` over `self` as `arg` might overwrite `self`'s attribute # arg --(fallbacks to)--&gt; kws --(fallbacks to)--&gt; self priority = ClassMethodSignaturePriority(prioritize) return priority.get(self, attr=attr, arg=arg, **kws) def _prioritize_kws(self, attr: str, arg: Optional[Any] = None, **kws): return self.get_arg(attr, arg, prioritize='kws', **kws) def _prioritize_arg(self, attr: str, arg: Optional[Any] = None, **kws): return self.get_arg(attr, arg, prioritize='arg', **kws) def _prioritize_obj(self, attr: str, arg: Optional[Any] = None, **kws): return self.get_arg(attr, arg, prioritize='obj', **kws) def get_arg(self, attr: str, func: Callable, scope: Dict[str, Any]): args = inspect.getfullargspec(func).args if attr in args and attr in scope: return scope[attr] return None def get_tuple(self, attr: str,func: Callable, scope: Dict[str, Any], **kws) -&gt; Tuple[Any, Any, Any]: obj = getattr(self, attr, None) arg = self.get_arg(attr, func, scope) kwa = kws.get(attr, None) return 
obj, arg, kwa def update_params(self, **kws): params = self.aparams() for k, v in self.kwargs: v = self.get_val(attr=k, prioritize='kws', **kws) params[k] = v return params </code></pre> <h2>KeywordArguments</h2> <h3>KeywordArgumentsMeta</h3> <pre class="lang-py prettyprint-override"><code>class KeywordArgumentsMeta(type): @staticmethod def get_annots_kws(cls) -&gt; list: '''Get annotated keyword only argument names''' annots = list(cls.__annotations__.keys()) if '_' not in annots: return [] return annots[annots.index('_') + 1:] @staticmethod def get_cls_kws(cls) -&gt; list: ''' NOTES ----- - if using inheritance this will get all keyword only arguments ''' return inspect.getfullargspec(cls.__init__).kwonlyargs @staticmethod def attr_dict(obj: object, attrs: list) -&gt; dict: return dict((k, getattr(obj, k, None)) for k in attrs) @staticmethod def inst_dict(inst: object, attr: str = 'defaults'): attrs = getattr(type(inst), attr).items() return dict((k, getattr(inst, k, v)) for k, v in attrs) @property def keywords(cls) -&gt; list: '''Get current keyword only argument names''' return cls.get_annots_kws(cls) @property def ikeywords(cls) -&gt; list: '''Get inherited keyword only argument names''' ignore = cls.keywords result = list() is_new = lambda kw: kw not in result and kw not in ignore for c in inspect.getmro(cls): if c is not object: new_kws = cls.get_annots_kws(c) result.extend(list(filter(is_new, new_kws))) return result @property def akeywords(cls) -&gt; list: '''Get all keyword only argument names''' result = list() is_new = lambda kw: kw not in result for c in inspect.getmro(cls): if c is not object: new_kws = cls.get_annots_kws(c) result.extend(list(filter(is_new, new_kws))) return result @property def defaults(cls) -&gt; dict: '''Get default keyword arguments only values''' instance = cls() return cls.attr_dict(instance, cls.keywords) @property def idefaults(cls) -&gt; dict: '''Get inherited default keyword arguments only values''' instance = cls() return 
cls.attr_dict(instance, cls.ikeywords) @property def adefaults(cls) -&gt; dict: '''Get all default keyword arguments only values''' instance = cls() return cls.attr_dict(instance, cls.akeywords) </code></pre> <h3>KeywordArgumentsMixin</h3> <pre class="lang-py prettyprint-override"><code>@dataclass class KeywordArgumentsMixin(metaclass=KeywordArgumentsMeta): _: KW_ONLY @property def kwargs(self) -&gt; dict: '''Get instance specific default keyword arguments only values''' return type(self).inst_dict(self, attr='defaults') @property def ikwargs(self) -&gt; dict: '''Get instance inherited default keyword arguments only values''' return type(self).inst_dict(self, attr='idefaults') @property def akwargs(self) -&gt; dict: '''Get instance all default keyword arguments only values''' return type(self).inst_dict(self, attr='adefaults') def _merge_kws_to_dict(self, params: dict, **kwargs) -&gt; dict: '''Only overwrite values in params with kwargs if key is in params''' values = params.copy() values.update(dict((k, v) for k, v in kwargs.items() if k in values)) return values def params(self, **kwargs) -&gt; dict: '''Get instance default keyword arguments only values but update with kwargs''' return self._merge_kws_to_dict(self.kwargs, **kwargs) def iparams(self, **kwargs) -&gt; dict: '''Get instance inherited keyword arguments only values but update with kwargs''' return self._merge_kws_to_dict(self.ikwargs, **kwargs) def aparams(self, **kwargs) -&gt; dict: '''Get instance all default keyword arguments only values but update with kwargs''' return self._merge_kws_to_dict(self.akwargs, **kwargs) </code></pre> <h2>KeywordArgumentsBaseClass</h2> <pre class="lang-py prettyprint-override"><code>@dataclass class KeywordArgumentsBaseClass(KeywordArgumentsMixin, ClassMethodSignatureMixin): def prepare_params(self, func: Optional[Callable] = None, scope: Optional[Dict[str, Any]] = None, **kws) -&gt; dict: params = self.aparams() for k, v in self.akwargs.items(): arg = None if func and 
scope: arg = self.get_arg(attr=k, func=func, scope=scope) v = self.get_val(attr=k, arg=arg, prioritize='arg', **kws) params[k] = v return params </code></pre> <h2>SciPyLinkage</h2> <pre class="lang-py prettyprint-override"><code>#| export @dataclass class SciPyLinkage(KeywordArgumentsBaseClass): _: KW_ONLY method: LinkageMethod = LinkageMethod.SINGLE metric: PDistMetric = PDistMetric.CORRELATION optimal_ordering: bool = True def __post_init__(self): self.method = LinkageMethod(self.method) self.metric = PDistMetric(self.metric) def __call__(self, x: NPArray, **kwargs) -&gt; NPArray: l_func = sp.cluster.hierarchy.linkage params = self.prepare_params(func=l_func, scope=locals(), **kwargs) print('LINKAGE', params) # linkage = l_func(x, **params) # return linkage </code></pre>
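Regarding the "work with both as classmethod and instance method" part of the question: independent of the rest of the machinery above, one option is a small descriptor (the name `hybridmethod` is mine, not stdlib) that binds the function to the instance when one exists and falls back to the class otherwise. A minimal, self-contained sketch:

```python
import functools

class hybridmethod:
    """Descriptor: bind the wrapped function to the instance when accessed
    on an instance, and to the class when accessed on the class itself."""

    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        # `obj` is None when the attribute is accessed via the class itself.
        target = obj if obj is not None else objtype
        return functools.partial(self.func, target)

class Demo:
    x = "class-level"

    def __init__(self):
        self.x = "instance-level"

    @hybridmethod
    def which(self_or_cls):
        # Receives the class when called as Demo.which(),
        # the instance when called as Demo().which().
        return self_or_cls.x
```

A `call_scipy` could be decorated the same way: when handed the class it can fall back to the class-level `defaults`, and when handed an instance it can use that instance's `kwargs`.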
<python><python-3.x><scipy><metaclass><abc>
2023-08-10 13:22:31
0
5,218
SumNeuron
76,876,104
2,307,570
What is the console equivalent of clicking on a command in a markdown file in PyCharm?
<p>I have a <code>folder</code> with a <code>README.md</code> and a <code>run.py</code> that looks like this:</p> <pre class="lang-py prettyprint-override"><code>import os assert __name__ == '__main__' print('β– ', __file__) print('●', os.getcwd()) try: os.mkdir('DELETE_ME') except FileExistsError: pass </code></pre> <p>The readme contains the code line <code>python -m a001_misc.b006_cwd.folder.run</code>.<br> PyCharm shows a green triangle next to it.<br> When I click on it, the output tells me that <code>folder</code> is my CWD.</p> <p><a href="https://i.sstatic.net/6XeSj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6XeSj.png" alt="enter image description here" /></a></p> <p>This is the desired behavior. (Above all, <code>DELETE_ME</code> is created in <code>folder</code>.)<br> But I cannot find a one-line console command to reproduce this (i.e. without <code>cd</code>).</p> <p>I would like to know what actually happens when I do that click.<br> The closest equivalent I have found is to run <code>python -m run</code> in <code>folder</code>.<br> (Running the whole command in <code>folder</code> raises a <code>ModuleNotFoundError</code>.)</p> <p>The readme also contains the code line <code>python run.py</code>.<br> Normally it raises no questions. 
Clicking it does the same as running the command in <code>folder</code>.<br> But there is a small bug, and maybe it can help to answer the question.<br> I have renamed the parent of <code>folder</code> from <code>b006_mswitch_confusion</code> to <code>b006_cwd</code>.<br> But somehow the old name is still connected with this button in the readme.<br></p> <p><a href="https://i.sstatic.net/ZqHaY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZqHaY.png" alt="enter image description here" /></a></p> <p>Where is that old name still hidden?<br> (I have already deleted the <code>__pycache__</code> in <code>folder</code>.)</p> <p>The example code can also be found <a href="https://github.com/entenschule/examples_py/tree/main/a001_misc/b006_cwd/folder" rel="nofollow noreferrer">here</a>.<br> (The readme file contains the outputs for different ways to run the script.)</p>
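My best guess (an assumption from the observed output, not documented PyCharm internals) is that the click simply launches the interpreter with the working directory set to the markdown file's folder, so the console equivalent is a working-directory choice rather than a flag. That can be reproduced in one call from anywhere, without changing the shell's own CWD, via `subprocess` (the layout below is a hypothetical stand-in for the question's `folder/run.py`):

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for the question's folder/run.py layout.
folder = os.path.join(tempfile.mkdtemp(), "folder")
os.mkdir(folder)
with open(os.path.join(folder, "run.py"), "w") as f:
    f.write("import os\nprint(os.getcwd())\nos.mkdir('DELETE_ME')\n")

# Run `python -m run` with `folder` as the CWD -- our own process never
# changes directory, yet DELETE_ME ends up inside `folder`, matching
# what the click in the readme does.
result = subprocess.run(
    [sys.executable, "-m", "run"],
    cwd=folder,
    capture_output=True,
    text=True,
)
created_in_folder = os.path.isdir(os.path.join(folder, "DELETE_ME"))
```

As for the stale `b006_mswitch_confusion` name: PyCharm keeps generated run configurations in `.idea/workspace.xml`, so that is a plausible place for the old path to linger.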
<python><pycharm><getcwd>
2023-08-10 13:18:16
3
1,209
Watchduck
76,875,913
10,248,483
How to get the row values for every 30th minute?
<p>I have a dataset, it's a dataframe:</p> <pre><code> item id _value agentstatus type agenttimestamp Unit Calc1 Calc2 0 FCX.FCN2.PM01001_01_SUM.SUM 1 3 Good Double 2022-01-01 09:00:00+00:00 kWh Diff 30min 1 FCX.FCN2.PM01002_01_SUM.SUM 2 3 Good Double 2022-01-01 09:00:00+00:00 kWh Diff 30min 25 FCX.FCN2.PM01001_01_SUM.SUM 26 3 Good Double 2022-01-01 09:10:00+00:00 kWh Diff 30min 26 FCX.FCN2.PM01002_01_SUM.SUM 27 3 Good Double 2022-01-01 09:10:00+00:00 kWh Diff 30min 50 FCX.FCN2.PM01001_01_SUM.SUM 51 3 Good Double 2022-01-01 09:20:00+00:00 kWh Diff 30min 51 FCX.FCN2.PM01002_01_SUM.SUM 52 3 Good Double 2022-01-01 09:20:00+00:00 kWh Diff 30min 75 FCX.FCN2.PM01001_01_SUM.SUM 76 3 Good Double 2022-01-01 09:30:00+00:00 kWh Diff 30min 76 FCX.FCN2.PM01002_01_SUM.SUM 77 3 Good Double 2022-01-01 09:30:00+00:00 kWh Diff 30min 100 FCX.FCN2.PM01001_01_SUM.SUM 101 3 Good Double 2022-01-01 09:40:00+00:00 kWh Diff 30min 101 FCX.FCN2.PM01002_01_SUM.SUM 102 3 Good Double 2022-01-01 09:40:00+00:00 kWh Diff 30min 125 FCX.FCN2.PM01001_01_SUM.SUM 126 3 Good Double 2022-01-01 09:50:00+00:00 kWh Diff 30min 126 FCX.FCN2.PM01002_01_SUM.SUM 127 3 Good Double 2022-01-01 09:50:00+00:00 kWh Diff 30min 150 FCX.FCN2.PM01001_01_SUM.SUM 151 3 Good Double 2022-01-01 10:00:00+00:00 kWh Diff 30min 151 FCX.FCN2.PM01002_01_SUM.SUM 152 3 Good Double 2022-01-01 10:00:00+00:00 kWh Diff 30min 175 FCX.FCN2.PM01001_01_SUM.SUM 176 3 Good Double 2022-01-01 10:10:00+00:00 kWh Diff 30min 176 FCX.FCN2.PM01002_01_SUM.SUM 177 3 Good Double 2022-01-01 10:10:00+00:00 kWh Diff 30min 200 FCX.FCN2.PM01001_01_SUM.SUM 201 3 Good Double 2022-01-01 10:20:00+00:00 kWh Diff 30min 201 FCX.FCN2.PM01002_01_SUM.SUM 202 3 Good Double 2022-01-01 10:20:00+00:00 kWh Diff 30min 225 FCX.FCN2.PM01001_01_SUM.SUM 226 3 Good Double 2022-01-01 10:30:00+00:00 kWh Diff 30min 226 FCX.FCN2.PM01002_01_SUM.SUM 227 3 Good Double 2022-01-01 10:30:00+00:00 kWh Diff 30min 250 FCX.FCN2.PM01001_01_SUM.SUM 251 3 Good Double 2022-01-01 10:40:00+00:00 kWh 
Diff 30min 251 FCX.FCN2.PM01002_01_SUM.SUM 252 3 Good Double 2022-01-01 10:40:00+00:00 kWh Diff 30min 275 FCX.FCN2.PM01001_01_SUM.SUM 276 3 Good Double 2022-01-01 10:50:00+00:00 kWh Diff 30min 276 FCX.FCN2.PM01002_01_SUM.SUM 277 3 Good Double 2022-01-01 10:50:00+00:00 kWh Diff 30min 300 FCX.FCN2.PM01001_01_SUM.SUM 301 3 Good Double 2022-01-01 11:00:00+00:00 kWh Diff 30min 301 FCX.FCN2.PM01002_01_SUM.SUM 302 3 Good Double 2022-01-01 11:00:00+00:00 kWh Diff 30min 325 FCX.FCN2.PM01001_01_SUM.SUM 326 3 Good Double 2022-01-01 11:10:00+00:00 kWh Diff 30min 326 FCX.FCN2.PM01002_01_SUM.SUM 327 3 Good Double 2022-01-01 11:10:00+00:00 kWh Diff 30min 350 FCX.FCN2.PM01001_01_SUM.SUM 351 3 Good Double 2022-01-01 11:20:00+00:00 kWh Diff 30min 351 FCX.FCN2.PM01002_01_SUM.SUM 352 3 Good Double 2022-01-01 11:20:00+00:00 kWh Diff 30min 375 FCX.FCN2.PM01001_01_SUM.SUM 376 3 Good Double 2022-01-01 11:30:00+00:00 kWh Diff 30min 376 FCX.FCN2.PM01002_01_SUM.SUM 377 3 Good Double 2022-01-01 11:30:00+00:00 kWh Diff 30min 400 FCX.FCN2.PM01001_01_SUM.SUM 401 3 Good Double 2022-01-01 11:40:00+00:00 kWh Diff 30min 401 FCX.FCN2.PM01002_01_SUM.SUM 402 3 Good Double 2022-01-01 11:40:00+00:00 kWh Diff 30min 425 FCX.FCN2.PM01001_01_SUM.SUM 426 3 Good Double 2022-01-01 11:50:00+00:00 kWh Diff 30min 426 FCX.FCN2.PM01002_01_SUM.SUM 427 3 Good Double 2022-01-01 11:50:00+00:00 kWh Diff 30min 450 FCX.FCN2.PM01001_01_SUM.SUM 451 3 Good Double 2022-01-01 12:00:00+00:00 kWh Diff 30min 451 FCX.FCN2.PM01002_01_SUM.SUM 452 3 Good Double 2022-01-01 12:00:00+00:00 kWh Diff 30min 475 FCX.FCN2.PM01001_01_SUM.SUM 476 3 Good Double 2022-01-01 12:10:00+00:00 kWh Diff 30min 476 FCX.FCN2.PM01002_01_SUM.SUM 477 3 Good Double 2022-01-01 12:10:00+00:00 kWh Diff 30min 500 FCX.FCN2.PM01001_01_SUM.SUM 501 3 Good Double 2022-01-01 12:20:00+00:00 kWh Diff 30min 501 FCX.FCN2.PM01002_01_SUM.SUM 502 3 Good Double 2022-01-01 12:20:00+00:00 kWh Diff 30min 525 FCX.FCN2.PM01001_01_SUM.SUM 526 3 Good Double 2022-01-01 12:30:00+00:00 
kWh Diff 30min 526 FCX.FCN2.PM01002_01_SUM.SUM 527 3 Good Double 2022-01-01 12:30:00+00:00 kWh Diff 30min 550 FCX.FCN2.PM01001_01_SUM.SUM 551 3 Good Double 2022-01-01 12:40:00+00:00 kWh Diff 30min 551 FCX.FCN2.PM01002_01_SUM.SUM 552 3 Good Double 2022-01-01 12:40:00+00:00 kWh Diff 30min 575 FCX.FCN2.PM01001_01_SUM.SUM 576 3 Good Double 2022-01-01 12:50:00+00:00 kWh Diff 30min 576 FCX.FCN2.PM01002_01_SUM.SUM 577 3 Good Double 2022-01-01 12:50:00+00:00 kWh Diff 30min 600 FCX.FCN2.PM01001_01_SUM.SUM 601 3 Good Double 2022-01-01 13:00:00+00:00 kWh Diff 30min 601 FCX.FCN2.PM01002_01_SUM.SUM 602 3 Good Double 2022-01-01 13:00:00+00:00 kWh Diff 30min </code></pre> <p><strong>Expected output</strong></p> <pre><code> item id _value agentstatus type agenttimestamp Unit Calc1 Calc2 75 FCX.FCN2.PM01001_01_SUM.SUM 76 3 Good Double 2022-01-01 09:30:00+00:00 kWh Diff 30min 76 FCX.FCN2.PM01002_01_SUM.SUM 77 3 Good Double 2022-01-01 09:30:00+00:00 kWh Diff 30min 150 FCX.FCN2.PM01001_01_SUM.SUM 151 3 Good Double 2022-01-01 10:00:00+00:00 kWh Diff 30min 151 FCX.FCN2.PM01002_01_SUM.SUM 152 3 Good Double 2022-01-01 10:00:00+00:00 kWh Diff 30min 225 FCX.FCN2.PM01001_01_SUM.SUM 226 3 Good Double 2022-01-01 10:30:00+00:00 kWh Diff 30min 226 FCX.FCN2.PM01002_01_SUM.SUM 227 3 Good Double 2022-01-01 10:30:00+00:00 kWh Diff 30min 300 FCX.FCN2.PM01001_01_SUM.SUM 301 3 Good Double 2022-01-01 11:00:00+00:00 kWh Diff 30min 301 FCX.FCN2.PM01002_01_SUM.SUM 302 3 Good Double 2022-01-01 11:00:00+00:00 kWh Diff 30min 375 FCX.FCN2.PM01001_01_SUM.SUM 376 3 Good Double 2022-01-01 11:30:00+00:00 kWh Diff 30min 376 FCX.FCN2.PM01002_01_SUM.SUM 377 3 Good Double 2022-01-01 11:30:00+00:00 kWh Diff 30min 450 FCX.FCN2.PM01001_01_SUM.SUM 451 3 Good Double 2022-01-01 12:00:00+00:00 kWh Diff 30min 451 FCX.FCN2.PM01002_01_SUM.SUM 452 3 Good Double 2022-01-01 12:00:00+00:00 kWh Diff 30min 525 FCX.FCN2.PM01001_01_SUM.SUM 526 3 Good Double 2022-01-01 12:30:00+00:00 kWh Diff 30min 526 FCX.FCN2.PM01002_01_SUM.SUM 527 3 
Good Double 2022-01-01 12:30:00+00:00 kWh Diff 30min 600 FCX.FCN2.PM01001_01_SUM.SUM 601 3 Good Double 2022-01-01 13:00:00+00:00 kWh Diff 30min 601 FCX.FCN2.PM01002_01_SUM.SUM 602 3 Good Double 2022-01-01 13:00:00+00:00 kWh Diff 30min </code></pre> <p>What I tried, and the actual output:</p> <p>I tried <code>df.resample('30T', on='agenttimestamp')</code>, but it isn't giving the results I expected: the output is the same as the input when I apply <code>resample</code>.</p> <p>Please let me know how to do this?</p>
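Since the goal is to keep the rows whose timestamp lands exactly on a half-hour boundary (not to aggregate into 30-minute bins, which is what `resample` does), a boolean filter on the minute component is enough. A minimal sketch with a hypothetical cut-down frame (assuming `agenttimestamp` is already a datetime column; otherwise apply `pd.to_datetime` first):

```python
import pandas as pd

df = pd.DataFrame(
    {
        "item": ["A", "B", "A", "B", "A", "B"],
        "agenttimestamp": pd.to_datetime(
            [
                "2022-01-01 09:20:00+00:00",
                "2022-01-01 09:20:00+00:00",
                "2022-01-01 09:30:00+00:00",
                "2022-01-01 09:30:00+00:00",
                "2022-01-01 10:00:00+00:00",
                "2022-01-01 10:00:00+00:00",
            ]
        ),
    }
)

# Keep only rows whose timestamp falls on :00 or :30.
on_half_hour = df[df["agenttimestamp"].dt.minute.isin([0, 30])]
```

This keeps the 09:30/10:00-style rows and drops 09:10/09:20; if the data could carry non-zero seconds, also check `.dt.second == 0`.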
<python><pandas>
2023-08-10 12:53:35
1
369
Nishad Nazar
76,875,762
4,451,315
Filter on `list(Int64)` dtype in polars
<p>Say I have</p> <pre class="lang-py prettyprint-override"><code>In [20]: df = pl.DataFrame({'a': [[1,2,3], [1,4,2], [1,3,3]], 'b': [4,2,1]}) In [21]: df Out[21]: shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[i64] ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═════║ β”‚ [1, 2, 3] ┆ 4 β”‚ β”‚ [1, 4, 2] ┆ 2 β”‚ β”‚ [1, 3, 3] ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ </code></pre> <p>I'd like to keep rows where <code>'a'</code> equals <code>[1,2,3]</code></p> <p>I've tried</p> <pre class="lang-py prettyprint-override"><code>In [23]: df.filter(pl.col('a')==[1,2,3]) ArrowErrorException: NotYetImplemented(&quot;Casting from Int64 to LargeList(Field { name: \&quot;item\&quot;, data_type: Int64, is_nullable: true, metadata: {} }) not supported&quot;) </code></pre> <p>but it raises</p>
<python><python-polars>
2023-08-10 12:31:35
2
11,062
ignoring_gravity
76,875,718
2,825,570
KeyError: "marketplace" while downloading "amazon_us_reviews" dataset - huggingface datasets
<p>I am trying to download the <code>amazon_us_reviews</code> dataset using the following code:</p> <pre><code>from datasets import load_dataset dataset = load_dataset(&quot;amazon_us_reviews&quot;, &quot;Toys_v1_00&quot;) </code></pre> <p>I am getting the following error:</p> <pre><code>KeyError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1692 ) -&gt; 1693 self._beam_writers[split_name] = beam_writer 1694 11 frames /usr/local/lib/python3.10/dist-packages/datasets/features/features.py in encode_example(self, example) 1850 ``` -&gt; 1851 &quot;&quot;&quot; 1852 return copy.deepcopy(self) /usr/local/lib/python3.10/dist-packages/datasets/features/features.py in encode_nested_example(schema, obj, level) 1228 &quot;&quot;&quot;Decode a nested example. -&gt; 1229 This is used since some features (in particular Audio and Image) have some logic during decoding. 1230 /usr/local/lib/python3.10/dist-packages/datasets/features/features.py in &lt;dictcomp&gt;(.0) 1228 &quot;&quot;&quot;Decode a nested example. -&gt; 1229 This is used since some features (in particular Audio and Image) have some logic during decoding. 
1230 /usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py in zip_dict(*dicts) 321 def __get__(self, obj, objtype=None): --&gt; 322 return self.fget.__get__(None, objtype)() 323 /usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py in &lt;genexpr&gt;(.0) 321 def __get__(self, obj, objtype=None): --&gt; 322 return self.fget.__get__(None, objtype)() 323 KeyError: 'marketplace' The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) &lt;ipython-input-25-341913a5da6a&gt; in &lt;cell line: 1&gt;() ----&gt; 1 dataset = load_dataset(&quot;amazon_us_reviews&quot;, &quot;Toys_v1_00&quot;) /usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 952 with FileLock(lock_path) if is_local else contextlib.nullcontext(): 953 self.info.write_to_directory(self._output_dir, fs=self._fs) --&gt; 954 955 def _save_infos(self): 956 is_local = not is_remote_filesystem(self._fs) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1047 in_memory=in_memory, 1048 ) -&gt; 1049 if run_post_process: 1050 for resource_file_name in self._post_processing_resources(split).values(): 1051 
if os.sep in resource_file_name: /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1553 1554 class BeamBasedBuilder(DatasetBuilder): -&gt; 1555 &quot;&quot;&quot;Beam based Builder.&quot;&quot;&quot; 1556 1557 # BeamBasedBuilder does not have dummy data for tests yet /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) DatasetGenerationError: An error occurred while generating the dataset </code></pre> <p>I tried with the <code>load_dataset_builder</code> it is showing the following features:</p> <pre><code>from datasets import load_dataset_builder ds_builder = load_dataset_builder(&quot;amazon_us_reviews&quot;, &quot;Toys_v1_00&quot;) print(ds_builder.info.features) </code></pre> <p>Output:</p> <pre><code>{'marketplace': Value(dtype='string', id=None), 'customer_id': Value(dtype='string', id=None), 'review_id': Value(dtype='string', id=None), 'product_id': Value(dtype='string', id=None), 'product_parent': Value(dtype='string', id=None), 'product_title': Value(dtype='string', id=None), 'product_category': Value(dtype='string', id=None), 'star_rating': Value(dtype='int32', id=None), 'helpful_votes': Value(dtype='int32', id=None), 'total_votes': Value(dtype='int32', id=None), 'vine': ClassLabel(names=['N', 'Y'], id=None), 'verified_purchase': ClassLabel(names=['N', 'Y'], id=None), 'review_headline': Value(dtype='string', id=None), 'review_body': Value(dtype='string', id=None), 'review_date': Value(dtype='string', id=None)} </code></pre> <p>The <code>datasets</code> version used is <code>2.14.4</code></p> <p>Is this the correct way to download the dataset? Kindly advise.</p>
<python><huggingface-transformers><huggingface><huggingface-datasets>
2023-08-10 12:24:05
1
8,621
Jeril
76,875,642
22,212,435
Is it possible to use update method in a way I am using it?
<p>I have read in many sources that it is better to avoid using the <code>update</code> method at all. But I have a situation where it is the only thing that works (even <code>update_idletasks</code> does not help), and it looks like the code will not produce a memory leak. Here is a simple example:</p> <pre><code>import tkinter as tk root = tk.Tk() def do_something(): # === there I have some code === root.update() root.after_idle(do_something) do_something() root.mainloop() </code></pre> <p>As far as I can judge, there cannot be any loops or memory leaks here. Because I am using <code>after_idle</code>, it will run the function again only when there are no events left in the mainloop to process, which, in my opinion, will happen after <code>root.update</code> has finished. Am I right, or could there be special cases in which this code will fail?</p>
<python><python-3.x><tkinter>
2023-08-10 12:12:55
0
610
Danya K
76,875,486
10,430,394
How to extract a subdir with all its files using zipfile
<p>Yes, I have read the other posts on this subject, but I am running into a weird problem:</p> <p>When I extract a certain item from the <code>namelist</code>, it only gives me an empty folder, not the actual files inside.</p> <p>My zip file has the following hierarchy:</p> <p>myzip.zip -&gt; FolderA -&gt; FolderB -&gt; FolderC -&gt; FolderIWantA, FolderIWantB, ... FolderIWantN.</p> <p>So there are a lot of preceding folders I do not wish to extract. I know how to identify the ones I want from the namelist:</p> <pre class="lang-py prettyprint-override"><code>import os import sys import zipfile try: zip_file_path = sys.argv[1] except IndexError: sys.exit('No zip file provided.') archive = zipfile.ZipFile(zip_file_path) for i,file in enumerate(archive.namelist()): if os.path.basename(file[:-1]).startswith('ABC-'): # identify relevant folders old_name = os.path.basename(file[:-1]) new_name = 'new_%d'%i # Create a new name archive.extract(file, new_name) </code></pre> <p>This does extract the folders I want; however, the extracted folders are empty for some reason. And not just that: when I extract the new folders, they contain the preceding folders A, B, and C.</p> <p>I do not know why it does that... Here's a test zip for your convenience:</p> <pre class="lang-py prettyprint-override"><code>import os import shutil prefolders = r'testzip\FolderA\FolderB\FolderC' try: os.makedirs(prefolders) except FileExistsError: pass for i in 'ABC': try: new_folder = 'ABC-Folder%s'%i os.mkdir(os.path.join(prefolders,new_folder)) except FileExistsError: pass for j in range(2): file_path = os.path.join(prefolders,new_folder,'somefile%s.txt'%j) with open(file_path,'w'): pass shutil.make_archive('testzip', 'zip', 'testzip') shutil.rmtree('testzip') </code></pre> <p>I thought this would take like 10 minutes and I am losing my mind over this...</p>
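For what it's worth, `ZipFile.extract` on a *directory* member only creates that (empty) directory, and it always reproduces the member's full path under the target directory. A minimal sketch of iterating the *file* members and re-rooting their paths instead (the zip contents below are hypothetical stand-ins for the layout in the question):

```python
import io
import zipfile

# Build a small in-memory zip mimicking the layout in the question
# (paths are hypothetical stand-ins).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("FolderA/FolderB/FolderC/ABC-FolderA/somefile0.txt", "data0")
    zf.writestr("FolderA/FolderB/FolderC/ABC-FolderA/somefile1.txt", "data1")
    zf.writestr("FolderA/FolderB/FolderC/other/skip.txt", "skip")

extracted = {}
with zipfile.ZipFile(buf) as zf:
    for member in zf.namelist():
        if member.endswith("/"):
            continue  # directory entries carry no file data
        parts = member.split("/")
        hits = [i for i, p in enumerate(parts) if p.startswith("ABC-")]
        if not hits:
            continue  # not under a folder we want
        # Re-root the path at the 'ABC-' folder, dropping the leading dirs.
        rel_path = "/".join(parts[hits[0]:])
        extracted[rel_path] = zf.read(member)

print(sorted(extracted))  # ['ABC-FolderA/somefile0.txt', 'ABC-FolderA/somefile1.txt']
```

To write these to disk, create `os.path.dirname(rel_path)` with `os.makedirs(..., exist_ok=True)` and write the bytes yourself, rather than calling `extract`, which always recreates the full archive path.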
<python><zip><python-zipfile>
2023-08-10 11:50:12
1
534
J.Doe
76,875,289
2,014,878
Getting error message from tenacity retry_state.outcome.result() results in program termination
<p>I'm using the Python tenacity library to do exponential backoff of a function.</p> <pre><code>from tenacity import retry, wait_random_exponential, stop_after_attempt def log_attempt_number(retry_state): print(f&quot;Retrying: {retry_state.attempt_number}...&quot;) print(retry_state.outcome) @retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(100), after=log_attempt_number) def throw_exception(): raise Exception(&quot;What is this exception?&quot;) </code></pre> <p>This code gives me:</p> <pre><code>Retrying: 1... &lt;Future at 0x7feeaf6354c0 state=finished raised Exception&gt; Retrying: 2... &lt;Future at 0x7feeaf401580 state=finished raised Exception&gt; Retrying: 3... &lt;Future at 0x7feeaf6354c0 state=finished raised Exception&gt; Retrying: 4... &lt;Future at 0x7feeaf401580 state=finished raised Exception&gt; Retrying: 5... &lt;Future at 0x7feeaf6354c0 state=finished raised Exception&gt; </code></pre> <p>...</p> <p>But I want to see the error message, not the Future object. On their <a href="https://tenacity.readthedocs.io/en/latest/" rel="nofollow noreferrer">website</a>, all I can see as an option to get the error message is the function <code>result()</code>, which gives me the error message and then terminates.</p> <pre><code>def log_attempt_number(retry_state): print(f&quot;Retrying: {retry_state.attempt_number}...&quot;) print(retry_state.outcome.result()) # This function terminates the program @retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(100), after=log_attempt_number) def throw_exception(): raise Exception(&quot;What is this exception?&quot;) </code></pre> <p>...</p> <pre><code>Retrying: 1... What is this Exception? worryword@WorryWord:~/Development/SOTests$ </code></pre> <p>So my question is: how do I get the error message without terminating the program? I have an issue where the first error is not necessarily the 10th error, etc.</p>
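One hedged observation: `retry_state.outcome` appears to follow the `concurrent.futures.Future` interface, where `.result()` re-raises a stored exception but `.exception()` returns it without raising. A stdlib-only sketch of the difference (the exception message is made up for the demo):

```python
from concurrent.futures import Future

# Simulate what the 'after' callback receives: a finished Future
# holding an exception rather than a result.
fut = Future()
fut.set_exception(Exception("What is this exception?"))

# .exception() returns the stored exception object without re-raising it;
# .result() would raise it instead.
err = fut.exception()
print(type(err).__name__, err)
```

So in the callback, `print(retry_state.outcome.exception())` should print the message while letting the retries continue.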
<python><tenacity>
2023-08-10 11:25:47
3
1,751
Ben Alan
76,875,220
264,136
regex to find all files in a string
<p>I have a string as below:</p> <pre><code>56451 -rw- 1 Aug 10 2023 08:54:53 +00:00 .callhome 56482 -rw- 14846 Jul 4 2023 23:02:34 +00:00 Router-system-report_20230704-230220-UTC-info.txt 56481 -rw- 68333650 Jul 4 2023 23:02:28 +00:00 Router-system-report_20230704-230220-UTC.tar.gz 233859 -rw- 9508 Jul 4 2023 17:47:54 +00:00 Router-system-report_20230704-174745-UTC-info.txt 56480 -rw- 61744940 Jul 4 2023 17:47:48 +00:00 Router- 56450 drwx 4096 Jun 18 2023 06:17:45 +00:00 modules </code></pre> <p>I need a regex to find the file names; they should have the extension <code>.tar.gz</code>.</p> <p>Code I started with:</p> <pre><code> matches = re.finditer(&quot;(\w+)\s+(\d+)&quot;, file_string) for match in matches: filenames.append(match.group(1)) </code></pre>
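A minimal sketch of one way to match the names: since the filenames contain no whitespace, a run of non-whitespace ending in `.tar.gz` is enough (the listing below is abridged from the question):

```python
import re

# Abridged copy of the directory listing from the question.
listing = (
    "56451  -rw-  1  Aug 10 2023 08:54:53 +00:00  .callhome\n"
    "56481  -rw-  68333650  Jul 4 2023 23:02:28 +00:00  "
    "Router-system-report_20230704-230220-UTC.tar.gz\n"
    "56450  drwx  4096  Jun 18 2023 06:17:45 +00:00  modules\n"
)

# Any run of non-whitespace ending in '.tar.gz'; the dots are escaped
# so they match literally.
filenames = re.findall(r"\S+\.tar\.gz", listing)
print(filenames)  # ['Router-system-report_20230704-230220-UTC.tar.gz']
```

If a name could contain `.tar.gz` in the middle (e.g. `a.tar.gz.bak`), append a lookahead such as `(?=\s|$)` to anchor the match at the end of the token.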
<python>
2023-08-10 11:17:00
3
5,538
Akshay J
76,875,202
10,353,865
How do I read this floating point representation?
<p>I am using the struct module to obtain the binary representation of a floating point number. However, I have some trouble making sense of the output.</p> <pre><code>import struct def fl_bin(fl): s = struct.pack('!f', fl) b = ''.join(format(c, '08b') for c in s) print(b) # whole representation print(&quot;Exponent: {}&quot;.format(b[1:9])) # Exponent only return b[9:] # significand a = fl_bin(2.1) </code></pre> <p>The call gives an exponent of 10000000 and a significand of 00001100110011001100110. My reasoning was as follows: the length of the representation is 32, hence it should be the single-precision format. The first bit should indicate the sign, the next 8 bits the exponent (and the rest the significand). However, if this is true, I don't really see how to obtain the original number from this. For instance, for the exponent I read: 1 -&gt; positive, 000000 -&gt; 1 as exponent. But then I am stuck making sense of the significand. Maybe someone could elaborate on this issue!</p>
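For reference, in IEEE 754 single precision the 8 exponent bits are stored with a bias of 127, so `10000000` (= 128) encodes 2^(128-127) = 2^1, and the 23 stored bits are only the fractional part of a significand with an implicit leading 1. A quick stdlib sketch that decodes the fields of 2.1:

```python
import struct

# Pack 2.1 as IEEE 754 single precision and pull the fields apart.
bits = int.from_bytes(struct.pack("!f", 2.1), "big")
sign = bits >> 31                 # 1 bit
exponent = (bits >> 23) & 0xFF    # 8 bits, stored with a bias of 127
fraction = bits & 0x7FFFFF        # 23 bits; the leading 1 is implicit

# value = (-1)^sign * (1 + fraction/2^23) * 2^(exponent - 127)
value = (-1) ** sign * (1 + fraction / 2 ** 23) * 2 ** (exponent - 127)
print(sign, exponent - 127, value)
```

The reconstructed `value` is the closest single-precision number to 2.1 (about 2.0999999046), which is why the bit pattern does not decode back to exactly 2.1.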
<python><floating-point>
2023-08-10 11:14:48
2
702
P.Jo