Dataset schema:
content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 35 to 137)
Q: Pyomo dealing with an indexed index Some context to my optimization: I have a warehouse that has stocked products that can be allocated to several retail stores. Each retail store has a monthly demand that needs to be satisifed but I would like to charge each store the highest possible price. So I have a set of stores and a set of products, but these products can only be used in certain months. Below is the mathematical formulation: Where So x_{i,j} represents the allocation to store n of product i. r_n is just a known risk factor for each store, pi_n is the current profit per store, p_n is the current price and paid by store n and demand_n is the demand for store n. These are all known. So for simplicity's sake, we could simply maximise avgPrice, the rest of the variables I can handle. So the average price paid by a store is the average of the monthly prices they pay for their demanded products. I assume a store will only have demands during a year, hence the division by 12. So for each month I allocate products to n stores, find the total price paid by each store, and then find the average of these values. The issue I am having is how to deal with the part. So P_{t} is the available products I have in month t. This is stored in a dictionary of pandas dataframes that look something like: Product ID Product Amount Price per unit 123 10 2 456 20 6 ... ... ... 999 30 7 And I have one of these for each month (so my dictionary can be indexed like df_dict[jan], df_dict[feb] etc. For example, store n = 1 could have a demand of 30 units in January, so I could allocate product 123 and 456 to them, and get a total price in January of 10 * 2+20 * 6 (c_{i} in the formulation above is the unit price). Some code for reproducibility: demand_dict = {('store_1', 'jan'): 237.2, ('store_1', 'feb'): 239, ('store_1', 'mar'): 216, ('store_1', 'apr'): 119, ('store_2', 'may'): 624} # this constains over 50 stores, each of which have 12 monthly demands This is used to ensure demand constraints. So I create a set of stores by: store_set = demand_dict.keys() And I have a set of months: mon_set = ['jan', 'feb',...'dec'] The product dictionary looks something like: prod_dict['jan'] = {'Product Amount': {123: 50, 456: 31, 789: 50, 101: 31, 102: 70, 103: 33, 104: 30, 105: 14}, 'Unit price': {123: 9, 456: 9, 789: 7.6, 101: 7.2, 102: 6.4, 103: 5.5, 104: 5.2, 105: 5.1} prod_dict['feb'] = {'Product Amount': {200: 50, 201: 31, 202: 50, 203: 31, 204: 70, 205: 33, 206: 30, 207: 14}, 'Unit price': {200: 9, 201: 9, 202: 7, 203: 7, 204: 6, 205: 5, 206: 5, 207: 5} Given this formulation of the product set, I don't understand how I can create this as a pyomo set. I am confused because each product (indexed by its id) can be allocated to each store. So if I have 5 stores, product id 123 can be allocated to each of them, as long as I do not allocate more than what is available in the product amount. This constraint I think I can handle. I am however completely lost when it comes to creating the product set in pyomo because the set itself is indexed by a month. Lastly, I know this does not look like an optimization problem because I could simply allocate the most expensive products to each store. However, the risk factor, r_n, contains variables that make this an actual QP. A: The piece that I think you are missing is an indexed set that indexes which products are available/priced for particular months. That is essentially the P_t piece that you want. 
So you can create a "set of sets" in pyomo where the inner set is indexed by another set, in this case, you have sets of products that are indexed by another component, months. These can be highly useful, but also tricky to use, and I think it is almost always required to "flatten" this set out after you make it so that you can use it in other contexts. Below is an example. I also showed this concept in this post. Code: import pyomo.environ as pyo # DATA prod_dict = {} prod_dict['jan'] = {'Product Amount': {123: 50, 456: 31, 789: 50, 101: 31, 102: 70, 103: 33, 104: 30, 105: 14}, 'Unit price': {123: 9, 456: 9, 789: 7.6, 101: 7.2, 102: 6.4, 103: 5.5, 104: 5.2, 105: 5.1}} prod_dict['feb'] = {'Product Amount': {200: 50, 201: 31, 202: 50, 203: 31, 204: 70, 205: 33, 206: 30, 207: 14}, 'Unit price': {200: 9, 201: 9, 202: 7, 203: 7, 204: 6, 205: 5, 206: 5, 207: 5}} # helper function def products_by_month(month): products = set(prod_dict[month]['Product Amount']) # sanity check: assert set(prod_dict[month]['Unit price']) == products return products # make set of all products, if not already available... products = set.union(*[products_by_month(m) for m in prod_dict.keys()]) # Model Parts model = pyo.ConcreteModel('sales') # SETS model.M = pyo.Set(initialize=list(prod_dict.keys())) # Set of Months # aside: making a list of the set keeps pyomo # from complaining about unordered collection... model.P = pyo.Set(initialize=list(products)) # Set of all Products model.MP = pyo.Set(model.M, within=model.P, initialize={m: list(products_by_month(m)) for m in model.M}) # a flattened set for convenience... model.MP_flat = pyo.Set(within=model.M * model.P, initialize={(m, p) for m in model.M for p in model.MP[m]}) # PARAMS model.price = pyo.Param(model.MP_flat, initialize={(m, p): prod_dict[m]['Unit price'][p] for m, p in model.MP_flat}) model.inventory = pyo.Param(model.MP_flat, initialize={(m, p): prod_dict[m]['Product Amount'][p] for m, p in model.MP_flat}) # VARS model.deliver = pyo.Var(model.MP_flat, domain=pyo.NonNegativeReals) # CONSTRAINTS # example to limit sale of product by month to available in that month @model.Constraint(model.MP_flat) def delivery_limit(model, month, product): return model.deliver[month, product] <= model.inventory[month, product] # example to limit all sales of product in a month to arbitrary cost (this uses the indexed set that you will need) @model.Constraint(model.M) def cost_limit(model, month): return sum(model.deliver[month, product] * model.price[month, product] for product in model.MP[month]) <= 100 model.pprint() Output (a little long, based on your example data): 5 Set Declarations M : Size=1, Index=None, Ordered=Insertion Key : Dimen : Domain : Size : Members None : 1 : Any : 2 : {'jan', 'feb'} MP : Size=2, Index=M, Ordered=Insertion Key : Dimen : Domain : Size : Members feb : 1 : P : 8 : {200, 201, 202, 203, 204, 205, 206, 207} jan : 1 : P : 8 : {101, 102, 103, 456, 104, 105, 789, 123} MP_flat : Size=1, Index=None, Ordered=Insertion Key : Dimen : Domain : Size : Members None : 2 : MP_flat_domain : 16 : {('jan', 456), ('feb', 204), ('feb', 200), ('jan', 105), ('jan', 102), ('feb', 202), ('feb', 203), ('feb', 206), ('jan', 101), ('jan', 789), ('jan', 104), ('feb', 205), ('jan', 123), ('jan', 103), ('feb', 201), ('feb', 207)} MP_flat_domain : Size=1, Index=None, Ordered=True Key : Dimen : Domain : Size : Members None : 2 : M*P : 32 : {('jan', 101), ('jan', 102), ('jan', 103), ('jan', 456), ('jan', 104), ('jan', 105), ('jan', 200), ('jan', 201), ('jan', 202), ('jan', 
203), ('jan', 204), ('jan', 205), ('jan', 206), ('jan', 207), ('jan', 789), ('jan', 123), ('feb', 101), ('feb', 102), ('feb', 103), ('feb', 456), ('feb', 104), ('feb', 105), ('feb', 200), ('feb', 201), ('feb', 202), ('feb', 203), ('feb', 204), ('feb', 205), ('feb', 206), ('feb', 207), ('feb', 789), ('feb', 123)} P : Size=1, Index=None, Ordered=Insertion Key : Dimen : Domain : Size : Members None : 1 : Any : 16 : {101, 102, 103, 456, 104, 105, 200, 201, 202, 203, 204, 205, 206, 207, 789, 123} 2 Param Declarations inventory : Size=16, Index=MP_flat, Domain=Any, Default=None, Mutable=False Key : Value ('feb', 200) : 50 ('feb', 201) : 31 ('feb', 202) : 50 ('feb', 203) : 31 ('feb', 204) : 70 ('feb', 205) : 33 ('feb', 206) : 30 ('feb', 207) : 14 ('jan', 101) : 31 ('jan', 102) : 70 ('jan', 103) : 33 ('jan', 104) : 30 ('jan', 105) : 14 ('jan', 123) : 50 ('jan', 456) : 31 ('jan', 789) : 50 price : Size=16, Index=MP_flat, Domain=Any, Default=None, Mutable=False Key : Value ('feb', 200) : 9 ('feb', 201) : 9 ('feb', 202) : 7 ('feb', 203) : 7 ('feb', 204) : 6 ('feb', 205) : 5 ('feb', 206) : 5 ('feb', 207) : 5 ('jan', 101) : 7.2 ('jan', 102) : 6.4 ('jan', 103) : 5.5 ('jan', 104) : 5.2 ('jan', 105) : 5.1 ('jan', 123) : 9 ('jan', 456) : 9 ('jan', 789) : 7.6 1 Var Declarations deliver : Size=16, Index=MP_flat Key : Lower : Value : Upper : Fixed : Stale : Domain ('feb', 200) : 0 : None : None : False : True : NonNegativeReals ('feb', 201) : 0 : None : None : False : True : NonNegativeReals ('feb', 202) : 0 : None : None : False : True : NonNegativeReals ('feb', 203) : 0 : None : None : False : True : NonNegativeReals ('feb', 204) : 0 : None : None : False : True : NonNegativeReals ('feb', 205) : 0 : None : None : False : True : NonNegativeReals ('feb', 206) : 0 : None : None : False : True : NonNegativeReals ('feb', 207) : 0 : None : None : False : True : NonNegativeReals ('jan', 101) : 0 : None : None : False : True : NonNegativeReals ('jan', 102) : 0 : None : None : False : True : NonNegativeReals ('jan', 103) : 0 : None : None : False : True : NonNegativeReals ('jan', 104) : 0 : None : None : False : True : NonNegativeReals ('jan', 105) : 0 : None : None : False : True : NonNegativeReals ('jan', 123) : 0 : None : None : False : True : NonNegativeReals ('jan', 456) : 0 : None : None : False : True : NonNegativeReals ('jan', 789) : 0 : None : None : False : True : NonNegativeReals 2 Constraint Declarations cost_limit : Size=2, Index=M, Active=True Key : Lower : Body : Upper : Active feb : -Inf : 9*deliver[feb,200] + 9*deliver[feb,201] + 7*deliver[feb,202] + 7*deliver[feb,203] + 6*deliver[feb,204] + 5*deliver[feb,205] + 5*deliver[feb,206] + 5*deliver[feb,207] : 100.0 : True jan : -Inf : 7.2*deliver[jan,101] + 6.4*deliver[jan,102] + 5.5*deliver[jan,103] + 9*deliver[jan,456] + 5.2*deliver[jan,104] + 5.1*deliver[jan,105] + 7.6*deliver[jan,789] + 9*deliver[jan,123] : 100.0 : True delivery_limit : Size=16, Index=MP_flat, Active=True Key : Lower : Body : Upper : Active ('feb', 200) : -Inf : deliver[feb,200] : 50.0 : True ('feb', 201) : -Inf : deliver[feb,201] : 31.0 : True ('feb', 202) : -Inf : deliver[feb,202] : 50.0 : True ('feb', 203) : -Inf : deliver[feb,203] : 31.0 : True ('feb', 204) : -Inf : deliver[feb,204] : 70.0 : True ('feb', 205) : -Inf : deliver[feb,205] : 33.0 : True ('feb', 206) : -Inf : deliver[feb,206] : 30.0 : True ('feb', 207) : -Inf : deliver[feb,207] : 14.0 : True ('jan', 101) : -Inf : deliver[jan,101] : 31.0 : True ('jan', 102) : -Inf : deliver[jan,102] : 70.0 : True ('jan', 103) : -Inf : 
deliver[jan,103] : 33.0 : True ('jan', 104) : -Inf : deliver[jan,104] : 30.0 : True ('jan', 105) : -Inf : deliver[jan,105] : 14.0 : True ('jan', 123) : -Inf : deliver[jan,123] : 50.0 : True ('jan', 456) : -Inf : deliver[jan,456] : 31.0 : True ('jan', 789) : -Inf : deliver[jan,789] : 50.0 : True 10 Declarations: M P MP MP_flat_domain MP_flat price inventory deliver delivery_limit cost_limit
Pyomo dealing with an indexed index
Some context to my optimization: I have a warehouse that has stocked products that can be allocated to several retail stores. Each retail store has a monthly demand that needs to be satisifed but I would like to charge each store the highest possible price. So I have a set of stores and a set of products, but these products can only be used in certain months. Below is the mathematical formulation: Where So x_{i,j} represents the allocation to store n of product i. r_n is just a known risk factor for each store, pi_n is the current profit per store, p_n is the current price and paid by store n and demand_n is the demand for store n. These are all known. So for simplicity's sake, we could simply maximise avgPrice, the rest of the variables I can handle. So the average price paid by a store is the average of the monthly prices they pay for their demanded products. I assume a store will only have demands during a year, hence the division by 12. So for each month I allocate products to n stores, find the total price paid by each store, and then find the average of these values. The issue I am having is how to deal with the part. So P_{t} is the available products I have in month t. This is stored in a dictionary of pandas dataframes that look something like: Product ID Product Amount Price per unit 123 10 2 456 20 6 ... ... ... 999 30 7 And I have one of these for each month (so my dictionary can be indexed like df_dict[jan], df_dict[feb] etc. For example, store n = 1 could have a demand of 30 units in January, so I could allocate product 123 and 456 to them, and get a total price in January of 10 * 2+20 * 6 (c_{i} in the formulation above is the unit price). Some code for reproducibility: demand_dict = {('store_1', 'jan'): 237.2, ('store_1', 'feb'): 239, ('store_1', 'mar'): 216, ('store_1', 'apr'): 119, ('store_2', 'may'): 624} # this constains over 50 stores, each of which have 12 monthly demands This is used to ensure demand constraints. So I create a set of stores by: store_set = demand_dict.keys() And I have a set of months: mon_set = ['jan', 'feb',...'dec'] The product dictionary looks something like: prod_dict['jan'] = {'Product Amount': {123: 50, 456: 31, 789: 50, 101: 31, 102: 70, 103: 33, 104: 30, 105: 14}, 'Unit price': {123: 9, 456: 9, 789: 7.6, 101: 7.2, 102: 6.4, 103: 5.5, 104: 5.2, 105: 5.1} prod_dict['feb'] = {'Product Amount': {200: 50, 201: 31, 202: 50, 203: 31, 204: 70, 205: 33, 206: 30, 207: 14}, 'Unit price': {200: 9, 201: 9, 202: 7, 203: 7, 204: 6, 205: 5, 206: 5, 207: 5} Given this formulation of the product set, I don't understand how I can create this as a pyomo set. I am confused because each product (indexed by its id) can be allocated to each store. So if I have 5 stores, product id 123 can be allocated to each of them, as long as I do not allocate more than what is available in the product amount. This constraint I think I can handle. I am however completely lost when it comes to creating the product set in pyomo because the set itself is indexed by a month. Lastly, I know this does not look like an optimization problem because I could simply allocate the most expensive products to each store. However, the risk factor, r_n, contains variables that make this an actual QP.
[ "The piece that I think you are missing is an indexed set that indexes which products are available/priced for particular months. That is essentially the P_t piece that you want. So you can create a \"set of sets\" in pyomo where the inner set is indexed by another set, in this case, you have sets of products that are indexed by another component, months. These can be highly useful, but also tricky to use, and I think it is almost always required to \"flatten\" this set out after you make it so that you can use it in other contexts. Below is an example. I also showed this concept in this post.\nCode:\nimport pyomo.environ as pyo\n\n# DATA\nprod_dict = {}\nprod_dict['jan'] = {'Product Amount': {123: 50,\n 456: 31,\n 789: 50,\n 101: 31,\n 102: 70,\n 103: 33,\n 104: 30,\n 105: 14},\n'Unit price': {123: 9,\n 456: 9,\n 789: 7.6,\n 101: 7.2,\n 102: 6.4,\n 103: 5.5,\n 104: 5.2,\n 105: 5.1}}\n\nprod_dict['feb'] = {'Product Amount': {200: 50,\n 201: 31,\n 202: 50,\n 203: 31,\n 204: 70,\n 205: 33,\n 206: 30,\n 207: 14},\n'Unit price': {200: 9,\n 201: 9,\n 202: 7,\n 203: 7,\n 204: 6,\n 205: 5,\n 206: 5,\n 207: 5}}\n\n# helper function\ndef products_by_month(month):\n products = set(prod_dict[month]['Product Amount'])\n # sanity check:\n assert set(prod_dict[month]['Unit price']) == products\n return products\n\n# make set of all products, if not already available...\nproducts = set.union(*[products_by_month(m) for m in prod_dict.keys()])\n\n# Model Parts\nmodel = pyo.ConcreteModel('sales')\n\n# SETS\nmodel.M = pyo.Set(initialize=list(prod_dict.keys())) # Set of Months\n# aside: making a list of the set keeps pyomo \n# from complaining about unordered collection...\nmodel.P = pyo.Set(initialize=list(products)) # Set of all Products\nmodel.MP = pyo.Set(model.M, within=model.P, initialize={m: list(products_by_month(m)) for m in model.M})\n\n# a flattened set for convenience...\nmodel.MP_flat = pyo.Set(within=model.M * model.P, initialize={(m, p) for m in model.M for p in model.MP[m]})\n\n# PARAMS\nmodel.price = pyo.Param(model.MP_flat, initialize={(m, p): prod_dict[m]['Unit price'][p] for m, p in model.MP_flat})\nmodel.inventory = pyo.Param(model.MP_flat, initialize={(m, p): prod_dict[m]['Product Amount'][p] for m, p in model.MP_flat})\n\n# VARS\nmodel.deliver = pyo.Var(model.MP_flat, domain=pyo.NonNegativeReals)\n\n# CONSTRAINTS\n# example to limit sale of product by month to available in that month\n@model.Constraint(model.MP_flat)\ndef delivery_limit(model, month, product):\n return model.deliver[month, product] <= model.inventory[month, product]\n\n# example to limit all sales of product in a month to arbitrary cost (this uses the indexed set that you will need)\n@model.Constraint(model.M)\ndef cost_limit(model, month):\n return sum(model.deliver[month, product] * model.price[month, product] for product in model.MP[month]) <= 100\n\nmodel.pprint()\n\nOutput (a little long, based on your example data):\n5 Set Declarations\n M : Size=1, Index=None, Ordered=Insertion\n Key : Dimen : Domain : Size : Members\n None : 1 : Any : 2 : {'jan', 'feb'}\n MP : Size=2, Index=M, Ordered=Insertion\n Key : Dimen : Domain : Size : Members\n feb : 1 : P : 8 : {200, 201, 202, 203, 204, 205, 206, 207}\n jan : 1 : P : 8 : {101, 102, 103, 456, 104, 105, 789, 123}\n MP_flat : Size=1, Index=None, Ordered=Insertion\n Key : Dimen : Domain : Size : Members\n None : 2 : MP_flat_domain : 16 : {('jan', 456), ('feb', 204), ('feb', 200), ('jan', 105), ('jan', 102), ('feb', 202), ('feb', 203), ('feb', 206), ('jan', 101), ('jan', 
789), ('jan', 104), ('feb', 205), ('jan', 123), ('jan', 103), ('feb', 201), ('feb', 207)}\n MP_flat_domain : Size=1, Index=None, Ordered=True\n Key : Dimen : Domain : Size : Members\n None : 2 : M*P : 32 : {('jan', 101), ('jan', 102), ('jan', 103), ('jan', 456), ('jan', 104), ('jan', 105), ('jan', 200), ('jan', 201), ('jan', 202), ('jan', 203), ('jan', 204), ('jan', 205), ('jan', 206), ('jan', 207), ('jan', 789), ('jan', 123), ('feb', 101), ('feb', 102), ('feb', 103), ('feb', 456), ('feb', 104), ('feb', 105), ('feb', 200), ('feb', 201), ('feb', 202), ('feb', 203), ('feb', 204), ('feb', 205), ('feb', 206), ('feb', 207), ('feb', 789), ('feb', 123)}\n P : Size=1, Index=None, Ordered=Insertion\n Key : Dimen : Domain : Size : Members\n None : 1 : Any : 16 : {101, 102, 103, 456, 104, 105, 200, 201, 202, 203, 204, 205, 206, 207, 789, 123}\n\n2 Param Declarations\n inventory : Size=16, Index=MP_flat, Domain=Any, Default=None, Mutable=False\n Key : Value\n ('feb', 200) : 50\n ('feb', 201) : 31\n ('feb', 202) : 50\n ('feb', 203) : 31\n ('feb', 204) : 70\n ('feb', 205) : 33\n ('feb', 206) : 30\n ('feb', 207) : 14\n ('jan', 101) : 31\n ('jan', 102) : 70\n ('jan', 103) : 33\n ('jan', 104) : 30\n ('jan', 105) : 14\n ('jan', 123) : 50\n ('jan', 456) : 31\n ('jan', 789) : 50\n price : Size=16, Index=MP_flat, Domain=Any, Default=None, Mutable=False\n Key : Value\n ('feb', 200) : 9\n ('feb', 201) : 9\n ('feb', 202) : 7\n ('feb', 203) : 7\n ('feb', 204) : 6\n ('feb', 205) : 5\n ('feb', 206) : 5\n ('feb', 207) : 5\n ('jan', 101) : 7.2\n ('jan', 102) : 6.4\n ('jan', 103) : 5.5\n ('jan', 104) : 5.2\n ('jan', 105) : 5.1\n ('jan', 123) : 9\n ('jan', 456) : 9\n ('jan', 789) : 7.6\n\n1 Var Declarations\n deliver : Size=16, Index=MP_flat\n Key : Lower : Value : Upper : Fixed : Stale : Domain\n ('feb', 200) : 0 : None : None : False : True : NonNegativeReals\n ('feb', 201) : 0 : None : None : False : True : NonNegativeReals\n ('feb', 202) : 0 : None : None : False : True : NonNegativeReals\n ('feb', 203) : 0 : None : None : False : True : NonNegativeReals\n ('feb', 204) : 0 : None : None : False : True : NonNegativeReals\n ('feb', 205) : 0 : None : None : False : True : NonNegativeReals\n ('feb', 206) : 0 : None : None : False : True : NonNegativeReals\n ('feb', 207) : 0 : None : None : False : True : NonNegativeReals\n ('jan', 101) : 0 : None : None : False : True : NonNegativeReals\n ('jan', 102) : 0 : None : None : False : True : NonNegativeReals\n ('jan', 103) : 0 : None : None : False : True : NonNegativeReals\n ('jan', 104) : 0 : None : None : False : True : NonNegativeReals\n ('jan', 105) : 0 : None : None : False : True : NonNegativeReals\n ('jan', 123) : 0 : None : None : False : True : NonNegativeReals\n ('jan', 456) : 0 : None : None : False : True : NonNegativeReals\n ('jan', 789) : 0 : None : None : False : True : NonNegativeReals\n\n2 Constraint Declarations\n cost_limit : Size=2, Index=M, Active=True\n Key : Lower : Body : Upper : Active\n feb : -Inf : 9*deliver[feb,200] + 9*deliver[feb,201] + 7*deliver[feb,202] + 7*deliver[feb,203] + 6*deliver[feb,204] + 5*deliver[feb,205] + 5*deliver[feb,206] + 5*deliver[feb,207] : 100.0 : True\n jan : -Inf : 7.2*deliver[jan,101] + 6.4*deliver[jan,102] + 5.5*deliver[jan,103] + 9*deliver[jan,456] + 5.2*deliver[jan,104] + 5.1*deliver[jan,105] + 7.6*deliver[jan,789] + 9*deliver[jan,123] : 100.0 : True\n delivery_limit : Size=16, Index=MP_flat, Active=True\n Key : Lower : Body : Upper : Active\n ('feb', 200) : -Inf : deliver[feb,200] : 50.0 : True\n ('feb', 201) : -Inf : 
deliver[feb,201] : 31.0 : True\n ('feb', 202) : -Inf : deliver[feb,202] : 50.0 : True\n ('feb', 203) : -Inf : deliver[feb,203] : 31.0 : True\n ('feb', 204) : -Inf : deliver[feb,204] : 70.0 : True\n ('feb', 205) : -Inf : deliver[feb,205] : 33.0 : True\n ('feb', 206) : -Inf : deliver[feb,206] : 30.0 : True\n ('feb', 207) : -Inf : deliver[feb,207] : 14.0 : True\n ('jan', 101) : -Inf : deliver[jan,101] : 31.0 : True\n ('jan', 102) : -Inf : deliver[jan,102] : 70.0 : True\n ('jan', 103) : -Inf : deliver[jan,103] : 33.0 : True\n ('jan', 104) : -Inf : deliver[jan,104] : 30.0 : True\n ('jan', 105) : -Inf : deliver[jan,105] : 14.0 : True\n ('jan', 123) : -Inf : deliver[jan,123] : 50.0 : True\n ('jan', 456) : -Inf : deliver[jan,456] : 31.0 : True\n ('jan', 789) : -Inf : deliver[jan,789] : 50.0 : True\n\n10 Declarations: M P MP MP_flat_domain MP_flat price inventory deliver delivery_limit cost_limit\n\n" ]
[ 1 ]
[]
[]
[ "mathematical_optimization", "optimization", "pyomo", "python" ]
stackoverflow_0074465104_mathematical_optimization_optimization_pyomo_python.txt
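For completeness, here is one possible way to write the per-store demand constraint that the question mentions but the answer does not show. This is only a sketch under assumptions that go beyond the original post: it adds a store set built from the question's demand_dict and a three-dimensional allocation variable (alloc is a made-up name), and it reuses the month/product components from the answer.

# stores seen in demand_dict (keys are (store, month) tuples)
model.S = pyo.Set(initialize=sorted({s for s, m in demand_dict}))
# allocation of product p to store s in month m, only for products offered that month
model.alloc = pyo.Var(model.S, model.MP_flat, domain=pyo.NonNegativeReals)

@model.Constraint(model.S, model.M)
def demand_satisfied(model, store, month):
    if (store, month) not in demand_dict:
        return pyo.Constraint.Skip  # no demand recorded for this store/month
    return sum(model.alloc[store, month, p] for p in model.MP[month]) >= demand_dict[store, month]

A full model would also need to tie this variable back to inventory, for example by capping the allocation of each (month, product) pair summed over stores at model.inventory[month, product].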
Q: Pyspark For Loop Not Creating Dataframes I have an initial dataframe df that looks like this: +-------+---+-----+------------------+----+-------------------+ |gender| pro|share| prediction|week| forecast_units| +------+----+-----+------------------+----+-------------------+ | Male|Polo| 0.01| 258.4054260253906| 37| 1809.0| | Male|Polo| 0.1| 332.4026794433594| 38| 2327.0| | Male|Polo| 0.15|425.97430419921875| 39| 2982.0| | Male|Polo| 0.2| 508.3385314941406| 40| 3558.0| .... I have the following code that attempts to create multiple dataframes from the original dataframe by applying some calculus. Initial I create four empty dataframes and then I want to loop through four different weeks, c_weeks, and save the result from the calculus to each dataframe on the list_dfs: schema = StructType([\ StructField("gender", StringType(),True), \ StructField("pro",StringType(),True), \ StructField("units_1_tpr",DoubleType(),True), \ StructField("units_1'_tpr",DoubleType(),True), \ StructField("units_15_tpr",DoubleType(),True), \ StructField("units_20_tpr",DoubleType(),True)]) df_wk1 = spark.createDataFrame([],schema=schema) df_wk2 = spark.createDataFrame([],schema=schema) df_wk3 = spark.createDataFrame([],schema=schema) df_wk4 = spark.createDataFrame([],schema=schema) list_dfs = [df_wk1, df_wk2, df_wk3, df_wk4] c_weeks = [37, 38, 39, 40] for data,weeknum in zip(list_dfs, campaign_weeks): data = df.filter(df.week == weeknum).groupBy(['gender', 'pro']).pivot("share").agg(first('forecast_units')) In the end, the dataframes continue empty. How do fix this? If this way is not possible how can I implement what I want? A: If you assign the result of df.filter(... to data it will be lost (actually, that line has no effect). Try this way: df_wk1, df_wk2, df_wk3, df_wk4 = [ df.filter(df.week == weeknum).groupBy(['gender', 'pro']).pivot("share").agg(first('forecast_units')) for weeknum in [37, 38, 39, 40] ] However, df.filter(df.week == weeknum).groupBy(['gender', 'pro']).pivot("share").agg(first('forecast_units')) create a DataFrame with a different schema from the one you probably want (looking at your question). This is an example of the DataFrame you get: +------+----+------+ |gender| pro| 0.0| +------+----+------+ | Male|Polo|3558.0| +------+----+------+ and this is its schema: root |-- gender: string (nullable = true) |-- pro: string (nullable = true) |-- 0.0: double (nullable = true)
Pyspark For Loop Not Creating Dataframes
I have an initial dataframe df that looks like this: +-------+---+-----+------------------+----+-------------------+ |gender| pro|share| prediction|week| forecast_units| +------+----+-----+------------------+----+-------------------+ | Male|Polo| 0.01| 258.4054260253906| 37| 1809.0| | Male|Polo| 0.1| 332.4026794433594| 38| 2327.0| | Male|Polo| 0.15|425.97430419921875| 39| 2982.0| | Male|Polo| 0.2| 508.3385314941406| 40| 3558.0| .... I have the following code that attempts to create multiple dataframes from the original dataframe by applying some calculus. Initial I create four empty dataframes and then I want to loop through four different weeks, c_weeks, and save the result from the calculus to each dataframe on the list_dfs: schema = StructType([\ StructField("gender", StringType(),True), \ StructField("pro",StringType(),True), \ StructField("units_1_tpr",DoubleType(),True), \ StructField("units_1'_tpr",DoubleType(),True), \ StructField("units_15_tpr",DoubleType(),True), \ StructField("units_20_tpr",DoubleType(),True)]) df_wk1 = spark.createDataFrame([],schema=schema) df_wk2 = spark.createDataFrame([],schema=schema) df_wk3 = spark.createDataFrame([],schema=schema) df_wk4 = spark.createDataFrame([],schema=schema) list_dfs = [df_wk1, df_wk2, df_wk3, df_wk4] c_weeks = [37, 38, 39, 40] for data,weeknum in zip(list_dfs, campaign_weeks): data = df.filter(df.week == weeknum).groupBy(['gender', 'pro']).pivot("share").agg(first('forecast_units')) In the end, the dataframes continue empty. How do fix this? If this way is not possible how can I implement what I want?
[ "If you assign the result of df.filter(... to data it will be lost (actually, that line has no effect). Try this way:\ndf_wk1, df_wk2, df_wk3, df_wk4 = [\n df.filter(df.week == weeknum).groupBy(['gender', 'pro']).pivot(\"share\").agg(first('forecast_units'))\n for weeknum in [37, 38, 39, 40]\n]\n\nHowever, df.filter(df.week == weeknum).groupBy(['gender', 'pro']).pivot(\"share\").agg(first('forecast_units')) create a DataFrame with a different schema from the one you probably want (looking at your question).\nThis is an example of the DataFrame you get:\n+------+----+------+\n|gender| pro| 0.0|\n+------+----+------+\n| Male|Polo|3558.0|\n+------+----+------+\n\nand this is its schema:\nroot\n |-- gender: string (nullable = true)\n |-- pro: string (nullable = true)\n |-- 0.0: double (nullable = true)\n\n" ]
[ 0 ]
[]
[]
[ "databricks", "dataframe", "loops", "pyspark", "python" ]
stackoverflow_0074464717_databricks_dataframe_loops_pyspark_python.txt
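If separate named DataFrames are not strictly required, a dictionary keyed by week number is often easier to work with than unpacking into df_wk1..df_wk4. A minimal sketch, assuming df and a SparkSession exist as in the question:

from pyspark.sql.functions import first

c_weeks = [37, 38, 39, 40]
dfs_by_week = {
    weeknum: df.filter(df.week == weeknum)
               .groupBy(['gender', 'pro'])
               .pivot("share")
               .agg(first('forecast_units'))
    for weeknum in c_weeks
}
# dfs_by_week[37] is then the pivoted DataFrame for week 37, and so on.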
Q: Multiple api calls (in separate functions) Flask, how do I make them asynchronous so they take less time? I am trying to make a Flask app. It has to make calls to different APIs. Each API call is wrapped in a function which gets and processes the response. How do I make these calls asynchronous so my app takes lesser time to load? Thanks. A sample function is here, I have a bunch of similar functions which make calls to other APIs- def api_call(): # Contact API try: url = f"https://example.com" response = requests.get(url) response.raise_for_status() except requests.RequestException: return "Oops, there was an error!" # Parse response try: res = response.json() return res["key"] except (KeyError, TypeError, ValueError): return "Oops, there was an error!" A: Use threading. from threading import Thread def api_caller(): while True: api_call() Thread(target=api_caller).start() app.run(host='0.0.0.0', port=8080) Hope this helps
Multiple api calls (in separate functions) Flask, how do I make them asynchronous so they take less time?
I am trying to make a Flask app. It has to make calls to different APIs. Each API call is wrapped in a function which gets and processes the response. How do I make these calls asynchronous so my app takes lesser time to load? Thanks. A sample function is here, I have a bunch of similar functions which make calls to other APIs- def api_call(): # Contact API try: url = f"https://example.com" response = requests.get(url) response.raise_for_status() except requests.RequestException: return "Oops, there was an error!" # Parse response try: res = response.json() return res["key"] except (KeyError, TypeError, ValueError): return "Oops, there was an error!"
[ "Use threading.\nfrom threading import Thread\n\ndef api_caller():\n while True:\n api_call()\n\nThread(target=api_caller).start()\napp.run(host='0.0.0.0', port=8080)\n\nHope this helps\n" ]
[ 0 ]
[]
[]
[ "flask", "python", "python_asyncio", "python_requests" ]
stackoverflow_0074463472_flask_python_python_asyncio_python_requests.txt
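The threading answer above starts a background loop that keeps calling the API forever; if the goal is instead to run several independent API-call functions concurrently within a single request, the standard library's thread pool is usually enough. A hedged sketch, where api_call_a and api_call_b stand in for the question's per-API wrapper functions:

from concurrent.futures import ThreadPoolExecutor

def gather_api_results():
    # run each wrapper function in its own worker thread and collect the results
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(f) for f in (api_call_a, api_call_b)]
        return [f.result() for f in futures]

Because requests spends most of its time waiting on the network, the threads overlap those waits even though each call itself stays synchronous.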
Q: How to determine whether or not a point is in the first quadrant with a function in python [screenshot of the attempted code omitted] I am creating a function 'first' with input 'point' in (x, y) form to test whether or not a point is in the first quadrant. I am unable to get the variable 'point' into (x, y) form for the function 'first' to determine whether or not the point is in the first quadrant. A: Change the problematic line as follows: x,y = point
How to determine whether or not a point is in the first quadrant with a function in python
[screenshot of the attempted code omitted] I am creating a function 'first' with input 'point' in (x, y) form to test whether or not a point is in the first quadrant. I am unable to get the variable 'point' into (x, y) form for the function 'first' to determine whether or not the point is in the first quadrant.
[ "Change the problematic line as follows:\nx,y = point\n\n" ]
[ 0 ]
[]
[]
[ "function", "if_statement", "python" ]
stackoverflow_0074466135_function_if_statement_python.txt
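Putting the answer's fix into a complete function, assuming point is a two-element tuple or list such as (3, 4):

def first(point):
    x, y = point             # unpack the pair into its coordinates
    return x > 0 and y > 0   # first quadrant: both coordinates positive

print(first((3, 4)))   # True
print(first((-1, 2)))  # False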
Q: How to filter json information into multiple values? I'm looking for a way to filter client information into variables that i can use to send emails. One of the variables im looking for is "nospam1@gmail.com" Can somebody help me with this? The code i tested is: import json with open('notion_data.json') as json_file: data = json.load(json_file) if [x for x in data['properties'] if x.get('plain_text')=='nospam1@gmail.com']: print("IN") else: print("NOT") The error i get: Traceback (most recent call last): File "C:\Users\stijn\PycharmProjects\notion\scrath_notion.py", line 13, in <module> if [x for x in data['properties'] if x.get('plain_text')=='nospam1@gmail.com']: ~~~~^^^^^^^^^^^^^^ KeyError: 'properties' Process finished with exit code 1 Data of the json file: { "object": "list", "results": [ { "object": "page", "id": "a94f4f2d-b965-43db-a8bf-02c1453033ee", "created_time": "2022-11-15T18:53:00.000Z", "last_edited_time": "2022-11-15T18:58:00.000Z", "created_by": { "object": "user", "id": "9b60ada0-dc62-441f-8c0a-e1668a878d0e" }, "last_edited_by": { "object": "user", "id": "9b60ada0-dc62-441f-8c0a-e1668a878d0e" }, "cover": null, "icon": null, "parent": { "type": "database_id", "database_id": "4279b28e-fd9d-4efd-b9f7-957699839dd4" }, "archived": false, "properties": { "email_sender": { "id": "CdJY", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "nospam2@gmail.com", "link": null }}}}} A: You have to dive through ALL of the intermediate objects. Assuming there are multiple results: for result in data['results']: texttype = result['properties']['email_sender']['type'] email = result['properties']['email_sender'][texttype][0]['text']['content'] if email == 'nospam2@gmail.com': print("winner")
How to filter json information into multiple values?
I'm looking for a way to filter client information into variables that i can use to send emails. One of the variables im looking for is "nospam1@gmail.com" Can somebody help me with this? The code i tested is: import json with open('notion_data.json') as json_file: data = json.load(json_file) if [x for x in data['properties'] if x.get('plain_text')=='nospam1@gmail.com']: print("IN") else: print("NOT") The error i get: Traceback (most recent call last): File "C:\Users\stijn\PycharmProjects\notion\scrath_notion.py", line 13, in <module> if [x for x in data['properties'] if x.get('plain_text')=='nospam1@gmail.com']: ~~~~^^^^^^^^^^^^^^ KeyError: 'properties' Process finished with exit code 1 Data of the json file: { "object": "list", "results": [ { "object": "page", "id": "a94f4f2d-b965-43db-a8bf-02c1453033ee", "created_time": "2022-11-15T18:53:00.000Z", "last_edited_time": "2022-11-15T18:58:00.000Z", "created_by": { "object": "user", "id": "9b60ada0-dc62-441f-8c0a-e1668a878d0e" }, "last_edited_by": { "object": "user", "id": "9b60ada0-dc62-441f-8c0a-e1668a878d0e" }, "cover": null, "icon": null, "parent": { "type": "database_id", "database_id": "4279b28e-fd9d-4efd-b9f7-957699839dd4" }, "archived": false, "properties": { "email_sender": { "id": "CdJY", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "nospam2@gmail.com", "link": null }}}}}
[ "You have to dive through ALL of the intermediate objects. Assuming there are multiple results:\nfor result in data['results']:\n texttype = result['properties']['email_sender']['type']\n email = result['properties']['email_sender'][texttype][0]['text']['content']\n if email == 'nospam2@gmail.com':\n print(\"winner\")\n\n" ]
[ 0 ]
[]
[]
[ "json", "python", "python_3.x" ]
stackoverflow_0074451186_json_python_python_3.x.txt
Q: Min function in Teradata unlike Python I am doing sort of a code migration from Python to Teradata: The python code is this: max = min(datetime.today(), date + timedelta(days=90)) where date variable holds a date. However, in Teradata, I know this min function won't work the same way. And, I have to get the 'date' using a select statement. SEL min(SELECT CURRENT_TIMESTAMP, SEL MAX(DTM) + INTERVAL '90' DAY FROM BILLS) as max Those select statements individually run correct. Only thing is I want the minimum of those two dates. Also, the 'SELECT CURRENT_TIMESTAMP' is generating output like 2022-11-16 12:18:37.120000+00:00. I only want 2022-11-16 12:18:37. How can this be done in a single query? Thank you. A: Were you looking for this one? SELECT LEAST(13, 6); SELECT LEAST( to_char(date1,'YYYYMMDD'), to_char(date2,'YYYYMMDD') ) ... A: No reason to convert to VARCHAR. Assuming DTM is TIMESTAMP(0), all you need is: SELECT LEAST(CAST(CURRENT_TIMESTAMP(0) AS TIMESTAMP(0)), MAX(DTM) + INTERVAL '90' DAY) FROM BILLS; If DTM has fractional seconds precision but the fractional part is always zero, then you can move the CAST to the outside: SELECT CAST(LEAST(CURRENT_TIMESTAMP(0), MAX(DTM) + INTERVAL '90' DAY) AS TIMESTAMP(0)) FROM BILLS; Teradata will not directly allow truncation of nonzero fractional seconds unless your system has set TruncRoundReturnTimestamp to TRUE, so if DTM potentially has fractional seconds then you may be stuck with a somewhat clumsy workaround like converting to character and back or subtracting the fractional seconds some other way such as DTM - (EXTRACT(SECOND FROM DTM) MOD 1)*INTERVAL '1' SECOND before you can CAST to TIMESTAMP(0)
Min function in Teradata unlike Python
I am doing sort of a code migration from Python to Teradata: The python code is this: max = min(datetime.today(), date + timedelta(days=90)) where date variable holds a date. However, in Teradata, I know this min function won't work the same way. And, I have to get the 'date' using a select statement. SEL min(SELECT CURRENT_TIMESTAMP, SEL MAX(DTM) + INTERVAL '90' DAY FROM BILLS) as max Those select statements individually run correct. Only thing is I want the minimum of those two dates. Also, the 'SELECT CURRENT_TIMESTAMP' is generating output like 2022-11-16 12:18:37.120000+00:00. I only want 2022-11-16 12:18:37. How can this be done in a single query? Thank you.
[ "Were you looking for this one?\nSELECT LEAST(13, 6); \nSELECT LEAST( to_char(date1,'YYYYMMDD'), to_char(date2,'YYYYMMDD') ) ...\n\n", "No reason to convert to VARCHAR. Assuming DTM is TIMESTAMP(0), all you need is:\nSELECT LEAST(CAST(CURRENT_TIMESTAMP(0) AS TIMESTAMP(0)),\n MAX(DTM) + INTERVAL '90' DAY)\nFROM BILLS;\n\nIf DTM has fractional seconds precision but the fractional part is always zero, then you can move the CAST to the outside:\nSELECT CAST(LEAST(CURRENT_TIMESTAMP(0),\n MAX(DTM) + INTERVAL '90' DAY) AS TIMESTAMP(0))\nFROM BILLS;\n\nTeradata will not directly allow truncation of nonzero fractional seconds unless your system has set TruncRoundReturnTimestamp to TRUE, so if DTM potentially has fractional seconds then you may be stuck with a somewhat clumsy workaround like converting to character and back or subtracting the fractional seconds some other way such as\nDTM - (EXTRACT(SECOND FROM DTM) MOD 1)*INTERVAL '1' SECOND\n\nbefore you can CAST to TIMESTAMP(0)\n" ]
[ 1, 0 ]
[]
[]
[ "python", "sql", "teradata", "teradatasql" ]
stackoverflow_0074464896_python_sql_teradata_teradatasql.txt
Q: Python's for loops Python is new to me and I'm having a little problem with the for loops. I'm used to for loops in Java, where you can set integers as you like in the loops, but I can't get it right in Python. The task I was given is to make a function that returns True or False. The function gets 3 integers: short rope amount, long rope amount and the wanted length. It's known that the short rope length is 1 meter and the long rope length is 5 meters. If the wanted length is in the range of the possible lengths of the ropes the function will return True, else False. For example, 1 short rope and 2 long ropes can get you the following lengths: [1, 5, 6, 10, 11], and if the wanted length that the function got is in this list of lengths it should return True. Here is my code: def wantedLength(short_amount, long_amount, wanted_length): short_rope_length = 1 long_rope_length = 5 for i in range(short_amount + 1): for j in range(long_amount + 1): my_length = [short_rope_length * i + long_rope_length * j, ", "] if wanted_length in my_length: return True else: return False But when I run the code I get the following error: TypeError: argument of type 'int' is not iterable What am I doing wrong in the for loop statement? Thanks in advance! I tried to change the for loops with other commands like [short_amount] etc. The traceback as requested: Traceback (most recent call last): File "C:\Users\barva\PycharmProjects\Giraffe\Ariel-Exc\Exc_2.py", line 89, in <module> print(wantedLength(a,b,c)) File "C:\Users\barva\PycharmProjects\Giraffe\Ariel-Exc\Exc_2.py", line 73, in wantedLength if wanted_length in my_length: TypeError: argument of type 'int' is not iterable
Python's for loops
Python is new to me and I'm having a little problem with the for loops. I'm used to for loops in Java, where you can set integers as you like in the loops, but I can't get it right in Python. The task I was given is to make a function that returns True or False. The function gets 3 integers: short rope amount, long rope amount and the wanted length. It's known that the short rope length is 1 meter and the long rope length is 5 meters. If the wanted length is in the range of the possible lengths of the ropes the function will return True, else False. For example, 1 short rope and 2 long ropes can get you the following lengths: [1, 5, 6, 10, 11], and if the wanted length that the function got is in this list of lengths it should return True. Here is my code: def wantedLength(short_amount, long_amount, wanted_length): short_rope_length = 1 long_rope_length = 5 for i in range(short_amount + 1): for j in range(long_amount + 1): my_length = [short_rope_length * i + long_rope_length * j, ", "] if wanted_length in my_length: return True else: return False But when I run the code I get the following error: TypeError: argument of type 'int' is not iterable What am I doing wrong in the for loop statement? Thanks in advance! I tried to change the for loops with other commands like [short_amount] etc. The traceback as requested: Traceback (most recent call last): File "C:\Users\barva\PycharmProjects\Giraffe\Ariel-Exc\Exc_2.py", line 89, in <module> print(wantedLength(a,b,c)) File "C:\Users\barva\PycharmProjects\Giraffe\Ariel-Exc\Exc_2.py", line 73, in wantedLength if wanted_length in my_length: TypeError: argument of type 'int' is not iterable
[]
[]
[ "The code you posted could not give you that error. On the other hand, the problem you have with the code is that in each iteration you create a new list with the current value (integer) and a string \",\". You need to append values to the list:\ndef wantedLength(short_amount, long_amount, wanted_length):\n short_rope_length = 1\n long_rope_length = 5\n my_length = list()\n for i in range(short_amount + 1):\n for j in range(long_amount + 1):\n my_length.append(short_rope_length * i + long_rope_length * j)\n if wanted_length in my_length:\n return True\n else:\n return False\n\n", "So, the thing is that you were assigning only the last item to length. If what you want is a array of all the possibilities, you can do something like:\ndef wantedLength(short_amount, long_amount, wanted_length):\n short_rope_length = 1\n long_rope_length = 5\n my_length=[]\n for i in range(short_amount + 1):\n for j in range(long_amount + 1):\n my_length.append(short_rope_length * i + long_rope_length * j)\n my_length.remove(0)\n my_length.sort()\n if wanted_length in my_length:\n return True\n else:\n return False\n\n" ]
[ -1, -1 ]
[ "python" ]
stackoverflow_0074466125_python.txt
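Both answers above repair the code by appending every reachable length to a list and then testing membership. An equivalent, shorter check that avoids building the list, under the same 1-meter/5-meter assumption, is a generator expression over the same two ranges:

def wantedLength(short_amount, long_amount, wanted_length):
    return any(
        i * 1 + j * 5 == wanted_length
        for i in range(short_amount + 1)
        for j in range(long_amount + 1)
        if i or j   # skip the zero-rope combination
    )

print(wantedLength(1, 2, 6))   # True: one short rope plus one long rope
print(wantedLength(1, 2, 4))   # False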
Q: Difference between SQLAlchemy Select and Query API Not sure if this has been asked before, but in the SQLAlchemy docs they talk about introducing select() as part of the new 2.0 style for the ORM. Previously (1.x style), the query() method were used to fetch data. What is the difference between these two? For example, for querying a Users table for a user with email and name we can do something as followed in Query API: session.query(Users).filter_by(name='name', email='mail@example.com').first() In Select API, the same leads to more code: from sqlalchemy import select query = select(Users).filter_by(name='name', email='mail@example.com') user = session.execute(query).fetchone() Is there any significant advantage of using one compared to other, for example, a performance boost? 2.0 API is still in active development yet it seems like their documentation is favoring the select API more than the "legacy" query API. Is this merely attempting to bridge the gap between the ORM and Core functionalities? A: The biggest difference is how the select statement is constructed. The new method creates a select object which is more dynamic since it can be constructed from other select statements, without explicit subquery definition: # select from a subqeuery styled query q = select(Users).filter_by(name='name', email='mail@example.com') q = select(Users.name, Users.email).select_from(q) The outcome is more "native sql" construction of querying, as per the latest selectable API. Queries can be defined and passed throughout statements in various functionalities such as where clauses, having, select_from, intersect, union, and so on. Performance wise, probably some slight benefit in python run time (compiling of query), but negligible compared to network latency + db work. Great question btw! My response is informed by my experience with the select API. I am curious to hear what others have to say. A: Since 1.4 SQLAlchemy internally has implemented query() by the select() API, so in terms of performance there should be very little difference. In version 1.4, all Core and ORM SELECT statements are rendered from a Select object directly; when the Query object is used, at statement invocation time it copies its state to a Select which is then invoked internally using 2.0 style execution. https://docs.sqlalchemy.org/en/14/changelog/migration_14.html#change-5159 Historically the difference between query() and select() was query() was used for ORM and select() for Core. Version 2.0 removes many differences between ORM and Core and makes working with them more uniform. Comparing select() and query() doesn't really make sense anymore. Although there is some backwards compatibility and you're not forced to adopt the 2.0 style immediately, I think it's wise to start adopting it, both in 1.4 and 2.0. I've been doing so for a while now and found it easy to get used to and soon more intuitively compared to the 1.x style. But I've been using SQLAlchemy only for about a year now and have many more years experience with native SQL.
Difference between SQLAlchemy Select and Query API
Not sure if this has been asked before, but in the SQLAlchemy docs they talk about introducing select() as part of the new 2.0 style for the ORM. Previously (1.x style), the query() method were used to fetch data. What is the difference between these two? For example, for querying a Users table for a user with email and name we can do something as followed in Query API: session.query(Users).filter_by(name='name', email='mail@example.com').first() In Select API, the same leads to more code: from sqlalchemy import select query = select(Users).filter_by(name='name', email='mail@example.com') user = session.execute(query).fetchone() Is there any significant advantage of using one compared to other, for example, a performance boost? 2.0 API is still in active development yet it seems like their documentation is favoring the select API more than the "legacy" query API. Is this merely attempting to bridge the gap between the ORM and Core functionalities?
[ "The biggest difference is how the select statement is constructed. The new method creates a select object which is more dynamic since it can be constructed from other select statements, without explicit subquery definition:\n# select from a subqeuery styled query\nq = select(Users).filter_by(name='name', email='mail@example.com')\nq = select(Users.name, Users.email).select_from(q)\n\nThe outcome is more \"native sql\" construction of querying, as per the latest selectable API. Queries can be defined and passed throughout statements in various functionalities such as where clauses, having, select_from, intersect, union, and so on.\nPerformance wise, probably some slight benefit in python run time (compiling of query), but negligible compared to network latency + db work.\n\n Great question btw! My response is informed by my experience with the select API. I am curious to hear what others have to say.\n\n", "Since 1.4 SQLAlchemy internally has implemented query() by the select() API, so in terms of performance there should be very little difference.\n\nIn version 1.4, all Core and ORM SELECT statements are rendered from a\nSelect object directly; when the Query object is used, at statement\ninvocation time it copies its state to a Select which is then invoked\ninternally using 2.0 style execution.\n\nhttps://docs.sqlalchemy.org/en/14/changelog/migration_14.html#change-5159\nHistorically the difference between query() and select() was query() was used for ORM and select() for Core. Version 2.0 removes many differences between ORM and Core and makes working with them more uniform. Comparing select() and query() doesn't really make sense anymore.\nAlthough there is some backwards compatibility and you're not forced to adopt the 2.0 style immediately, I think it's wise to start adopting it, both in 1.4 and 2.0. I've been doing so for a while now and found it easy to get used to and soon more intuitively compared to the 1.x style. But I've been using SQLAlchemy only for about a year now and have many more years experience with native SQL.\n" ]
[ 7, 2 ]
[]
[]
[ "orm", "python", "sql", "sqlalchemy" ]
stackoverflow_0072828293_orm_python_sql_sqlalchemy.txt
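One practical detail worth adding to the question's own comparison: session.execute(select(Users)).fetchone() returns a Row object, not a Users instance. To get back the ORM objects that Query.first() used to return, unwrap the result with scalars(). A small sketch, assuming SQLAlchemy 1.4+ with Users mapped and session active, as in the question:

from sqlalchemy import select

stmt = select(Users).filter_by(name='name', email='mail@example.com')
user = session.scalars(stmt).first()            # a Users object, or None
# equivalently: session.execute(stmt).scalars().first()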
Q: Concatenate two txt files Python I have two txt files, the first file contains strings separated by space, as follows: M N T F S Q V W V F S D T P S R L P E L M N G A Q A L A N Q I N T F V L N D A D G A Q A I Q L G A N H V W K L N G K P D D N T F S Q V W V F S D T P S R L P E L M N G A Q A L A N Q I N T F V L N D A D G A Q A I Q L G A N H V W K L N G K P D D R The second file contains strings of 0 and 1s, as follows: 0000000000000000000000000001000000000000000000000000000000000 0000000000010000000000000000000000000000000000000000000000000 I want to get a new file that join the first row of file1 with the first row of file2 and so on separated by TAB. How could I do that? I have this function for reading the files. with open("/home/darteagam/diploma/bert/files/bert_aa10.txt") as f1,open("/home/darteagam/diploma/bert/files/out_bert_10.txt") as f2: def read(f1,f2): for x in f1: print(x) for y in f2: print(y) read(f1,f2) A: Just zip the two. with open("/home/darteagam/diploma/bert/files/bert_aa10.txt") as f1,open("/home/darteagam/diploma/bert/files/out_bert_10.txt") as f2: for a,b in zip(f1,f2): print('\t'.join([a.strip(), b.strip()]) As a side note, it's bad practice to embed full pathnames in your code. Some day, you will want to run this on some other computer where that path doesn't work You should manage your current directory so you can use simple file names or relative paths.
Concatenate two txt files Python
I have two txt files, the first file contains strings separated by space, as follows: M N T F S Q V W V F S D T P S R L P E L M N G A Q A L A N Q I N T F V L N D A D G A Q A I Q L G A N H V W K L N G K P D D N T F S Q V W V F S D T P S R L P E L M N G A Q A L A N Q I N T F V L N D A D G A Q A I Q L G A N H V W K L N G K P D D R The second file contains strings of 0 and 1s, as follows: 0000000000000000000000000001000000000000000000000000000000000 0000000000010000000000000000000000000000000000000000000000000 I want to get a new file that join the first row of file1 with the first row of file2 and so on separated by TAB. How could I do that? I have this function for reading the files. with open("/home/darteagam/diploma/bert/files/bert_aa10.txt") as f1,open("/home/darteagam/diploma/bert/files/out_bert_10.txt") as f2: def read(f1,f2): for x in f1: print(x) for y in f2: print(y) read(f1,f2)
[ "Just zip the two.\nwith open(\"/home/darteagam/diploma/bert/files/bert_aa10.txt\") as f1,open(\"/home/darteagam/diploma/bert/files/out_bert_10.txt\") as f2:\n for a,b in zip(f1,f2):\n print('\\t'.join([a.strip(), b.strip()])\n\nAs a side note, it's bad practice to embed full pathnames in your code. Some day, you will want to run this on some other computer where that path doesn't work You should manage your current directory so you can use simple file names or relative paths.\n" ]
[ 3 ]
[]
[]
[ "concatenation", "file", "python", "python_3.x", "string" ]
stackoverflow_0074466246_concatenation_file_python_python_3.x_string.txt
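To produce the new file the question asks for rather than printing, the same zip loop can write straight to an output file. A short sketch using the question's input paths and a made-up output name merged.txt:

with open("/home/darteagam/diploma/bert/files/bert_aa10.txt") as f1, \
     open("/home/darteagam/diploma/bert/files/out_bert_10.txt") as f2, \
     open("merged.txt", "w") as out:
    for a, b in zip(f1, f2):
        out.write(a.strip() + "\t" + b.strip() + "\n")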
Q: how can implement crunch wordlist generator This is what I wrote... def brute(m,pattern=None): letters = 'abcdefghijklmnopqrstuvwxyz' spec = '#@&$%*()+' upper = letters.upper() number = '1234567890' info = {'@':spec,'^':upper,'%':letters,'*':number} chars = [info.get(p,letters) for _,p in zip(range(m),pattern or letters)] def inner(m): if m: for l in chars[~m]: for j in inner(m-1): yield(l+j) else: for l in chars[~m]: yield l for i in inner(m-1): print(i) I want to know how to write a tool similar to crunch in kali... I would be grateful if you could implement it in Python. And why is my code so slow even when I write the output to file?? How to make it faster?? A: Here is an itertools based approach which might do what you want: import itertools, string def brute(m,pattern=None): if pattern is None: pattern = '%'*m letters = string.ascii_lowercase upper = string.ascii_uppercase spec = '#@&$%*()+' number = '1234567890' info = {'@':spec,'^':upper,'%':letters,'*':number} chars = [info.get(d,letters) for d in pattern] return [''.join(p) for p in itertools.product(*chars)] For example, words = brute(6,'@%%*@^') takes about 2 seconds to evaluate to a list of 14236560 words.
How can I implement a crunch wordlist generator
This is what I wrote... def brute(m,pattern=None): letters = 'abcdefghijklmnopqrstuvwxyz' spec = '#@&$%*()+' upper = letters.upper() number = '1234567890' info = {'@':spec,'^':upper,'%':letters,'*':number} chars = [info.get(p,letters) for _,p in zip(range(m),pattern or letters)] def inner(m): if m: for l in chars[~m]: for j in inner(m-1): yield(l+j) else: for l in chars[~m]: yield l for i in inner(m-1): print(i) I want to know how to write a tool similar to crunch in kali... I would be grateful if you could implement it in Python. And why is my code so slow even when I write the output to file?? How to make it faster??
[ "Here is an itertools based approach which might do what you want:\nimport itertools, string\n\ndef brute(m,pattern=None):\n if pattern is None:\n pattern = '%'*m\n letters = string.ascii_lowercase\n upper = string.ascii_uppercase\n spec = '#@&$%*()+'\n number = '1234567890'\n info = {'@':spec,'^':upper,'%':letters,'*':number}\n chars = [info.get(d,letters) for d in pattern]\n return [''.join(p) for p in itertools.product(*chars)]\n\nFor example, words = brute(6,'@%%*@^') takes about 2 seconds to evaluate to a list of 14236560 words.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074465867_python.txt
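On the speed question: a plausible reason for the slowness is the per-line print call and the recursive generator, since the per-candidate work is tiny. Streaming the joined strings straight into a buffered file with writelines usually helps. A hedged sketch that combines this with the itertools approach from the answer (pattern symbols as in the question; the output filename is made up):

import itertools, string

def brute_to_file(pattern, path):
    info = {'@': '#@&$%*()+', '^': string.ascii_uppercase,
            '%': string.ascii_lowercase, '*': '0123456789'}
    chars = [info.get(c, string.ascii_lowercase) for c in pattern]
    with open(path, 'w') as fh:
        # stream the product lazily; no large list is held in memory
        fh.writelines(''.join(p) + '\n' for p in itertools.product(*chars))

brute_to_file('@%%*@^', 'wordlist.txt')  # roughly 14.2 million lines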
Q: FileNotFoundError: [WinError 3] The system cannot find the path specified when the files actually exist I am trying to work on copying files to a different directory based on a specific file name listed in excel. I am using shutil to copy files from one directory to another directory, but it keep showing the FileNotFound. This is the error message: Traceback (most recent call last): File "C:\Python\HellWorld\TestCopyPaste.py", line 20, in <module> shutil.copytree(i, output_file, dirs_exist_ok=True) File "C:\Users\Asus\Anaconda3\envs\untitled\lib\shutil.py", line 556, in copytree with os.scandir(src) as itr: FileNotFoundError: [WinError 3] The system cannot find the path specified: 'Test.pdf' I am still new to python, please let me know if there's any part can be enhanced :) Below are my codes: import os import shutil import pandas as pd #Set file path input_file = "C:\\Users\\Asus\\Desktop\\Python\\Input\\" output_file = "C:\\Users\\Asus\\Desktop\\Python\\Output\\" #Set new variable for the file path to store the list of files file_list = os.listdir(input_file) #search the required file name that need to copy to another location #Create loop to search the files condition = pd.read_excel(r'C:\\Users\\Asus\\Desktop\\Python\Condition.xlsx') for i in file_list: for filename in condition: if filename in i: print(i) shutil.copytree(i, output_file, dirs_exist_ok=True) A: As mentioned in comments one issue is that you aren't joining the filename to the full filepath ("input_file"). I'm not really familiar with shutil but I believe the function you want to use is shutil.copy not shutil.copytree. It looks like copytree copies the directory structure of a specified source directory and you are specifically only looking at a list of files within a top level directory. Another issue is how you are reading the excel file. Assuming the files are listed in a single column it should be something like: condition = pd.read_excel("C:\\Users\\Asus\\Desktop\\Python\\Condition.xlsx",index_col=None,header=None) (I also removed your 'r' prefix to the string in this part) Then to get the items in the first column: condition[0].tolist() I also believe the second for loop is unnecessary. You can use the same if statement you already have in a single loop. The following is my solution, just change he paths to what you want. I changed variable names to make it a little more readable as well. (assumes all files are listed in a single column in excel with no header. And all file are in the input file directory with no subdirectories) import os import shutil import pandas as pd #Set file path input_file_dir = "C:\\Users\\myusername\\py\\input\\" output_file_dir = "C:\\Users\\myusername\\py\\output\\" #Set new variable for the file path to store the list of files file_list_from_dir = os.listdir(input_file_dir) #search the required file name that need to copy to another location #Create loop to search the files file_list_from_excel = pd.read_excel("C:\\Users\\myusername\\py\\Condition.xlsx",index_col=None,header=None) file_list_from_excel = file_list_from_excel[0].tolist() for thefileNameinDir in file_list_from_dir: if thefileNameinDir in file_list_from_excel: print(f"File matched: {thefileNameinDir}") tempSourcePath = os.path.join(input_file_dir,thefileNameinDir) shutil.copy(tempSourcePath, output_file_dir)
FileNotFoundError: [WinError 3] The system cannot find the path specified when the files actually exist
I am trying to work on copying files to a different directory based on a specific file name listed in excel. I am using shutil to copy files from one directory to another directory, but it keep showing the FileNotFound. This is the error message: Traceback (most recent call last): File "C:\Python\HellWorld\TestCopyPaste.py", line 20, in <module> shutil.copytree(i, output_file, dirs_exist_ok=True) File "C:\Users\Asus\Anaconda3\envs\untitled\lib\shutil.py", line 556, in copytree with os.scandir(src) as itr: FileNotFoundError: [WinError 3] The system cannot find the path specified: 'Test.pdf' I am still new to python, please let me know if there's any part can be enhanced :) Below are my codes: import os import shutil import pandas as pd #Set file path input_file = "C:\\Users\\Asus\\Desktop\\Python\\Input\\" output_file = "C:\\Users\\Asus\\Desktop\\Python\\Output\\" #Set new variable for the file path to store the list of files file_list = os.listdir(input_file) #search the required file name that need to copy to another location #Create loop to search the files condition = pd.read_excel(r'C:\\Users\\Asus\\Desktop\\Python\Condition.xlsx') for i in file_list: for filename in condition: if filename in i: print(i) shutil.copytree(i, output_file, dirs_exist_ok=True)
[ "As mentioned in comments one issue is that you aren't joining the filename to the full filepath (\"input_file\").\nI'm not really familiar with shutil but I believe the function you want to use is shutil.copy not shutil.copytree. It looks like copytree copies the directory structure of a specified source directory and you are specifically only looking at a list of files within a top level directory. Another issue is how you are reading the excel file.\nAssuming the files are listed in a single column it should be something like:\ncondition = pd.read_excel(\"C:\\\\Users\\\\Asus\\\\Desktop\\\\Python\\\\Condition.xlsx\",index_col=None,header=None) \n(I also removed your 'r' prefix to the string in this part)\nThen to get the items in the first column: condition[0].tolist()\nI also believe the second for loop is unnecessary. You can use the same if statement you already have in a single loop.\nThe following is my solution, just change he paths to what you want. I changed variable names to make it a little more readable as well.\n(assumes all files are listed in a single column in excel with no header. And all file are in the input file directory with no subdirectories)\nimport os\nimport shutil\nimport pandas as pd\n\n#Set file path\ninput_file_dir = \"C:\\\\Users\\\\myusername\\\\py\\\\input\\\\\"\noutput_file_dir = \"C:\\\\Users\\\\myusername\\\\py\\\\output\\\\\"\n\n#Set new variable for the file path to store the list of files\nfile_list_from_dir = os.listdir(input_file_dir)\n\n#search the required file name that need to copy to another location\n#Create loop to search the files\nfile_list_from_excel = pd.read_excel(\"C:\\\\Users\\\\myusername\\\\py\\\\Condition.xlsx\",index_col=None,header=None)\n\nfile_list_from_excel = file_list_from_excel[0].tolist()\n\nfor thefileNameinDir in file_list_from_dir: \n if thefileNameinDir in file_list_from_excel:\n print(f\"File matched: {thefileNameinDir}\")\n tempSourcePath = os.path.join(input_file_dir,thefileNameinDir)\n shutil.copy(tempSourcePath, output_file_dir)\n\n" ]
[ 0 ]
[]
[]
[ "file_copying", "filenotfounderror", "loops", "python", "shutil" ]
stackoverflow_0074463662_file_copying_filenotfounderror_loops_python_shutil.txt
Q: How to interrupt a grpc call gracefully in the client side? I wrote a client which starts multiple connections to a grpc server to request something. I want to stop all the other grpc call once I got a reply. I use an Event to control this process. However, I don't know how to terminate a grpc call gracefully. The below is what I did. The code will cause an error: too many open files. Can somebody help me? How to terminate a grpc call gracefully? def request_something(event): with grpc.insecure_channel(ip) as channel: stub = GrpcServiceStub(channel) req = Request() response_future = stub.GetResponse.future(req) while not response_future.done() and not event.is_set(): time.sleep(0.1) if event.is_set(): # try to interrupt grpc call if not response_future.cancel(): while not response_future.done(): time.sleep(0.1) print("Stop request") channel.close() return response = response_future.result() return response event = Event() with futures.ThreadPoolExecutor(max_workers=...) as executor: res = [] for _ in range(...): future = executor.submit(request_something, event) res.append(future) for future in futures.as_completed(res): print("now we get the first response") event.set() executor.shutdown(wait=False) A: You could use the future API on your client calls (https://grpc.github.io/grpc/python/grpc.html#grpc.UnaryUnaryMultiCallable.future) and then call cancel on the futures (https://grpc.github.io/grpc/python/grpc.html#grpc.Future.cancel). Full cancellation example in Python: https://github.com/grpc/grpc/tree/master/examples/python/cancellation Hope this helps!
How to interrupt a grpc call gracefully in the client side?
I wrote a client which starts multiple connections to a grpc server to request something. I want to stop all the other grpc call once I got a reply. I use an Event to control this process. However, I don't know how to terminate a grpc call gracefully. The below is what I did. The code will cause an error: too many open files. Can somebody help me? How to terminate a grpc call gracefully? def request_something(event): with grpc.insecure_channel(ip) as channel: stub = GrpcServiceStub(channel) req = Request() response_future = stub.GetResponse.future(req) while not response_future.done() and not event.is_set(): time.sleep(0.1) if event.is_set(): # try to interrupt grpc call if not response_future.cancel(): while not response_future.done(): time.sleep(0.1) print("Stop request") channel.close() return response = response_future.result() return response event = Event() with futures.ThreadPoolExecutor(max_workers=...) as executor: res = [] for _ in range(...): future = executor.submit(request_something, event) res.append(future) for future in futures.as_completed(res): print("now we get the first response") event.set() executor.shutdown(wait=False)
[ "You could use the future API on your client calls (https://grpc.github.io/grpc/python/grpc.html#grpc.UnaryUnaryMultiCallable.future) and then call cancel on the futures (https://grpc.github.io/grpc/python/grpc.html#grpc.Future.cancel).\nFull cancellation example in Python: https://github.com/grpc/grpc/tree/master/examples/python/cancellation\nHope this helps!\n" ]
[ 0 ]
[]
[]
[ "grpc_python", "multithreading", "python" ]
stackoverflow_0074384177_grpc_python_multithreading_python.txt
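The answer above points at the gRPC future API and Future.cancel() but shows no code. Below is a minimal sketch of one way to race several calls and cancel the rest once the first reply arrives; GrpcServiceStub, GetResponse and Request are the names used in the question, the addresses are placeholders, and error handling (for example, every call failing) is left out.

import threading
import grpc

def first_response(addresses):
    winner = []                      # first successful reply, if any
    got_one = threading.Event()
    pending = []                     # (channel, call) pairs so channels can be closed

    def on_done(call):
        # Runs on a gRPC thread when the call completes or is cancelled.
        if not call.cancelled() and call.exception() is None and not got_one.is_set():
            winner.append(call.result())
            got_one.set()

    for address in addresses:
        channel = grpc.insecure_channel(address)
        call = GrpcServiceStub(channel).GetResponse.future(Request())
        call.add_done_callback(on_done)
        pending.append((channel, call))

    got_one.wait()                   # block until some call has delivered a result
    for channel, call in pending:
        call.cancel()                # cancelling an already finished call is a no-op
        channel.close()              # closing channels avoids "too many open files"
    return winner[0]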
Q: Is it possible to use RPi.GPIO library in docker? I used the official docker image of flask. And installed the rpi.gpio library in the container pip install rpi.gpio It's succeeded: root@e31ba5814e51:/app# pip install rpi.gpio Collecting rpi.gpio Downloading RPi.GPIO-0.7.0.tar.gz (30 kB) Building wheels for collected packages: rpi.gpio Building wheel for rpi.gpio (setup.py) ... done Created wheel for rpi.gpio: filename=RPi.GPIO-0.7.0-cp39-cp39-linux_armv7l.whl size=68495 sha256=0c2c43867c304f2ca21da6cc923b13e4ba22a60a77f7309be72d449c51c669db Stored in directory: /root/.cache/pip/wheels/09/be/52/39b324bfaf72ab9a47e81519994b2be5ddae1e99ddacd7a18e Successfully built rpi.gpio Installing collected packages: rpi.gpio Successfully installed rpi.gpio-0.7.0 But it prompted the following error: Traceback (most recent call last): File "/app/hello/app2.py", line 2, in <module> import RPi.GPIO as GPIO File "/usr/local/lib/python3.9/site-packages/RPi/GPIO/__init__.py", line 23, in <module> from RPi._GPIO import * RuntimeError: This module can only be run on a Raspberry Pi! I tried the method in this link, but it didn't work: Docker Access to Raspberry Pi GPIO Pins I want to know if this can be done, and if so, how to proceed. A: First make sure you're running Docker container as "privileged" like so: docker run --privileged -it debian:latest Also, double check that you're running an image that is meant to run on your processor. For example, if you try to run "debian:latest" on your Raspberry Pi 4 it will actually pull "arm32v7/debian:latest". A: Yes it is! To extend the answer of Ari M.: It's more safe to run $ docker run --device /dev/gpiomem -d whatever as it avoids full privileged host access. It's also necessary to build your image on RaspberyPi. This answers showed me the way: How to enable wiringpi GPIO control inside a Docker container Docker Access to Raspberry Pi GPIO Pins
Is it possible to use RPi.GPIO library in docker?
I used the official docker image of flask. And installed the rpi.gpio library in the container pip install rpi.gpio It's succeeded: root@e31ba5814e51:/app# pip install rpi.gpio Collecting rpi.gpio Downloading RPi.GPIO-0.7.0.tar.gz (30 kB) Building wheels for collected packages: rpi.gpio Building wheel for rpi.gpio (setup.py) ... done Created wheel for rpi.gpio: filename=RPi.GPIO-0.7.0-cp39-cp39-linux_armv7l.whl size=68495 sha256=0c2c43867c304f2ca21da6cc923b13e4ba22a60a77f7309be72d449c51c669db Stored in directory: /root/.cache/pip/wheels/09/be/52/39b324bfaf72ab9a47e81519994b2be5ddae1e99ddacd7a18e Successfully built rpi.gpio Installing collected packages: rpi.gpio Successfully installed rpi.gpio-0.7.0 But it prompted the following error: Traceback (most recent call last): File "/app/hello/app2.py", line 2, in <module> import RPi.GPIO as GPIO File "/usr/local/lib/python3.9/site-packages/RPi/GPIO/__init__.py", line 23, in <module> from RPi._GPIO import * RuntimeError: This module can only be run on a Raspberry Pi! I tried the method in this link, but it didn't work: Docker Access to Raspberry Pi GPIO Pins I want to know if this can be done, and if so, how to proceed.
[ "First make sure you're running Docker container as \"privileged\" like so:\ndocker run --privileged -it debian:latest\n\nAlso, double check that you're running an image that is meant to run on your processor.\nFor example, if you try to run \"debian:latest\" on your Raspberry Pi 4 it will actually pull \"arm32v7/debian:latest\".\n", "Yes it is! To extend the answer of Ari M.:\nIt's more safe to run\n$ docker run --device /dev/gpiomem -d whatever\n\nas it avoids full privileged host access.\nIt's also necessary to build your image on RaspberyPi.\nThis answers showed me the way:\nHow to enable wiringpi GPIO control inside a Docker container\nDocker Access to Raspberry Pi GPIO Pins\n" ]
[ 0, 0 ]
[]
[]
[ "docker", "python", "raspberry_pi4" ]
stackoverflow_0064926963_docker_python_raspberry_pi4.txt
Q: How to insert table name into query as variable? I'm trying to make a query to select a table from database. I created a list of table names and exported it to a list, saved necessary list fields as variables, then inserted these variables into a database query to export data. I do not initially know name of table but find it through logic and write it to a variable. It gives me an error: uch = "_uch" kam = "_kamera" pot = "_uzvvod" conn = sqlite3.connect("kotelnaya.sqlite") table = pd.read_sql_query("SELECT name FROM sqlite_master WHERE type='table'", conn) l = len(table) m = [0] * l i = 0 k = 1 for k in range(l): m[i] = table.at[i, "name"] i = i + 1 for num in m: if uch in str(num): stroka_uch = num for num in m: if kam in str(num): stroka_kam = num for num in m: if pot in str(num): stroka_pot = num table = pd.read_sql_query("SELECT * FROM {}".format(stroka_uch), conn) Error: cur.execute(*args, **kwargs) sqlite3.OperationalError: near "7": syntax error The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Z_Python_TGRaschet\venv\database_from_to.py", line 70, in <module> table = pd.read_sql_query("SELECT * FROM {}".format(stroka_uch), conn) File "C:\Z_Python_TGRaschet\venv\lib\site-packages\pandas\io\sql.py", line 397, in read_sql_query return pandas_sql.read_query( File "C:\Z_Python_TGRaschet\venv\lib\site-packages\pandas\io\sql.py", line 2078, in read_query cursor = self.execute(*args) File "C:\Z_Python_TGRaschet\venv\lib\site-packages\pandas\io\sql.py", line 2030, in execute raise ex from exc pandas.errors.DatabaseError: Execution failed on sql 'SELECT * FROM Datatable 7 Test 2_uch': near "7": syntax error
How to insert table name into query as variable?
I'm trying to make a query to select a table from database. I created a list of table names and exported it to a list, saved necessary list fields as variables, then inserted these variables into a database query to export data. I do not initially know name of table but find it through logic and write it to a variable. It gives me an error: uch = "_uch" kam = "_kamera" pot = "_uzvvod" conn = sqlite3.connect("kotelnaya.sqlite") table = pd.read_sql_query("SELECT name FROM sqlite_master WHERE type='table'", conn) l = len(table) m = [0] * l i = 0 k = 1 for k in range(l): m[i] = table.at[i, "name"] i = i + 1 for num in m: if uch in str(num): stroka_uch = num for num in m: if kam in str(num): stroka_kam = num for num in m: if pot in str(num): stroka_pot = num table = pd.read_sql_query("SELECT * FROM {}".format(stroka_uch), conn) Error: cur.execute(*args, **kwargs) sqlite3.OperationalError: near "7": syntax error The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Z_Python_TGRaschet\venv\database_from_to.py", line 70, in <module> table = pd.read_sql_query("SELECT * FROM {}".format(stroka_uch), conn) File "C:\Z_Python_TGRaschet\venv\lib\site-packages\pandas\io\sql.py", line 397, in read_sql_query return pandas_sql.read_query( File "C:\Z_Python_TGRaschet\venv\lib\site-packages\pandas\io\sql.py", line 2078, in read_query cursor = self.execute(*args) File "C:\Z_Python_TGRaschet\venv\lib\site-packages\pandas\io\sql.py", line 2030, in execute raise ex from exc pandas.errors.DatabaseError: Execution failed on sql 'SELECT * FROM Datatable 7 Test 2_uch': near "7": syntax error
[]
[]
[ "This would be the easiest solution I guess:\nsql = \"select * from \" + stroka_uch\ntable = pd.read_sql_query(sql = sql, con = conn)\n" ]
[ -1 ]
[ "python", "sql", "sqlite" ]
stackoverflow_0074465414_python_sql_sqlite.txt
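The traceback shows the actual problem: the table name found in sqlite_master is 'Datatable 7 Test 2_uch', and an identifier with spaces has to be quoted before it can be interpolated into SQL. A minimal sketch, reusing the database file and table name from the question; quoting the identifier also keeps the query valid for any other table names discovered the same way.

import sqlite3
import pandas as pd

def read_table(conn, table_name):
    # Double-quote the identifier and escape embedded double quotes,
    # so names with spaces such as 'Datatable 7 Test 2_uch' work.
    quoted = '"' + table_name.replace('"', '""') + '"'
    return pd.read_sql_query("SELECT * FROM {}".format(quoted), conn)

conn = sqlite3.connect("kotelnaya.sqlite")        # file name from the question
table = read_table(conn, "Datatable 7 Test 2_uch")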
Q: Search and replace all cells with a certain value with openpyxl For my job I have large amount of excel files in which I have to replace certain values. I just started with openpyxl and tried the following code: import openpyxl from openpyxl import load_workbook wb1 = load_workbook(filename = 'testfile.xlsx') ws1 = wb1.active i = 0 for r in range(1,ws1.max_row+1): for c in range(1,ws1.max_column+1): s = ws1.cell(r,c).value if s != None or 'NM181841' in s: ws1.cell(r,c).value = s.replace("hello","hi") print("row {} col {} : {}".format(r,c,s)) i += 1 wb.save('targetfile.xlsx') print("{} cells updated".format(i)) On which I get following error "TypeError: argument of type 'NoneType' is not iterable" this happends in line five: if s != None or 'NM181841' in s: Does anyone have an idea what I did wrong? Thanks! A: You are trying to iterate through a type which is not iterable in the following: or 'NM181841' in s: What this line practically says is: "find 'NM181841' in 's'" thus it would required to loop through 's' which is not possible since TypeError: argument of type 'NoneType' is not iterable A: I found my own mistake, instead of: s = ws1.cell(r,c).value I had to use s = str(ws1.cell(r,c).value) With the help of the @MwBakker
Search and replace all cells with a certain value with openpyxl
For my job I have large amount of excel files in which I have to replace certain values. I just started with openpyxl and tried the following code: import openpyxl from openpyxl import load_workbook wb1 = load_workbook(filename = 'testfile.xlsx') ws1 = wb1.active i = 0 for r in range(1,ws1.max_row+1): for c in range(1,ws1.max_column+1): s = ws1.cell(r,c).value if s != None or 'NM181841' in s: ws1.cell(r,c).value = s.replace("hello","hi") print("row {} col {} : {}".format(r,c,s)) i += 1 wb.save('targetfile.xlsx') print("{} cells updated".format(i)) On which I get following error "TypeError: argument of type 'NoneType' is not iterable" this happends in line five: if s != None or 'NM181841' in s: Does anyone have an idea what I did wrong? Thanks!
[ "You are trying to iterate through a type which is not iterable in the following:\nor 'NM181841' in s:\n\nWhat this line practically says is: \"find 'NM181841' in 's'\" thus it would required to loop through 's' which is not possible since\nTypeError: argument of type 'NoneType' is not iterable\n\n", "I found my own mistake, instead of:\ns = ws1.cell(r,c).value\nI had to use\ns = str(ws1.cell(r,c).value)\nWith the help of the @MwBakker\n" ]
[ 1, 0 ]
[]
[]
[ "jupyter_notebook", "openpyxl", "python" ]
stackoverflow_0074466196_jupyter_notebook_openpyxl_python.txt
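Rather than converting every cell value to str, the condition can short-circuit on non-strings, which fixes the TypeError and also skips numeric cells. A minimal sketch using the file names and the hello/hi replacement from the question; iter_rows is used instead of manual row/column indices.

from openpyxl import load_workbook

wb1 = load_workbook(filename="testfile.xlsx")
ws1 = wb1.active
replaced = 0
for row in ws1.iter_rows():
    for cell in row:
        value = cell.value
        # isinstance guards against None and numbers before using 'in' / replace.
        if isinstance(value, str) and "hello" in value:
            cell.value = value.replace("hello", "hi")
            replaced += 1
wb1.save("targetfile.xlsx")
print("{} cells updated".format(replaced))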
Q: ModuleNotFoundError: No module named 'pydantic' from pydantic import BaseModel fails in debug mode with PyCharm, even after installing pydantic, with ModuleNotFoundError: No module named 'pydantic' A: I found the solution: open PyCharm preferences and install the package from there.
ModuleNotFoundError: No module named 'pydantic'
from pydantic import BaseModel fails in debug mode with PyCharm, even after installing pydantic, with ModuleNotFoundError: No module named 'pydantic'
[ "I found the solution: open PyCharm preferences and install from Pycharm the package.\n", "Try this:\nsudo pip3 install pydantic\n\nand it works.\n", "If you are getting the error while using pipenv then you need to install pydantic by using pipenv install pydantic command.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0064257411_python.txt
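All three answers boil down to installing the package into the right place; the usual reason the import still fails afterwards is that pip targeted a different interpreter than the one the PyCharm run/debug configuration uses. A small check, standard library only, to see which interpreter is actually running and whether it can see pydantic:

import sys

print(sys.executable)                     # interpreter used by the run configuration
try:
    import pydantic
    print("pydantic found at", pydantic.__file__)
except ModuleNotFoundError:
    # Installing with this exact interpreter removes any ambiguity about
    # which environment pip wrote into.
    print("pydantic missing; run:", sys.executable, "-m pip install pydantic")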
Q: TypeError: generatecode() takes 0 positional arguments but 1 was given I have the code below: from tkinter import * class Window(Frame): def __init__(self, master = None): Frame.__init__(self, master) self.master = master self.init_window() def init_window(self): self.master.title("COD:WWII Codes") self.pack(fill=BOTH, expand=1) codeButton = Button( self, text="Generate Code", command=self.generatecode ) codeButton.place(x=0, y=0) def generatecode(self): f = open("C:/Programs/codes.txt", "r") t.insert(1.0. f.red()) root = Tk() root.geometry("400x300") app = Window(root) root.mainloop() Then, I got the error below: TypeError: generatecode() takes 0 positional arguments but 1 was given So, how can I solve the error? A: When you call a method on a class (such as generatecode() in this case), Python automatically passes self as the first argument to the function. So when you call self.my_func(), it's more like calling MyClass.my_func(self). So when Python tells you "generatecode() takes 0 positional arguments but 1 was given", it's telling you that your method is set up to take no arguments, but the self argument is still being passed when the method is called, so in fact it is receiving one argument. Adding self to your method definition should resolve the problem. def generatecode(self): pass # Do stuff here Alternatively, you can make the method static, in which case Python will not pass self as the first argument: @staticmethod def generatecode(): pass # Do stuff here A: I got the same error: TypeError: test() takes 0 positional arguments but 1 was given When defining an instance method without self and I called it as shown below: class Person: # ↓↓ Without "self" def test(): print("Test") obj = Person() obj.test() # Here So, I put self to the instance method and called it: class Person: # ↓↓ Put "self" def test(self): print("Test") obj = Person() obj.test() # Here Then, the error was solved: Test In addition, even if defining an instance method with self, we cannot call it directly by class name as shown below: class Person: # Here def test(self): print("Test") Person.test() # Cannot call it directly by class name Then, the error below occurs: TypeError: test() missing 1 required positional argument: 'self' But, if defining an instance method without self, we can call it directly by class name as shown below: class Person: # ↓↓ Without "self" def test(): print("Test") Person.test() # Can call it directly by class name Then, we can get the result below without any errors: Test In detail, I explain about instance method in my answer for What is an "instance method" in Python? and also explain about @staticmethod and @classmethod in my answer for @classmethod vs @staticmethod in Python. A: The most upvoted answer does solve this issue, And just in case anyone is doing this inside of a jupyternotebook. You must restart the kernel of the jupyternotebook in order for changes to update in the notebook
TypeError: generatecode() takes 0 positional arguments but 1 was given
I have the code below: from tkinter import * class Window(Frame): def __init__(self, master = None): Frame.__init__(self, master) self.master = master self.init_window() def init_window(self): self.master.title("COD:WWII Codes") self.pack(fill=BOTH, expand=1) codeButton = Button( self, text="Generate Code", command=self.generatecode ) codeButton.place(x=0, y=0) def generatecode(self): f = open("C:/Programs/codes.txt", "r") t.insert(1.0. f.red()) root = Tk() root.geometry("400x300") app = Window(root) root.mainloop() Then, I got the error below: TypeError: generatecode() takes 0 positional arguments but 1 was given So, how can I solve the error?
[ "When you call a method on a class (such as generatecode() in this case), Python automatically passes self as the first argument to the function. So when you call self.my_func(), it's more like calling MyClass.my_func(self).\nSo when Python tells you \"generatecode() takes 0 positional arguments but 1 was given\", it's telling you that your method is set up to take no arguments, but the self argument is still being passed when the method is called, so in fact it is receiving one argument.\nAdding self to your method definition should resolve the problem.\ndef generatecode(self):\n pass # Do stuff here\n\nAlternatively, you can make the method static, in which case Python will not pass self as the first argument:\n@staticmethod\ndef generatecode():\n pass # Do stuff here\n\n", "I got the same error:\n\nTypeError: test() takes 0 positional arguments but 1 was given\n\nWhen defining an instance method without self and I called it as shown below:\nclass Person:\n # ↓↓ Without \"self\" \n def test(): \n print(\"Test\")\n\nobj = Person()\nobj.test() # Here\n\nSo, I put self to the instance method and called it:\nclass Person:\n # ↓↓ Put \"self\" \n def test(self): \n print(\"Test\")\n\nobj = Person()\nobj.test() # Here\n\nThen, the error was solved:\nTest\n\nIn addition, even if defining an instance method with self, we cannot call it directly by class name as shown below:\nclass Person:\n # Here \n def test(self): \n print(\"Test\")\n\nPerson.test() # Cannot call it directly by class name\n\nThen, the error below occurs:\n\nTypeError: test() missing 1 required positional argument: 'self'\n\nBut, if defining an instance method without self, we can call it directly by class name as shown below:\nclass Person:\n # ↓↓ Without \"self\" \n def test(): \n print(\"Test\")\n\nPerson.test() # Can call it directly by class name\n\nThen, we can get the result below without any errors:\nTest\n\nIn detail, I explain about instance method in my answer for What is an \"instance method\" in Python? and also explain about @staticmethod and @classmethod in my answer for @classmethod vs @staticmethod in Python.\n", "The most upvoted answer does solve this issue,\nAnd just in case anyone is doing this inside of a jupyternotebook. You must restart the kernel of the jupyternotebook in order for changes to update in the notebook\n" ]
[ 72, 0, 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0043839536_python_tkinter.txt
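Besides the missing self, the original generatecode has two typos (a period where a comma belongs in t.insert(1.0. f.red()) and f.red instead of f.read) and it references a t widget that is never created. A minimal runnable sketch, assuming the window should own a Text widget; self.text is an addition for the example, not something in the question's code.

from tkinter import Tk, Frame, Button, Text, BOTH

class Window(Frame):
    def __init__(self, master=None):
        Frame.__init__(self, master)
        self.master = master
        self.master.title("COD:WWII Codes")
        self.pack(fill=BOTH, expand=1)
        self.text = Text(self)                 # widget the original code was missing
        self.text.place(x=0, y=40)
        Button(self, text="Generate Code",
               command=self.generatecode).place(x=0, y=0)

    def generatecode(self):                    # self fixes the TypeError
        with open("C:/Programs/codes.txt", "r") as f:   # path from the question
            self.text.insert("1.0", f.read())

root = Tk()
root.geometry("400x300")
Window(root)
root.mainloop()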
Q: Converting Dataframe column to datetime doesn't complete I am trying to convert a column of a large dataset (660k rows) into datetime type in Jupyter notebook. I have found two ways to do it: pd.to_datetime(df['local_time'],format='%d/%m/%Y') df['local_time'].astype("datetime64[ns]") but none of them complete even in couple hours. Is there a way to make it faster? It doesn't look that any of the laptop's resources would be used 100%. My laptop is Acer S7. Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz. Ram 8Gb A: I am not sure what was the reason behind it, but I was converting multiple columns at once and the time increased many many times. df[['date_1', 'date_2', 'date_3', 'date_4']] = df[['date_1', 'date_2', 'date_3', 'date_4']].astype('datetime64[ns]') after doing everything in separate steps, time became decent df['date_1'] = df['date_1'].astype('datetime64[ns]') df['date_2'] = df['date_2'].astype('datetime64[ns]') df['date_3'] = df['date_3'].astype('datetime64[ns]') df['date_4'] = df['date_4'].astype('datetime64[ns]')
Converting Dataframe column to datetime doesn't complete
I am trying to convert a column of a large dataset (660k rows) into datetime type in Jupyter notebook. I have found two ways to do it: pd.to_datetime(df['local_time'],format='%d/%m/%Y') df['local_time'].astype("datetime64[ns]") but none of them complete even in couple hours. Is there a way to make it faster? It doesn't look that any of the laptop's resources would be used 100%. My laptop is Acer S7. Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz. Ram 8Gb
[ "I am not sure what was the reason behind it, but I was converting multiple columns at once and the time increased many many times.\ndf[['date_1', 'date_2', 'date_3', 'date_4']] = df[['date_1', 'date_2', 'date_3', 'date_4']].astype('datetime64[ns]')\n\nafter doing everything in separate steps, time became decent\ndf['date_1'] = df['date_1'].astype('datetime64[ns]')\ndf['date_2'] = df['date_2'].astype('datetime64[ns]')\ndf['date_3'] = df['date_3'].astype('datetime64[ns]')\ndf['date_4'] = df['date_4'].astype('datetime64[ns]')\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python" ]
stackoverflow_0074447407_dataframe_datetime_pandas_python.txt
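The answer's finding was that converting each column on its own was far faster than converting the whole slice at once. A small sketch of that loop which also passes the explicit %d/%m/%Y format from the question, so pandas does not have to infer the format for every value; the two-row frame is only a stand-in for the real 660k-row data.

import pandas as pd

df = pd.DataFrame({
    "date_1": ["01/02/2020", "15/03/2021"],
    "date_2": ["02/02/2020", "16/03/2021"],
})

for col in ["date_1", "date_2"]:
    # One column at a time, with an explicit format string.
    df[col] = pd.to_datetime(df[col], format="%d/%m/%Y")

print(df.dtypes)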
Q: Add new column with calculated values I want to add a new column called 'NormalizedAnnualCompensation' to my df and populate the column with values from one of three calculations: keep value 2 if value 1 is labeled "Yearly", or multiply it by 12 if labeled "Monthly", or multiply it by 52 if labeled "Weekly." The two existing columns have dtype INT64. The first called column contains values [Yearly, Monthly, Weekly]; the second called column contains salary totals. I running Python 3 in a Jup Notebook. Is the code for each calculation correct? How do I trigger the calculations to run through the new column? I tried writing an if statement and later placed it in a for loop. Neither worked. df.insert(31,['NormalizedAnnualCompensation'], # (also tried:) for x in df['CompFreq']: if df['CompFreq'] == "Yearly": df['NormalizedAnnualCompensation'] = df['CompTotal'] elif df['CompFreq'] == "Monthly": df['NormalizedAnnualCompensation'] = df['CompTotal']*12 elif df['CompFreq'] == "Weekly": df['NormalizedAnnualCompensation'] = df['CompTotal']*52 else: print(df['CompFreq'].index "not valid") ) A: Try using DataFrame.replaceto compute the factor for "CompTotal": import pandas as pd df = pd.DataFrame([ {"CompFreq": "Yearly", "CompTotal": 100}, {"CompFreq": "Monthly", "CompTotal": 10}, {"CompFreq": "Weekly", "CompTotal": 1}, ]) factor = df["CompFreq"].replace({"Yearly": 1, "Monthly": 12, "Weekly": 52}) normalized = factor * df["CompTotal"] df["NormalizedAnnualCompensation"] = normalized If you want something more like if, you can use DataFrame.where: normalized = df["CompTotal"].where( df["CompFreq"] == "Yearly", (12 * df["CompTotal"]).where( df["CompFreq"] == "Monthly", 52 * df["CompTotal"] ) ) However, I'd recommend the first option with replace for readability.
Add new column with calculated values
I want to add a new column called 'NormalizedAnnualCompensation' to my df and populate the column with values from one of three calculations: keep value 2 if value 1 is labeled "Yearly", or multiply it by 12 if labeled "Monthly", or multiply it by 52 if labeled "Weekly." The two existing columns have dtype INT64. The first called column contains values [Yearly, Monthly, Weekly]; the second called column contains salary totals. I running Python 3 in a Jup Notebook. Is the code for each calculation correct? How do I trigger the calculations to run through the new column? I tried writing an if statement and later placed it in a for loop. Neither worked. df.insert(31,['NormalizedAnnualCompensation'], # (also tried:) for x in df['CompFreq']: if df['CompFreq'] == "Yearly": df['NormalizedAnnualCompensation'] = df['CompTotal'] elif df['CompFreq'] == "Monthly": df['NormalizedAnnualCompensation'] = df['CompTotal']*12 elif df['CompFreq'] == "Weekly": df['NormalizedAnnualCompensation'] = df['CompTotal']*52 else: print(df['CompFreq'].index "not valid") )
[ "Try using DataFrame.replaceto compute the factor for \"CompTotal\":\nimport pandas as pd\n\ndf = pd.DataFrame([\n {\"CompFreq\": \"Yearly\", \"CompTotal\": 100}, \n {\"CompFreq\": \"Monthly\", \"CompTotal\": 10}, \n {\"CompFreq\": \"Weekly\", \"CompTotal\": 1},\n])\n\nfactor = df[\"CompFreq\"].replace({\"Yearly\": 1, \"Monthly\": 12, \"Weekly\": 52})\nnormalized = factor * df[\"CompTotal\"]\ndf[\"NormalizedAnnualCompensation\"] = normalized\n\nIf you want something more like if, you can use DataFrame.where:\nnormalized = df[\"CompTotal\"].where(\n df[\"CompFreq\"] == \"Yearly\", \n (12 * df[\"CompTotal\"]).where(\n df[\"CompFreq\"] == \"Monthly\", \n 52 * df[\"CompTotal\"]\n )\n)\n\nHowever, I'd recommend the first option with replace for readability.\n" ]
[ 0 ]
[]
[]
[ "calculated_columns", "python" ]
stackoverflow_0074465142_calculated_columns_python.txt
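One more option besides the replace and where shown above is Series.map, which turns any frequency not listed in the mapping into NaN instead of silently keeping it, so bad rows are easy to spot. Column names are taken from the question; the three-row frame is only illustrative.

import pandas as pd

df = pd.DataFrame({
    "CompFreq": ["Yearly", "Monthly", "Weekly"],
    "CompTotal": [60000, 5000, 1200],
})

# Unknown frequencies map to NaN, unlike replace which would keep them unchanged.
factor = df["CompFreq"].map({"Yearly": 1, "Monthly": 12, "Weekly": 52})
df["NormalizedAnnualCompensation"] = factor * df["CompTotal"]
print(df)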
Q: write multi dimensional numpy array to many files I was wondering if there was a more efficient way of doing the following without using loops. I have a numpy array with the shape (i, x, y, z). Essentially I have i elements of the shape (x, y, z). I want to write each element to a separate file so that I have i files, each with the data from a single element. In my case, each element is an image, but I'm sure a solution can be format agnostic. I'm currently looping through each of the i elements and writing them out one at a time. As i gets really large, this takes a progressively longer time. Is there a better way or a useful library which could make this more efficient? Update I tried the suggestion to use multiprocessing by using concurrent.futures both the thread pool and then also trying the process pool. It was simpler in the code but the time to complete was 4x slower. i in this case is approximately 10000 while x and y are approximately 750 A: This sounds very suitable for multiprocessing, as the different elements need to be processed separately and can be save to disk independantly. Python has a usefull package for this, called multiprocessing, with a variety of pooling, processing, and other options. Here's a simple (and comment-documented) example of usage: from multiprocessing import Process import numpy as np # This should be your existing function def write_file(element): # write file pass # You'll still be looping of course, but in parallel over batches. This is a helper function for looping over a "batch" def write_list_of_files(elements_list): for element in elements_list: write_file(element) # You're data goes here... all_elements = np.ones((1000, 256, 256, 3)) num_procs = 10 # Depends on system limitations, number of cpu-cores, etc. procs = [Process(target=write_list_of_files, args=[all_elements[k::num_procs, ...]]) for k in range(num_procs)] # Each of these processes in the list is going to run the "write_list_of_files" function, but have separate inputs, due to the indexing trick of using "k::num_procs"... for p in procs: p.start() # Each process starts running independantly for p in procs: p.join() # assures the code won't continue until all are "joined" and done. Optional obviously... print('All done!') # This only runs onces all procs are done, due to "p.join"
write multi dimensional numpy array to many files
I was wondering if there was a more efficient way of doing the following without using loops. I have a numpy array with the shape (i, x, y, z). Essentially I have i elements of the shape (x, y, z). I want to write each element to a separate file so that I have i files, each with the data from a single element. In my case, each element is an image, but I'm sure a solution can be format agnostic. I'm currently looping through each of the i elements and writing them out one at a time. As i gets really large, this takes a progressively longer time. Is there a better way or a useful library which could make this more efficient? Update I tried the suggestion to use multiprocessing by using concurrent.futures both the thread pool and then also trying the process pool. It was simpler in the code but the time to complete was 4x slower. i in this case is approximately 10000 while x and y are approximately 750
[ "This sounds very suitable for multiprocessing, as the different elements need to be processed separately and can be save to disk independantly.\nPython has a usefull package for this, called multiprocessing, with a variety of pooling, processing, and other options.\nHere's a simple (and comment-documented) example of usage:\nfrom multiprocessing import Process\nimport numpy as np \n\n\n# This should be your existing function\ndef write_file(element):\n # write file\n pass\n\n\n# You'll still be looping of course, but in parallel over batches. This is a helper function for looping over a \"batch\"\ndef write_list_of_files(elements_list):\n for element in elements_list:\n write_file(element)\n\n\n# You're data goes here...\nall_elements = np.ones((1000, 256, 256, 3))\n\nnum_procs = 10 # Depends on system limitations, number of cpu-cores, etc.\nprocs = [Process(target=write_list_of_files, args=[all_elements[k::num_procs, ...]]) for k in range(num_procs)] # Each of these processes in the list is going to run the \"write_list_of_files\" function, but have separate inputs, due to the indexing trick of using \"k::num_procs\"...\n\nfor p in procs:\n p.start() # Each process starts running independantly\n\nfor p in procs:\n p.join() # assures the code won't continue until all are \"joined\" and done. Optional obviously...\n \nprint('All done!') # This only runs onces all procs are done, due to \"p.join\"\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "multidimensional_array", "numpy", "python" ]
stackoverflow_0074466262_arrays_multidimensional_array_numpy_python.txt
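The answer leaves write_file as a stub; below is one concrete possibility that saves each (x, y, z) slice as its own .npy file. The output directory and file-name pattern are made up for the example, and a real image format would need a library such as Pillow or imageio instead of np.save.

import numpy as np
from pathlib import Path

out_dir = Path("slices")
out_dir.mkdir(exist_ok=True)

def write_file(element, index):
    # Each element keeps its (x, y, z) shape inside its own file.
    np.save(out_dir / "element_{:05d}.npy".format(index), element)

all_elements = np.ones((100, 8, 8, 3))      # small stand-in for the (i, x, y, z) array
for idx, element in enumerate(all_elements):
    write_file(element, idx)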
Q: multiprocessing: instances unaffected when iterating over them I'm trying to use the multiprocessing module to run in parallel the same method over a list object instances. The closest question that I've found is "apply-a-method-to-a-list-of-objects-in-parallel-using-multi-processing". However the solution given there seems to not work in my problem. Here is an example of what I'm trying to achieve: class Foo: def __init__(self): self.bar = None def put_bar(self): self.bar = 1.0 if __name__ == "__main__": instances = [Foo() for _ in range(100)] for instance in instances: instance.put_bar() # correctly prints 1.0 print(instances[0].bar) However, trying to parallelize this with the multiprocessing module, the variable bar gets unaffected: import os from multiprocessing import Pool class Foo: def __init__(self): self.bar = None def put_bar(self): self.bar = 1.0 def worker(instance): return instance.put_bar() if __name__ == "__main__": instances = [Foo() for _ in range(100)] with Pool(os.cpu_count()) as pool: pool.map(worker, (instance for instance in instances)) # prints None print(instances[0].bar) Any help on figuring it out where is the wrong step(s) is highly appreciated. A: You can create managed objects from your Foo class just like the multiprocessing.managers.SyncManager instance created with a call to multiptocessing.Manager() can create certain managed objects such as a list or dict. What is returned is a special proxy object that is shareable among processes. When method calls are made on such a proxy, the name of the method and its arguments are sent via a pipe or socket to a process created by the manager and the specified method is invoked on the actual object residing in the manager's address space. In effect, you are making something similar to a remote method call. This clearly is much slower than directly operating on the object but if you have to you have to. Your coded example, which just a bit too artificial, doesn't leave much alternatives. Therefore, I will modify your example slightly so that Foo.put_bar takes an argument and your worker function worker will determine what value to pass to put_bar based on some calculation. In that way, the value to be used as the argument to Foo.put_bar is returned back to the main process, which does all the actual updating of the instances: Example Without Using a Managed Object with a Special Proxy import os from multiprocessing import Pool class Foo: def __init__(self): self.bar = None def put_bar(self, value): self.bar = value def worker(instance): # Code to compute a result omitted. # We will for demo purposes always use 1.0: return 1.0 if __name__ == "__main__": instances = [Foo() for _ in range(100)] with Pool(os.cpu_count()) as pool: # (instance for instance in instances) instead of instances below # doesn't accomplish anything: for idx, result in enumerate(pool.map(worker, instances)): instances[idx].put_bar(result) # prints 1.0 print(instances[0].bar) Example Using a Managed Object import os from multiprocessing import Pool from multiprocessing.managers import NamespaceProxy, BaseManager class Foo: def __init__(self): self.bar = None def put_bar(self, value): self.bar = value def worker(instance): # Code to compute a result omitted. # We will for demo purposes always use 1.0: return instance.put_bar(1.0) # If we did not need to expose attributes such as bar, then we could # let Python automatically generate a proxy that would expose just the # methods. But here we do need to access directly the `bar` attribute. 
# The alternative would be for Foo to define method get_bar that returns # self.bar. class FooProxy(NamespaceProxy): _exposed_ = ('__getattribute__', '__setattr__', '__delattr__', 'put_bar', 'bar') def put_bar(self, value): return self._callmethod('put_bar', args=(value,)) class FooManager(BaseManager): pass if __name__ == "__main__": FooManager.register('Foo', Foo, FooProxy) with FooManager() as manager: instances = [manager.Foo() for _ in range(100)] with Pool(os.cpu_count()) as pool: # (instance for instance in instances) instead of instances below # doesn't accomplish anything: pool.map(worker, instances) # We must do all access to the proxy while the manager process # is still running, i.e. before this block is exited: # prints 1.0 print(instances[0].bar) Example Using a Managed Object Without a Special Proxy Here we do not need to access attributes directly on a managed object because we have defined method get_bar: import os from multiprocessing import Pool from multiprocessing.managers import NamespaceProxy, BaseManager class Foo: def __init__(self): self._bar = None def put_bar(self, value): self._bar = value def get_bar(self): return self._bar def worker(instance): # Code to compute a result omitted. # We will for demo purposes always use 1.0: return instance.put_bar(1.0) class FooManager(BaseManager): pass if __name__ == "__main__": FooManager.register('Foo', Foo) with FooManager() as manager: instances = [manager.Foo() for _ in range(100)] with Pool(os.cpu_count()) as pool: # (instance for instance in instances) instead of instances below # doesn't accomplish anything: pool.map(worker, instances) # We must do all access to the proxy while the manager process # is still running, i.e. before this block is exited: # prints 1.0 print(instances[0].get_bar())
multiprocessing: instances unaffected when iterating over them
I'm trying to use the multiprocessing module to run in parallel the same method over a list object instances. The closest question that I've found is "apply-a-method-to-a-list-of-objects-in-parallel-using-multi-processing". However the solution given there seems to not work in my problem. Here is an example of what I'm trying to achieve: class Foo: def __init__(self): self.bar = None def put_bar(self): self.bar = 1.0 if __name__ == "__main__": instances = [Foo() for _ in range(100)] for instance in instances: instance.put_bar() # correctly prints 1.0 print(instances[0].bar) However, trying to parallelize this with the multiprocessing module, the variable bar gets unaffected: import os from multiprocessing import Pool class Foo: def __init__(self): self.bar = None def put_bar(self): self.bar = 1.0 def worker(instance): return instance.put_bar() if __name__ == "__main__": instances = [Foo() for _ in range(100)] with Pool(os.cpu_count()) as pool: pool.map(worker, (instance for instance in instances)) # prints None print(instances[0].bar) Any help on figuring it out where is the wrong step(s) is highly appreciated.
[ "You can create managed objects from your Foo class just like the multiprocessing.managers.SyncManager instance created with a call to multiptocessing.Manager() can create certain managed objects such as a list or dict. What is returned is a special proxy object that is shareable among processes. When method calls are made on such a proxy, the name of the method and its arguments are sent via a pipe or socket to a process created by the manager and the specified method is invoked on the actual object residing in the manager's address space. In effect, you are making something similar to a remote method call. This clearly is much slower than directly operating on the object but if you have to you have to. Your coded example, which just a bit too artificial, doesn't leave much alternatives.\nTherefore, I will modify your example slightly so that Foo.put_bar takes an argument and your worker function worker will determine what value to pass to put_bar based on some calculation. In that way, the value to be used as the argument to Foo.put_bar is returned back to the main process, which does all the actual updating of the instances:\nExample Without Using a Managed Object with a Special Proxy\nimport os\nfrom multiprocessing import Pool\n\n\nclass Foo:\n\n def __init__(self):\n self.bar = None\n\n def put_bar(self, value):\n self.bar = value\n\n\ndef worker(instance):\n # Code to compute a result omitted.\n # We will for demo purposes always use 1.0:\n return 1.0\n\n\nif __name__ == \"__main__\":\n\n instances = [Foo() for _ in range(100)]\n\n with Pool(os.cpu_count()) as pool:\n # (instance for instance in instances) instead of instances below\n # doesn't accomplish anything:\n for idx, result in enumerate(pool.map(worker, instances)):\n instances[idx].put_bar(result)\n\n # prints 1.0\n print(instances[0].bar)\n\nExample Using a Managed Object\nimport os\nfrom multiprocessing import Pool\nfrom multiprocessing.managers import NamespaceProxy, BaseManager\n\nclass Foo:\n\n def __init__(self):\n self.bar = None\n\n def put_bar(self, value):\n self.bar = value\n\n\ndef worker(instance):\n # Code to compute a result omitted.\n # We will for demo purposes always use 1.0:\n return instance.put_bar(1.0)\n\n\n# If we did not need to expose attributes such as bar, then we could\n# let Python automatically generate a proxy that would expose just the\n# methods. But here we do need to access directly the `bar` attribute.\n# The alternative would be for Foo to define method get_bar that returns\n# self.bar.\nclass FooProxy(NamespaceProxy):\n _exposed_ = ('__getattribute__', '__setattr__', '__delattr__', 'put_bar', 'bar')\n\n def put_bar(self, value):\n return self._callmethod('put_bar', args=(value,))\n\nclass FooManager(BaseManager):\n pass\n\nif __name__ == \"__main__\":\n\n FooManager.register('Foo', Foo, FooProxy)\n with FooManager() as manager:\n instances = [manager.Foo() for _ in range(100)]\n\n with Pool(os.cpu_count()) as pool:\n # (instance for instance in instances) instead of instances below\n # doesn't accomplish anything:\n pool.map(worker, instances)\n # We must do all access to the proxy while the manager process\n # is still running, i.e. 
before this block is exited:\n # prints 1.0\n print(instances[0].bar)\n\nExample Using a Managed Object Without a Special Proxy\nHere we do not need to access attributes directly on a managed object because we have defined method get_bar:\nimport os\nfrom multiprocessing import Pool\nfrom multiprocessing.managers import NamespaceProxy, BaseManager\n\nclass Foo:\n\n def __init__(self):\n self._bar = None\n\n def put_bar(self, value):\n self._bar = value\n\n def get_bar(self):\n return self._bar\n\n\ndef worker(instance):\n # Code to compute a result omitted.\n # We will for demo purposes always use 1.0:\n return instance.put_bar(1.0)\n\nclass FooManager(BaseManager):\n pass\n\nif __name__ == \"__main__\":\n\n FooManager.register('Foo', Foo)\n with FooManager() as manager:\n instances = [manager.Foo() for _ in range(100)]\n\n with Pool(os.cpu_count()) as pool:\n # (instance for instance in instances) instead of instances below\n # doesn't accomplish anything:\n pool.map(worker, instances)\n # We must do all access to the proxy while the manager process\n # is still running, i.e. before this block is exited:\n # prints 1.0\n print(instances[0].get_bar())\n\n" ]
[ 2 ]
[]
[]
[ "multiprocessing", "oop", "python" ]
stackoverflow_0074464496_multiprocessing_oop_python.txt
Q: Open GUI while algo is running in the background I am attempting to keep Output running in the background while having an open GUI. The GUI displays the finding from the Algo just fine. But it does not continue to run in the background. Also, I am trying to get the Output to repeat from new, not continue. Hope you can help. Output = Output[Output['Match_Acc.'] >= 1] import PySimpleGUI as sg import pandas as pd font = ('Areal', 11) sg.theme('BrownBlue') data = Output headings = ['Result', 'Column1', 'Column2', 'Column3'] df = pd.DataFrame(data) headings = df.columns.tolist() data = df.values.tolist() layout = [[sg.Table(data, headings=headings, justification='left', key='-TABLE-')], [sg.Button('Run'), sg.Button('Exit')]] sg.Window("Overview", layout).read(close=True) def job(): Output schedule.every(5).seconds.do(job) while True: schedule.run_pending() time.sleep(1) I have tried to move the schedule.run on the end and the start and the result is the same. A: Window closed after statement sg.Window("Overview", layout).read(close=True). With method window.hide() to hide the window, window.un_hide to show the window again. from random import randint from time import sleep import threading import PySimpleGUI as sg def algo(window): global running while running: sleep(5) # The code for algo window.write_event_value('Algo Data', randint(0, 4)) # Event to update GUI window.write_event_value('Algo Done', None) # Event to close GUI headings = ['President', 'Date of Birth'] data = [ ['Ronald Reagan', 'February 6'], ['Abraham Lincoln', 'February 12'], ['George Washington', 'February 22'], ['Andrew Jackson', 'March 15'], ['Thomas Jefferson', 'April 13'], ] sg.theme('DarkBlue4') layout = [ [sg.Table(data, headings=headings, justification='left', key='-TABLE-')], [sg.Push(), sg.Button('OK')], ] window = sg.Window("ALGO", layout, enable_close_attempted_event=True) running = True # Flag to quit Algo threading.Thread(target=algo, args=(window,), daemon=True).start() # Run algo in thread while True: event, values = window.read() if event == sg.WIN_CLOSE_ATTEMPTED_EVENT: # Close button to confirm if exit if sg.popup_yes_no("Are you sure to exit ?", title='Warning') == 'Yes': # Stop thread first running = False else: continue elif event == 'OK': # Hide GUI window.hide() elif event == 'Algo Data': index = values[event] window.un_hide() window['-TABLE-'].update(select_rows=[index]) elif event == 'Algo Done': # Thread end break window.close()
Open GUI while algo is running in the background
I am attempting to keep Output running in the background while having an open GUI. The GUI displays the finding from the Algo just fine. But it does not continue to run in the background. Also, I am trying to get the Output to repeat from new, not continue. Hope you can help. Output = Output[Output['Match_Acc.'] >= 1] import PySimpleGUI as sg import pandas as pd font = ('Areal', 11) sg.theme('BrownBlue') data = Output headings = ['Result', 'Column1', 'Column2', 'Column3'] df = pd.DataFrame(data) headings = df.columns.tolist() data = df.values.tolist() layout = [[sg.Table(data, headings=headings, justification='left', key='-TABLE-')], [sg.Button('Run'), sg.Button('Exit')]] sg.Window("Overview", layout).read(close=True) def job(): Output schedule.every(5).seconds.do(job) while True: schedule.run_pending() time.sleep(1) I have tried to move the schedule.run on the end and the start and the result is the same.
[ "Window closed after statement sg.Window(\"Overview\", layout).read(close=True). With method window.hide() to hide the window, window.un_hide to show the window again.\nfrom random import randint\nfrom time import sleep\nimport threading\nimport PySimpleGUI as sg\n\ndef algo(window):\n global running\n while running:\n sleep(5) # The code for algo\n window.write_event_value('Algo Data', randint(0, 4)) # Event to update GUI\n window.write_event_value('Algo Done', None) # Event to close GUI\n\nheadings = ['President', 'Date of Birth']\ndata = [\n ['Ronald Reagan', 'February 6'],\n ['Abraham Lincoln', 'February 12'],\n ['George Washington', 'February 22'],\n ['Andrew Jackson', 'March 15'],\n ['Thomas Jefferson', 'April 13'],\n]\n\nsg.theme('DarkBlue4')\nlayout = [\n [sg.Table(data, headings=headings, justification='left', key='-TABLE-')],\n [sg.Push(), sg.Button('OK')],\n]\nwindow = sg.Window(\"ALGO\", layout, enable_close_attempted_event=True)\nrunning = True # Flag to quit Algo\nthreading.Thread(target=algo, args=(window,), daemon=True).start() # Run algo in thread\n\nwhile True:\n\n event, values = window.read()\n\n if event == sg.WIN_CLOSE_ATTEMPTED_EVENT:\n # Close button to confirm if exit\n if sg.popup_yes_no(\"Are you sure to exit ?\", title='Warning') == 'Yes':\n # Stop thread first\n running = False\n else:\n continue\n\n elif event == 'OK':\n # Hide GUI\n window.hide()\n\n elif event == 'Algo Data':\n index = values[event]\n window.un_hide()\n window['-TABLE-'].update(select_rows=[index])\n\n elif event == 'Algo Done':\n # Thread end\n break\n\nwindow.close()\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "pysimplegui", "python", "schedule" ]
stackoverflow_0074466049_dataframe_pandas_pysimplegui_python_schedule.txt
Q: python 3 datetime difference in microseconds giving wrong answer for a long operation I'm doing a delete operation of 3000 elements from a binary search tree of size 6000 ( sorted therefore one sided tree). I need to calculate the time taken for completing all the deletes I did this bst2 = foo.BinarySearchTree() #init insert_all_to_tree(bst2,insert_lines) #insert 6000 elements start = datetime.now() #start time for idx, line in enumerate(lines): bst2.delete(line) #deleting if (idx%10 == 0): print("deleted ", (idx+1), "th element - ", line) end = datetime.now() #completion time duration = end - start print(duration.microseconds) #duration in microseconds I got the answer 761716 microseconds which is less than even a minute when my actual code ran for about 5 hours. I expected something in the ranges of 10^9 - 10^10. I even checked the max integer allowed in python to see if it's related to that but apparently that's not the problem. Why I'm I getting a wrong answer for the duration? A: datetime.now() returns a datetime, so doing math with it doesn't work out. You want to either use time.time() (Python < v3.3), time.perf_counter() (Python v3.3 until v3.7) or time.perf_counter_ns() (Python > v3.7). time.time() and time.perf_counter() both return float, and time.perf_counter_ns() returns int.
python 3 datetime difference in microseconds giving wrong answer for a long operation
I'm doing a delete operation of 3000 elements from a binary search tree of size 6000 ( sorted therefore one sided tree). I need to calculate the time taken for completing all the deletes I did this bst2 = foo.BinarySearchTree() #init insert_all_to_tree(bst2,insert_lines) #insert 6000 elements start = datetime.now() #start time for idx, line in enumerate(lines): bst2.delete(line) #deleting if (idx%10 == 0): print("deleted ", (idx+1), "th element - ", line) end = datetime.now() #completion time duration = end - start print(duration.microseconds) #duration in microseconds I got the answer 761716 microseconds which is less than even a minute when my actual code ran for about 5 hours. I expected something in the ranges of 10^9 - 10^10. I even checked the max integer allowed in python to see if it's related to that but apparently that's not the problem. Why I'm I getting a wrong answer for the duration?
[ "datetime.now() returns a datetime, so doing math with it doesn't work out. You want to either use time.time() (Python < v3.3), time.perf_counter() (Python v3.3 until v3.7) or time.perf_counter_ns() (Python > v3.7).\ntime.time() and time.perf_counter() both return float, and time.perf_counter_ns() returns int.\n" ]
[ 0 ]
[]
[]
[ "binary_search_tree", "datetime", "python", "python_3.x", "python_datetime" ]
stackoverflow_0074466406_binary_search_tree_datetime_python_python_3.x_python_datetime.txt
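A short sketch of both fixes: read the whole timedelta instead of its microseconds attribute, or time the loop with time.perf_counter(); the sleep call stands in for the delete loop from the question.

from datetime import datetime
import time

start = datetime.now()
time.sleep(1.2)                                   # stand-in for the delete loop
duration = datetime.now() - start
print(duration.total_seconds() * 1e6)             # total elapsed microseconds
print(duration.microseconds)                      # only the sub-second part (0 to 999999)

t0 = time.perf_counter()
time.sleep(1.2)
print((time.perf_counter() - t0) * 1e6)           # same measurement with a monotonic clock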
Q: How to count sum of prime numbers without a number 3? I have to count the sum of all prime numbers that are less than 1000 and do not contain the digit 3. My code: def primes_sum(lower, upper): total = 0 for num in range(lower, upper + 1): if not num % 3 and num % 10: continue elif num > 1: for i in range(2, num): if num % i == 0: break else: total += num return total total_value = primes_sum(0, 1000) print(total_value) But still I don't have right result A: def primes_sum(lower, upper): """Assume upper>=lower>2""" primes = [2] answer = 2 for num in range(lower, upper+1): if any(num%p==0 for p in primes): continue # not a prime primes.append(num) if '3' in str(num): continue answer += num return answer The issue in your code was that you were checking for num%3, which checks whether num is divisible by 3, not whether it contains a 3.
How to count sum of prime numbers without a number 3?
I have to count the sum of all prime numbers that are less than 1000 and do not contain the digit 3. My code: def primes_sum(lower, upper): total = 0 for num in range(lower, upper + 1): if not num % 3 and num % 10: continue elif num > 1: for i in range(2, num): if num % i == 0: break else: total += num return total total_value = primes_sum(0, 1000) print(total_value) But still I don't have right result
[ "def primes_sum(lower, upper):\n \"\"\"Assume upper>=lower>2\"\"\"\n primes = [2]\n answer = 2\n for num in range(lower, upper+1):\n if any(num%p==0 for p in primes): continue # not a prime\n\n primes.append(num)\n\n if '3' in str(num): continue\n answer += num\n\n return answer\n\nThe issue in your code was that you were checking for num%3, which checks whether num is divisible by 3, not whether it contains a 3.\n" ]
[ 0 ]
[]
[]
[ "function", "primes", "python", "python_3.x" ]
stackoverflow_0074466398_function_primes_python_python_3.x.txt
Q: convert nested dictionary into pandas dataframe example dictionary: sample_dict = {'doctor': {'docter_a': 26, 'docter_b': 40, 'docter_c': 42}, 'teacher': {'teacher_x': 21, 'teacher_y': 45, 'teacher_z': 33}} output dataframe: job person age doctor |doctor_a | 26 doctor |doctor_b | 40 doctor |doctor_c | 42 teacher|teacher_x| 21 teacher|teacher_y| 45 teacher|teacher_z| 33 I have tried: df = pd.dataFrame.from_dict(sample_dict) => doctor teacher doctor_a | 26 | Nah doctor_b | 40 | Nah doctor_c | 42 | Nah teacher_x | Nah | 21 teacher_y | Nah | 45 teacher_z | Nah | 33 Could someone help me figure this out? A: Use a nested list comprehension: pd.DataFrame([[k1, k2, v] for k1,d in sample_dict.items() for k2,v in d.items()], columns=['job', 'person', 'age']) Output: job person age 0 doctor docter_a 26 1 doctor docter_b 40 2 doctor docter_c 42 3 teacher teacher_x 21 4 teacher teacher_y 45 5 teacher teacher_z 33 A: You can construct a zip of length 3 elements, and feed them to pd.DataFrame after reshaping: zip_list = [list(zip([key]*len(sample_dict['doctor']), sample_dict[key], sample_dict[key].values())) for key in sample_dict.keys()] col_len = len(sample_dict['doctor']) # or use any other valid key output = pd.DataFrame(np.ravel(zip_list).reshape(col_len**2, col_len))
convert nested dictionary into pandas dataframe
example dictionary: sample_dict = {'doctor': {'docter_a': 26, 'docter_b': 40, 'docter_c': 42}, 'teacher': {'teacher_x': 21, 'teacher_y': 45, 'teacher_z': 33}} output dataframe: job person age doctor |doctor_a | 26 doctor |doctor_b | 40 doctor |doctor_c | 42 teacher|teacher_x| 21 teacher|teacher_y| 45 teacher|teacher_z| 33 I have tried: df = pd.dataFrame.from_dict(sample_dict) => doctor teacher doctor_a | 26 | Nah doctor_b | 40 | Nah doctor_c | 42 | Nah teacher_x | Nah | 21 teacher_y | Nah | 45 teacher_z | Nah | 33 Could someone help me figure this out?
[ "Use a nested list comprehension:\npd.DataFrame([[k1, k2, v]\n for k1,d in sample_dict.items() \n for k2,v in d.items()],\n columns=['job', 'person', 'age'])\n\nOutput:\n job person age\n0 doctor docter_a 26\n1 doctor docter_b 40\n2 doctor docter_c 42\n3 teacher teacher_x 21\n4 teacher teacher_y 45\n5 teacher teacher_z 33\n\n", "You can construct a zip of length 3 elements, and feed them to pd.DataFrame after reshaping:\nzip_list = [list(zip([key]*len(sample_dict['doctor']), \n sample_dict[key], \n sample_dict[key].values())) \n for key in sample_dict.keys()]\n\ncol_len = len(sample_dict['doctor']) # or use any other valid key\noutput = pd.DataFrame(np.ravel(zip_list).reshape(col_len**2, col_len))\n\n" ]
[ 4, 1 ]
[]
[]
[ "dataframe", "dictionary", "pandas", "python" ]
stackoverflow_0074466086_dataframe_dictionary_pandas_python.txt
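For completeness, a hedged alternative to the two answers above that reshapes the same sample_dict with stack(); the column names job/person/age simply follow the desired output shown in the question:

import pandas as pd

sample_dict = {'doctor': {'docter_a': 26, 'docter_b': 40, 'docter_c': 42},
               'teacher': {'teacher_x': 21, 'teacher_y': 45, 'teacher_z': 33}}

# Rows become jobs, columns become persons (with NaN padding); stack() folds the
# columns back into a (job, person) MultiIndex and drops the NaN entries.
df = (pd.DataFrame.from_dict(sample_dict, orient='index')
        .stack()
        .rename('age')
        .reset_index()
        .rename(columns={'level_0': 'job', 'level_1': 'person'}))
df['age'] = df['age'].astype(int)   # NaN padding upcast the ages to float; cast back
print(df)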
Q: How to reduce a fraction within a class? I'm trying to reduce(self) to return fractions which have the lowest value. This is the code I have: class fraction: def __init__(self,numerator,denominator): self.numerator = numerator self.denominator = denominator self.reduce() def get_numerator(self): return self.numerator def get_denominator(self): return self.denominator def reduce(self): pass def __str__(self): return str(self.numerator) + "/" + str(self.denominator) And this is the test code: # y = fraction(2*7,7*2) # z = fraction(13,14) # a = fraction(13*2*7,14) # print(x) # print(y) # print(z) # print(a) I don't want to use math.gcd or import fractions but rather do it by hand. I'm not sure what to try without these operators. Would it be perhaps a while loop?
How to reduce a fraction within a class?
I'm trying to reduce(self) to return fractions which have the lowest value. This is the code I have: class fraction: def __init__(self,numerator,denominator): self.numerator = numerator self.denominator = denominator self.reduce() def get_numerator(self): return self.numerator def get_denominator(self): return self.denominator def reduce(self): pass def __str__(self): return str(self.numerator) + "/" + str(self.denominator) And this is the test code: # y = fraction(2*7,7*2) # z = fraction(13,14) # a = fraction(13*2*7,14) # print(x) # print(y) # print(z) # print(a) I don't want to use math.gcd or import fractions but rather do it by hand. I'm not sure what to try without these operators. Would it be perhaps a while loop?
[]
[]
[ "You can implement reduce() using Greatest Common Divisor. As @NickODell said in comment this GCD algorithm is described in Euclidean Algorithm Wiki. And implemented in my code below:\nTry it online!\nclass fraction:\n def __init__(self, numerator, denominator):\n self.numerator = numerator\n self.denominator = denominator\n self.reduce()\n\n def get_numerator(self):\n return self.numerator\n\n def get_denominator(self):\n return self.denominator\n\n @staticmethod\n def gcd(a, b):\n while b != 0:\n a, b = b, a % b\n return a\n\n def reduce(self):\n if self.numerator == 0 or self.denominator == 0:\n return\n g = self.gcd(self.numerator, self.denominator)\n self.numerator //= g\n self.denominator //= g\n\n def __str__(self):\n return str(self.numerator) + \"/\" + str(self.denominator)\n\ny = fraction(2*7,7*2)\nz = fraction(13,14)\na = fraction(13*2*7,14)\nprint(y)\nprint(z)\nprint(a)\nprint(fraction(15, 35))\n\nOutput:\n1/1\n13/14\n13/1\n3/7\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074466397_python.txt
Q: Python: Assign Value if None Exists I am a RoR programmer new to Python. I am trying to find the syntax that will allow me to set a variable to a specific value only if it wasn't previously assigned. Basically I want: # only if var1 has not been previously assigned var1 = 4 A: You should initialize variables to None and then check it: var1 = None if var1 is None: var1 = 4 Which can be written in one line as: var1 = 4 if var1 is None else var1 or using shortcut (but checking against None is recommended) var1 = var1 or 4 alternatively if you will not have anything assigned to variable that variable name doesn't exist and hence using that later will raise NameError, and you can also use that knowledge to do something like this try: var1 except NameError: var1 = 4 but I would advise against that. A: var1 = var1 or 4 The only issue this might have is that if var1 is a falsey value, like False or 0 or [], it will choose 4 instead. That might be an issue. A: This is a very different style of programming, but I always try to rewrite things that looked like bar = None if foo(): bar = "Baz" if bar is None: bar = "Quux" into just: if foo(): bar = "Baz" else: bar = "Quux" That is to say, I try hard to avoid a situation where some code paths define variables but others don't. In my code, there is never a path which causes an ambiguity of the set of defined variables (In fact, I usually take it a step further and make sure that the types are the same regardless of code path). It may just be a matter of personal taste, but I find this pattern, though a little less obvious when I'm writing it, much easier to understand when I'm later reading it. A: I'm also coming from Ruby so I love the syntax foo ||= 7. This is the closest thing I can find. foo = foo if 'foo' in vars() else 7 I've seen people do this for a dict: try: foo['bar'] except KeyError: foo['bar'] = 7 Upadate: However, I recently found this gem: foo['bar'] = foo.get('bar', 7) If you like that, then for a regular variable you could do something like this: vars()['foo'] = vars().get('foo', 7) A: Here is the easiest way I use, hope works for you, var1 = var1 or 4 This assigns 4 to var1 only if var1 is None , False or 0 A: One-liner solution here: var1 = locals().get("var1", "default value") Instead of having NameError, this solution will set var1 to default value if var1 hasn't been defined yet. Here's how it looks like in Python interactive shell: >>> var1 Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'var1' is not defined >>> var1 = locals().get("var1", "default value 1") >>> var1 'default value 1' >>> var1 = locals().get("var1", "default value 2") >>> var1 'default value 1' >>> A: IfLoop's answer (and MatToufoutu's comment) work great for standalone variables, but I wanted to provide an answer for anyone trying to do something similar for individual entries in lists, tuples, or dictionaries. Dictionaries existing_dict = {"spam": 1, "eggs": 2} existing_dict["foo"] = existing_dict["foo"] if "foo" in existing_dict else 3 Returns {"spam": 1, "eggs": 2, "foo": 3} Lists existing_list = ["spam","eggs"] existing_list = existing_list if len(existing_list)==3 else existing_list + ["foo"] Returns ["spam", "eggs", "foo"] Tuples existing_tuple = ("spam","eggs") existing_tuple = existing_tuple if len(existing_tuple)==3 else existing_tuple + ("foo",) Returns ("spam", "eggs", "foo") (Don't forget the comma in ("foo",) to define a "single" tuple.) 
The lists and tuples solution will be more complicated if you want to do more than just check for length and append to the end. Nonetheless, this gives a flavor of what you can do. A: If you mean a variable at the module level then you can use "globals": if "var1" not in globals(): var1 = 4 but the common Python idiom is to initialize it to say None (assuming that it's not an acceptable value) and then testing with if var1 is not None. A: Just use not condition in if condition var1=None if not var1: var1=4
Python: Assign Value if None Exists
I am a RoR programmer new to Python. I am trying to find the syntax that will allow me to set a variable to a specific value only if it wasn't previously assigned. Basically I want: # only if var1 has not been previously assigned var1 = 4
[ "You should initialize variables to None and then check it:\nvar1 = None\nif var1 is None:\n var1 = 4\n\nWhich can be written in one line as:\nvar1 = 4 if var1 is None else var1\n\nor using shortcut (but checking against None is recommended)\nvar1 = var1 or 4\n\nalternatively if you will not have anything assigned to variable that variable name doesn't exist and hence using that later will raise NameError, and you can also use that knowledge to do something like this\ntry:\n var1\nexcept NameError:\n var1 = 4\n\nbut I would advise against that.\n", "var1 = var1 or 4\n\nThe only issue this might have is that if var1 is a falsey value, like False or 0 or [], it will choose 4 instead. That might be an issue.\n", "This is a very different style of programming, but I always try to rewrite things that looked like\nbar = None\nif foo():\n bar = \"Baz\"\n\nif bar is None:\n bar = \"Quux\"\n\ninto just:\nif foo():\n bar = \"Baz\"\nelse:\n bar = \"Quux\"\n\nThat is to say, I try hard to avoid a situation where some code paths define variables but others don't. In my code, there is never a path which causes an ambiguity of the set of defined variables (In fact, I usually take it a step further and make sure that the types are the same regardless of code path). It may just be a matter of personal taste, but I find this pattern, though a little less obvious when I'm writing it, much easier to understand when I'm later reading it.\n", "I'm also coming from Ruby so I love the syntax foo ||= 7.\nThis is the closest thing I can find.\nfoo = foo if 'foo' in vars() else 7\n\nI've seen people do this for a dict:\ntry:\n foo['bar']\nexcept KeyError:\n foo['bar'] = 7\n\nUpadate:\nHowever, I recently found this gem:\nfoo['bar'] = foo.get('bar', 7)\n\nIf you like that, then for a regular variable you could do something like this:\nvars()['foo'] = vars().get('foo', 7)\n\n", "Here is the easiest way I use, hope works for you,\nvar1 = var1 or 4\nThis assigns 4 to var1 only if var1 is None , False or 0\n", "One-liner solution here:\nvar1 = locals().get(\"var1\", \"default value\")\n\nInstead of having NameError, this solution will set var1 to default value if var1 hasn't been defined yet.\nHere's how it looks like in Python interactive shell:\n>>> var1\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nNameError: name 'var1' is not defined\n>>> var1 = locals().get(\"var1\", \"default value 1\")\n>>> var1\n'default value 1'\n>>> var1 = locals().get(\"var1\", \"default value 2\")\n>>> var1\n'default value 1'\n>>>\n\n", "IfLoop's answer (and MatToufoutu's comment) work great for standalone variables, but I wanted to provide an answer for anyone trying to do something similar for individual entries in lists, tuples, or dictionaries.\nDictionaries\nexisting_dict = {\"spam\": 1, \"eggs\": 2}\nexisting_dict[\"foo\"] = existing_dict[\"foo\"] if \"foo\" in existing_dict else 3\n\nReturns {\"spam\": 1, \"eggs\": 2, \"foo\": 3}\nLists\nexisting_list = [\"spam\",\"eggs\"]\nexisting_list = existing_list if len(existing_list)==3 else \n existing_list + [\"foo\"]\n\nReturns [\"spam\", \"eggs\", \"foo\"]\nTuples\nexisting_tuple = (\"spam\",\"eggs\")\nexisting_tuple = existing_tuple if len(existing_tuple)==3 else \n existing_tuple + (\"foo\",)\n\nReturns (\"spam\", \"eggs\", \"foo\")\n(Don't forget the comma in (\"foo\",) to define a \"single\" tuple.)\nThe lists and tuples solution will be more complicated if you want to do more than just check for length and append to the end. 
Nonetheless, this gives a flavor of what you can do.\n", "If you mean a variable at the module level then you can use \"globals\":\nif \"var1\" not in globals():\n var1 = 4\n\nbut the common Python idiom is to initialize it to say None (assuming that it's not an acceptable value) and then testing with if var1 is not None.\n", "Just use not condition in if condition\nvar1=None\nif not var1:\n var1=4\n\n\n" ]
[ 151, 55, 37, 26, 17, 7, 4, 0, 0 ]
[]
[]
[ "language_comparisons", "python", "python_2.7", "variable_assignment" ]
stackoverflow_0007338501_language_comparisons_python_python_2.7_variable_assignment.txt
Q: How to plot day in x axis, time in y axis and a heatmap plot for the values in python as shown in the figure? I want a heat map plot as can be seen in the attached image day in x axis, time in y axis and a heatmap plot data- https://1drv.ms/x/s!Av8bxRzsdiR7tEYmXDBWSUKriCSJ?e=m2objJ I tried plotting the data, but its leading to daily plots of the values A: Because the data is wrapped by row, you need to do some work to reshape it into the correct structure. For a 2D Contour like you linked, you need a 2D array of data, so after loading in your data-set, all I did was manipulate it into the correct shape, and then plot. import numpy as np import matplotlib.pyplot as plt import pandas as pd path = r'<your path here>\data.csv' data = np.array(pd.read_csv(path, header=0, delimiter=',', index_col=None, dtype=float, )) # print(data.shape) # Gives (8760, 3) day, hour, value = data[:,0], data[:,1], data[:,2] value = np.reshape(value, (365, len(value)//365)) # print(value.shape) # Gives (365, 24) fig, ax = plt.subplots(ncols =1, nrows = 1, figsize = (5,5)) ax.set_xlabel('Hour') ax.set_ylabel('Day') plot = ax.imshow(value, origin='lower', aspect='auto', extent=[hour[0], hour[-1], day[0], day[-1]], interpolation='gaussian', cmap='jet') fig.subplots_adjust(right=0.84) cbar_ax = fig.add_axes([0.89, 0.125, 0.05, 0.755]) cb = fig.colorbar(plot, cax=cbar_ax, extend='both', ticks=[0,20,40,60,80,100]) cb.ax.tick_params(axis='y', direction='in', size=0) cb.set_label('Annual AC Power in Year 1 [kW]',rotation=270, labelpad=18) To get
How to plot day in x axis, time in y axis and a heatmap plot for the values in python as shown in the figure?
I want a heat map plot as can be seen in the attached image day in x axis, time in y axis and a heatmap plot data- https://1drv.ms/x/s!Av8bxRzsdiR7tEYmXDBWSUKriCSJ?e=m2objJ I tried plotting the data, but its leading to daily plots of the values
[ "Because the data is wrapped by row, you need to do some work to reshape it into the correct structure. For a 2D Contour like you linked, you need a 2D array of data, so after loading in your data-set, all I did was manipulate it into the correct shape, and then plot.\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\npath = r'<your path here>\\data.csv'\ndata = np.array(pd.read_csv(path, header=0, delimiter=',', index_col=None, dtype=float, ))\n# print(data.shape) # Gives (8760, 3)\n\nday, hour, value = data[:,0], data[:,1], data[:,2]\nvalue = np.reshape(value, (365, len(value)//365))\n# print(value.shape) # Gives (365, 24)\n\nfig, ax = plt.subplots(ncols =1, nrows = 1, figsize = (5,5))\nax.set_xlabel('Hour')\nax.set_ylabel('Day')\n\nplot = ax.imshow(value, origin='lower', aspect='auto',\n extent=[hour[0], hour[-1], day[0], day[-1]],\n interpolation='gaussian',\n cmap='jet')\n\nfig.subplots_adjust(right=0.84)\ncbar_ax = fig.add_axes([0.89, 0.125, 0.05, 0.755])\ncb = fig.colorbar(plot, cax=cbar_ax, extend='both', ticks=[0,20,40,60,80,100])\ncb.ax.tick_params(axis='y', direction='in', size=0)\ncb.set_label('Annual AC Power in Year 1 [kW]',rotation=270, labelpad=18)\n\nTo get\n\n" ]
[ 0 ]
[]
[]
[ "data_analysis", "data_science", "heatmap", "python", "timeserieschart" ]
stackoverflow_0074465808_data_analysis_data_science_heatmap_python_timeserieschart.txt
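A shorter, hedged variant of the same idea using a pivot instead of a manual reshape; the column names day/hour/value are assumptions about the linked CSV (8760 rows, one per day/hour pair), not taken from it:

import pandas as pd
import matplotlib.pyplot as plt

# Column names are assumed; header=0 skips the file's own header row.
df = pd.read_csv('data.csv', header=0, names=['day', 'hour', 'value'])

# pivot needs one row per (day, hour) pair, which holds for hourly data over a year.
grid = df.pivot(index='day', columns='hour', values='value')   # shape (365, 24)

fig, ax = plt.subplots(figsize=(6, 5))
mesh = ax.pcolormesh(grid.columns, grid.index, grid.values, cmap='jet', shading='auto')
ax.set_xlabel('Hour')
ax.set_ylabel('Day')
fig.colorbar(mesh, ax=ax, label='Value')
plt.show()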
Q: OpenCV - Reading a 16 bit grayscale image I'm trying to read a 16 bit grayscale image using OpenCV 2.4 in Python, but it seems to be loading it as 8 bit. I'm doing: im = cv2.imread(path,0) print im [[25 25 28 ..., 0 0 0] [ 0 0 0 ..., 0 0 0] [ 0 0 0 ..., 0 0 0] ..., How do I get it as 16 bit? A: Figured it out. In case anyone else runs into this problem: im = cv2.imread(path,-1) Setting the flag to 0, to load as grayscale, seems to default to 8 bit. Setting the flag to -1 loads the image as is. A: To improve readability use the flag cv2.IMREAD_ANYDEPTH image = cv2.imread( path, cv2.IMREAD_ANYDEPTH ) A: I had the same issue (16-bit .tif loading as 8-bit using cv2.imread). However, using the -1 flag didn't help. Instead, I was able to load 16-bit images using the tifffile package. A: This question suggests that image = cv2.imread('16bit.png', cv2.IMREAD_UNCHANGED) will also solve your problem.
OpenCV - Reading a 16 bit grayscale image
I'm trying to read a 16 bit grayscale image using OpenCV 2.4 in Python, but it seems to be loading it as 8 bit. I'm doing: im = cv2.imread(path,0) print im [[25 25 28 ..., 0 0 0] [ 0 0 0 ..., 0 0 0] [ 0 0 0 ..., 0 0 0] ..., How do I get it as 16 bit?
[ "Figured it out. In case anyone else runs into this problem:\nim = cv2.imread(path,-1)\n\nSetting the flag to 0, to load as grayscale, seems to default to 8 bit. Setting the flag to -1 loads the image as is.\n", "To improve readability use the flag cv2.IMREAD_ANYDEPTH\nimage = cv2.imread( path, cv2.IMREAD_ANYDEPTH )\n\n", "I had the same issue (16-bit .tif loading as 8-bit using cv2.imread). However, using the -1 flag didn't help. Instead, I was able to load 16-bit images using the tifffile package.\n", "This question suggests that image = cv2.imread('16bit.png', cv2.IMREAD_UNCHANGED) will also solve your problem.\n" ]
[ 45, 34, 9, 0 ]
[]
[]
[ "opencv", "python" ]
stackoverflow_0010969585_opencv_python.txt
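A small sketch (not from the original answers) showing how to confirm which bit depth was actually loaded; the filename depth.png is a placeholder:

import cv2

img8  = cv2.imread('depth.png', cv2.IMREAD_GRAYSCALE)   # forces 8-bit grayscale
img16 = cv2.imread('depth.png', cv2.IMREAD_ANYDEPTH)    # keeps the file's 16-bit depth

print(img8.dtype, img16.dtype)     # e.g. uint8 uint16
print(img16.max())                 # values above 255 confirm the 16-bit data survived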
Q: How to solve Python TypeError: type not understood I am creating a recommendation system and when I run this code I'm getting an error: from scipy.sparse.linalg import svds # Singular Value Decomposition U, sigma, Vt = svds(pivot_df, k = 10) And I'm getting this error: "TypeError: type not understood". What could be the reason for this error and how should I solve it? A: svds() takes a sparse matrix or an ndarray as input. But what you are passing is a Dataframe. Check the type by using the below command. type(pivot_df) Hence, you need to convert the Dataframe to np.ndarray while passing it to svds(). U, sigma, Vt = svds(pivot_df.to_numpy(), k=10)
How to solve Python TypeError: type not understood
I am creating a recommendation system and when I run this code I'm getting an error: from scipy.sparse.linalg import svds # Singular Value Decomposition U, sigma, Vt = svds(pivot_df, k = 10) And I'm getting this error: "TypeError: type not understood". What could be the reason for this error and how should I solve it?
[ "svds() takes a sparse matrix or an ndarray as input.\nBut what you are passing is a Dataframe. Check the type by using the below command.\ntype(pivot_df)\n\nHence, you need to convert the Dataframe to np.ndarray while passing it to svds().\nU, sigma, Vt = svds(pivot_df.to_numpy(), k=10)\n\n" ]
[ 0 ]
[]
[]
[ "python", "recommendation_engine", "scipy", "typeerror" ]
stackoverflow_0071941099_python_recommendation_engine_scipy_typeerror.txt
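A self-contained sketch of the fix suggested above, using a random stand-in for the asker's pivot_df since the real data is not shown:

import numpy as np
import pandas as pd
from scipy.sparse.linalg import svds

# Hypothetical user-item rating matrix standing in for the asker's DataFrame.
pivot_df = pd.DataFrame(np.random.rand(20, 15))

# Pass an ndarray (or sparse matrix), not the DataFrame itself; k must be < min(shape).
U, sigma, Vt = svds(pivot_df.to_numpy(), k=10)
print(U.shape, sigma.shape, Vt.shape)            # (20, 10) (10,) (10, 15)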
Q: Random indexing of large Json file compressed as Gzip I have a large json file (Wikidata dump, to be more specific) compressed as gzip. What I want to achieve is build an index, such that I can do random access and retrieve the line/entity I desire. The brute force way to find a line (entity) of interest would be: from gzip import GzipFile with GzipFile("path-to-wikidata/latest-all.json.gz", "r") as dump: for line in dump: # .... An alternative that I know of is to use hdf5, do one pass over the dump, and store everything of interest in the hdf5 file. However, the issue with approach is that even one pass over Wikidata is super slow, and writing millions of entries in the hdf5 file takes a while. Finally, I looked into indexed_gzip, using which I can seek to a random location of the file, and then read a sequence of bytes from it, as import indexed_gzip as igzip wikidata = igzip.IndexedGzipFile("path-to-wikidata/latest-all.json.gz") # Seek to a location towards the end of the file offset = 10000000000 # Seek to the desired location wikidata.seek(offset) # Read a sequence of bytes length_of_sequence = 100000 data_bytes = wikidata.read(length_of_sequence) however, the seeking takes extremely long in certain cases, e.g., when indexing chunks further from the start of the file. Note that this occurs only the first time I index the location, every subsequent index is same as indexing the 0 element. Evidence bellow: # Example of entity2index mapping: Q31 --> [offset, length] # File is ordered based on how the dump is iterated, e.g., # the first entity in the dictionary is first in Wikidata entity2index: OrderedDict[str, Tuple[int, int]] = json.load(open("path-to-wikidata/wikidata_index.json")) # Wikidata dump wikidata = igzip.IndexedGzipFile("path-to-wikidata/latest-all.json.gz") # List of entities entities = list(entity2index.keys()) # Testing starts entity = entities[0] offset, _ = entity2index[entity] # 367 µs ± 139 µs per loop (mean ± std. dev. of 7 runs, 2 loops each) %timeit -n 2 wikidata.seek(offset) entity = entities[1000000] offset, _ = entity2index[entity] # The slowest run took 92861.95 times longer than the fastest. This # could mean that an intermediate result is being cached. # 2.18 s ± 5.33 s per loop (mean ± std. dev. of 7 runs, 2 loops each) %timeit -n 2 wikidata.seek(offset) With that said, I am interested in (1) either overcoming the issue of the first indexing being significantly slower than every subsequent one, (2) any alternatives which could be better? A: Thanks to the comment by Mark Adler, I was able to resolve the issue by pre-computing and storing two index files on disk. The first one being a dictionary, mentioned in the question, where I can map from each entity id, e.g., Q31, to the offset and length in the latest-all.json.gz file. 
The second, helps to achieve fast seeks, which I obtained as per the documentation of igzip: wikidata = igzip.IndexedGzipFile("path-to-wikidata/path-to-wikidata/latest-all.json.gz") wikidata.build_full_index() wikidata.export_index("path-to-wikidata/wikidata_seek_index.gzidx") Then, if when I want to retrieve the data for a corresponding Wikidata entity, I do: # First index file, mapping from Q31 --> offset and length of the chunk of data for that entity entity2index = json.load(open("path-to-wikidata/wikidata_index.json")) # Wikidata load + seeking index wikidata = igzip.IndexedGzipFile("path-to-wikidata/latest-all.json.gz", index_file="path-to-wikidata/wikidata_seek_index.gzidx") # Get the offset and length of the entity offset, length = entity2index["Q41421"] # Seek to the location wikidata.seek(offset) # Obtain the data chunk data_bytes = wikidata.read(length) # Load the data from the byte array data = json.loads(data_bytes)
Random indexing of large Json file compressed as Gzip
I have a large json file (Wikidata dump, to be more specific) compressed as gzip. What I want to achieve is build an index, such that I can do random access and retrieve the line/entity I desire. The brute force way to find a line (entity) of interest would be: from gzip import GzipFile with GzipFile("path-to-wikidata/latest-all.json.gz", "r") as dump: for line in dump: # .... An alternative that I know of is to use hdf5, do one pass over the dump, and store everything of interest in the hdf5 file. However, the issue with approach is that even one pass over Wikidata is super slow, and writing millions of entries in the hdf5 file takes a while. Finally, I looked into indexed_gzip, using which I can seek to a random location of the file, and then read a sequence of bytes from it, as import indexed_gzip as igzip wikidata = igzip.IndexedGzipFile("path-to-wikidata/latest-all.json.gz") # Seek to a location towards the end of the file offset = 10000000000 # Seek to the desired location wikidata.seek(offset) # Read a sequence of bytes length_of_sequence = 100000 data_bytes = wikidata.read(length_of_sequence) however, the seeking takes extremely long in certain cases, e.g., when indexing chunks further from the start of the file. Note that this occurs only the first time I index the location, every subsequent index is same as indexing the 0 element. Evidence bellow: # Example of entity2index mapping: Q31 --> [offset, length] # File is ordered based on how the dump is iterated, e.g., # the first entity in the dictionary is first in Wikidata entity2index: OrderedDict[str, Tuple[int, int]] = json.load(open("path-to-wikidata/wikidata_index.json")) # Wikidata dump wikidata = igzip.IndexedGzipFile("path-to-wikidata/latest-all.json.gz") # List of entities entities = list(entity2index.keys()) # Testing starts entity = entities[0] offset, _ = entity2index[entity] # 367 µs ± 139 µs per loop (mean ± std. dev. of 7 runs, 2 loops each) %timeit -n 2 wikidata.seek(offset) entity = entities[1000000] offset, _ = entity2index[entity] # The slowest run took 92861.95 times longer than the fastest. This # could mean that an intermediate result is being cached. # 2.18 s ± 5.33 s per loop (mean ± std. dev. of 7 runs, 2 loops each) %timeit -n 2 wikidata.seek(offset) With that said, I am interested in (1) either overcoming the issue of the first indexing being significantly slower than every subsequent one, (2) any alternatives which could be better?
[ "Thanks to the comment by Mark Adler, I was able to resolve the issue by pre-computing and storing two index files on disk. The first one being a dictionary, mentioned in the question, where I can map from each entity id, e.g., Q31, to the offset and length in the latest-all.json.gz file. The second, helps to achieve fast seeks, which I obtained as per the documentation of igzip:\nwikidata = igzip.IndexedGzipFile(\"path-to-wikidata/path-to-wikidata/latest-all.json.gz\")\nwikidata.build_full_index()\nwikidata.export_index(\"path-to-wikidata/wikidata_seek_index.gzidx\")\n\nThen, if when I want to retrieve the data for a corresponding Wikidata entity, I do:\n# First index file, mapping from Q31 --> offset and length of the chunk of data for that entity\nentity2index = json.load(open(\"path-to-wikidata/wikidata_index.json\"))\n# Wikidata load + seeking index\nwikidata = igzip.IndexedGzipFile(\"path-to-wikidata/latest-all.json.gz\", index_file=\"path-to-wikidata/wikidata_seek_index.gzidx\")\n\n# Get the offset and length of the entity\noffset, length = entity2index[\"Q41421\"]\n# Seek to the location\nwikidata.seek(offset)\n# Obtain the data chunk\ndata_bytes = wikidata.read(length)\n# Load the data from the byte array\ndata = json.loads(data_bytes)\n\n" ]
[ 0 ]
[]
[]
[ "gzip", "json", "python", "wikidata" ]
stackoverflow_0074460186_gzip_json_python_wikidata.txt
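The accepted answer presupposes the entity2index mapping but does not show how it was built. Below is a hedged sketch of one way to build it in a single pass; the file names, and the assumption that each entity sits on its own line of the dump wrapped in a JSON array, mirror the question. The pass parses every line, so it is slow, but it is a one-time cost:

import gzip
import json

entity2index = {}
offset = 0
with gzip.open('latest-all.json.gz', 'rb') as dump:          # path assumed
    for raw in dump:
        stripped = raw.strip().rstrip(b',')                  # entities are array elements, one per line
        if stripped.startswith(b'{'):                        # skip the enclosing '[' and ']' lines
            entity_id = json.loads(stripped)['id']           # e.g. "Q31"
            entity2index[entity_id] = (offset, len(raw))     # uncompressed offset + line length in bytes
        offset += len(raw)

with open('wikidata_index.json', 'w') as fh:
    json.dump(entity2index, fh)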
Q: How to get the list of children and grandchildren from a nested structure? Given this dictionary of parent-children relations, { 2: [8, 7], 8: [9, 10], 10: [11], 15: [16, 17], } I'd like to get the list of all children, grandchildren, great-grandchildren, etc. -- e.g. given a parent with an ID 2 I want to get the following list: [8, 7, 9, 10, 11]. The number of nesting levels can be infinitely long. Cycles are not possible. So far I was able to achieve this function but I don't know how to return from it: links = { 2: [8, 7], 8: [9, 10], 10: [11], 15: [16, 17], } def get_nested_children(parent_uid, acc = []): if parent_uid in links: acc = acc + links[parent_uid] print("[in loop]", acc) for child_uid in links[parent_uid]: get_nested_children(child_uid, acc) else: return acc print(get_nested_children(2)) Which outputs: [in loop] [8, 7] [in loop] [8, 7, 9, 10] [in loop] [8, 7, 9, 10, 11] None A: Since cycles aren't possible and the order is not important, the easiest way to do this is with a generator function. Just yield the children and yield from the results of recursion. This will give you a depth first result: links = { 2: [8, 7], 8: [9, 10], 10: [11], 15: [16, 17], } def get_nested_children(parent_uid): for child_uid in links.get(parent_uid, []): yield child_uid yield from get_nested_children(child_uid) list(get_nested_children(2)) # [8, 9, 10, 11, 7] If you want a traditional function you can just append each child, then extend the results of recursion onto a local list, which you can return: def get_nested_children(parent_uid): res = [] for child_uid in links.get(parent_uid, []): res.append(child_uid) res.extend(get_nested_children(child_uid)) return res get_nested_children(2) # [8, 9, 10, 11, 7]
How to get the list of children and grandchildren from a nested structure?
Given this dictionary of parent-children relations, { 2: [8, 7], 8: [9, 10], 10: [11], 15: [16, 17], } I'd like to get the list of all children, grandchildren, great-grandchildren, etc. -- e.g. given a parent with an ID 2 I want to get the following list: [8, 7, 9, 10, 11]. The number of nesting levels can be infinitely long. Cycles are not possible. So far I was able to achieve this function but I don't know how to return from it: links = { 2: [8, 7], 8: [9, 10], 10: [11], 15: [16, 17], } def get_nested_children(parent_uid, acc = []): if parent_uid in links: acc = acc + links[parent_uid] print("[in loop]", acc) for child_uid in links[parent_uid]: get_nested_children(child_uid, acc) else: return acc print(get_nested_children(2)) Which outputs: [in loop] [8, 7] [in loop] [8, 7, 9, 10] [in loop] [8, 7, 9, 10, 11] None
[ "Since cycles aren't possible and the order is not important, the easiest way to do this is with a generator function. Just yield the children and yield from the results of recursion. This will give you a depth first result:\nlinks = {\n 2: [8, 7],\n 8: [9, 10],\n 10: [11],\n 15: [16, 17],\n}\n\ndef get_nested_children(parent_uid):\n for child_uid in links.get(parent_uid, []):\n yield child_uid\n yield from get_nested_children(child_uid)\n\n\nlist(get_nested_children(2))\n# [8, 9, 10, 11, 7]\n\nIf you want a traditional function you can just append each child, then extend the results of recursion onto a local list, which you can return:\ndef get_nested_children(parent_uid):\n res = []\n for child_uid in links.get(parent_uid, []):\n res.append(child_uid)\n res.extend(get_nested_children(child_uid))\n return res\n\n\nget_nested_children(2)\n# [8, 9, 10, 11, 7]\n\n" ]
[ 1 ]
[]
[]
[ "grandchild", "parent_child", "python", "recursion" ]
stackoverflow_0074466128_grandchild_parent_child_python_recursion.txt
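For contrast with the recursive answer, a hedged iterative sketch over the same links dictionary; pop(0) keeps the breadth-first order shown in the question (a collections.deque would avoid the O(n) pops on large inputs):

links = {2: [8, 7], 8: [9, 10], 10: [11], 15: [16, 17]}

def get_nested_children(parent_uid):
    result, queue = [], list(links.get(parent_uid, []))
    while queue:
        uid = queue.pop(0)                  # FIFO order: children before grandchildren
        result.append(uid)
        queue.extend(links.get(uid, []))
    return result

print(get_nested_children(2))   # [8, 7, 9, 10, 11]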
Q: Install MySQL Client in Django Show Error Hi I am trying to install Mysqlclient in Django and I got this message collecting mysqlclient Using cached https://files.pythonhosted.org/packages/f4/f1/3bb6f64ca7a429729413e6556b7ba5976df06019a5245a43d36032f1061e/mysqlclient-1.4.2.post1.tar.gz Building wheels for collected packages: mysqlclient Building wheel for mysqlclient (setup.py) ... error ERROR: Complete output from command 'c:\users\usermo~1\virtua~1\tmsv2_~2\scripts\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\userMO~1\\AppData\\Local\\Temp\\pip-install-vzfx29bg\\mysqlclient\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\userMO~1\AppData\Local\Temp\pip-wheel-rd_6y67h' --python-tag cp37: ERROR: running bdist_wheel running build running build_py creating build creating build\lib.win32-3.7 creating build\lib.win32-3.7\MySQLdb copying MySQLdb\__init__.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\_exceptions.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\compat.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\connections.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\converters.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\cursors.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\release.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\times.py -> build\lib.win32-3.7\MySQLdb creating build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win32-3.7\MySQLdb\constants running build_ext building 'MySQLdb._mysql' extension creating build\temp.win32-3.7 creating build\temp.win32-3.7\Release creating build\temp.win32-3.7\Release\MySQLdb C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Dversion_info=(1,4,2,'post',1) -D__version__=1.4.2.post1 "-IC:\Program Files (x86)\MySQL\MySQL Connector C 6.1\include\mariadb" "-Ic:\users\user moe\appdata\local\programs\python\python37-32\include" "-Ic:\users\user moe\appdata\local\programs\python\python37-32\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /TcMySQLdb/_mysql.c /Fobuild\temp.win32-3.7\Release\MySQLdb/_mysql.obj /Zl /D_CRT_SECURE_NO_WARNINGS _mysql.c MySQLdb/_mysql.c(29): fatal error C1083: Cannot open include file: 'mysql.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\BuildTools\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x86\\cl.exe' failed with exit status 2 ---------------------------------------- ERROR: Failed building wheel 
for mysqlclient Running setup.py clean for mysqlclient Failed to build mysqlclient Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error ERROR: Complete output from command 'c:\users\usermo~1\virtua~1\tmsv2_~2\scripts\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\userMO~1\\AppData\\Local\\Temp\\pip-install-vzfx29bg\\mysqlclient\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\userMO~1\AppData\Local\Temp\pip-record-ea_7lykd\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\usermo~1\virtua~1\tmsv2_~2\include\site\python3.7\mysqlclient': ERROR: running install running build running build_py creating build creating build\lib.win32-3.7 creating build\lib.win32-3.7\MySQLdb copying MySQLdb\__init__.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\_exceptions.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\compat.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\connections.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\converters.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\cursors.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\release.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\times.py -> build\lib.win32-3.7\MySQLdb creating build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win32-3.7\MySQLdb\constants running build_ext building 'MySQLdb._mysql' extension creating build\temp.win32-3.7 creating build\temp.win32-3.7\Release creating build\temp.win32-3.7\Release\MySQLdb C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Dversion_info=(1,4,2,'post',1) -D__version__=1.4.2.post1 "-IC:\Program Files (x86)\MySQL\MySQL Connector C 6.1\include\mariadb" "-Ic:\users\user moe\appdata\local\programs\python\python37-32\include" "-Ic:\users\user moe\appdata\local\programs\python\python37-32\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /TcMySQLdb/_mysql.c /Fobuild\temp.win32-3.7\Release\MySQLdb/_mysql.obj /Zl /D_CRT_SECURE_NO_WARNINGS _mysql.c MySQLdb/_mysql.c(29): fatal error C1083: Cannot open include file: 'mysql.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\BuildTools\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x86\\cl.exe' failed with exit status 2 ---------------------------------------- ERROR: Command "'c:\users\usermo~1\virtua~1\tmsv2_~2\scripts\python.exe' -u -c 'import setuptools, 
tokenize;__file__='"'"'C:\\Users\\userMO~1\\AppData\\Local\\Temp\\pip-install-vzfx29bg\\mysqlclient\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\userMO~1\AppData\Local\Temp\pip-record-ea_7lykd\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\usermo~1\virtua~1\tmsv2_~2\include\site\python3.7\mysqlclient'" failed with error code 1 in C:\Users\userMO~1\AppData\Local\Temp\pip-install-vzfx29bg\mysqlclient\ I already tried several way pip install opencv-contrib-python (can install no work) pip install mysqlclient==1.3.12 (show same error) I install the mycsqlclient from Wheel Link (show this error mysqlclient-1.4.2-cp38-cp38m-win_amd64.whl is not a supported wheel on this platform.) My python version is: 3.7.3 (I come from desktop environment and while I read the Django, its say "Ridiculously fast" but now for MySQL connection problem took 4 days already). A: You have downloaded the wrong wheel. The error message says you tried to install mysqlclient-1.4.2-cp38-cp38m-win_amd64.whl, which is for Python 3.8. Since you are using Python 3.7, you should use either mysqlclient‑1.4.2‑cp37‑cp37m‑win32.whl or mysqlclient‑1.4.2‑cp37‑cp37m‑win_amd64.whl depending on whether you have installed 32-bit or 64-bit Python. A: It seems like your Build Tools cannot handle some .h files. You can use unofficial precompiled package database to get already compiled mysqlclient. After downloading it run pip install name-of-whl-file.whl If the python+win version of whl file fails to install, try using another version. Always works for me if Build Tools fail. A: I got the same error when I try to install mysqlclient in windows. after much time spent in this error. I found a result in the result install the latest visual studio and visual studio build tools. if programmer and only work on coding then transfer windows to ubuntu.i transfer on Ubuntu and enjoy coding A: try this: pip install mysqlclient-1.4.4-cp38-cp38-win_amd64.whl if it did'nt work, then use the 32bit version pip install mysqlclient-1.4.4-cp38-cp38-win32.whl A: The best way to install mysql-client is to go to https://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient and you will find many mysql-client wheels. Download the first one #do not change the name of the file,, open command prompt, use cd to go to the directory where your wheel has been installed, then use pip install (name of file) #use .whl as well if it doesn't work, download the second one and repeat the process, if that doesn't work as well, keep on downloading until I guarantee you that at least one of them would be installed in your computer A: You can use pymysql 1.Install pymysql pip install pymysql Modify your init.py file import pymysql pymysql.install_as_MySQLdb() You can start your project
Install MySQL Client in Django Show Error
Hi I am trying to install Mysqlclient in Django and I got this message collecting mysqlclient Using cached https://files.pythonhosted.org/packages/f4/f1/3bb6f64ca7a429729413e6556b7ba5976df06019a5245a43d36032f1061e/mysqlclient-1.4.2.post1.tar.gz Building wheels for collected packages: mysqlclient Building wheel for mysqlclient (setup.py) ... error ERROR: Complete output from command 'c:\users\usermo~1\virtua~1\tmsv2_~2\scripts\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\userMO~1\\AppData\\Local\\Temp\\pip-install-vzfx29bg\\mysqlclient\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\userMO~1\AppData\Local\Temp\pip-wheel-rd_6y67h' --python-tag cp37: ERROR: running bdist_wheel running build running build_py creating build creating build\lib.win32-3.7 creating build\lib.win32-3.7\MySQLdb copying MySQLdb\__init__.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\_exceptions.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\compat.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\connections.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\converters.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\cursors.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\release.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\times.py -> build\lib.win32-3.7\MySQLdb creating build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win32-3.7\MySQLdb\constants running build_ext building 'MySQLdb._mysql' extension creating build\temp.win32-3.7 creating build\temp.win32-3.7\Release creating build\temp.win32-3.7\Release\MySQLdb C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Dversion_info=(1,4,2,'post',1) -D__version__=1.4.2.post1 "-IC:\Program Files (x86)\MySQL\MySQL Connector C 6.1\include\mariadb" "-Ic:\users\user moe\appdata\local\programs\python\python37-32\include" "-Ic:\users\user moe\appdata\local\programs\python\python37-32\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /TcMySQLdb/_mysql.c /Fobuild\temp.win32-3.7\Release\MySQLdb/_mysql.obj /Zl /D_CRT_SECURE_NO_WARNINGS _mysql.c MySQLdb/_mysql.c(29): fatal error C1083: Cannot open include file: 'mysql.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\BuildTools\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x86\\cl.exe' failed with exit status 2 ---------------------------------------- ERROR: Failed building wheel for mysqlclient Running setup.py clean for 
mysqlclient Failed to build mysqlclient Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error ERROR: Complete output from command 'c:\users\usermo~1\virtua~1\tmsv2_~2\scripts\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\userMO~1\\AppData\\Local\\Temp\\pip-install-vzfx29bg\\mysqlclient\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\userMO~1\AppData\Local\Temp\pip-record-ea_7lykd\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\usermo~1\virtua~1\tmsv2_~2\include\site\python3.7\mysqlclient': ERROR: running install running build running build_py creating build creating build\lib.win32-3.7 creating build\lib.win32-3.7\MySQLdb copying MySQLdb\__init__.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\_exceptions.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\compat.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\connections.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\converters.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\cursors.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\release.py -> build\lib.win32-3.7\MySQLdb copying MySQLdb\times.py -> build\lib.win32-3.7\MySQLdb creating build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win32-3.7\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win32-3.7\MySQLdb\constants running build_ext building 'MySQLdb._mysql' extension creating build\temp.win32-3.7 creating build\temp.win32-3.7\Release creating build\temp.win32-3.7\Release\MySQLdb C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Dversion_info=(1,4,2,'post',1) -D__version__=1.4.2.post1 "-IC:\Program Files (x86)\MySQL\MySQL Connector C 6.1\include\mariadb" "-Ic:\users\user moe\appdata\local\programs\python\python37-32\include" "-Ic:\users\user moe\appdata\local\programs\python\python37-32\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /TcMySQLdb/_mysql.c /Fobuild\temp.win32-3.7\Release\MySQLdb/_mysql.obj /Zl /D_CRT_SECURE_NO_WARNINGS _mysql.c MySQLdb/_mysql.c(29): fatal error C1083: Cannot open include file: 'mysql.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\BuildTools\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x86\\cl.exe' failed with exit status 2 ---------------------------------------- ERROR: Command "'c:\users\usermo~1\virtua~1\tmsv2_~2\scripts\python.exe' -u -c 'import setuptools, 
tokenize;__file__='"'"'C:\\Users\\userMO~1\\AppData\\Local\\Temp\\pip-install-vzfx29bg\\mysqlclient\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\userMO~1\AppData\Local\Temp\pip-record-ea_7lykd\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\usermo~1\virtua~1\tmsv2_~2\include\site\python3.7\mysqlclient'" failed with error code 1 in C:\Users\userMO~1\AppData\Local\Temp\pip-install-vzfx29bg\mysqlclient\ I already tried several way pip install opencv-contrib-python (can install no work) pip install mysqlclient==1.3.12 (show same error) I install the mycsqlclient from Wheel Link (show this error mysqlclient-1.4.2-cp38-cp38m-win_amd64.whl is not a supported wheel on this platform.) My python version is: 3.7.3 (I come from desktop environment and while I read the Django, its say "Ridiculously fast" but now for MySQL connection problem took 4 days already).
[ "You have downloaded the wrong wheel. The error message says you tried to install mysqlclient-1.4.2-cp38-cp38m-win_amd64.whl, which is for Python 3.8. \nSince you are using Python 3.7, you should use either mysqlclient‑1.4.2‑cp37‑cp37m‑win32.whl or mysqlclient‑1.4.2‑cp37‑cp37m‑win_amd64.whl depending on whether you have installed 32-bit or 64-bit Python.\n", "It seems like your Build Tools cannot handle some .h files. You can use unofficial precompiled package database to get already compiled mysqlclient. \nAfter downloading it run\npip install name-of-whl-file.whl\n\nIf the python+win version of whl file fails to install, try using another version. Always works for me if Build Tools fail.\n", "I got the same error when I try to install mysqlclient in windows. after much time spent in this error. I found a result in the result install the latest visual studio and visual studio build tools. if programmer and only work on coding then transfer windows to ubuntu.i transfer on Ubuntu and enjoy coding \n", "try this:\npip install mysqlclient-1.4.4-cp38-cp38-win_amd64.whl\n\nif it did'nt work, then use the 32bit version\n pip install mysqlclient-1.4.4-cp38-cp38-win32.whl\n\n", "The best way to install mysql-client is to go to https://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient and you will find many mysql-client wheels.\n\nDownload the first one #do not change the name of the file,,\nopen command prompt,\nuse cd to go to the directory where your wheel has been installed,\nthen use pip install (name of file) #use .whl as well\n\nif it doesn't work, download the second one and repeat the process, if that doesn't work as well, keep on downloading until I guarantee you that at least one of them would be installed in your computer\n", "You can use pymysql\n1.Install pymysql\npip install pymysql\n\n\nModify your init.py file\nimport pymysql\npymysql.install_as_MySQLdb()\n\n\nYou can start your project\n" ]
[ 3, 1, 0, 0, 0, 0 ]
[]
[]
[ "django", "mysql", "python" ]
stackoverflow_0056643488_django_mysql_python.txt
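None of the answers show the Django side of the setup, so here is a hedged sketch of a standard MySQL configuration; the database name and credentials are placeholders, and the pymysql shim is only needed if you take the pure-Python route from the last answer instead of installing mysqlclient:

# settings.py -- standard Django MySQL configuration (name and credentials are placeholders)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}

# project/__init__.py -- only if using pymysql as a drop-in replacement for MySQLdb
import pymysql
pymysql.install_as_MySQLdb()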
Q: Can you compare strings in python like in Java with .equals? Can you compare strings in Python in any other way apart from ==? Is there anything like .equals in Java? A: There are two ways to do this. The first is to use the operator module, which contains functions for all of the mathematical operators: >>> from operator import eq >>> x = "a" >>> y = "a" >>> eq(x, y) True >>> y = "b" >>> eq(x, y) False >>> The other is to use the __eq__ method of a string, which is called when you use ==: >>> x = "a" >>> y = "a" >>> x.__eq__(y) True >>> y = "b" >>> x.__eq__(y) False >>> A: You could do: import operator a = "string1" b = "string2" print operator.eq(a, b) This is similar to Java in that you're not using an explicit operator. However in Java you're using a method call on the String class (i.e., myString.equals(otherString)) but in Python eq is just a function which you import from a module called operator (see operator.eq in the documentation). A: According to the docs: eq(a, b) is equivalent to a == b So == is just like .equals in Java (except it works when the left side is null). The equivalent of Java's == operator is the is operator, as in: if a is b A: What is the need for using other than '==' as python strings are immutable and memoized by default? As pointed in other answers you can use 'is' for reference(id) comparison. A: In Java, .equals() is used instead of ==, which checks if they are the same object, not the same value. .equals() is used for comparing the actual values of 2 strings. However, in Python, "==" by default checks if they have the same value so it is better to use in general. As other solutions pointed out, you can also __eq__ as another way to get the same result.
Can you compare strings in python like in Java with .equals?
Can you compare strings in Python in any other way apart from ==? Is there anything like .equals in Java?
[ "There are two ways to do this. The first is to use the operator module, which contains functions for all of the mathematical operators:\n>>> from operator import eq\n>>> x = \"a\"\n>>> y = \"a\"\n>>> eq(x, y)\nTrue\n>>> y = \"b\"\n>>> eq(x, y)\nFalse\n>>>\n\nThe other is to use the __eq__ method of a string, which is called when you use ==:\n>>> x = \"a\"\n>>> y = \"a\"\n>>> x.__eq__(y)\nTrue\n>>> y = \"b\"\n>>> x.__eq__(y)\nFalse\n>>>\n\n", "You could do:\nimport operator\na = \"string1\"\nb = \"string2\"\nprint operator.eq(a, b)\n\nThis is similar to Java in that you're not using an explicit operator.\nHowever in Java you're using a method call on the String class (i.e., myString.equals(otherString)) but in Python eq is just a function which you import from a module called operator (see operator.eq in the documentation).\n", "According to the docs:\n\neq(a, b) is equivalent to a == b\n\nSo == is just like .equals in Java (except it works when the left side is null).\nThe equivalent of Java's == operator is the is operator, as in:\nif a is b\n\n", "What is the need for using other than '==' as python strings are immutable and memoized by default?\nAs pointed in other answers you can use 'is' for reference(id) comparison.\n", "In Java, .equals() is used instead of ==, which checks if they are the same object, not the same value. .equals() is used for comparing the actual values of 2 strings.\nHowever, in Python, \"==\" by default checks if they have the same value so it is better to use in general.\nAs other solutions pointed out, you can also __eq__ as another way to get the same result.\n" ]
[ 4, 2, 2, 0, 0 ]
[]
[]
[ "comparison", "python", "string" ]
stackoverflow_0019965595_comparison_python_string.txt
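A small illustration (added here, not from the original answers) of the value-versus-identity distinction the answers describe:

a = "hello world"
b = "".join(["hello", " ", "world"])   # builds an equal string in a new object

print(a == b)   # True  -- same value, the analogue of Java's .equals()
print(a is b)   # False -- different objects, the analogue of Java's ==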
Q: How can I create a desktop shortcut for Jupyter Notebook(Anaconda) on Mac? I am new to Jupyter Notebook. I mainly use it for my Python class. I installed Jupyter Notebook via Anaconda. So, to open Jupyter Notebook, I have to open the anaconda navigator every time. Is there any way to bypass this in MacOS and open Notebook directly? I have tried making a terminal shell script with the following code /Users/utkarsharyan/opt/anaconda3/pkgs/notebook-6.4.2-py38hecd8cb5_0/bin/jupyter_mac.command ; exit; But it gave this error (base) utkarsharyan@Utkarshs-MacBook-Air ~ % /Users/utkarsharyan/opt/anaconda3/pkgs/notebook-6.4.2-py38hecd8cb5_0/bin/jupyter_mac.command ; exit; /Users/utkarsharyan/opt/anaconda3/pkgs/notebook-6.4.2-py38hecd8cb5_0/bin/jupyter_mac.command: /Users/utkarsharyan/opt/anaconda3/pkgs/notebook-6.4.2-py38hecd8cb5_0/bin/jupyter-notebook: /opt/concourse/worker/volumes/live/09f385b3-041f-4619-6576-50f6b5465a28/volume: bad interpreter: No such file or directory Saving session... ...copying shared history... ...saving history...truncating history files... ...completed. [Process completed] What should I do? A: Jupyter App Issue: There are many ways you might go about doing this. All of them will be more or less complicated to do because Jupyter itself isn't built to be used as a desktop app. If you do want to try a few DIY ways, this one has a few answers that might be helpful: Open an ipython notebook via double-click on osx If you prefer not to deal with extra complications, a mac app for Jupyter would be your best bet. Desktop app: Callisto for Jupyter Notebooks This app is built for macOS and iOS and makes it super easy to use Jupyter notebooks and Python. It's currently in the beta stage so it's definitely something you can try for a Python class.
How can I create a desktop shortcut for Jupyter Notebook(Anaconda) on Mac?
I am new to Jupyter Notebook. I mainly use it for my Python class. I installed Jupyter Notebook via Anaconda. So, to open Jupyter Notebook, I have to open the anaconda navigator every time. Is there any way to bypass this in MacOS and open Notebook directly? I have tried making a terminal shell script with the following code /Users/utkarsharyan/opt/anaconda3/pkgs/notebook-6.4.2-py38hecd8cb5_0/bin/jupyter_mac.command ; exit; But it gave this error (base) utkarsharyan@Utkarshs-MacBook-Air ~ % /Users/utkarsharyan/opt/anaconda3/pkgs/notebook-6.4.2-py38hecd8cb5_0/bin/jupyter_mac.command ; exit; /Users/utkarsharyan/opt/anaconda3/pkgs/notebook-6.4.2-py38hecd8cb5_0/bin/jupyter_mac.command: /Users/utkarsharyan/opt/anaconda3/pkgs/notebook-6.4.2-py38hecd8cb5_0/bin/jupyter-notebook: /opt/concourse/worker/volumes/live/09f385b3-041f-4619-6576-50f6b5465a28/volume: bad interpreter: No such file or directory Saving session... ...copying shared history... ...saving history...truncating history files... ...completed. [Process completed] What should I do?
[ "Jupyter App Issue:\nThere are many ways you might go about doing this. All of them will be more or less complicated to do because Jupyter itself isn't built to be used as a desktop app.\nIf you do want to try a few DIY ways, this one has a few answers that might be helpful: Open an ipython notebook via double-click on osx\nIf you prefer not to deal with extra complications, a mac app for Jupyter would be your best bet.\nDesktop app:\nCallisto for Jupyter Notebooks\nThis app is built for macOS and iOS and makes it super easy to use Jupyter notebooks and Python. It's currently in the beta stage so it's definitely something you can try for a Python class.\n" ]
[ 1 ]
[]
[]
[ "anaconda", "jupyter", "jupyter_notebook", "macos", "python" ]
stackoverflow_0068993034_anaconda_jupyter_jupyter_notebook_macos_python.txt
Q: How to use Python with Selenium to click the "Load More" button on "https://github.com/topics"? I just need to click the load more button once to reveal a bunch more information so that I can scrape more HTML than what is loaded. The following "should" go to github.com/topics and find the one and only button element and click it one time. from selenium import webdriver from selenium.webdriver.common.by import By import time driver = webdriver.Edge() driver.get("https://github.com/topics") time.sleep(5) btn = driver.find_element(By.TAG_NAME, "button") btn.click() time.sleep(3) driver.quit() I'm told Message: element not interactable so I'm obviously doing something wrong but I'm not sure what. A: There are several issues with your code: The "Load more" button is initially out of the view, so you have to scroll the page in order to click it. Your locator is bad. You need to wait for elements to appear on the page before accessing them. WebDriverWait expected_conditions explicit waits should be used for that, not hardcoded sleeps. The following code works, it scrolls the page and clicks "Load more" 1 time. import time from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 20) url = "https://github.com/topics" driver.get(url) load_more = wait.until(EC.presence_of_element_located((By.XPATH, "//button[contains(.,'Load more')]"))) load_more.location_once_scrolled_into_view time.sleep(1) load_more.click() UPD You can simply modify the above code to make it clicking Load more button while it presented. I implemented this with infinite while loop making a break if Load more button not found. This code works. import time from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 5) url = "https://github.com/topics" driver.get(url) while True: try: load_more = wait.until(EC.presence_of_element_located((By.XPATH, "//button[contains(.,'Load more')]"))) load_more.location_once_scrolled_into_view time.sleep(1) load_more.click() except: break A: use btn = driver.findElementsByXPath("//button[contains(text(),'Load more')]"); You are not finding the right element. This is the reason why it is not "interactable"
How to use Python with Selenium to click the "Load More" button on "https://github.com/topics"?
I just need to click the load more button once to reveal a bunch more information so that I can scrape more HTML than what is loaded. The following "should" go to github.com/topics and find the one and only button element and click it one time. from selenium import webdriver from selenium.webdriver.common.by import By import time driver = webdriver.Edge() driver.get("https://github.com/topics") time.sleep(5) btn = driver.find_element(By.TAG_NAME, "button") btn.click() time.sleep(3) driver.quit() I'm told Message: element not interactable so I'm obviously doing something wrong but I'm not sure what.
[ "There are several issues with your code:\n\nThe \"Load more\" button is initially out of the view, so you have to scroll the page in order to click it.\nYour locator is bad.\nYou need to wait for elements to appear on the page before accessing them. WebDriverWait expected_conditions explicit waits should be used for that, not hardcoded sleeps.\n\nThe following code works, it scrolls the page and clicks \"Load more\" 1 time.\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 20)\n\n\nurl = \"https://github.com/topics\"\ndriver.get(url)\n\nload_more = wait.until(EC.presence_of_element_located((By.XPATH, \"//button[contains(.,'Load more')]\")))\nload_more.location_once_scrolled_into_view\ntime.sleep(1)\nload_more.click()\n\nUPD\nYou can simply modify the above code to make it clicking Load more button while it presented.\nI implemented this with infinite while loop making a break if Load more button not found. This code works.\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 5)\n\nurl = \"https://github.com/topics\"\ndriver.get(url)\n\nwhile True:\n try:\n load_more = wait.until(EC.presence_of_element_located((By.XPATH, \"//button[contains(.,'Load more')]\")))\n load_more.location_once_scrolled_into_view\n time.sleep(1)\n load_more.click()\n except:\n break\n\n", "use\nbtn = driver.findElementsByXPath(\"//button[contains(text(),'Load more')]\");\nYou are not finding the right element. This is the reason why it is not \"interactable\"\n" ]
[ 0, 0 ]
[]
[]
[ "html", "python", "selenium", "selenium_edgedriver", "web_scraping" ]
stackoverflow_0074464469_html_python_selenium_selenium_edgedriver_web_scraping.txt
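A note on the second answer above: driver.findElementsByXPath is the Java binding's method name; the Python bindings use find_element with a By locator. A minimal sketch of the Python equivalent, reusing the driver and locator from the answers, with an explicit scroll since the button starts out of view:

from selenium.webdriver.common.by import By

btn = driver.find_element(By.XPATH, "//button[contains(text(), 'Load more')]")
driver.execute_script("arguments[0].scrollIntoView(true);", btn)  # bring the button into view
btn.click()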
Q: Replace all cells with "-1" in DataFrame I have a dataframe like so: RANK COUNT '2020-01-01' 100 -1 '2020-01-02' 50 -1 '2020-01-03' -1 75 How can I replace all occurrences of -1 with None and still preserve both the RANK and COUNT as ints? The result should look like: RANK COUNT '2020-01-01' 100 '2020-01-02' 50 '2020-01-03' 75 If this isn't possible, how can I dump the original data into a .csv file that looks like the desired result? A: using replace, replace -1 with "" out = df.replace(-1, "") RANK COUNT '2020-01-01' 100 '2020-01-02' 50 '2020-01-03' 75
Replace all cells with "-1" in DataFrame
I have a dataframe like so: RANK COUNT '2020-01-01' 100 -1 '2020-01-02' 50 -1 '2020-01-03' -1 75 How can I replace all occurrences of -1 with None and still preserve both the RANK and COUNT as ints? The result should look like: RANK COUNT '2020-01-01' 100 '2020-01-02' 50 '2020-01-03' 75 If this isn't possible, how can I dump the original data into a .csv file that looks like the desired result?
[ "using replace, replace -1 with \"\"\nout = df.replace(-1, \"\")\n\n RANK COUNT\n'2020-01-01' 100 \n'2020-01-02' 50 \n'2020-01-03' 75\n\n" ]
[ 1 ]
[ "df = df.replace(-1, \"\")\n\nSecond Method\ndf['RANK'] = df['RANK'].astype(str)\ndf['COUNT'] = df['COUNT'].astype(str)\ndf = df.replace('-1', \"\")\ndf['RANK'] = df['RANK'].astype(int)\ndf['COUNT'] = df['COUNT'].astype(int)\n\n" ]
[ -2 ]
[ "pandas", "python" ]
stackoverflow_0074466654_pandas_python.txt
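The accepted answer writes empty strings, which turns the columns into object dtype. If keeping RANK and COUNT as integers matters (as the question asks), pandas' nullable Int64 dtype is one way to represent -1 as missing without losing the integer type. A sketch, assuming a reasonably recent pandas:

import pandas as pd

df = pd.DataFrame({"RANK": [100, 50, -1], "COUNT": [-1, -1, 75]})

out = df.astype("Int64").mask(df == -1)   # capital-I Int64 allows <NA> alongside ints
print(out.dtypes)                         # both columns remain Int64
out.to_csv("out.csv", index=False)        # missing values are written as empty cells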
Q: How to use multiple exceptions conditions properly? I am working with many files and this is an example of a smaller portion. Imagine I have my file names inside a list like this: filelist = ["file1.csv", "file2.csv", "file3.csv"] I would like to import them as a dataframe. If I am not able to do this condition, I would try another way... and if I still don't get it, I would like to add this filename to another list (errorfiles). errorfiles = [] for file in filelist: try: df = pd.read_csv(file) except: df = pd.read_csv(file + ".csv") else: errorfiles.append(file) I tried this code above, but it raises the following error and don't add the file names with error to my list: FileNotFoundError: [Errno 2] No such file or directory: 'file1.csv.csv' I think I am not doing this try-except correctly. In this example, all these files should be in errorfiles as I can't import them. Anyone could help me? A: You need a nested try/except for the case where the second file is not found. errorfiles = [] for file in filelist: try: df = pd.read_csv(file) except FileNotFoundError: try: df = pd.read_csv(file + ".csv") except FileNotFoundError: errorfiles.append(file)
How to use multiple exceptions conditions properly?
I am working with many files and this is an example of a smaller portion. Imagine I have my file names inside a list like this: filelist = ["file1.csv", "file2.csv", "file3.csv"] I would like to import them as a dataframe. If I am not able to do this condition, I would try another way... and if I still don't get it, I would like to add this filename to another list (errorfiles). errorfiles = [] for file in filelist: try: df = pd.read_csv(file) except: df = pd.read_csv(file + ".csv") else: errorfiles.append(file) I tried this code above, but it raises the following error and don't add the file names with error to my list: FileNotFoundError: [Errno 2] No such file or directory: 'file1.csv.csv' I think I am not doing this try-except correctly. In this example, all these files should be in errorfiles as I can't import them. Anyone could help me?
[ "You need a nested try/except for the case where the second file is not found.\nerrorfiles = []\nfor file in filelist:\n try:\n df = pd.read_csv(file)\n except FileNotFoundError:\n try:\n df = pd.read_csv(file + \".csv\")\n except FileNotFoundError:\n errorfiles.append(file)\n\n" ]
[ 1 ]
[]
[]
[ "python", "try_except" ]
stackoverflow_0074466465_python_try_except.txt
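An alternative to nesting the try/except blocks, in the same spirit as the answer above: loop over the candidate file names and use for/else so a file is only recorded as an error when neither name could be read. A sketch; pd and filelist are as in the question:

errorfiles = []
for file in filelist:
    for candidate in (file, file + ".csv"):
        try:
            df = pd.read_csv(candidate)
            break                      # stop at the first name that loads
        except FileNotFoundError:
            continue
    else:                              # no break -> neither candidate could be read
        errorfiles.append(file)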
Q: How do I return an array of pixel values using kernel to condense them (blur)? *Python* So, what I'm trying to do is take an image (let's say 100x100) and do a 5x5 kernel over the image: kernel = np.ones((5, 5), np.float32)/25 and then output an array for each iteration of the kernel (like in cv2.filter2D) like: kernel_vals.append(np.array([[indexOfKernelIteration], [newArrayOfEditedKernelValues]])) What I'm missing is how to get it to iterate across the image and output the pixel values of the new "image" that would be produced by: img = cv2.filter2D(image, -1, kernel) I just want, for each kernel, the output that is displayed on the new image to be put into the "kernel_vals" array. ^NOT INTO AN IMAGE Attached image for visual reference. A: imread returns an np.array, so if i understand what you want to do, you have the solution in the question. For completeness sake, see the code below. import cv2 img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE) print(type(img)) print(img[:10, :10]) kernel = np.ones((5, 5), np.float32)/25 kernel_vals = cv2.filter2D(img, -1, kernel) print(kernel_vals[:10, :10]) And the output is (with added newlines for readability) <class 'numpy.ndarray'> [[255 255 255 255 255 255 255 255 255 255] [255 255 255 255 255 255 255 255 255 255] [255 255 255 255 255 255 255 255 255 255] [255 255 255 0 255 255 255 0 255 255] [255 255 255 0 255 255 255 0 255 255] [255 255 255 0 255 255 255 0 255 255] [255 255 255 0 255 255 255 0 255 255] [255 255 255 0 255 255 255 0 255 255] [255 255 255 0 255 255 255 0 255 255] [255 255 255 0 255 255 255 0 255 255]] [[255 255 255 255 255 255 255 255 255 255] [255 245 245 245 245 235 245 245 245 235] [255 235 235 235 235 214 235 235 235 214] [255 224 224 224 224 194 224 224 224 194] [255 214 214 214 214 173 214 214 214 173] [255 204 204 204 204 153 204 204 204 153] [255 204 204 204 204 153 204 204 204 153] [255 204 204 204 204 153 204 204 204 153] [255 204 204 204 204 153 204 204 204 153] [255 204 204 204 204 153 204 204 204 153]] Now, since kernel_vals is an np.array, you can flatten it, turn it into a list, or manipulate it in any other way you want
How do I return an array of pixel values using kernel to condense them (blur)? *Python*
So, what I'm trying to do is take an image (let's say 100x100) and do a 5x5 kernel over the image: kernel = np.ones((5, 5), np.float32)/25 and then output an array for each iteration of the kernel (like in cv2.filter2D) like: kernel_vals.append(np.array([[indexOfKernelIteration], [newArrayOfEditedKernelValues]])) What I'm missing is how to get it to iterate across the image and output the pixel values of the new "image" that would be produced by: img = cv2.filter2D(image, -1, kernel) I just want, for each kernel, the output that is displayed on the new image to be put into the "kernel_vals" array. ^NOT INTO AN IMAGE Attached image for visual reference.
[ "imread returns an np.array, so if i understand what you want to do, you have the solution in the question. For completeness sake, see the code below.\nimport cv2\n\nimg = cv2.imread(\"image.png\", cv2.IMREAD_GRAYSCALE)\nprint(type(img))\nprint(img[:10, :10])\n\nkernel = np.ones((5, 5), np.float32)/25\nkernel_vals = cv2.filter2D(img, -1, kernel)\nprint(kernel_vals[:10, :10])\n\nAnd the output is (with added newlines for readability)\n<class 'numpy.ndarray'>\n\n[[255 255 255 255 255 255 255 255 255 255]\n [255 255 255 255 255 255 255 255 255 255]\n [255 255 255 255 255 255 255 255 255 255]\n [255 255 255 0 255 255 255 0 255 255]\n [255 255 255 0 255 255 255 0 255 255]\n [255 255 255 0 255 255 255 0 255 255]\n [255 255 255 0 255 255 255 0 255 255]\n [255 255 255 0 255 255 255 0 255 255]\n [255 255 255 0 255 255 255 0 255 255]\n [255 255 255 0 255 255 255 0 255 255]]\n\n[[255 255 255 255 255 255 255 255 255 255]\n [255 245 245 245 245 235 245 245 245 235]\n [255 235 235 235 235 214 235 235 235 214]\n [255 224 224 224 224 194 224 224 224 194]\n [255 214 214 214 214 173 214 214 214 173]\n [255 204 204 204 204 153 204 204 204 153]\n [255 204 204 204 204 153 204 204 204 153]\n [255 204 204 204 204 153 204 204 204 153]\n [255 204 204 204 204 153 204 204 204 153]\n [255 204 204 204 204 153 204 204 204 153]]\n\nNow, since kernel_vals is an np.array, you can flatten it, turn it into a list, or manipulate it in any other way you want\n" ]
[ 0 ]
[]
[]
[ "opencv", "python" ]
stackoverflow_0074466216_opencv_python.txt
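If the point of the question is the value produced at each kernel position rather than the finished image, an explicit sliding-window loop makes that visible. This sketch does "valid" filtering with no border padding, so its edges will differ from cv2.filter2D, which pads; img is the grayscale array from the answer and the function name is illustrative:

import numpy as np

def filter_with_trace(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape
    kernel_vals = []                        # one ((row, col), value) entry per kernel position
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            window = img[i:i + kh, j:j + kw]
            val = float((window * kernel).sum())
            out[i, j] = val
            kernel_vals.append(((i, j), val))
    return out, kernel_vals

kernel = np.ones((5, 5), np.float32) / 25
out, kernel_vals = filter_with_trace(img.astype(np.float32), kernel)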
Q: Matplotlib print values on bars in subplots Using the above code, I have created 5 five subplots: values = {"x_values" : ["ENN", "CNN", "ENN-CNN"], "eu" : [11, 79.97, 91], "man" : [11, 80, 90], "min3" : [11, 79.70, 90], "min4" : [11, 79.50, 90], "che" : [12, 78, 89]} df = pd.DataFrame(data=values) fig, axs = plt.subplots(2, 3, figsize=(10,6)) eu = axs[0, 0].bar(df["x_values"], df["eu"] man = axs[0, 1].bar(df["x_values"], df["man"]) min3 = axs[0, 2].bar(df["x_values"], df["min3"]) min4 = axs[1, 0].bar(df["x_values"], df["min4"]) che = axs[1, 1].bar(df["x_values"], df["che"]) fig.delaxes(axs[1, 2]) They print as they should, but I also want to add to the bars the y value of every bar. Just like in the picture enter image description here I have tried the code below, but it doesn't print anything, no error but also no print for index, value in enumerate(df["corresponding_df"]): plt.text(value, index, str(value)) If I try variable-name.text(value, index, str(value)) I get error 'BarContainer' object has no attribute 'text'. If fig.text again not print. If axs[subplot-index].text I can only see a number at the end of the window outside the plots. Any suggestion? A: Try this using bar_label in matplotlib 3.4.0+: values = {"x_values" : ["ENN", "CNN", "ENN-CNN"], "eu" : [11, 79.97, 91], "man" : [11, 80, 90], "min3" : [11, 79.70, 90], "min4" : [11, 79.50, 90], "che" : [12, 78, 89]} df = pd.DataFrame(data=values) fig, axs = plt.subplots(2, 3, figsize=(10,6)) eu = axs[0, 0].bar(df["x_values"], df["eu"]) axs[0,0].bar_label(eu) man = axs[0, 1].bar(df["x_values"], df["man"]) axs[0,1].bar_label(man) min3 = axs[0, 2].bar(df["x_values"], df["min3"]) axs[0,2].bar_label(min3) min4 = axs[1, 0].bar(df["x_values"], df["min4"]) axs[1,0].bar_label(min4) che = axs[1, 1].bar(df["x_values"], df["che"]) axs[1,1].bar_label(che) fig.delaxes(axs[1, 2]) Output: A: with texts it can be done like this: for ax in axs.flatten(): for bar in ax.patches: ax.text(bar.get_x() + bar.get_width() / 2, bar.get_height()-7, bar.get_height(), ha='center', color='w')
Matplotlib print values on bars in subplots
Using the above code, I have created 5 five subplots: values = {"x_values" : ["ENN", "CNN", "ENN-CNN"], "eu" : [11, 79.97, 91], "man" : [11, 80, 90], "min3" : [11, 79.70, 90], "min4" : [11, 79.50, 90], "che" : [12, 78, 89]} df = pd.DataFrame(data=values) fig, axs = plt.subplots(2, 3, figsize=(10,6)) eu = axs[0, 0].bar(df["x_values"], df["eu"] man = axs[0, 1].bar(df["x_values"], df["man"]) min3 = axs[0, 2].bar(df["x_values"], df["min3"]) min4 = axs[1, 0].bar(df["x_values"], df["min4"]) che = axs[1, 1].bar(df["x_values"], df["che"]) fig.delaxes(axs[1, 2]) They print as they should, but I also want to add to the bars the y value of every bar. Just like in the picture enter image description here I have tried the code below, but it doesn't print anything, no error but also no print for index, value in enumerate(df["corresponding_df"]): plt.text(value, index, str(value)) If I try variable-name.text(value, index, str(value)) I get error 'BarContainer' object has no attribute 'text'. If fig.text again not print. If axs[subplot-index].text I can only see a number at the end of the window outside the plots. Any suggestion?
[ "Try this using bar_label in matplotlib 3.4.0+:\nvalues = {\"x_values\" : [\"ENN\", \"CNN\", \"ENN-CNN\"],\n\"eu\" : [11, 79.97, 91],\n\"man\" : [11, 80, 90],\n\"min3\" : [11, 79.70, 90],\n\"min4\" : [11, 79.50, 90],\n\"che\" : [12, 78, 89]}\n\ndf = pd.DataFrame(data=values)\n\nfig, axs = plt.subplots(2, 3, figsize=(10,6))\n\neu = axs[0, 0].bar(df[\"x_values\"], df[\"eu\"])\naxs[0,0].bar_label(eu)\nman = axs[0, 1].bar(df[\"x_values\"], df[\"man\"])\naxs[0,1].bar_label(man)\nmin3 = axs[0, 2].bar(df[\"x_values\"], df[\"min3\"])\naxs[0,2].bar_label(min3)\nmin4 = axs[1, 0].bar(df[\"x_values\"], df[\"min4\"])\naxs[1,0].bar_label(min4)\nche = axs[1, 1].bar(df[\"x_values\"], df[\"che\"])\naxs[1,1].bar_label(che)\nfig.delaxes(axs[1, 2])\n\nOutput:\n\n", "with texts it can be done like this:\nfor ax in axs.flatten():\n for bar in ax.patches:\n ax.text(bar.get_x() + bar.get_width() / 2, \n bar.get_height()-7,\n bar.get_height(), \n ha='center',\n color='w')\n\n\n" ]
[ 2, 1 ]
[]
[]
[ "matplotlib", "python", "python_3.x" ]
stackoverflow_0074466407_matplotlib_python_python_3.x.txt
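The same result as the accepted answer, written as a loop over the value columns so the bar/bar_label pair is not repeated five times. A sketch reusing df from the question; ax.bar_label needs matplotlib 3.4 or newer:

import matplotlib.pyplot as plt

cols = ["eu", "man", "min3", "min4", "che"]
fig, axs = plt.subplots(2, 3, figsize=(10, 6))
for ax, col in zip(axs.flatten(), cols):    # zip stops after the five value columns
    bars = ax.bar(df["x_values"], df[col])
    ax.bar_label(bars)
    ax.set_title(col)
fig.delaxes(axs[1, 2])                       # remove the unused sixth axes
plt.show()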
Q: IN clause for Oracle Prepared Statement in Python cx_Oracle I'd like to use the IN clause with a prepared Oracle statement using cx_Oracle in Python. E.g. query - select name from employee where id in ('101', '102', '103') On python side, I have a list [101, 102, 103] which I converted to a string like this ('101', '102', '103') and used the following code in python - import cx_Oracle ids = [101, 102, 103] ALL_IDS = "('{0}')".format("','".join(map(str, ids))) conn = cx_Oracle.connect('username', 'pass', 'schema') cursor = conn.cursor() results = cursor.execute('select name from employee where id in :id_list', id_list=ALL_IDS) names = [x[0] for x in cursor.description] rows = results.fetchall() This doesn't work. Am I doing something wrong? A: This concept is not supported by Oracle -- and you are definitely not the first person to try this approach either! You must either: create separate bind variables for each in value -- something that is fairly easy and straightforward to do in Python create a subquery using the cast operator on Oracle types as is shown in this post: https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:210612357425 use a stored procedure to accept the array and perform multiple queries directly within PL/SQL or do something else entirely! A: Just transform your list into a tuple and format the sql string with it ids = [101, 102, 103] param = tuple(ids) results = cursor.execute("select name from employee where id IN {}".format(param))
IN clause for Oracle Prepared Statement in Python cx_Oracle
I'd like to use the IN clause with a prepared Oracle statement using cx_Oracle in Python. E.g. query - select name from employee where id in ('101', '102', '103') On python side, I have a list [101, 102, 103] which I converted to a string like this ('101', '102', '103') and used the following code in python - import cx_Oracle ids = [101, 102, 103] ALL_IDS = "('{0}')".format("','".join(map(str, ids))) conn = cx_Oracle.connect('username', 'pass', 'schema') cursor = conn.cursor() results = cursor.execute('select name from employee where id in :id_list', id_list=ALL_IDS) names = [x[0] for x in cursor.description] rows = results.fetchall() This doesn't work. Am I doing something wrong?
[ "This concept is not supported by Oracle -- and you are definitely not the first person to try this approach either! You must either:\n\ncreate separate bind variables for each in value -- something that is fairly easy and straightforward to do in Python\ncreate a subquery using the cast operator on Oracle types as is shown in this post: https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:210612357425\nuse a stored procedure to accept the array and perform multiple queries directly within PL/SQL\nor do something else entirely!\n\n", "Just transform your list into a tuple and format the sql string with it\nids = [101, 102, 103]\nparam = tuple(ids)\nresults = cursor.execute(\"select name from employee where id IN {}\".format(param))\n\n" ]
[ 5, 0 ]
[ "Otra opción es dar formato a una cadena con la consulta.\nimport cx_Oracle\nids = [101, 102, 103]\nALL_IDS = \"('{0}')\".format(\"','\".join(map(str, ids)))\nconn = cx_Oracle.connect('username', 'pass', 'schema')\ncursor = conn.cursor()\n\nquery = \"\"\"\nselect name from employee where id in ('{}')\n\"\"\".format(\"','\".join(map(str, ids)))\n\nresults = cursor.execute(query)\nnames = [x[0] for x in cursor.description]\nrows = results.fetchall()\n\n\n", "Since you created the string, you're almost there. This should work:\nresults = cursor.execute('select name from employee where id in ' + ALL_IDS)\n\n" ]
[ -1, -3 ]
[ "cx_oracle", "oracle", "prepared_statement", "python" ]
stackoverflow_0040954293_cx_oracle_oracle_prepared_statement_python.txt
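The first answer's first option — a separate bind variable per IN value — is described without code. A sketch of how that is commonly written with cx_Oracle; the :id0-style names are illustrative, cursor is the one from the question, and Oracle still caps an IN list at 1000 entries:

ids = [101, 102, 103]
bind_names = ",".join(f":id{i}" for i in range(len(ids)))
sql = f"select name from employee where id in ({bind_names})"
params = {f"id{i}": value for i, value in enumerate(ids)}

cursor.execute(sql, params)   # values are bound, never pasted into the SQL text
rows = cursor.fetchall()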
Q: Calling Stored Procedures is much slower than just calling insert and bulk insert is basically the same, Why? I have a table and a stored procedure like following, CREATE TABLE `inspect_call` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `task_id` bigint(20) unsigned NOT NULL DEFAULT '0', `cc_number` varchar(63) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '', `created_at` bigint(20) unsigned NOT NULL DEFAULT '0', `updated_at` bigint(20) unsigned NOT NULL DEFAULT '0', PRIMARY KEY (`id`), KEY `task_id` (`task_id`) ) ENGINE=InnoDB AUTO_INCREMENT=234031 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci CREATE PROCEDURE inspect_proc(IN task bigint,IN number varchar(63)) INSERT INTO inspect_call(task_id,cc_number) values (task, number) I had assumed that calling the stored procedure would be (much) faster than just calling insert. But to my surprised that is NOT the case at all. When I insert 10000 rows records, the insert command takes around 4 minutes while the stored procedure takes around 15 minutes. I have run the test many times to confirm that. The MySQL server is not a high end server but I don't understand why calling the stored procedure is much slower. #using mysql-connector-python 8.0.31 command = ("INSERT INTO inspect_call (task_id,cc_number)" "VALUES (%s, %s)") for i in range(rows): cursor.execute(command, (task_id,f"{cc}{i}")) # cursor.callproc("inspect_proc", (task_id,f"{cc}{i}")) cnx.commit() BTW, I read some articles saying I can set innodb_flush_log_at_trx_commit = 2 to improve the insert speed but I don't plan to do that. --- update --- From the answers I got I tried bulk insert(executemany) to see if there any improvement, but to my surprise there isn't. cursor = cnx.cursor(buffered=True) for i in range(int(rows/1000)): data = [] for j in range(1000): data.append((task_id,f"{cc}{i*1000+j}")) cursor.executemany(command,data) cnx.commit() # no improvement compared to cursor = cnx.cursor() for i in range(rows): cursor.execute(command, (task_id,f"{cc}{i}")) I tried many times (also tried 100 record for one executemany shot) and find their performances are basically the same. Why is that ? --- update 2 --- I finally figure out why insert is so slow! Because I run the script from my laptop and access the database from its external host name. Once I uploaded the script to the server and access the DB from inside the intranet, it is much faster. Inserting 10000 records takes around 3 to 4 seconds; inserting 100,000 records takes around 36 seconds. I did not network can cause such a difference! BUT executemany didn't improve the performance in my case though. A: Your example won't give credit to stored procedure because it won't use any advantages of stored procedure. Main advantages of stored procedures are : it's compiled it saves network exchanges (as computations operate on the server side) Imagine you have a logic enough complex not to be operated by UPDATE and you'd like to operate e.g. in Python, it requires : select rows -> network traffic [server -> client] update rows -> quite slow : Python is interpreted, maybe even slower if you use an ORM like SQLAlchemy (objets have to be created in memory) send back updated rows -> network traffic [client -> server] Imagine the same example implemented with a stored procedure. In that kind of example chances are that the stored procedure really shines. In your example you don't have any logic but just insert rows. It's an I/O bound use case. No or little gain to have a compiled procedure. 
And you'll have as many network exchanges as if you used INSERT. Whatever way rows have to be sent to the server. Also no gain in the network traffic amount. In your example maybe bulk insert could help reaching best performances. A: MySQL is unlike many other engines in that ordinary statements are reasonably fast -- and wrapping in a Store Proc may add more overhead than it saves. You want faster? Batch the rows into a single INSERT. (Or, if there is a huge list, break it into clumps of 1000.) See executemany().
Calling Stored Procedures is much slower than just calling insert and bulk insert is basically the same, Why?
I have a table and a stored procedure like following, CREATE TABLE `inspect_call` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `task_id` bigint(20) unsigned NOT NULL DEFAULT '0', `cc_number` varchar(63) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '', `created_at` bigint(20) unsigned NOT NULL DEFAULT '0', `updated_at` bigint(20) unsigned NOT NULL DEFAULT '0', PRIMARY KEY (`id`), KEY `task_id` (`task_id`) ) ENGINE=InnoDB AUTO_INCREMENT=234031 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci CREATE PROCEDURE inspect_proc(IN task bigint,IN number varchar(63)) INSERT INTO inspect_call(task_id,cc_number) values (task, number) I had assumed that calling the stored procedure would be (much) faster than just calling insert. But to my surprised that is NOT the case at all. When I insert 10000 rows records, the insert command takes around 4 minutes while the stored procedure takes around 15 minutes. I have run the test many times to confirm that. The MySQL server is not a high end server but I don't understand why calling the stored procedure is much slower. #using mysql-connector-python 8.0.31 command = ("INSERT INTO inspect_call (task_id,cc_number)" "VALUES (%s, %s)") for i in range(rows): cursor.execute(command, (task_id,f"{cc}{i}")) # cursor.callproc("inspect_proc", (task_id,f"{cc}{i}")) cnx.commit() BTW, I read some articles saying I can set innodb_flush_log_at_trx_commit = 2 to improve the insert speed but I don't plan to do that. --- update --- From the answers I got I tried bulk insert(executemany) to see if there any improvement, but to my surprise there isn't. cursor = cnx.cursor(buffered=True) for i in range(int(rows/1000)): data = [] for j in range(1000): data.append((task_id,f"{cc}{i*1000+j}")) cursor.executemany(command,data) cnx.commit() # no improvement compared to cursor = cnx.cursor() for i in range(rows): cursor.execute(command, (task_id,f"{cc}{i}")) I tried many times (also tried 100 record for one executemany shot) and find their performances are basically the same. Why is that ? --- update 2 --- I finally figure out why insert is so slow! Because I run the script from my laptop and access the database from its external host name. Once I uploaded the script to the server and access the DB from inside the intranet, it is much faster. Inserting 10000 records takes around 3 to 4 seconds; inserting 100,000 records takes around 36 seconds. I did not network can cause such a difference! BUT executemany didn't improve the performance in my case though.
[ "Your example won't give credit to stored procedure because it won't use any advantages of stored procedure.\nMain advantages of stored procedures are :\n\nit's compiled\nit saves network exchanges (as computations operate on the server side)\n\nImagine you have a logic enough complex not to be operated by UPDATE and you'd like to operate e.g. in Python, it requires :\n\nselect rows -> network traffic [server -> client]\nupdate rows -> quite slow : Python is interpreted, maybe even slower if you use an ORM like SQLAlchemy (objets have to be created in memory)\nsend back updated rows -> network traffic [client -> server]\n\nImagine the same example implemented with a stored procedure.\nIn that kind of example chances are that the stored procedure really shines.\nIn your example you don't have any logic but just insert rows.\nIt's an I/O bound use case. No or little gain to have a compiled procedure.\nAnd you'll have as many network exchanges as if you used INSERT.\nWhatever way rows have to be sent to the server.\nAlso no gain in the network traffic amount.\nIn your example maybe bulk insert could help reaching best performances.\n", "MySQL is unlike many other engines in that ordinary statements are reasonably fast -- and wrapping in a Store Proc may add more overhead than it saves.\nYou want faster? Batch the rows into a single INSERT. (Or, if there is a huge list, break it into clumps of 1000.)\nSee executemany().\n" ]
[ 1, 1 ]
[]
[]
[ "bulkinsert", "mysql", "python", "stored_functions" ]
stackoverflow_0074457656_bulkinsert_mysql_python_stored_functions.txt
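For reference, the batching the second answer describes, sketched with mysql-connector-python and the names from the question (the clump size of 1000 is the answer's suggestion, not a hard limit). mysql-connector can rewrite a simple multi-row INSERT handled by executemany into a single statement, so most of the saving comes from fewer round trips:

command = "INSERT INTO inspect_call (task_id, cc_number) VALUES (%s, %s)"
data = [(task_id, f"{cc}{i}") for i in range(rows)]

batch = 1000
for start in range(0, len(data), batch):
    cursor.executemany(command, data[start:start + batch])
cnx.commit()                  # a single commit at the end, not one per row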
Q: 'tuple' object has no attribute 'strip' I want to receive the text australia and trim all the extra characters. I am trying to achive this using strip, but getting an error result = [('australia',)] result = result[0].strip('(') File "./prog.py", line 2, in <module> AttributeError: 'tuple' object has no attribute 'strip' What is the right way to achieve the same. Thank you. A: The ( is not part of a string value; you have a 1-element tuple as the first list item, and you need to index it: result = result[0][0]. >>> result = [('australia',)] >>> result[0] ('australia',) >>> result[0][0] 'australia'
'tuple' object has no attribute 'strip'
I want to receive the text australia and trim all the extra characters. I am trying to achive this using strip, but getting an error result = [('australia',)] result = result[0].strip('(') File "./prog.py", line 2, in <module> AttributeError: 'tuple' object has no attribute 'strip' What is the right way to achieve the same. Thank you.
[ "The ( is not part of a string value; you have a 1-element tuple as the first list item, and you need to index it: result = result[0][0].\n>>> result = [('australia',)]\n>>> result[0]\n('australia',)\n>>> result[0][0]\n'australia'\n\n" ]
[ 1 ]
[]
[]
[ "python", "strip", "tuples" ]
stackoverflow_0074466795_python_strip_tuples.txt
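If the query returns more than one row, the same indexing extends naturally to a comprehension; a small sketch with made-up values:

result = [('australia',), ('brazil',), ('canada',)]
names = [row[0] for row in result]        # ['australia', 'brazil', 'canada']
# equivalently, unpack each 1-tuple directly:
names = [value for (value,) in result]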
Q: Pandas Specific Pivot of DataFrame I am trying to reshape a given DataFrame ts type value1 value2 0 1 foo 10 16 1 1 bar 11 17 2 2 foo 12 18 3 2 bar 13 19 4 3 foo 14 20 5 3 bar 15 21 into the following shape: foo bar value1 value2 value1 value2 1 10 16 11 17 2 12 18 13 19 3 14 20 15 21 I know how to do this programatically, this is not the point. But I for the life of me can't find the proper pandas method to do this efficiently. Help would be greatly appreciated. A: here is one way to do it df.set_index(['type','ts']).unstack(0).swaplevel(axis=1).sort_index(axis=1) type bar foo value1 value2 value1 value2 ts 1 11 17 10 16 2 13 19 12 18 3 15 21 14 20
Pandas Specific Pivot of DataFrame
I am trying to reshape a given DataFrame ts type value1 value2 0 1 foo 10 16 1 1 bar 11 17 2 2 foo 12 18 3 2 bar 13 19 4 3 foo 14 20 5 3 bar 15 21 into the following shape: foo bar value1 value2 value1 value2 1 10 16 11 17 2 12 18 13 19 3 14 20 15 21 I know how to do this programatically, this is not the point. But I for the life of me can't find the proper pandas method to do this efficiently. Help would be greatly appreciated.
[ "here is one way to do it\ndf.set_index(['type','ts']).unstack(0).swaplevel(axis=1).sort_index(axis=1)\n\ntype bar foo\n value1 value2 value1 value2\nts \n1 11 17 10 16\n2 13 19 12 18\n3 15 21 14 20\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074466758_pandas_python.txt
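The same reshape can also be written with pivot instead of set_index/unstack; the swaplevel and sort_index steps are unchanged. A sketch, assuming a pandas version in which pivot accepts a list of value columns:

out = (df.pivot(index="ts", columns="type", values=["value1", "value2"])
         .swaplevel(axis=1)
         .sort_index(axis=1))
print(out)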
Q: Why does my Django form data not appear in the database? I try to develop a simple input form to save a deposit for a fishing vessel. The vessel and the net are tables in the database. There is no error when the form is submitted but there is nothing happening in the background. I use a PostgreSQL database with PgAdmin for insights.I am a little bit stuck since it's my first time working with Django. I tried adding the dep_id field into the form but it did not change anything. [forms.py] from django import forms from django.forms import ModelForm from myapp.models import Deposit class UploadForm(ModelForm): dep_date = forms.DateField() harbour = forms.CharField() vessel = forms.ModelChoiceField(queryset=Vessel.objects.all()) net = forms.ModelChoiceField(queryset=Net.objects.all()) amount = forms.DecimalField() class Meta: model = Deposit fields = ['dep_date', 'harbour', 'vessel', 'net', 'amount'] [models.py] from django.db import models class Vessel(models.Model): VID = models.IntegerField(primary_key=True, default=None) vessel_name = models.CharField(max_length=100) flag = models.CharField(max_length=100) registration_number = models.CharField(max_length=100) WIN = models.CharField(max_length=100) IRCS = models.CharField(max_length=100) vessel_type = models.CharField(max_length=250) fishing_methods = models.CharField(max_length=255) length = models.DecimalField(default=0, max_digits=5, decimal_places=2) auth_period_from = models.CharField(max_length=100) auth_period_to = models.CharField(max_length=100) class Net(models.Model): net_id = models.IntegerField(primary_key=True, default = None) prod_date = models.DateField() weight = models.DecimalField(default=0, max_digits=6, decimal_places=2) material = models.CharField(max_length=100) fishing_type = models.CharField(max_length=100, default=None) class Deposit(models.Model): dep_id = models.BigAutoField(primary_key=True, default=None) dep_date = models.DateField() harbour = models.CharField(max_length=100) vessel = models.ForeignKey(Vessel, to_field='VID', on_delete=models.CASCADE) net = models.ForeignKey(Net, to_field='net_id', on_delete=models.CASCADE) amount = models.DecimalField(default=0, max_digits=5, decimal_places=2) [views.py] from django.shortcuts import render, redirect from .models import Vessel from .forms import UploadForm def put_deposit(request): if request.POST: form = UploadForm(request.POST) print(request) if form.is_valid(): form.save() redirect(index) return render(request, 'upload.html', {'form' : UploadForm}) [upload.html] <p> Upload </p> <form method="POST" action="{% url 'put_deposit' %}" enctype="multipart/form-data"> {% csrf_token %} {{form}} <button> Submit </button> </form> Maybe I have any kind of dependency wrong or is it a problem with a key? A: This is more of a troubleshooting suggestion, but hard to show in a comment. Your form might not be validating for some reason or another - add this to see if there are errors: def put_deposit(request): if request.POST: form = UploadForm(request.POST) print(request) if form.is_valid(): form.save() else: print(form.errors) # add these two lines redirect(index) return render(request, 'upload.html', {'form' : UploadForm}) A: You're using a ModelForm, but then adding fields which are already in your model to the form as if it were a regular form. You can add extra fields to your ModelForm by doing so, but since you are adding the same fields, perhaps that is why it is not validating. 
Suggestion: Try (1) What @Milo has already suggested to print out form.errors to see if the form is indeed valid or not, (2) Change your form to: class UploadForm(ModelForm): class Meta: model = Deposit fields = ['dep_date', 'harbour', 'vessel', 'net', 'amount'] (3) Also, although I do not think this is what is causing the error, your view has some issues. Try: def put_deposit(request): form = UploadForm(request.POST or None) print(request.POST) if form.is_valid(): form.save() redirect('index') else: print(form.errors) return render(request, 'upload.html', {'form' : form})
Why does my Django form data not appear in the database?
I try to develop a simple input form to save a deposit for a fishing vessel. The vessel and the net are tables in the database. There is no error when the form is submitted but there is nothing happening in the background. I use a PostgreSQL database with PgAdmin for insights.I am a little bit stuck since it's my first time working with Django. I tried adding the dep_id field into the form but it did not change anything. [forms.py] from django import forms from django.forms import ModelForm from myapp.models import Deposit class UploadForm(ModelForm): dep_date = forms.DateField() harbour = forms.CharField() vessel = forms.ModelChoiceField(queryset=Vessel.objects.all()) net = forms.ModelChoiceField(queryset=Net.objects.all()) amount = forms.DecimalField() class Meta: model = Deposit fields = ['dep_date', 'harbour', 'vessel', 'net', 'amount'] [models.py] from django.db import models class Vessel(models.Model): VID = models.IntegerField(primary_key=True, default=None) vessel_name = models.CharField(max_length=100) flag = models.CharField(max_length=100) registration_number = models.CharField(max_length=100) WIN = models.CharField(max_length=100) IRCS = models.CharField(max_length=100) vessel_type = models.CharField(max_length=250) fishing_methods = models.CharField(max_length=255) length = models.DecimalField(default=0, max_digits=5, decimal_places=2) auth_period_from = models.CharField(max_length=100) auth_period_to = models.CharField(max_length=100) class Net(models.Model): net_id = models.IntegerField(primary_key=True, default = None) prod_date = models.DateField() weight = models.DecimalField(default=0, max_digits=6, decimal_places=2) material = models.CharField(max_length=100) fishing_type = models.CharField(max_length=100, default=None) class Deposit(models.Model): dep_id = models.BigAutoField(primary_key=True, default=None) dep_date = models.DateField() harbour = models.CharField(max_length=100) vessel = models.ForeignKey(Vessel, to_field='VID', on_delete=models.CASCADE) net = models.ForeignKey(Net, to_field='net_id', on_delete=models.CASCADE) amount = models.DecimalField(default=0, max_digits=5, decimal_places=2) [views.py] from django.shortcuts import render, redirect from .models import Vessel from .forms import UploadForm def put_deposit(request): if request.POST: form = UploadForm(request.POST) print(request) if form.is_valid(): form.save() redirect(index) return render(request, 'upload.html', {'form' : UploadForm}) [upload.html] <p> Upload </p> <form method="POST" action="{% url 'put_deposit' %}" enctype="multipart/form-data"> {% csrf_token %} {{form}} <button> Submit </button> </form> Maybe I have any kind of dependency wrong or is it a problem with a key?
[ "This is more of a troubleshooting suggestion, but hard to show in a comment. Your form might not be validating for some reason or another - add this to see if there are errors:\ndef put_deposit(request):\n if request.POST: \n form = UploadForm(request.POST)\n print(request)\n if form.is_valid():\n form.save()\n else:\n print(form.errors) # add these two lines\n redirect(index)\n return render(request, 'upload.html', {'form' : UploadForm})\n\n", "You're using a ModelForm, but then adding fields which are already in your model to the form as if it were a regular form. You can add extra fields to your ModelForm by doing so, but since you are adding the same fields, perhaps that is why it is not validating.\nSuggestion: Try\n(1) What @Milo has already suggested to print out form.errors to see if the form is indeed valid or not,\n(2) Change your form to:\nclass UploadForm(ModelForm):\n\n class Meta:\n model = Deposit\n fields = ['dep_date', 'harbour', \n 'vessel', 'net', \n 'amount'] \n\n(3) Also, although I do not think this is what is causing the error, your view has some issues. Try:\ndef put_deposit(request):\n form = UploadForm(request.POST or None)\n print(request.POST)\n if form.is_valid():\n form.save()\n redirect('index')\n else:\n print(form.errors)\n return render(request, 'upload.html', {'form' : form})\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "forms", "postgresql", "python" ]
stackoverflow_0074466074_django_forms_postgresql_python.txt
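Two details in the question's view that the answers touch on but are easy to miss: redirect(...) does nothing unless its result is returned, and the POST check is safer done on request.method. A hedged sketch of the view with both applied; 'index' is assumed to be a named URL:

def put_deposit(request):
    if request.method == "POST":
        form = UploadForm(request.POST)
        if form.is_valid():
            form.save()
            return redirect("index")   # the redirect must be returned
        print(form.errors)             # shows why saving silently "does nothing"
    else:
        form = UploadForm()
    return render(request, "upload.html", {"form": form})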
Q: Placing key/value pairs from dict into .set() values in Tkinter In Tkinter, I'm trying to place key/value pairs from a dictionary called 'headers' inside the set() pairs in the set_values tuple below. Before this process I open a json file, deserialize the data into a dictionary called headers. This dictionary is for API headers in the Tkinter App. The set_value pairs are 5 pairs of entries for API header keys and values. So, if the dict headers only ends up being 3 key/value pairs, I don't want to use all 5 set_value's, I'd only want to use 3. Basically I'm thinking of a way to place the headers key/value pairs inside the .set() respected number of .set() pairs. See bottom for expected output. I have a dict() named headers: headers = {'First': variable1, 'Second': variable2, 'Third': variable3} And, on Tkinter, I have about 5 different pairs of Entries (below): I put these .sets() inside a tuple because I'm thinking it may be easier to iterate & set key/value pairs from headers into set_values below. Or a dict may work better. set_values = ( ( 1_key_entry.set(), 1_value_entry.set() ), ( 2_key_entry.set(), 2_value_entry.set() ), ( 3_key_entry.set(), 3_value_entry.set() ), ( 4_key_entry.set(), 4_value_entry.set() ), ( 5_key_entry.set(), 5_value_entry.set() ), ) Now, based on the length of headers, say 3, I only want those 3 key/value pairs from headers to be inserted inside the .set()'s in set_values. My desired output would be: (Notice the .set(key) and .set(value)): set_values = ( ( 1_key_entry.set('First'), 1_value_entry.set(variable1) ), ( 2_key_entry.set('Second'), 2_value_entry.set(variable2) ), ( 3_key_entry.set('Third'), 3_value_entry.set(variable3) ) ) Lengths: x = len(headers) # Length of dict headers y = set_values[:x] # set_values pairs needed based on length of headers. In my App, I have an easy work around but it is based off asserting k is equal to a user input like Content-Type, Authorization etc in the entry field. for k, v in json_file.items(): if k == 'Content-Type': 1_key_entry.set(k) 1_value_entry.set(v) elif k == 'Authorization': 2_key_entry.set(k) 2_value_entry.set(v) . . . However, what I need is that any k/v pair in the headers dictionary can be automatically set to a set() pair in set_values. A: I'm not 100% clear on what you're after, but if I understand correctly: you want to split up key/value pairs from a JSON dictionary into pairs of tkinter Entry widgets? If that's the case, then here is an example of how to do that in a loop: import tkinter as tk root = tk.Tk() # get your values however you need to, I'm just using numbers as placeholders headers = { 'First': 1, 'Second': 2, 'Third': 3, } # notice how Entry is only being instantiated twice in this loop... for i, (k, v) in enumerate(headers.items()): # column of Entry widgets w/ text from dict keys ent_k = tk.Entry(root) ent_k.insert(tk.END, k) ent_k.grid(i, 0) # column of Entry widgets w/ text from dict values ent_v = tk.Entry(root) ent_v.insert(tk.END, v) ent_v.grid(i, 1) root.mainloop() Does that help? Note that the tricky part of declaring widgets in a loop like this is that accessing the individual entries can't be done by name, so setting their values programmatically becomes difficult. You may have to resort to some kind of trickery involving root.winfo_children() to get a list of the widgets in the parent window...
Placing key/value pairs from dict into .set() values in Tkinter
In Tkinter, I'm trying to place key/value pairs from a dictionary called 'headers' inside the set() pairs in the set_values tuple below. Before this process I open a json file, deserialize the data into a dictionary called headers. This dictionary is for API headers in the Tkinter App. The set_value pairs are 5 pairs of entries for API header keys and values. So, if the dict headers only ends up being 3 key/value pairs, I don't want to use all 5 set_value's, I'd only want to use 3. Basically I'm thinking of a way to place the headers key/value pairs inside the .set() respected number of .set() pairs. See bottom for expected output. I have a dict() named headers: headers = {'First': variable1, 'Second': variable2, 'Third': variable3} And, on Tkinter, I have about 5 different pairs of Entries (below): I put these .sets() inside a tuple because I'm thinking it may be easier to iterate & set key/value pairs from headers into set_values below. Or a dict may work better. set_values = ( ( 1_key_entry.set(), 1_value_entry.set() ), ( 2_key_entry.set(), 2_value_entry.set() ), ( 3_key_entry.set(), 3_value_entry.set() ), ( 4_key_entry.set(), 4_value_entry.set() ), ( 5_key_entry.set(), 5_value_entry.set() ), ) Now, based on the length of headers, say 3, I only want those 3 key/value pairs from headers to be inserted inside the .set()'s in set_values. My desired output would be: (Notice the .set(key) and .set(value)): set_values = ( ( 1_key_entry.set('First'), 1_value_entry.set(variable1) ), ( 2_key_entry.set('Second'), 2_value_entry.set(variable2) ), ( 3_key_entry.set('Third'), 3_value_entry.set(variable3) ) ) Lengths: x = len(headers) # Length of dict headers y = set_values[:x] # set_values pairs needed based on length of headers. In my App, I have an easy work around but it is based off asserting k is equal to a user input like Content-Type, Authorization etc in the entry field. for k, v in json_file.items(): if k == 'Content-Type': 1_key_entry.set(k) 1_value_entry.set(v) elif k == 'Authorization': 2_key_entry.set(k) 2_value_entry.set(v) . . . However, what I need is that any k/v pair in the headers dictionary can be automatically set to a set() pair in set_values.
[ "I'm not 100% clear on what you're after, but if I understand correctly: you want to split up key/value pairs from a JSON dictionary into pairs of tkinter Entry widgets?\nIf that's the case, then here is an example of how to do that in a loop:\nimport tkinter as tk\n\n\nroot = tk.Tk()\n# get your values however you need to, I'm just using numbers as placeholders\nheaders = {\n 'First': 1,\n 'Second': 2,\n 'Third': 3,\n}\n\n# notice how Entry is only being instantiated twice in this loop...\nfor i, (k, v) in enumerate(headers.items()):\n # column of Entry widgets w/ text from dict keys\n ent_k = tk.Entry(root)\n ent_k.insert(tk.END, k)\n ent_k.grid(i, 0)\n # column of Entry widgets w/ text from dict values\n ent_v = tk.Entry(root)\n ent_v.insert(tk.END, v)\n ent_v.grid(i, 1)\n\nroot.mainloop()\n\nDoes that help?\nNote that the tricky part of declaring widgets in a loop like this is that accessing the individual entries can't be done by name, so setting their values programmatically becomes difficult. You may have to resort to some kind of trickery involving root.winfo_children() to get a list of the widgets in the parent window...\n" ]
[ 1 ]
[]
[]
[ "dictionary", "python", "python_3.x", "tkinter" ]
stackoverflow_0074465596_dictionary_python_python_3.x_tkinter.txt
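One caveat on the answer's loop: Entry.grid() takes row/column keyword arguments, so the positional grid(i, 0) call will raise a TypeError as written. Keeping the widget pairs in a list also addresses the closing note about not being able to access the entries by name. A sketch, with headers and root as in the answer:

entries = []                                   # (key_entry, value_entry) per header row
for i, (k, v) in enumerate(headers.items()):
    ent_k = tk.Entry(root)
    ent_k.insert(tk.END, k)
    ent_k.grid(row=i, column=0)                # keyword form of grid()
    ent_v = tk.Entry(root)
    ent_v.insert(tk.END, str(v))
    ent_v.grid(row=i, column=1)
    entries.append((ent_k, ent_v))

# later, read everything back into a dict:
headers_out = {ke.get(): ve.get() for ke, ve in entries}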
Q: psycopg2.OperationalError: FATAL: password authentication failed for user "" I am a fairly new to web developement. First I deployed a static website on my vps (Ubuntu 16.04) without problem and then I tried to add a blog app to it. It works well locally with PostgreSQL but I can't make it work on my server. It seems like it tries to connect to Postgres with my Unix user. Why would my server try to do that? I did create a database and a owner via the postgres user, matching the login information in settings.py, I was expecting psycopg2 to try to connect to the database using these login informations: Settings.py + python-decouple: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': config ('NAME'), 'USER': config ('USER'), 'PASSWORD': config ('PASSWORD'), 'HOST': 'localhost', 'PORT': '', } } This is the error message I get each time I try to ./manage.py migrate 'myportfolio' is my Unix user name, the database username is different: Traceback (most recent call last): File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection self.connect() File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 194, in connect self.connection = self.get_new_connection(conn_params) File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/postgresql/base.py", line 168, in get_new_connection connection = Database.connect(**conn_params) File "/home/myportfolio/lib/python3.5/site-packages/psycopg2/__init__.py", line 130, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError: FATAL: password authentication failed for user "myportfolio" FATAL: password authentication failed for user "myportfolio" The above exception was the direct cause of the following exception: Traceback (most recent call last): File "./manage.py", line 15, in <module> execute_from_command_line(sys.argv) File "/home/myportfolio/lib/python3.5/site-packages/django/core/management/__init__.py", line 371, in execute_from_command_line utility.execute() File "/home/myportfolio/lib/python3.5/site-packages/django/core/management/__init__.py", line 365, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/myportfolio/lib/python3.5/site-packages/django/core/management/base.py", line 288, in run_from_argv self.execute(*args, **cmd_options) File "/home/myportfolio/lib/python3.5/site-packages/django/core/management/base.py", line 335, in execute output = self.handle(*args, **options) File "/home/myportfolio/lib/python3.5/site-packages/django/core/management/commands/migrate.py", line 79, in handle executor = MigrationExecutor(connection, self.migration_progress_callback) File "/home/myportfolio/lib/python3.5/site-packages/django/db/migrations/executor.py", line 18, in __init__ self.loader = MigrationLoader(self.connection) File "/home/myportfolio/lib/python3.5/site-packages/django/db/migrations/loader.py", line 49, in __init__ self.build_graph() File "/home/myportfolio/lib/python3.5/site-packages/django/db/migrations/loader.py", line 206, in build_graph self.applied_migrations = recorder.applied_migrations() File "/home/myportfolio/lib/python3.5/site-packages/django/db/migrations/recorder.py", line 61, in applied_migrations if self.has_table(): File "/home/myportfolio/lib/python3.5/site-packages/django/db/migrations/recorder.py", line 44, in has_table return self.Migration._meta.db_table in 
self.connection.introspection.table_names(self.connection.cursor()) File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 255, in cursor return self._cursor() File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 232, in _cursor self.ensure_connection() File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection self.connect() File "/home/myportfolio/lib/python3.5/site-packages/django/db/utils.py", line 89, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection self.connect() File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 194, in connect self.connection = self.get_new_connection(conn_params) File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/postgresql/base.py", line 168, in get_new_connection connection = Database.connect(**conn_params) File "/home/myportfolio/lib/python3.5/site-packages/psycopg2/__init__.py", line 130, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) django.db.utils.OperationalError: FATAL: password authentication failed for user "myportfolio" FATAL: password authentication failed for user "myportfolio" I tried to: delete my django code, re install delete/purge postgres and reinstall modify pg_hba.conf local to trust At one point I did create a django superuser called 'myportfolio' as my unix user: could this have create a problem ? A: As per the error, it is clear that the failure is when your Application is trying to postgres and the important part to concentrate is Authentication. Do these steps to first understand and reproduce the issue. I assume it as a Linux Server and recommend these steps. Step 1: $ python3 >>>import psycopg2 >>>psycopg2.connect("dbname=postgres user=postgres host=localhost password=oracle port=5432") >>>connection object at 0x5f03d2c402d8; dsn: 'host=localhost port=5432 dbname=postgres user=postgres password=xxx', closed: 0 You should get such a message. This is a success message. When i use a wrong password, i get this error. >>>psycopg2.connect("dbname=postgres user=postgres host=localhost password=wrongpassword port=5432") >>>Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 130, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError: FATAL: password authentication failed for user "postgres" FATAL: password authentication failed for user "postgres" When there is no entry in pg_hba.conf file, i get the following error. >>> psycopg2.connect("dbname=postgres user=postgres host=localhost password=oracle port=5432 ") >>> Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 130, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError: FATAL: no pg_hba.conf entry for host "::1", user "postgres", database "postgres", SSL on FATAL: no pg_hba.conf entry for host "::1", user "postgres", database "postgres", SSL off So, the issue is with password. Check if your password contains any special characters or spaces. if your password has spaces or special characters, use double quotes as i used below. 
>>> psycopg2.connect(dbname="postgres", user="postgres", password="passwords with spaces", host="localhost", port ="5432") If all is good with the above steps and you got success messages, it is very clear that the issue is with your dsn. Print the values passed to these variables. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': config ('NAME'), 'USER': config ('USER'), 'PASSWORD': config ('PASSWORD'), 'HOST': 'localhost', 'PORT': '', } } Validate if all the values are being substituted appropriately. You may have the correct password for the user but the dsn is not picking the correct password for the user. See if you can print the dsn and validate if the connection string is perfectly being generated. You will get the fix there. A: So I was just stuck on this problem and I thought I'd save whoever comes across this post some time by posting the actual commands. This was done on my raspberry pi. sudo su - postgres postgres@raspberrypi:~$ psql postgres=# CREATE DATABASE websitenamehere postgres=# CREATE USER mywebsiteuser WITH PASSWORD 'Password'; postgres=# GRANT ALL PRIVILEGES ON DATABASE websitenamehere to mywebsiteuser; postgres=# \q Done, you have now created a user. A: What is setup as user in config ('USER'). Following the error: FATAL: password authentication failed for user "myportfolio" user is myportfolio, so you will need to create that user if it does not exist. A: I had something similar. My issue was that I did not set the environment variables correctly so it couldn't connect. Ensure that if you go to Edit Configurations, then Environment Variables, and put in your answers in that column. A: This problem might also occur if you have some special characters within your password that Postgres cannot cope with (unless you do some special encoding). A: For me, I had the wrong port. Additional characters. A: This solved for me: from sqlalchemy import create_engine connection_string_orig = "postgres://user_with_%34_in_the_string:pw@host:port/db" connection_string = connection_string_orig.replace("%", "%25") engine = create_engine(connection_string) print(engine.url) # should be identical to connection_string_orig engine.connect() from: https://www.appsloveworld.com/coding/python3x/7/flask-alchemy-psycopg2-operationalerror-fatal-password-authentication-fail
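A quick way to narrow this down, building on the first answer's advice to validate the DSN: test the exact credentials Django will use with psycopg2 directly, reading them through python-decouple just like settings.py does. This is only a sanity-check sketch; it assumes your .env defines NAME, USER and PASSWORD as in the settings above.
# Sanity check, run on the server: does a direct connection with the
# decouple-supplied values work at all?
import psycopg2
from decouple import config

try:
    conn = psycopg2.connect(
        dbname=config('NAME'),
        user=config('USER'),
        password=config('PASSWORD'),
        host='localhost',
        port=5432,
    )
    print('Connected as', config('USER'))
    conn.close()
except psycopg2.OperationalError as exc:
    # If USER prints as your Unix account name here, decouple is probably
    # picking up the shell's USER environment variable instead of your .env
    # entry, which would explain the "myportfolio" in the error message.
    print('USER resolved to:', config('USER'))
    print('Connection failed:', exc)
If this script connects, the credentials are fine and the problem is in how Django builds its connection parameters; if it fails the same way, fix the role/password on the Postgres side first.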
psycopg2.OperationalError: FATAL: password authentication failed for user ""
I am a fairly new to web developement. First I deployed a static website on my vps (Ubuntu 16.04) without problem and then I tried to add a blog app to it. It works well locally with PostgreSQL but I can't make it work on my server. It seems like it tries to connect to Postgres with my Unix user. Why would my server try to do that? I did create a database and a owner via the postgres user, matching the login information in settings.py, I was expecting psycopg2 to try to connect to the database using these login informations: Settings.py + python-decouple: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': config ('NAME'), 'USER': config ('USER'), 'PASSWORD': config ('PASSWORD'), 'HOST': 'localhost', 'PORT': '', } } This is the error message I get each time I try to ./manage.py migrate 'myportfolio' is my Unix user name, the database username is different: Traceback (most recent call last): File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection self.connect() File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 194, in connect self.connection = self.get_new_connection(conn_params) File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/postgresql/base.py", line 168, in get_new_connection connection = Database.connect(**conn_params) File "/home/myportfolio/lib/python3.5/site-packages/psycopg2/__init__.py", line 130, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError: FATAL: password authentication failed for user "myportfolio" FATAL: password authentication failed for user "myportfolio" The above exception was the direct cause of the following exception: Traceback (most recent call last): File "./manage.py", line 15, in <module> execute_from_command_line(sys.argv) File "/home/myportfolio/lib/python3.5/site-packages/django/core/management/__init__.py", line 371, in execute_from_command_line utility.execute() File "/home/myportfolio/lib/python3.5/site-packages/django/core/management/__init__.py", line 365, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/myportfolio/lib/python3.5/site-packages/django/core/management/base.py", line 288, in run_from_argv self.execute(*args, **cmd_options) File "/home/myportfolio/lib/python3.5/site-packages/django/core/management/base.py", line 335, in execute output = self.handle(*args, **options) File "/home/myportfolio/lib/python3.5/site-packages/django/core/management/commands/migrate.py", line 79, in handle executor = MigrationExecutor(connection, self.migration_progress_callback) File "/home/myportfolio/lib/python3.5/site-packages/django/db/migrations/executor.py", line 18, in __init__ self.loader = MigrationLoader(self.connection) File "/home/myportfolio/lib/python3.5/site-packages/django/db/migrations/loader.py", line 49, in __init__ self.build_graph() File "/home/myportfolio/lib/python3.5/site-packages/django/db/migrations/loader.py", line 206, in build_graph self.applied_migrations = recorder.applied_migrations() File "/home/myportfolio/lib/python3.5/site-packages/django/db/migrations/recorder.py", line 61, in applied_migrations if self.has_table(): File "/home/myportfolio/lib/python3.5/site-packages/django/db/migrations/recorder.py", line 44, in has_table return self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()) File 
"/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 255, in cursor return self._cursor() File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 232, in _cursor self.ensure_connection() File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection self.connect() File "/home/myportfolio/lib/python3.5/site-packages/django/db/utils.py", line 89, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection self.connect() File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/base/base.py", line 194, in connect self.connection = self.get_new_connection(conn_params) File "/home/myportfolio/lib/python3.5/site-packages/django/db/backends/postgresql/base.py", line 168, in get_new_connection connection = Database.connect(**conn_params) File "/home/myportfolio/lib/python3.5/site-packages/psycopg2/__init__.py", line 130, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) django.db.utils.OperationalError: FATAL: password authentication failed for user "myportfolio" FATAL: password authentication failed for user "myportfolio" I tried to: delete my django code, re install delete/purge postgres and reinstall modify pg_hba.conf local to trust At one point I did create a django superuser called 'myportfolio' as my unix user: could this have create a problem ?
[ "As per the error, it is clear that the failure is when your Application is trying to postgres and the important part to concentrate is Authentication. \nDo these steps to first understand and reproduce the issue. \nI assume it as a Linux Server and recommend these steps. \nStep 1:\n$ python3\n>>>import psycopg2\n>>>psycopg2.connect(\"dbname=postgres user=postgres host=localhost password=oracle port=5432\")\n>>>connection object at 0x5f03d2c402d8; dsn: 'host=localhost port=5432 dbname=postgres user=postgres password=xxx', closed: 0\n\nYou should get such a message. This is a success message. \nWhen i use a wrong password, i get this error.\n>>>psycopg2.connect(\"dbname=postgres user=postgres host=localhost password=wrongpassword port=5432\")\n>>>Traceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nFile \"/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py\", line 130, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync)\npsycopg2.OperationalError: FATAL: password authentication failed for user \"postgres\"\nFATAL: password authentication failed for user \"postgres\"\n\nWhen there is no entry in pg_hba.conf file, i get the following error.\n>>> psycopg2.connect(\"dbname=postgres user=postgres host=localhost password=oracle port=5432 \")\n>>> Traceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nFile \"/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py\", line 130, in connect\nconn = _connect(dsn, connection_factory=connection_factory, **kwasync)\npsycopg2.OperationalError: FATAL: no pg_hba.conf entry for host \"::1\", user \"postgres\", database \"postgres\", SSL on\nFATAL: no pg_hba.conf entry for host \"::1\", user \"postgres\", database \"postgres\", SSL off\n\nSo, the issue is with password. Check if your password contains any special characters or spaces. if your password has spaces or special characters, use double quotes as i used below. \n>>> psycopg2.connect(dbname=\"postgres\", user=\"postgres\", password=\"passwords with spaces\", host=\"localhost\", port =\"5432\")\n\nIf all is good with the above steps and you got success messages, it is very clear that the issue is with your dsn. \nPrint the values passed to these variables. \nDATABASES = {\n'default': {\n 'ENGINE': 'django.db.backends.postgresql_psycopg2',\n 'NAME': config ('NAME'),\n 'USER': config ('USER'),\n 'PASSWORD': config ('PASSWORD'),\n 'HOST': 'localhost',\n 'PORT': '',\n}\n\n}\nValidate if all the values are being substituted appropriately. You may have the correct password for the user but the dsn is not picking the correct password for the user. See if you can print the dsn and validate if the connection string is perfectly being generated. You will get the fix there. \n", "So I was just stuck on this problem and I thought I'd save whoever comes across this post some time by posting the actual commands. This was done on my raspberry pi.\n\nsudo su - postgres\npostgres@raspberrypi:~$ psql\npostgres=# CREATE DATABASE websitenamehere\npostgres=# CREATE USER mywebsiteuser WITH PASSWORD 'Password';\npostgres=# GRANT ALL PRIVILEGES ON DATABASE websitenamehere to mywebsiteuser;\npostgres=# \\q\n\nDone, you have now created a user.\n", "What is setup as user in config ('USER'). Following the error:\n\nFATAL: password authentication failed for user \"myportfolio\"\n\nuser is myportfolio, so you will need to create that user if it does not exist.\n", "I had something similar. 
My issue was that I did not set the environment variables correctly so it couldn't connect. Ensure that if you go to Edit Configurations, then Environment Variables, and put in your answers in that column.\n", "This problem might also occur if you have some special characters within your password that Postgres cannot cope with (unless you do some special encoding). \n", "For me, I had the wrong port. Additional characters.\n", "This solved for me:\nfrom sqlalchemy import create_engine\nconnection_string_orig = \"postgres://user_with_%34_in_the_string:pw@host:port/db\"\nconnection_string = connection_string_orig.replace(\"%\", \"%25\")\nengine = create_engine(connection_string)\nprint(engine.url) # should be identical to connection_string_orig\nengine.connect()\nfrom:\nhttps://www.appsloveworld.com/coding/python3x/7/flask-alchemy-psycopg2-operationalerror-fatal-password-authentication-fail\n" ]
[ 11, 9, 3, 1, 0, 0, 0 ]
[ "Try something like this:\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),\n }\n}\n\n" ]
[ -1 ]
[ "django", "postgresql", "python", "ubuntu_16.04" ]
stackoverflow_0048999379_django_postgresql_python_ubuntu_16.04.txt
Q: How can I filter data from a dataframe to show data between several datetimes from a different dataframe? I want to filter df1 to only show data that is between the DatetimeStart and DatetimeEnd datetimes in df2. df1 Estimate datetimeUTC 0 24.870665 2022-05-15 06:05:00+00:00 1 28.534566 2022-05-15 06:10:00+00:00 2 24.412932 2022-05-15 06:15:00+00:00 3 39.325210 2022-05-15 06:20:00+00:00 4 146.334005 2022-05-15 06:25:00+00:00 ... ... ... 4286 1.604675 2022-07-24 05:35:00+00:00 4287 1.090453 2022-07-24 05:40:00+00:00 4288 0.747863 2022-07-24 05:45:00+00:00 4289 0.000000 2022-07-24 05:50:00+00:00 4290 0.000000 2022-07-24 05:55:00+00:00 df2 DatetimeStart DatetimeEnd meanKGH 0 2022-05-16 21:27:30.031000+00:00 2022-05-16 22:30:05.374000+00:00 2.558881 1 2022-05-17 14:05:41.241000+00:00 2022-05-17 17:19:46.208000+00:00 4.423160 2 2022-05-17 17:55:06.274000+00:00 2022-05-17 20:11:23.265000+00:00 4.435756 3 2022-05-17 20:40:24.169000+00:00 2022-05-17 22:46:46.491000+00:00 4.937177 4 2022-05-18 14:19:36.670000+00:00 2022-05-18 15:24:39.494000+00:00 1.490863 5 2022-05-18 15:34:29.384000+00:00 2022-05-18 16:39:24.150000+00:00 0.731882 6 2022-05-18 17:04:25.134000+00:00 2022-05-18 18:09:37.950000+00:00 3.623294 7 2022-05-18 18:49:55.826000+00:00 2022-05-18 19:52:34.110000+00:00 5.690513 8 2022-05-18 20:23:29.154000+00:00 2022-05-18 21:04:44.305000+00:00 11.824433 9 2022-05-18 21:44:16.175000+00:00 2022-05-18 22:44:41.218000+00:00 11.896398 10 2022-05-18 22:56:54.645000+00:00 2022-05-18 23:55:03.087000+00:00 4.003575 11 2022-05-19 14:15:19.518000+00:00 2022-05-19 18:24:34.936000+00:00 9.140599 12 2022-05-19 19:09:40.824000+00:00 2022-05-19 23:06:15.612000+00:00 9.136605 13 2022-05-20 13:28:52.073000+00:00 2022-05-20 15:31:54.219000+00:00 5.421379 14 2022-05-20 15:47:27.298000+00:00 2022-05-20 17:56:20.666000+00:00 1.422874 15 2022-07-18 14:27:59.238000+00:00 2022-07-18 16:59:48.325000+00:00 2.178103 16 2022-07-18 17:11:14.584000+00:00 2022-07-18 18:55:34.275000+00:00 2.964559 17 2022-07-18 19:23:23.860000+00:00 2022-07-18 21:23:59.641000+00:00 5.661950 18 2022-07-18 21:31:36.162000+00:00 2022-07-18 22:41:29.999000+00:00 8.059542 19 2022-07-19 13:18:58.930000+00:00 2022-07-19 15:00:55.187000+00:00 0.953863 20 2022-07-19 15:03:22.686000+00:00 2022-07-19 17:03:06.405000+00:00 11.836619 21 2022-07-20 13:44:33.822000+00:00 2022-07-20 15:59:30.456000+00:00 0.958181 22 2022-07-20 16:00:28.649000+00:00 2022-07-20 18:05:20.733000+00:00 5.560149 23 2022-07-20 18:06:02.896000+00:00 2022-07-20 20:00:05.697000+00:00 2.577347 24 2022-07-20 20:00:43.818000+00:00 2022-07-20 22:17:46.254000+00:00 14.638751 25 2022-07-21 13:57:41.194000+00:00 2022-07-21 16:01:36.047000+00:00 7.850944 26 2022-07-21 16:05:13.766000+00:00 2022-07-21 17:59:12.472000+00:00 0.977591 27 2022-07-21 18:00:02.641000+00:00 2022-07-21 20:09:59.584000+00:00 9.231221 28 2022-07-21 20:10:21.683000+00:00 2022-07-21 20:42:12.073000+00:00 17.146463 29 2022-07-21 20:44:47.577000+00:00 2022-07-21 22:25:56.725000+00:00 5.674103 30 2022-07-22 13:40:16.324000+00:00 2022-07-22 14:38:50.858000+00:00 16.757238 31 2022-07-22 14:41:54.427000+00:00 2022-07-22 15:46:33.143000+00:00 9.189459 32 2022-07-22 15:54:15.672000+00:00 2022-07-22 17:53:17.154000+00:00 3.150163 So far, I have tried this, but am only getting the df1 data for the last (id=32) date range in df2 for i in range(len(df2)): t1 = df2.loc[i, 'DatetimeStart'] t2 = df2.loc[i, 'DatetimeEnd'] data = df1.loc[(df1['datetimeUTC'] >= t1) & (df1['datetimeUTC'] <= t2)] A: Create new dataframe 
newdf=pd.DataFrame(data=None, columns=df1.columns) Then concatenate for i in range(len(df2)): newdf=pd.concat([newdf,(df1[df1['datetimeUTC'].between(df2['DatetimeStart'][i],df2['DatetimeEnd'][i])])],ignore_index=True)
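A slightly different sketch of the same idea, which avoids growing the DataFrame with repeated pd.concat calls inside the loop: collect one slice per date range and concatenate once at the end (same df1/df2 names as in the question).
import pandas as pd

frames = [
    df1[df1['datetimeUTC'].between(start, end)]
    for start, end in zip(df2['DatetimeStart'], df2['DatetimeEnd'])
]
newdf = pd.concat(frames, ignore_index=True)
Concatenating once is usually noticeably faster than concatenating inside the loop when df2 has many rows.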
How can I filter data from a dataframe to show data between several datetimes from a different dataframe?
I want to filter df1 to only show data that is between the DatetimeStart and DatetimeEnd datetimes in df2. df1 Estimate datetimeUTC 0 24.870665 2022-05-15 06:05:00+00:00 1 28.534566 2022-05-15 06:10:00+00:00 2 24.412932 2022-05-15 06:15:00+00:00 3 39.325210 2022-05-15 06:20:00+00:00 4 146.334005 2022-05-15 06:25:00+00:00 ... ... ... 4286 1.604675 2022-07-24 05:35:00+00:00 4287 1.090453 2022-07-24 05:40:00+00:00 4288 0.747863 2022-07-24 05:45:00+00:00 4289 0.000000 2022-07-24 05:50:00+00:00 4290 0.000000 2022-07-24 05:55:00+00:00 df2 DatetimeStart DatetimeEnd meanKGH 0 2022-05-16 21:27:30.031000+00:00 2022-05-16 22:30:05.374000+00:00 2.558881 1 2022-05-17 14:05:41.241000+00:00 2022-05-17 17:19:46.208000+00:00 4.423160 2 2022-05-17 17:55:06.274000+00:00 2022-05-17 20:11:23.265000+00:00 4.435756 3 2022-05-17 20:40:24.169000+00:00 2022-05-17 22:46:46.491000+00:00 4.937177 4 2022-05-18 14:19:36.670000+00:00 2022-05-18 15:24:39.494000+00:00 1.490863 5 2022-05-18 15:34:29.384000+00:00 2022-05-18 16:39:24.150000+00:00 0.731882 6 2022-05-18 17:04:25.134000+00:00 2022-05-18 18:09:37.950000+00:00 3.623294 7 2022-05-18 18:49:55.826000+00:00 2022-05-18 19:52:34.110000+00:00 5.690513 8 2022-05-18 20:23:29.154000+00:00 2022-05-18 21:04:44.305000+00:00 11.824433 9 2022-05-18 21:44:16.175000+00:00 2022-05-18 22:44:41.218000+00:00 11.896398 10 2022-05-18 22:56:54.645000+00:00 2022-05-18 23:55:03.087000+00:00 4.003575 11 2022-05-19 14:15:19.518000+00:00 2022-05-19 18:24:34.936000+00:00 9.140599 12 2022-05-19 19:09:40.824000+00:00 2022-05-19 23:06:15.612000+00:00 9.136605 13 2022-05-20 13:28:52.073000+00:00 2022-05-20 15:31:54.219000+00:00 5.421379 14 2022-05-20 15:47:27.298000+00:00 2022-05-20 17:56:20.666000+00:00 1.422874 15 2022-07-18 14:27:59.238000+00:00 2022-07-18 16:59:48.325000+00:00 2.178103 16 2022-07-18 17:11:14.584000+00:00 2022-07-18 18:55:34.275000+00:00 2.964559 17 2022-07-18 19:23:23.860000+00:00 2022-07-18 21:23:59.641000+00:00 5.661950 18 2022-07-18 21:31:36.162000+00:00 2022-07-18 22:41:29.999000+00:00 8.059542 19 2022-07-19 13:18:58.930000+00:00 2022-07-19 15:00:55.187000+00:00 0.953863 20 2022-07-19 15:03:22.686000+00:00 2022-07-19 17:03:06.405000+00:00 11.836619 21 2022-07-20 13:44:33.822000+00:00 2022-07-20 15:59:30.456000+00:00 0.958181 22 2022-07-20 16:00:28.649000+00:00 2022-07-20 18:05:20.733000+00:00 5.560149 23 2022-07-20 18:06:02.896000+00:00 2022-07-20 20:00:05.697000+00:00 2.577347 24 2022-07-20 20:00:43.818000+00:00 2022-07-20 22:17:46.254000+00:00 14.638751 25 2022-07-21 13:57:41.194000+00:00 2022-07-21 16:01:36.047000+00:00 7.850944 26 2022-07-21 16:05:13.766000+00:00 2022-07-21 17:59:12.472000+00:00 0.977591 27 2022-07-21 18:00:02.641000+00:00 2022-07-21 20:09:59.584000+00:00 9.231221 28 2022-07-21 20:10:21.683000+00:00 2022-07-21 20:42:12.073000+00:00 17.146463 29 2022-07-21 20:44:47.577000+00:00 2022-07-21 22:25:56.725000+00:00 5.674103 30 2022-07-22 13:40:16.324000+00:00 2022-07-22 14:38:50.858000+00:00 16.757238 31 2022-07-22 14:41:54.427000+00:00 2022-07-22 15:46:33.143000+00:00 9.189459 32 2022-07-22 15:54:15.672000+00:00 2022-07-22 17:53:17.154000+00:00 3.150163 So far, I have tried this, but am only getting the df1 data for the last (id=32) date range in df2 for i in range(len(df2)): t1 = df2.loc[i, 'DatetimeStart'] t2 = df2.loc[i, 'DatetimeEnd'] data = df1.loc[(df1['datetimeUTC'] >= t1) & (df1['datetimeUTC'] <= t2)]
[ "Create new dataframe\nnewdf=pd.DataFrame(data=None, columns=df1.columns)\n\nThen concatenate\nfor i in range(len(df2)):\n newdf=pd.concat([newdf,(df1[df1['datetimeUTC'].between(df2['DatetimeStart'][i],df2['DatetimeEnd'][i])])],ignore_index=True)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074466353_python.txt
Q: Django orm get latest for each group I am using Django 1.6 with Mysql. I have these models: class Student(models.Model): username = models.CharField(max_length=200, unique = True) class Score(models.Model): student = models.ForeignKey(Student) date = models.DateTimeField() score = models.IntegerField() I want to get the latest score record for each student. I have tried: Score.objects.values('student').annotate(latest_date=Max('date')) and: Score.objects.values('student__username').annotate(latest_date=Max('date')) as described Django ORM - Get the latest record for the group but it did not help. A: If your DB is postgres which supports distinct() on field you can try Score.objects.order_by('student__username', '-date').distinct('student__username') A: This should work on Django 1.2+ and MySQL: Score.objects.annotate( max_date=Max('student__score__date') ).filter( date=F('max_date') ) A: I believe this would give you the student and the data Score.objects.values('student').annotate(latest_date=Max('date')) If you want the full Score records, it seems you will have to use a raw SQL query: Filtering Django Query by the Record with the Maximum Column Value A: Some great answers already, but none of them mentions Window functions. The following example annotates all score objects with the latest score for the corresponding student: from django.db.models import F, Window from django.db.models.functions import FirstValue scores = Score.objects.annotate( latest_score=Window( expression=FirstValue('score'), partition_by=['student'], order_by=F('date').desc(), ) ) This results in the following SQL (using Sqlite backend): SELECT "score"."id", "score"."student_id", "score"."date", "score"."score", FIRST_VALUE("score"."score") OVER (PARTITION BY "score"."student_id" ORDER BY "score"."date" DESC) AS "latest_score" FROM "score" The required information is already there, but we can also reduce this queryset to a set of unique combinations of student_id and latest_score. For example, on PostgreSQL we can use distinct with field names, as in scores.distinct('student'). On other db backends we can do something like set(scores.values_list('student_id', 'latest_score')), although this evaluates the queryset. Unfortunately, at the time of writing, it is not yet possible to filter a windowed queryset.
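For completeness, on Django 1.11 and later (so not the 1.6 in the question) the same "latest per group" result can be expressed with Subquery/OuterRef, which also works on MySQL; this is just a sketch of that approach using the models above.
from django.db.models import OuterRef, Subquery

latest = Score.objects.filter(student=OuterRef('pk')).order_by('-date')

students = Student.objects.annotate(
    latest_score=Subquery(latest.values('score')[:1]),
    latest_date=Subquery(latest.values('date')[:1]),
)

for s in students:
    print(s.username, s.latest_date, s.latest_score)
Each student row gets exactly one annotated score/date pair, taken from their most recent Score record.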
Django orm get latest for each group
I am using Django 1.6 with Mysql. I have these models: class Student(models.Model): username = models.CharField(max_length=200, unique = True) class Score(models.Model): student = models.ForeignKey(Student) date = models.DateTimeField() score = models.IntegerField() I want to get the latest score record for each student. I have tried: Score.objects.values('student').annotate(latest_date=Max('date')) and: Score.objects.values('student__username').annotate(latest_date=Max('date')) as described Django ORM - Get the latest record for the group but it did not help.
[ "If your DB is postgres which supports distinct() on field you can try\nScore.objects.order_by('student__username', '-date').distinct('student__username')\n\n", "This should work on Django 1.2+ and MySQL:\nScore.objects.annotate(\n max_date=Max('student__score__date')\n).filter(\n date=F('max_date')\n)\n\n", "I believe this would give you the student and the data\nScore.objects.values('student').annotate(latest_date=Max('date'))\n\nIf you want the full Score records, it seems you will have to use a raw SQL query: Filtering Django Query by the Record with the Maximum Column Value\n", "Some great answers already, but none of them mentions Window functions.\nThe following example annotates all score objects with the latest score for the corresponding student:\nfrom django.db.models import F, Window\nfrom django.db.models.functions import FirstValue\n\nscores = Score.objects.annotate(\n latest_score=Window(\n expression=FirstValue('score'),\n partition_by=['student'],\n order_by=F('date').desc(),\n )\n)\n\nThis results in the following SQL (using Sqlite backend):\nSELECT \n \"score\".\"id\", \n \"score\".\"student_id\", \n \"score\".\"date\", \n \"score\".\"score\", \n FIRST_VALUE(\"score\".\"score\") \n OVER (PARTITION BY \"score\".\"student_id\" ORDER BY \"score\".\"date\" DESC) \n AS \"latest_score\" \nFROM \"score\"\n\nThe required information is already there, but we can also reduce this queryset to a set of unique combinations of student_id and latest_score.\nFor example, on PostgreSQL we can use distinct with field names, as in scores.distinct('student').\nOn other db backends we can do something like set(scores.values_list('student_id', 'latest_score')), although this evaluates the queryset.\nUnfortunately, at the time of writing, it is not yet possible to filter a windowed queryset.\n" ]
[ 82, 43, 7, 0 ]
[ "Here's an example using Greatest with a secondary annotate. I was facing and issue where annotate was returning duplicate records ( Examples ), but the last_message_time Greatest annotation was causing duplicates.\nqs = (\n Example.objects.filter(\n Q(xyz=xyz)\n )\n .exclude(\n Q(zzz=zzz)\n )\n # this annotation causes duplicate Examples in the qs\n # and distinct doesn't work, as expected\n # .distinct('id') \n .annotate(\n last_message_time=Greatest(\n \"comments__created\",\n \"files__owner_files__created\",\n )\n )\n # so this second annotation selects the Max value of the various Greatest\n .annotate(\n last_message_time=Max(\n \"last_message_time\"\n )\n )\n .order_by(\"-last_message_time\")\n )\n\n\nreference:\n\nhttps://docs.djangoproject.com/en/3.1/ref/models/database-functions/#greatest\nfrom django.db.models import Max\n\n" ]
[ -1 ]
[ "django", "django_orm", "django_queryset", "python" ]
stackoverflow_0019923877_django_django_orm_django_queryset_python.txt
Q: Read txt file including scientific numbers having D instead of E in python I have a txt file including 10 columns and want to read it as a dataframe. The problem is that the numbers are outputs of Fortran and have a weird notation like 9.677975573367686D+00 and cannot be converted to float. Thank you in advance. The following code did not work. data = np.loadtxt('data.txt', converters={0: lambda s: s.replace(b'D', b'E')}) float(val.replace('D', 'E')) A: If your code returned an error message it's best to post the whole message which can help others identify what was wrong. My example tsv, the blanks are tabs, '\t'. 123.456D78 23.455D+00 456.789 987.65D3 45D-4 78.9D-03 9.677975573367686D+00 609.54d+4 123.456 This code worked to read the above tsv. import numpy as np def translate( s ): s = s.replace( b'D', b'E' ) s = s.replace( b'd', b'E' ) return s conv = dict( zip( range( 3 ), [ translate ] * 3 )) # Apply translate to all three columns. print( conv ) # {0: <function translate at 0x7fc3bf938430>, 1: <function translate at 0x7fc3bf938430>, # 2: <function translate at 0x7fc3bf938430>} result = np.loadtxt( 'rtsv.tsv', delimiter = '\t', converters = conv ) result # array([[1.23456000e+80, 2.34550000e+01, 4.56789000e+02], # [9.87650000e+05, 4.50000000e-03, 7.89000000e-02], # [9.67797557e+00, 6.09540000e+06, 1.23456000e+02]])
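Since the goal is a DataFrame, a pandas-based sketch may be more direct than loadtxt; it assumes data.txt is whitespace-delimited with no header row (adjust sep/header if not).
import pandas as pd

# Read everything as strings, swap the Fortran D/d exponent marker for E,
# then cast to float.
df = pd.read_csv('data.txt', sep=r'\s+', header=None, dtype=str)
df = df.apply(
    lambda col: col.str.replace('D', 'E', regex=False)
                   .str.replace('d', 'E', regex=False)
                   .astype(float)
)
print(df.dtypes)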
Read txt file including scientific numbers having D instead of E in python
I have a txt file including 10 columns and want to read it as a dataframe. The problem is that the numbers are outputs of Fortran and have a weird notation like 9.677975573367686D+00 and cannot be converted to float. Thank you in advance. The following code did not work. data = np.loadtxt('data.txt', converters={0: lambda s: s.replace(b'D', b'E')}) float(val.replace('D', 'E'))
[ "If your code returned an error message it's best to post the whole message which can help others identify what was wrong.\nMy example tsv, the blanks are tabs, '\\t'.\n123.456D78 23.455D+00 456.789\n987.65D3 45D-4 78.9D-03\n9.677975573367686D+00 609.54d+4 123.456\n\nThis code worked to read the above tsv.\nimport numpy as np \n\ndef translate( s ): \n s = s.replace( b'D', b'E' ) \n s = s.replace( b'd', b'E' ) \n return s \n\nconv = dict( zip( range( 3 ), [ translate ] * 3 ))\n# Apply translate to all three columns.\n\nprint( conv ) \n# {0: <function translate at 0x7fc3bf938430>, 1: <function translate at 0x7fc3bf938430>, \n# 2: <function translate at 0x7fc3bf938430>}\n\nresult = np.loadtxt( 'rtsv.tsv', delimiter = '\\t', converters = conv )\n\nresult\n# array([[1.23456000e+80, 2.34550000e+01, 4.56789000e+02],\n# [9.87650000e+05, 4.50000000e-03, 7.89000000e-02],\n# [9.67797557e+00, 6.09540000e+06, 1.23456000e+02]])\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "pandas", "python", "python_3.x" ]
stackoverflow_0074454899_numpy_pandas_python_python_3.x.txt
Q: Failed to create a virtual environment on MacOS with M1 with PyCharm I just bought a new MacBook Pro with M1 Pro. I installed Python 3.11 and PyCharm as the IDE. I tried to create a new project using virtualenv but it continues to show an error (see below)... I tried using Python 3.10, I tried installing it from Homebrew, reinstalling it... nothing changes... Steps to Reproduce: Start a new project. Select VirtualEnv as Interpreter. Create. What happens: A: You are trying to create the virtualenv at /Users/test, to which (by default, and unless running as root) you don't have permissions. Try setting the Location field to your own home (somewhere under /Users/antonellobarbone/). A: Have you tried creating it via the command line? If that works, the problem is PyCharm. If not, the problem is either in your base installation or access rights.
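Following the second answer's suggestion to try it outside PyCharm, here is a small Python-only sketch that creates the environment under your own home directory (the first answer points out that /Users/test is not writable for you); the project path below is just a placeholder.
import venv
from pathlib import Path

target = Path.home() / "PycharmProjects" / "myproject" / "venv"  # hypothetical path
venv.create(target, with_pip=True)
print("Created:", target)
If this succeeds, point PyCharm at the newly created interpreter as an existing environment; if it fails too, the problem is the base Python install or permissions rather than PyCharm.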
Failed to create a virtual environment on MacOS with M1 with PyCharm
I just bought a new MacBook Pro with M1 Pro. I installed Python 3.11 and PyCharm as the IDE. I tried to create a new project using virtualenv but it continues to show an error (see below)... I tried using Python 3.10, I tried installing it from Homebrew, reinstalling it... nothing changes... Steps to Reproduce: Start a new project. Select VirtualEnv as Interpreter. Create. What happens:
[ "You are trying to create the virtualenv at /Users/test, to which (by default, and unless running as root) you don't have permissions. Try setting the Location field to your own home (somewhere under /Users/antonellobarbone/).\n", "Have you tried creating it via command line? If that works, the problem is Pycharm. If not, the problem is either in your base installation or access rights.\n" ]
[ 3, 0 ]
[]
[]
[ "apple_m1", "macos", "pycharm", "python", "virtualenv" ]
stackoverflow_0074466791_apple_m1_macos_pycharm_python_virtualenv.txt
Q: How to make this program to displayed the result correctly I'm just learning Python and I don't know how to make this program to display result in label that I want and when I click button again I want to the new result replaces the previous one I want to last class shows result in label or entry when i click 1st button and when i click it again the new result will replace previous. This program is not finished yet. I don't want to write all code when i have problem with first function of program. Once I deal with this problem, writing the rest of the code will not be difficult import tkinter as tk from tkinter import ttk class tkinterApp(tk.Tk): def __init__(self, *args, **kwargs): tk.Tk.__init__(self, *args, **kwargs) windowWidth = 300 windowHeight = 200 offsetLeft = int( (self.winfo_screenwidth() - windowWidth) / 2 ) offsetTop = int( (self.winfo_screenheight() - windowHeight) / 2 ) self.geometry('{}x{}+{}+{}'.format(windowWidth, windowHeight, offsetLeft, offsetTop)) self.title('Konwerter systemów liczbowych') self.minsize(300, 200) container = tk.Frame(self, relief="ridge", width=300, height=200) container.pack(expand = False) container.grid_rowconfigure(0, weight = 1) container.grid_columnconfigure(0, weight = 1) self.frames = {} for F in (StartPage, Decy, decBin): frame = F(container, self) self.frames[F] = frame frame.grid(row = 0, column = 0, sticky ="nsew") self.show_frame(StartPage) def show_frame(self, cont): frame = self.frames[cont] frame.tkraise() class StartPage(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) label = ttk.Label(self, text ="Wybierz system do którego należy Twoja liczba.") label.grid() button1 = ttk.Button(self, text ="Decymalny", command = lambda : controller.show_frame(Decy)) button1.grid(padx = 5, pady = 5) button2 = ttk.Button(self, text ="Binarny", command = lambda : controller.show_frame(Binar)) button2.grid(padx = 5, pady = 5) button3 = ttk.Button(self, text ="Oktalny", command = lambda : controller.show_frame(Oktal)) button3.grid(padx = 5, pady = 5) button4 = ttk.Button(self, text ="Heksadecymalny", command = lambda : controller.show_frame(Heksal)) button4.grid(padx = 5, pady = 5) class Decy(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) label = ttk.Label(self, text ="Wybierz system na jaki chcesz przekowertować") label.grid() label = ttk.Label(self, text ="swoją liczbę.") label.grid() button1 = ttk.Button(self, text ="Binarny", command = lambda : controller.show_frame(decBin)) button1.grid(padx = 5, pady = 5) button2 = ttk.Button(self, text ="Oktalny", command = lambda : controller.show_frame(decOkt)) button2.grid(padx = 5, pady = 5) button2 = ttk.Button(self, text ="Heksadecymalny", command = lambda : controller.show_frame(decHex)) button2.grid(padx = 5, pady = 5) button2 = ttk.Button(self, text ="Powrót", command = lambda : controller.show_frame(StartPage)) button2.grid(padx = 5, pady = 5) class decBin(tk.Frame): def clearText(self): self.entry1.confing(text='') def oblicz(): dec = wpis.get() dec = int(dec) i = 0 bnum = [] while dec!=0: rem = dec%2 bnum.insert(i, rem) i = i+1 dec = int(dec/2) i = i-1 def __init__(self, parent, controller): tk.Frame.__init__(self, parent) label = ttk.Label(self, text ="Wprowadź liczbę i zatwierdź.") label.grid() wpis = ttk.Entry(self) wpis.grid() button1 = ttk.Button(self, text="Konwertuj", command = oblicz) button1.grid(padx = 10, pady = 10) button2 = ttk.Button(self, text ="Powrót", command = lambda : controller.show_frame(StartPage)) 
button2.grid(padx = 10, pady = 10) app = tkinterApp() app.mainloop() A: If you want to update an existing Label widget, declare a tk.StringVar() to store the label text, then bind that to your Label's textvariable attribute. Then your Label will automatically update whenever you set() the StringVar. label_var = tk.StringVar(self, 'Default Value') # both of these args are optional label = ttk.Label(self, textvariable=label_var) # instantiate Label and bind the var To update the label: label_var.set('New String Value') When you grid()/pack()/place() your Label it will start with the text you gave the StringVar, if any.
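To make the pattern from the answer concrete, here is a minimal, self-contained sketch (independent of the multi-frame app) where each click replaces the previous result in the label, using the same decimal-to-binary conversion the decBin frame is aiming for.
import tkinter as tk
from tkinter import ttk

root = tk.Tk()

wpis = ttk.Entry(root)
wpis.grid(padx=10, pady=5)

result_var = tk.StringVar(root, "")              # holds the label text
ttk.Label(root, textvariable=result_var).grid(padx=10, pady=5)

def oblicz():
    try:
        dec = int(wpis.get())
        result_var.set(bin(dec)[2:])             # new result replaces the old one
    except ValueError:
        result_var.set("Podaj liczbę całkowitą")  # ask for a whole number

ttk.Button(root, text="Konwertuj", command=oblicz).grid(padx=10, pady=5)

root.mainloop()
Inside the existing decBin class the same idea applies: create the StringVar in __init__, keep it on self, and have oblicz() call self.result_var.set(...).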
How to make this program display the result correctly
I'm just learning Python and I don't know how to make this program to display result in label that I want and when I click button again I want to the new result replaces the previous one I want to last class shows result in label or entry when i click 1st button and when i click it again the new result will replace previous. This program is not finished yet. I don't want to write all code when i have problem with first function of program. Once I deal with this problem, writing the rest of the code will not be difficult import tkinter as tk from tkinter import ttk class tkinterApp(tk.Tk): def __init__(self, *args, **kwargs): tk.Tk.__init__(self, *args, **kwargs) windowWidth = 300 windowHeight = 200 offsetLeft = int( (self.winfo_screenwidth() - windowWidth) / 2 ) offsetTop = int( (self.winfo_screenheight() - windowHeight) / 2 ) self.geometry('{}x{}+{}+{}'.format(windowWidth, windowHeight, offsetLeft, offsetTop)) self.title('Konwerter systemów liczbowych') self.minsize(300, 200) container = tk.Frame(self, relief="ridge", width=300, height=200) container.pack(expand = False) container.grid_rowconfigure(0, weight = 1) container.grid_columnconfigure(0, weight = 1) self.frames = {} for F in (StartPage, Decy, decBin): frame = F(container, self) self.frames[F] = frame frame.grid(row = 0, column = 0, sticky ="nsew") self.show_frame(StartPage) def show_frame(self, cont): frame = self.frames[cont] frame.tkraise() class StartPage(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) label = ttk.Label(self, text ="Wybierz system do którego należy Twoja liczba.") label.grid() button1 = ttk.Button(self, text ="Decymalny", command = lambda : controller.show_frame(Decy)) button1.grid(padx = 5, pady = 5) button2 = ttk.Button(self, text ="Binarny", command = lambda : controller.show_frame(Binar)) button2.grid(padx = 5, pady = 5) button3 = ttk.Button(self, text ="Oktalny", command = lambda : controller.show_frame(Oktal)) button3.grid(padx = 5, pady = 5) button4 = ttk.Button(self, text ="Heksadecymalny", command = lambda : controller.show_frame(Heksal)) button4.grid(padx = 5, pady = 5) class Decy(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) label = ttk.Label(self, text ="Wybierz system na jaki chcesz przekowertować") label.grid() label = ttk.Label(self, text ="swoją liczbę.") label.grid() button1 = ttk.Button(self, text ="Binarny", command = lambda : controller.show_frame(decBin)) button1.grid(padx = 5, pady = 5) button2 = ttk.Button(self, text ="Oktalny", command = lambda : controller.show_frame(decOkt)) button2.grid(padx = 5, pady = 5) button2 = ttk.Button(self, text ="Heksadecymalny", command = lambda : controller.show_frame(decHex)) button2.grid(padx = 5, pady = 5) button2 = ttk.Button(self, text ="Powrót", command = lambda : controller.show_frame(StartPage)) button2.grid(padx = 5, pady = 5) class decBin(tk.Frame): def clearText(self): self.entry1.confing(text='') def oblicz(): dec = wpis.get() dec = int(dec) i = 0 bnum = [] while dec!=0: rem = dec%2 bnum.insert(i, rem) i = i+1 dec = int(dec/2) i = i-1 def __init__(self, parent, controller): tk.Frame.__init__(self, parent) label = ttk.Label(self, text ="Wprowadź liczbę i zatwierdź.") label.grid() wpis = ttk.Entry(self) wpis.grid() button1 = ttk.Button(self, text="Konwertuj", command = oblicz) button1.grid(padx = 10, pady = 10) button2 = ttk.Button(self, text ="Powrót", command = lambda : controller.show_frame(StartPage)) button2.grid(padx = 10, pady = 10) app = tkinterApp() app.mainloop()
[ "If you want to update an existing Label widget, declare a tk.StringVar() to store the label text, then bind that to your Label's textvariable attribute. Then your Label will automatically update whenever you set() the StringVar.\nlabel_var = tk.StringVar(self, 'Default Value') # both of these args are optional\nlabel = ttk.Label(self, textvariable=label_var) # instantiate Label and bind the var\n\nTo update the label:\nlabel_var.set('New String Value')\n\nWhen you grid()/pack()/place() your Label it will start with the text you gave the StringVar, if any.\n" ]
[ 0 ]
[]
[]
[ "label", "python", "tkinter" ]
stackoverflow_0074466647_label_python_tkinter.txt
Q: How to scrape reviews from chrome web store for a given extension? I am trying to use this python code to scrape chrome web store from lxml import html import requests url = 'https://chrome.google.com/webstore/detail/cookie-editor/hlkenndednhfkekhgcdicdfddnkalmdm' values = {'username': 'myemail@gmail.com', 'password': 'mypassword'} page = requests.get(url, data=values) print(page) tree = html.fromstring(page.content) review = tree.xpath('//div[@class="ba-Eb-ba"]/text()')[0] print(review) however, I am getting Bad request 400. Is it even possible to scrape chrome web store? A: The webpage's contents are loaded by JavaScript. So you have to apply an automation tool something like Selenium to grab the right data. Example: from selenium import webdriver import time from bs4 import BeautifulSoup from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By options = webdriver.ChromeOptions() options.add_experimental_option("detach", True) webdriver_service = Service("./chromedriver") #Your chromedriver path driver = webdriver.Chrome(service=webdriver_service,options=options) data = [] driver.get('https://chrome.google.com/webstore/detail/cookie-editor/hlkenndednhfkekhgcdicdfddnkalmdm') driver.maximize_window() time.sleep(3) driver.find_element(By.XPATH,'//*[@class="e-f-b-L" and contains(text(),"Review")]').click() time.sleep(1) soup = BeautifulSoup(driver.page_source,"html.parser") data =[] reviews = soup.select('div.ba-bc-Xb') for review in reviews: name = review.select_one('span[class="comment-thread-displayname"]').get_text(strip=True) comment = review.select_one('div[class="ba-Eb-ba"]').get_text(strip=True) data.append({ 'name': name, 'comment': comment }) print(data) Oputput: [{'name': 'PingPing But', 'comment': 'Love it..... so simple and easy to use !'}, {'name': 'Zhou Jeffrey', 'comment': "doesn't work anymore"}, {'name': 'eunice miralles', 'comment': 'same im trying to find a fix and in github they said it has a problem with permission but still not fixed'}, {'name': 'Jade Martinito', 'comment': 'me too'}, {'name': 'Bonafide Champ', 'comment': 'It works fine but it does this weird thing when I import cookies in incognito mode, the cookies still get imported in the main browser windows.'}, {'name': 'Arman Nawaz World', 'comment': 'Easy to use this extension. it is very user friendly and simple interface, while other looks little complicated\nReview by ArmanxNawaz'}, {'name': 'Bagong Pook Elementary School', 'comment': 'Easy to use! Very helpful'}, {'name': 'Whitelisted', 'comment': 'Works great for development and resetting website cookies without digging through your settings'}, {'name': 'Rehxn Ali', 'comment': 'Best!! Saved Alot of Money With This Extention'}, {'name': 'biniyam demeke', 'comment': 'Oh, Very Helpful'}, {'name': 'Pingu VFX', 'comment': 'Easy to use while scamming kids on their roblox accountes'}, {'name': 'Abstractedjuice09 Z', 'comment': 'how?'}, {'name': 'jd', 'comment': 'lol same'}, {'name': 'Arnells Designs', 'comment': 'good'}, {'name': 'David Galbraith', 'comment': 'How is this called a cookie "editor"?? Not working at all. When I open it, the extension shows cookies for the page that I\'m currently on. It should be able to show cookies from every site I\'ve visited. And if I type ANYTHING in the search, nothing comes up. Not google, not Facebook, not steam, not one site that I have visited or logged into show up in the search bar. There is something very, very wrong. 
yeah, I can delete ALL cookies, but CCleaner does that just fine.'}, {'name': 'df fes', 'comment': 'Maybe you dont know how to use it?'}, {'name': 'Galih Kamulyan', 'comment': 'LEGENDARY'}, {'name': 'Aniket Chaudhary', 'comment': 'Liked it. But after using it for sometime, it shows an "unknown error".'}, {'name': 'Anonymous', 'comment': "mine doesn't work for first time too , it always show unknown error"}, {'name': 'Ehsan Abtahee', 'comment': 'did u find a fix?'}, {'name': 'kashba', 'comment': 'if you find a fix.. do tell me'}, {'name': 'Nischay2004 Muller', 'comment': 'The best easy cookie editor for all , strongly recommended'}, {'name': 'ultra noob', 'comment': 'Super simple and easy to use.'}, {'name': 'विकास कालीरामना', 'comment': 'Loved it!'}, {'name': 'Zachary Bolt', 'comment': 'Clean, easy to use and actively updated. 5 Stars well earned.'}, {'name': 'TALHA JUBAYER', 'comment': "Love it .it's working"}, {'name': 'amrozain 2007', 'comment': 'good for hackers'}, {'name': 'Kazuko Masao', 'comment': 'Very good .. Very good .. Very good.'}, {'name': 'chase Brigette', 'comment': 'This extention seems to be the culprit that makes bing my default browser!!! The extension was good before I realized this -_-"'}, {'name': 'Digital Audio Directions', 'comment': 'This is a joke right? Only seems to list cookies of the site you are on and all in a chopped up list format. NO search function for existing stored cookies? Search by keyword, date, etc, Does not seem available.'}, {'name': 'Phantom V', 'comment': 'This seems outdated.'}, {'name': 'Anonymous ZN49', 'comment': 'Easy to use this extension. it is very user friendly and simple interface, while other looks little complicated.'}, {'name': 'YongYi Wu', 'comment': "Who don't love cookies?"}, {'name': 'hush', 'comment': 'was working fine, now im getting an import error'}]
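A possible refinement of the Selenium answer above, not a replacement: swap the fixed time.sleep calls for explicit waits and run Chrome headless. The XPath/CSS class names are the ones from the answer and may change if Google updates the Web Store markup.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)

driver.get("https://chrome.google.com/webstore/detail/cookie-editor/hlkenndednhfkekhgcdicdfddnkalmdm")

wait = WebDriverWait(driver, 15)
reviews_tab = wait.until(EC.element_to_be_clickable(
    (By.XPATH, '//*[@class="e-f-b-L" and contains(text(),"Review")]')
))
reviews_tab.click()
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "div.ba-bc-Xb")))

html = driver.page_source   # hand this to BeautifulSoup exactly as in the answer
driver.quit()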
How to scrape reviews from chrome web store for a given extension?
I am trying to use this python code to scrape chrome web store from lxml import html import requests url = 'https://chrome.google.com/webstore/detail/cookie-editor/hlkenndednhfkekhgcdicdfddnkalmdm' values = {'username': 'myemail@gmail.com', 'password': 'mypassword'} page = requests.get(url, data=values) print(page) tree = html.fromstring(page.content) review = tree.xpath('//div[@class="ba-Eb-ba"]/text()')[0] print(review) however, I am getting Bad request 400. Is it even possible to scrape chrome web store?
[ "The webpage's contents are loaded by JavaScript. So you have to apply an automation tool something like Selenium to grab the right data.\nExample:\nfrom selenium import webdriver\nimport time\nfrom bs4 import BeautifulSoup\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.common.by import By\n\noptions = webdriver.ChromeOptions()\noptions.add_experimental_option(\"detach\", True)\nwebdriver_service = Service(\"./chromedriver\") #Your chromedriver path\ndriver = webdriver.Chrome(service=webdriver_service,options=options)\n\ndata = []\ndriver.get('https://chrome.google.com/webstore/detail/cookie-editor/hlkenndednhfkekhgcdicdfddnkalmdm')\ndriver.maximize_window()\ntime.sleep(3)\n\ndriver.find_element(By.XPATH,'//*[@class=\"e-f-b-L\" and contains(text(),\"Review\")]').click()\ntime.sleep(1)\n\nsoup = BeautifulSoup(driver.page_source,\"html.parser\")\n\ndata =[]\nreviews = soup.select('div.ba-bc-Xb')\nfor review in reviews:\n name = review.select_one('span[class=\"comment-thread-displayname\"]').get_text(strip=True)\n comment = review.select_one('div[class=\"ba-Eb-ba\"]').get_text(strip=True)\n\n data.append({\n 'name': name,\n 'comment': comment\n })\n\nprint(data)\n\n \n\nOputput:\n[{'name': 'PingPing But', 'comment': 'Love it..... so simple and easy to use !'}, {'name': 'Zhou Jeffrey', 'comment': \"doesn't work anymore\"}, {'name': 'eunice miralles', 'comment': 'same im trying to find a fix and in github they said it has a problem with permission but still not fixed'}, {'name': 'Jade Martinito', 'comment': 'me too'}, {'name': 'Bonafide Champ', 'comment': 'It works fine but it does this weird thing when I import cookies in incognito mode, \nthe cookies still get imported in the main browser windows.'}, {'name': 'Arman Nawaz World', 'comment': 'Easy to use this extension. it is very user friendly and simple interface, while other looks little complicated\\nReview by ArmanxNawaz'}, {'name': 'Bagong Pook Elementary School', 'comment': 'Easy to use! Very helpful'}, {'name': 'Whitelisted', 'comment': 'Works great for development and resetting website cookies without digging through your settings'}, {'name': 'Rehxn Ali', 'comment': 'Best!! Saved Alot of Money With This Extention'}, {'name': 'biniyam demeke', 'comment': 'Oh, Very Helpful'}, {'name': 'Pingu VFX', 'comment': 'Easy to use while scamming kids on their roblox accountes'}, {'name': 'Abstractedjuice09 Z', 'comment': 'how?'}, {'name': 'jd', 'comment': 'lol same'}, {'name': 'Arnells Designs', 'comment': 'good'}, {'name': 'David Galbraith', 'comment': 'How is this called a cookie \"editor\"?? Not working at all. When I open it, the extension shows cookies for the page that I\\'m currently on. It should be able to show cookies from every site I\\'ve visited. And if I type ANYTHING in the search, nothing comes up. Not google, not Facebook, not steam, not one site that I have visited or logged into show up in the search bar. There is something very, very wrong. yeah, I can delete ALL cookies, but CCleaner does that just fine.'}, {'name': 'df fes', 'comment': 'Maybe you dont know how to use it?'}, {'name': 'Galih Kamulyan', 'comment': 'LEGENDARY'}, {'name': 'Aniket Chaudhary', \n'comment': 'Liked it. But after using it for sometime, it shows an \"unknown error\".'}, {'name': 'Anonymous', 'comment': \"mine doesn't work for first time too , it always show unknown error\"}, {'name': 'Ehsan Abtahee', 'comment': 'did u find a fix?'}, {'name': 'kashba', 'comment': 'if you find a fix.. 
do tell me'}, {'name': 'Nischay2004 Muller', 'comment': 'The best easy cookie editor for all , strongly recommended'}, {'name': 'ultra noob', 'comment': 'Super simple and easy to use.'}, {'name': 'विकास कालीरामना', 'comment': 'Loved it!'}, {'name': 'Zachary Bolt', 'comment': 'Clean, easy to use and actively updated. 5 Stars well earned.'}, {'name': 'TALHA JUBAYER', 'comment': \"Love it .it's \nworking\"}, {'name': 'amrozain 2007', 'comment': 'good for hackers'}, {'name': 'Kazuko Masao', 'comment': 'Very good \n.. Very good .. Very good.'}, {'name': 'chase Brigette', 'comment': 'This extention seems to be the culprit that makes bing my default browser!!! The extension was good before I realized this -_-\"'}, {'name': 'Digital Audio Directions', 'comment': 'This is a joke right? Only seems to list cookies of the site you are on and all in a chopped up list format. NO search function for existing stored cookies? Search by keyword, date, etc, Does not seem available.'}, {'name': 'Phantom V', 'comment': 'This seems outdated.'}, {'name': 'Anonymous ZN49', 'comment': 'Easy to use this extension. it is very user friendly and simple interface, while other looks little complicated.'}, {'name': 'YongYi Wu', 'comment': \"Who don't love cookies?\"}, {'name': 'hush', 'comment': 'was working fine, now im getting an import error'}]\n\n" ]
[ 1 ]
[]
[]
[ "python", "web_scraping" ]
stackoverflow_0074466480_python_web_scraping.txt
Q: How to use dict.get() with multidimensional dict? I have a multidimensional dict, and I'd like to be able to retrieve a value by a key:key pair, and return 'NA' if the first key doesn't exist. All of the sub-dicts have the same keys. d = { 'a': {'j':1,'k':2}, 'b': {'j':2,'k':3}, 'd': {'j':1,'k':3} } I know I can use d.get('c','NA') to get the sub-dict if it exists and return 'NA' otherwise, but I really only need one value from the sub-dict. I'd like to do something like d.get('c['j']','NA') if that existed. Right now I'm just checking to see if the top-level key exists and then assigning the sub-value to a variable if it exists or 'NA' if not. However, I'm doing this about 500k times and also retrieving/generating other information about each top-level key from elsewhere, and I'm trying to speed this up a little bit. A: How about d.get('a', {'j': 'NA'})['j'] ? If not all subdicts have a j key, then d.get('a', {}).get('j', 'NA')   To cut down on identical objects created, you can devise something like class DefaultNASubdict(dict): class NADict(object): def __getitem__(self, k): return 'NA' NA = NADict() def __missing__(self, k): return self.NA nadict = DefaultNASubdict({ 'a': {'j':1,'k':2}, 'b': {'j':2,'k':3}, 'd': {'j':1,'k':3} }) print nadict['a']['j'] # 1 print nadict['b']['j'] # 2 print nadict['c']['j'] # NA   Same idea using defaultdict: import collections class NADict(object): def __getitem__(self, k): return 'NA' @staticmethod def instance(): return NADict._instance NADict._instance = NADict() nadict = collections.defaultdict(NADict.instance, { 'a': {'j':1,'k':2}, 'b': {'j':2,'k':3}, 'd': {'j':1,'k':3} }) A: Here's a simple and efficient way to do it with ordinary dictionaries, nested an arbitrary number of levels. The example code works in both Python 2 and 3. from __future__ import print_function try: from functools import reduce except ImportError: # Assume it's built-in (Python 2.x) pass def chained_get(dct, *keys): SENTRY = object() def getter(level, key): return 'NA' if level is SENTRY else level.get(key, SENTRY) return reduce(getter, keys, dct) d = {'a': {'j': 1, 'k': 2}, 'b': {'j': 2, 'k': 3}, 'd': {'j': 1, 'k': 3}, } print(chained_get(d, 'a', 'j')) # 1 print(chained_get(d, 'b', 'k')) # 3 print(chained_get(d, 'k', 'j')) # NA It could also be done recursively: # Recursive version. def chained_get(dct, *keys): SENTRY = object() def getter(level, keys): return (level if keys[0] is SENTRY else 'NA' if level is SENTRY else getter(level.get(keys[0], SENTRY), keys[1:])) return getter(dct, keys+(SENTRY,)) Although this way of doing it isn't quite as efficient as the first. A: Another way to get multidimensional dict example ( use get method twice) d.get('a', {}).get('j') A: Rather than a hierarchy of nested dict objects, you could use one dictionary whose keys are a tuple representing a path through the hierarchy. In [34]: d2 = {(x,y):d[x][y] for x in d for y in d[x]} In [35]: d2 Out[35]: {('a', 'j'): 1, ('a', 'k'): 2, ('b', 'j'): 2, ('b', 'k'): 3, ('d', 'j'): 1, ('d', 'k'): 3} In [36]: timeit [d[x][y] for x,y in d2.keys()] 100000 loops, best of 3: 2.37 us per loop In [37]: timeit [d2[x] for x in d2.keys()] 100000 loops, best of 3: 2.03 us per loop Accessing this way looks like it's about 15% faster. 
You can still use the get method with a default value: In [38]: d2.get(('c','j'),'NA') Out[38]: 'NA' A: For a functional approach very similar to martineau's answer, I've gone with the following: def chained_get(dictionary: dict, *args, default: Any = None) -> Any: """ Get a value nested in a dictionary by its nested path. """ value_path = list(args) dict_chain = dictionary while value_path: try: dict_chain = dict_chain.get(value_path.pop(0)) except AttributeError: return default return dict_chain It's a slightly simpler implementation but is still recursive and optionally allows a default value. The usage is identical to martineau's answer: from typing import Any def chained_get(dictionary: dict, *args, default: Any = None) -> Any: """ Get a value nested in a dictionary by its nested path. """ value_path = list(args) dict_chain = dictionary while value_path: try: dict_chain = dict_chain.get(value_path.pop(0)) except AttributeError: return default return dict_chain def main() -> None: dct = { "a": {"j": 1, "k": 2}, "b": {"j": 2, "k": 3}, "d": {"j": 1, "k": 3}, } print(chained_get(dct, "a", "j")) # 1 print(chained_get(dct, "b", "k")) # 3 print(chained_get(dct, "k", "j")) # None print(chained_get(dct, "k", "j", default="NA")) # NA if __name__ == "__main__": main()
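One more small sketch aimed at the 500k-lookup concern in the question: since every sub-dict has the same keys, the fallback row can be built once and reused, instead of constructing a new default dict on every .get() call.
d = {
    'a': {'j': 1, 'k': 2},
    'b': {'j': 2, 'k': 3},
    'd': {'j': 1, 'k': 3},
}
NA_ROW = {'j': 'NA', 'k': 'NA'}   # created once, shared by all misses

print(d.get('a', NA_ROW)['j'])    # 1
print(d.get('c', NA_ROW)['j'])    # NA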
How to use dict.get() with multidimensional dict?
I have a multidimensional dict, and I'd like to be able to retrieve a value by a key:key pair, and return 'NA' if the first key doesn't exist. All of the sub-dicts have the same keys. d = { 'a': {'j':1,'k':2}, 'b': {'j':2,'k':3}, 'd': {'j':1,'k':3} } I know I can use d.get('c','NA') to get the sub-dict if it exists and return 'NA' otherwise, but I really only need one value from the sub-dict. I'd like to do something like d.get('c['j']','NA') if that existed. Right now I'm just checking to see if the top-level key exists and then assigning the sub-value to a variable if it exists or 'NA' if not. However, I'm doing this about 500k times and also retrieving/generating other information about each top-level key from elsewhere, and I'm trying to speed this up a little bit.
[ "How about\nd.get('a', {'j': 'NA'})['j']\n\n?\nIf not all subdicts have a j key, then\nd.get('a', {}).get('j', 'NA')\n\n \nTo cut down on identical objects created, you can devise something like\nclass DefaultNASubdict(dict):\n class NADict(object):\n def __getitem__(self, k):\n return 'NA'\n\n NA = NADict()\n\n def __missing__(self, k):\n return self.NA\n\nnadict = DefaultNASubdict({\n 'a': {'j':1,'k':2},\n 'b': {'j':2,'k':3},\n 'd': {'j':1,'k':3}\n })\n\nprint nadict['a']['j'] # 1\nprint nadict['b']['j'] # 2\nprint nadict['c']['j'] # NA\n\n \nSame idea using defaultdict:\nimport collections\n\nclass NADict(object):\n def __getitem__(self, k):\n return 'NA'\n\n @staticmethod\n def instance():\n return NADict._instance\n\nNADict._instance = NADict()\n\n\nnadict = collections.defaultdict(NADict.instance, {\n 'a': {'j':1,'k':2},\n 'b': {'j':2,'k':3},\n 'd': {'j':1,'k':3}\n })\n\n", "Here's a simple and efficient way to do it with ordinary dictionaries, nested an arbitrary number of levels. The example code works in both Python 2 and 3.\nfrom __future__ import print_function\ntry:\n from functools import reduce\nexcept ImportError: # Assume it's built-in (Python 2.x)\n pass\n\ndef chained_get(dct, *keys):\n SENTRY = object()\n def getter(level, key):\n return 'NA' if level is SENTRY else level.get(key, SENTRY)\n return reduce(getter, keys, dct)\n\n\nd = {'a': {'j': 1, 'k': 2},\n 'b': {'j': 2, 'k': 3},\n 'd': {'j': 1, 'k': 3},\n }\n\nprint(chained_get(d, 'a', 'j')) # 1\nprint(chained_get(d, 'b', 'k')) # 3\nprint(chained_get(d, 'k', 'j')) # NA\n\nIt could also be done recursively:\n# Recursive version.\n\ndef chained_get(dct, *keys):\n SENTRY = object()\n def getter(level, keys):\n return (level if keys[0] is SENTRY else\n 'NA' if level is SENTRY else\n getter(level.get(keys[0], SENTRY), keys[1:]))\n return getter(dct, keys+(SENTRY,))\n\nAlthough this way of doing it isn't quite as efficient as the first.\n", "Another way to get multidimensional dict example ( use get method twice)\nd.get('a', {}).get('j')\n\n", "Rather than a hierarchy of nested dict objects, you could use one dictionary whose keys are a tuple representing a path through the hierarchy.\nIn [34]: d2 = {(x,y):d[x][y] for x in d for y in d[x]}\n\nIn [35]: d2\nOut[35]:\n{('a', 'j'): 1,\n ('a', 'k'): 2,\n ('b', 'j'): 2,\n ('b', 'k'): 3,\n ('d', 'j'): 1,\n ('d', 'k'): 3}\n\nIn [36]: timeit [d[x][y] for x,y in d2.keys()]\n100000 loops, best of 3: 2.37 us per loop\n\nIn [37]: timeit [d2[x] for x in d2.keys()]\n100000 loops, best of 3: 2.03 us per loop\n\nAccessing this way looks like it's about 15% faster. 
You can still use the get method with a default value:\nIn [38]: d2.get(('c','j'),'NA')\nOut[38]: 'NA'\n\n", "For a functional approach very similar to martineau's answer, I've gone with the following:\ndef chained_get(dictionary: dict, *args, default: Any = None) -> Any:\n \"\"\"\n Get a value nested in a dictionary by its nested path.\n \"\"\"\n value_path = list(args)\n dict_chain = dictionary\n while value_path:\n try:\n dict_chain = dict_chain.get(value_path.pop(0))\n except AttributeError:\n return default\n\n return dict_chain\n\nIt's a slightly simpler implementation but is still recursive and optionally allows a default value.\nThe usage is identical to martineau's answer:\nfrom typing import Any\n\n\ndef chained_get(dictionary: dict, *args, default: Any = None) -> Any:\n \"\"\"\n Get a value nested in a dictionary by its nested path.\n \"\"\"\n value_path = list(args)\n dict_chain = dictionary\n while value_path:\n try:\n dict_chain = dict_chain.get(value_path.pop(0))\n except AttributeError:\n return default\n\n return dict_chain\n\n\ndef main() -> None:\n dct = {\n \"a\": {\"j\": 1, \"k\": 2},\n \"b\": {\"j\": 2, \"k\": 3},\n \"d\": {\"j\": 1, \"k\": 3},\n }\n\n print(chained_get(dct, \"a\", \"j\")) # 1\n print(chained_get(dct, \"b\", \"k\")) # 3\n print(chained_get(dct, \"k\", \"j\")) # None\n print(chained_get(dct, \"k\", \"j\", default=\"NA\")) # NA\n\n\nif __name__ == \"__main__\":\n main()\n\n" ]
[ 37, 5, 5, 3, 1 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0016003408_dictionary_python.txt
Q: Pandas Multiply 2D by 1D Dataframe Looking for an elegant way to multiply a 2D dataframe by a 1D series where the indices and column names align df1 = Index A B 1 1 5 2 2 6 3 3 7 4 4 8 df2 = Coef A 10 B 100 Something like... df3 = df1.mul(df2) To get : Index A B 1 10 500 2 20 600 3 30 700 4 40 800 A: There is no such thing as 1D DataFrame, you need to slice as Series to have 1D, then multiply (by default on axis=1): df3 = df1.mul(df2['Coef']) Output: A B 1 10 500 2 20 600 3 30 700 4 40 800 If Index is a column: df3 = df1.mul(df2['Coef']).combine_first(df1)[df1.columns] Output: Index A B 0 1.0 10.0 500.0 1 2.0 20.0 600.0 2 3.0 30.0 700.0 3 4.0 40.0 800.0
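To make the accepted one-liner copy-pasteable, here is a self-contained sketch with the question's data; the construction of df1 and df2 below is illustrative, only the .mul call comes from the answer: import pandas as pd
df1 = pd.DataFrame({"A": [1, 2, 3, 4], "B": [5, 6, 7, 8]}, index=[1, 2, 3, 4])
df2 = pd.DataFrame({"Coef": [10, 100]}, index=["A", "B"])
# Column-aligned multiply: the Series index ("A", "B") matches df1's columns
df3 = df1.mul(df2["Coef"], axis=1)
print(df3)  # A: 10,20,30,40  B: 500,600,700,800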
Pandas Multiply 2D by 1D Dataframe
Looking for an elegant way to multiply a 2D dataframe by a 1D series where the indices and column names align df1 = Index A B 1 1 5 2 2 6 3 3 7 4 4 8 df2 = Coef A 10 B 100 Something like... df3 = df1.mul(df2) To get : Index A B 1 10 500 2 20 600 3 30 700 4 40 800
[ "There is no such thing as 1D DataFrame, you need to slice as Series to have 1D, then multiply (by default on axis=1):\ndf3 = df1.mul(df2['Coef'])\n\nOutput:\n A B\n1 10 500\n2 20 600\n3 30 700\n4 40 800\n\nIf Index is a column:\ndf3 = df1.mul(df2['Coef']).combine_first(df1)[df1.columns]\n\nOutput:\n Index A B\n0 1.0 10.0 500.0\n1 2.0 20.0 600.0\n2 3.0 30.0 700.0\n3 4.0 40.0 800.0\n\n" ]
[ 5 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074466909_dataframe_pandas_python.txt
Q: SUMIF equivalent with unique date ranges in Python (Summing if date falls within various date ranges for variable creation) I am looking to create variables that sum based on date ranges unique to different features / categories to automate a current Excel task in Python. It is like a SUMIF in Excel but unique date ranges for different variables. I`ll try to recreate a similar situation as I cannot share the exact data. At the moment, I have a sales dataframe with sales per week by area like so: Week Area Sales 08/02/2019 London 200 08/02/2019 Scotland 150 15/02/2019 London 100 15/02/2019 Scotland 120 22/02/2019 London 50 22/02/2019 Scotland 20 I want to incorporate whether the date falls within sales periods for products, so say I have another dataframe like this: Product Sale Start Week Sale End Week Boots 08/02/2019 15/02/2019 Accessories 15/02/2019 22/02/2019 I want to create something that sums if the dates fall within those specified for each product. For example, for Boots below, sum Sales if the weeks in Sales fall within the Sales Periods date range: Area Boots Accessories London 300 150 Scotland 270 140 I`ve tried groupby and a pivot table but I am not sure how to incorporate the sales dates filters into it. At the moment, the sales period dataframe and the sales dataframe are separate. This is what I have for the pivot code which is almost there: test = pd.pivot_table(df,index=['Area','Week'],columns=sales_period_df['Product'],values=['Sales'],aggfunc=np.sum) But this doesnt include filtering for the sales periods and I`m not sure how to incorporate this. Would appreciate your advice, thanks in advance! A: # DF: sales (top DF in question) # DF2: sales period (second DF in question) # format the date into datetime df['Week'] = pd.to_datetime(df['Week'], dayfirst=True) df2[['Sale Start Week','Sale End Week']]=df2[['Sale Start Week','Sale End Week']].apply(pd.to_datetime, dayfirst=True) df2 # merge using merge_asof df3=pd.merge_asof( df.sort_values('Week'), df2.sort_values('Sale Start Week'), left_on = 'Week', right_on='Sale Start Week') # including only when week falls within end week df3=df3.loc[df3['Week'] <= df3['Sale End Week']] df3 # cross tab for resultset out= (pd.crosstab(index=df3['Area'], columns=df3['Product'], values=df3['Sales'], aggfunc='sum') .reset_index() .rename_axis(columns=None)) out Area Accessories Boots 0 London 150 200 1 Scotland 140 150 A: Due to overlapping periods, we can't use the classic pivoting in this case (unless we duplicate overlapping sales records for each period, wich seems too much). So we have to create this table manually. To start, let's prepare some data to work with: import pandas as pd from io import StringIO data = '''Week,Area,Sales 08/02/2019,London,200 08/02/2019,Scotland,150 15/02/2019,London,100 15/02/2019,Scotland,120 22/02/2019,London,50 22/02/2019,Scotland,20''' df = pd.read_csv(StringIO(data), index_col=0, parse_dates=True, dayfirst=True).sort_index() data = '''Product,Sale Start Week,Sale End Week Boots,08/02/2019,15/02/2019 Accessories,15/02/2019,22/02/2019 Something,08/02/2019,22/02/2019''' sales_period_df = pd.read_csv(StringIO(data), index_col=0, parse_dates=[1, 2], dayfirst=True) The structure of df and sales_period_df is slightly modified so that Week and Product are now indexes. 
Next, we prepare the output frame and supportive data: from pandas import IndexSlice as idx # create slices from sales_period_df # which can be used to locate data in df periods = sales_period_df.agg(lambda row: idx[row['Sale Start Week']:row['Sale End Week']], axis=1) # separate sales by area sales_by_area = df.groupby('Area')['Sales'] # create the output DataFrame with unique areas as indexes # and products as columns output = pd.DataFrame(index=df['Area'].unique(), columns=sales_period_df.index) To fill in the data, we can use either apply or agg like this: for product in output.columns: output[product] = sales_by_area.agg(lambda sales: sales.loc[periods[product]].sum()) Let's assemble the code: import pandas as pd from pandas import IndexSlice as idx from io import StringIO data = '''Week,Area,Sales 08/02/2019,London,200 08/02/2019,Scotland,150 15/02/2019,London,100 15/02/2019,Scotland,120 22/02/2019,London,50 22/02/2019,Scotland,20''' df = pd.read_csv(StringIO(data), index_col=0, parse_dates=True, dayfirst=True).sort_index() data = '''Product,Sale Start Week,Sale End Week Boots,08/02/2019,15/02/2019 Accessories,15/02/2019,22/02/2019 Something,08/02/2019,22/02/2019''' sales_period_df = pd.read_csv(StringIO(data), index_col=0, parse_dates=[1, 2], dayfirst=True) periods = sales_period_df.agg(lambda row: idx[row['Sale Start Week']:row['Sale End Week']], axis=1) output = pd.DataFrame(index=df['Area'].unique(), columns=sales_period_df.index) sales_by_area = df.groupby('Area')['Sales'] for product in output.columns: output[product] = sales_by_area.agg(lambda sales: sales.loc[periods[product]].sum()) print(output) Output: Product Boots Accessories Something London 300 150 350 Scotland 270 140 290 A: One option is to compute the non-equi join with conditional_join to get matches, and finally groupby and sum: # pip install pyjanitor import pandas as pd import janitor (dates .conditional_join( products, ('Week', 'Sale Start Week', '>='), ('Week', 'Sale End Week', '<='), # for larger data, numba may offer # better performance use_numba = False, df_columns=['Area','Sales'], right_columns='Product') .pivot_table( index='Area', columns='Product', values='Sales', aggfunc='sum') ) Product Accessories Boots Area London 150 300 Scotland 140 270
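As one further hedged sketch with no extra dependencies: a plain-pandas cross join of the sales table with the period table, filtered on the sale window, reproduces the totals expected in the question; all column names follow the question, and a cross join can be memory-heavy on large data: import pandas as pd
sales = pd.DataFrame({
    "Week": pd.to_datetime(["08/02/2019", "08/02/2019", "15/02/2019",
                            "15/02/2019", "22/02/2019", "22/02/2019"], dayfirst=True),
    "Area": ["London", "Scotland", "London", "Scotland", "London", "Scotland"],
    "Sales": [200, 150, 100, 120, 50, 20],
})
periods = pd.DataFrame({
    "Product": ["Boots", "Accessories"],
    "Sale Start Week": pd.to_datetime(["08/02/2019", "15/02/2019"], dayfirst=True),
    "Sale End Week": pd.to_datetime(["15/02/2019", "22/02/2019"], dayfirst=True),
})
# Pair every sale with every sale period, then keep weeks inside the window
joined = sales.merge(periods, how="cross")
joined = joined[joined["Week"].between(joined["Sale Start Week"], joined["Sale End Week"])]
out = joined.pivot_table(index="Area", columns="Product", values="Sales", aggfunc="sum")
print(out)  # Boots: London 300, Scotland 270; Accessories: London 150, Scotland 140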
SUMIF equivalent with unique date ranges in Python (Summing if date falls within various date ranges for variable creation)
I am looking to create variables that sum based on date ranges unique to different features / categories to automate a current Excel task in Python. It is like a SUMIF in Excel but unique date ranges for different variables. I`ll try to recreate a similar situation as I cannot share the exact data. At the moment, I have a sales dataframe with sales per week by area like so: Week Area Sales 08/02/2019 London 200 08/02/2019 Scotland 150 15/02/2019 London 100 15/02/2019 Scotland 120 22/02/2019 London 50 22/02/2019 Scotland 20 I want to incorporate whether the date falls within sales periods for products, so say I have another dataframe like this: Product Sale Start Week Sale End Week Boots 08/02/2019 15/02/2019 Accessories 15/02/2019 22/02/2019 I want to create something that sums if the dates fall within those specified for each product. For example, for Boots below, sum Sales if the weeks in Sales fall within the Sales Periods date range: Area Boots Accessories London 300 150 Scotland 270 140 I`ve tried groupby and a pivot table but I am not sure how to incorporate the sales dates filters into it. At the moment, the sales period dataframe and the sales dataframe are separate. This is what I have for the pivot code which is almost there: test = pd.pivot_table(df,index=['Area','Week'],columns=sales_period_df['Product'],values=['Sales'],aggfunc=np.sum) But this doesnt include filtering for the sales periods and I`m not sure how to incorporate this. Would appreciate your advice, thanks in advance!
[ "# DF: sales (top DF in question)\n# DF2: sales period (second DF in question)\n\n# format the date into datetime\ndf['Week'] = pd.to_datetime(df['Week'], dayfirst=True)\ndf2[['Sale Start Week','Sale End Week']]=df2[['Sale Start Week','Sale End Week']].apply(pd.to_datetime, dayfirst=True)\ndf2\n\n# merge using merge_asof \ndf3=pd.merge_asof( df.sort_values('Week'),\n df2.sort_values('Sale Start Week'),\n left_on = 'Week',\n right_on='Sale Start Week')\n\n# including only when week falls within end week\ndf3=df3.loc[df3['Week'] <= df3['Sale End Week']]\ndf3\n\n# cross tab for resultset\nout= (pd.crosstab(index=df3['Area'], \n columns=df3['Product'], \n values=df3['Sales'], \n aggfunc='sum')\n .reset_index()\n .rename_axis(columns=None))\n\nout\n\n Area Accessories Boots\n0 London 150 200\n1 Scotland 140 150\n\n", "Due to overlapping periods, we can't use the classic pivoting in this case (unless we duplicate overlapping sales records for each period, wich seems too much). So we have to create this table manually.\nTo start, let's prepare some data to work with:\nimport pandas as pd\nfrom io import StringIO\n\ndata = '''Week,Area,Sales\n08/02/2019,London,200\n08/02/2019,Scotland,150\n15/02/2019,London,100\n15/02/2019,Scotland,120\n22/02/2019,London,50\n22/02/2019,Scotland,20'''\n\ndf = pd.read_csv(StringIO(data), index_col=0, parse_dates=True, dayfirst=True).sort_index()\n\ndata = '''Product,Sale Start Week,Sale End Week\nBoots,08/02/2019,15/02/2019\nAccessories,15/02/2019,22/02/2019\nSomething,08/02/2019,22/02/2019'''\n\nsales_period_df = pd.read_csv(StringIO(data), index_col=0, parse_dates=[1, 2], dayfirst=True)\n\nThe structure of df and sales_period_df is slightly modified so that Week and Product are now indexes.\nNext, we prepare the output frame and supportive data:\nimport pandas.IndexSlice as idx\n\n# create slices from sales_period_df\n# which can be used to locate data in df\nperiods = sales_period_df.agg(lambda row: idx[row['Sale Start Week']:row['Sale End Week']], axis=1)\n\n# separate sales by area\nsales_by_area = df.groupby('Area')['Sales']\n\n# create the output DataFrame with unique areas as indexes \n# and products as columns\noutput = pd.DataFrame(index=df['Area'].unique(), columns=sales_period_df.index)\n\nTo fill in the data, we can use eather apply or agg like this:\nfor product in output.columns:\n output[product] = sales_by_area.agg(lambda sales: sales.loc[periods[product]].sum())\n\nLet's assemble the code:\nimport pandas as pd\nfrom pandas import IndexSlice as idx\nfrom io import StringIO\n\ndata = '''Week,Area,Sales\n08/02/2019,London,200\n08/02/2019,Scotland,150\n15/02/2019,London,100\n15/02/2019,Scotland,120\n22/02/2019,London,50\n22/02/2019,Scotland,20'''\n\ndf = pd.read_csv(StringIO(data), index_col=0, parse_dates=True, dayfirst=True).sort_index()\n\ndata = '''Product,Sale Start Week,Sale End Week\nBoots,08/02/2019,15/02/2019\nAccessories,15/02/2019,22/02/2019\nSomething,08/02/2019,22/02/2019'''\n\nsales_period_df = pd.read_csv(StringIO(data), index_col=0, parse_dates=[1, 2], dayfirst=True)\n\nperiods = sales_period_df.agg(lambda row: idx[row['Sale Start Week']:row['Sale End Week']], axis=1)\noutput = pd.DataFrame(index=df['Area'].unique(), columns=sales_period_df.index)\nsales_by_area = df.groupby('Area')['Sales']\n\nfor product in output.columns:\n output[product] = sales_by_area.agg(lambda sales: sales.loc[periods[product]].sum())\n\nprint(output)\n\nOutput:\nProduct Boots Accessories Something\nLondon 300 150 350\nScotland 270 140 290\n\n", "One option 
is to compute the non-equi join with conditional_join to get matches, and finally groupby and sum:\n# pip install pyjanitor\nimport pandas as pd\nimport janitor\n\n(dates\n.conditional_join(\n products, \n ('Week', 'Sale Start Week', '>='), \n ('Week', 'Sale End Week', '<='),\n # for larger data, numba may offer \n # better performance\n use_numba = False, \n df_columns=['Area','Sales'], \n right_columns='Product')\n.pivot_table(\n index='Area',\n columns='Product',\n values='Sales',\n aggfunc='sum')\n)\nProduct Accessories Boots\nArea \nLondon 150 300\nScotland 140 270\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "data_manipulation", "pandas", "python", "sumifs" ]
stackoverflow_0074275891_data_manipulation_pandas_python_sumifs.txt
Q: How to improve the efficiency of this python function? I pass a list (called a) of characters. The characters could be either letters or emojis. Ex: a=['a','b','f','a','g', ''] Then I count the occurrences of each character in the list. This function returns just the most frequent character, with ties broken by alphabetical order. e.g. if the most frequent characters are 'b' and 'a', it returns 'a' def occorrenze(a): dix={} #dictionary for i in a: if i in dix: dix[i]+=1 else: dix[i]=1 #it finds the max value in the dict. maxvalues=max(dix.values()) #it creates a list with the keys having the max values maxkeys= [k for k,v in dix.items() if v == maxvalues] #it returns just one character, the first one in alphabetical order return sorted(maxkeys)[0] I don't know how to make this function faster.
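A small Counter-based sketch that also enforces the alphabetical tie-break asked for in the question in a single pass; the sample list is illustrative: from collections import Counter
def occorrenze(a):
    counts = Counter(a)
    # min over (-count, char): highest count wins, ties fall back to alphabetical order
    return min(counts, key=lambda ch: (-counts[ch], ch))
print(occorrenze(['a', 'b', 'f', 'a', 'g', 'b']))  # 'a' ('a' and 'b' tie, 'a' sorts first)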
How to improve the efficiency of this python function?
I pass a list (called a) of characters. The characters could be either letters or emojis. Ex: a=['a','b','f','a','g', ''] Then I count the occurrences of each character in the list. This function return just the most frequent character by alphabetical order. ex_n.2: if the most frequents characters are 'b' and 'a', it returns me 'a' def occorrenze(a): dix={} #dictionary for i in a: if i in dix: dix[i]+=1 else: dix[i]=1 #it finds me the max values in the dict. maxvalues=max(dix.values()) #it creates a list with the keys having the max values maxkeys= [k for k,v in dix.items() if v == maxvalues] #It return just one characters, the one first in alphabetical order return sorted(maxkeys)[0] I don't know how to make this function faster.
[ "As @TimRoberts commented, one can use collections.Counter. This object will count the number of times each item occurs. Then we can find the most common objects, and in the case of ties, we sort the values.\nIn the example below, b and d both occur three times. But using counter.most_common(n=1) would give us d because d came before b in the list of characters. Therefore, we find all values that have the max count and sort those values.\nNote that sorted will sort upper-case before lower-case.\nfrom collections import Counter\n\nstring = [\"d\", \"d\", \"d\", \"b\", \"b\", \"b\", \"a\"]\n\ncounts = Counter(string)\nmost_common = counts.most_common()[0] # ('d', 3)\nmost_common_count = most_common[1] # 3\ntied_values = [s for s, count in counts.items() if count == most_common_count] # ['b', 'd']\ntied_values = sorted(tied_values) # ['b', 'd']\n\n", "Try this\nit returns the most frequent character by alphabetical order here by using the function sorted() and the method count() of the list\ndef most_frequent(a):\n return sorted(a, key=a.count, reverse=True)[0]\n\nThat's fast because it uses the CPython implementation of the Timsort algorithm for sorting and the method count() is O(n)\nMore Details about Timsort here \nTimeComplexity Details here\n" ]
[ 1, 0 ]
[]
[]
[ "performance", "python" ]
stackoverflow_0074466852_performance_python.txt
Q: I can't get all the html data from beautiful soup I'm new to web scraping and I wanted to get just a piece of text from a Google page (basically the date of a soccer match), but the soup doesn't get all the HTML (I'm guessing because of requests) so I can't find it. I know it can be because Google uses JavaScript and I should use Selenium with chromedriver, but the thing is that I need the code to be usable on another computer, so I can't really use it. Here's the code: import pandas as pd from bs4 import BeautifulSoup import requests a = "Newcastle" url = "https://www.google.com/search?q=" + a + "+next+match" response = requests.get(url) soup = BeautifulSoup(response.text, "html.parser") print(soup) for a in soup.findAll('div'): print(soup.get_text()) What I want to find is "<span class="imso_mh__lr-dt-ds">17/12, 13:30</span>" it has "//*[@id="sports-app"]/div/div[3]/div[1]/div/div/div/div/div[1]/div/div[1]/div/span[2]" as xpath. Is it even possible?
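As a hedged follow-up to the answer above: once the page has been fetched with a User-Agent header, the span from the question can also be targeted by its class; soup here is the one built in the answer, and the class name comes from the question and may change on Google's side at any time: date_span = soup.select_one("span.imso_mh__lr-dt-ds")
if date_span is not None:
    print(date_span.get_text(strip=True))  # e.g. "17/12, 13:30"
else:
    print("date span not found - the markup may have changed")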
I can't get all the html data from beautiful soup
Im new in webscraping and i wanted to get just a text from a google page (basically the date of a soccer match), but the soup doesnt get all the html (im gessing beacause of request) so i can't find it, I know it can be beacause of google using javascript and I should use selenium chromedriver, but the thing is that I need the code to be usable on an another computer so it cant really use it.. heres the code : import pandas as pd from bs4 import BeautifulSoup import requests a = "Newcastle" url ="https://www.google.com/search?q=" + a + "+next+match" response = requests.get(url) soup = BeautifulSoup(response.text,"html.parser") print(soup) for a in soup.findAll('div') : print(soup.get_text()) what i wanna find is "<span class="imso_mh__lr-dt-ds">17/12, 13:30</span>" it has "//*[@id="sports-app"]/div/div[3]/div[1]/div/div/div/div/div[1]/div/div[1]/div/span[2]" as xpath Is it even possible ?
[ "Try to set User-Agent header when requesting the page from Google:\nimport requests\nfrom bs4 import BeautifulSoup\n\n\na = \"Newcastle\"\nurl = \"https://www.google.com/search?q=\" + a + \"+next+match&hl=en\"\n\nheaders = {\n \"User-Agent\": \"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0\"\n}\n\nsoup = BeautifulSoup(requests.get(url, headers=headers).content, \"html.parser\")\n\nnext_match = soup.select_one('[data-entityname=\"Match Header\"]')\nfor t in next_match.select('[aria-hidden=\"true\"]'):\n t.extract()\n\ntext = next_match.get_text(strip=True, separator=\" \")\nprint(text)\n\nPrints:\nClub Friendlies · Dec 17, 13:30 Newcastle VS Vallecano\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "html", "python", "python_requests", "web_scraping" ]
stackoverflow_0074466121_beautifulsoup_html_python_python_requests_web_scraping.txt
Q: How can I sum user input numbers whilst in a loop? I'm trying to get the sum of numbers that a user inputs in a loop, but I can't get it to include the first number input - here's what I have so far number = int(input("Enter a number")) total = 0 while number != -1: number = int(input("Enter another number")) total += number else: print(total) Probably something easy I'm missing but I'm stumped ( i am a beginner as you can tell) I have tried changing the name of the first variable number but I end up in a constant loop even when number = -1 A: number = int(input("Enter a number")) total = 0 while number != -1: total += number number = int(input("Enter another number")) else: print(total) Just move the summation one line above.
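A compact variant of the same fix using an assignment expression (Python 3.8+), shown only to illustrate the read-then-accumulate order; the prompt text is illustrative: total = 0
while (number := int(input("Enter a number (-1 to stop): "))) != -1:
    total += number
print(total)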
How can I sum user input numbers whilst in a loop?
I'm trying to get the sum of numbers that a user inputs in a loop, but I can't get it to include the first number input - here's what I have so far number = int(input("Enter a number")) total = 0 while number != -1: number = int(input("Enter another number")) total += number else: print(total) Probably something easy I'm missing but I'm stumped ( i am a beginner as you can tell) I have tried changing the name of the first variable number but I end up in a constant loop even when number = -1
[ "number = int(input(\"Enter a number\"))\ntotal = 0\nwhile number != -1:\n total += number\n number = int(input(\"Enter another number\"))\nelse:\n print(total)\n\nJust move the summation one line above.\n" ]
[ 0 ]
[]
[]
[ "loops", "python" ]
stackoverflow_0074467095_loops_python.txt
Q: String input inserted as individual characters Trying to insert users into a database using Python through input, whenever I type a name like "kai" it takes each individual letter like "k", "a", "i" instead: cr = db.cursor() cr.execute("CREATE TABLE if not exists users (user_id int,name text)") cr.execute("CREATE TABLE if not exists skills (name text,progress int, user_id int )") question = input("enter your name") for key, person in enumerate(question): cr.execute(f"insert into users(user_id, name) values({key + 1},'{person}')") db.commit() db.close() I want the entire input stored as one value like "kai", not "k", "a", "i". A: A string is an iterable. Read the doc on enumerate. When it iterates a string it processes one character at a time, thus the result you see.
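A minimal sketch of the fix the answer implies: insert the whole input string once instead of looping over its characters, and use a parameterised query rather than an f-string; it assumes the sqlite3 connection db and cursor cr from the question, and the user_id of 1 is illustrative: name = input("enter your name")
cr.execute("insert into users(user_id, name) values(?, ?)", (1, name))  # one row, whole name
db.commit()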
String input inserted as individual characters
Trying to insert users into database using Python through input, whenever I type a name like "kai" it takes each individual letter like "k", "a", "i" instead: cr = db.cursor() cr.execute("CREATE TABLE if not exists users (user_id int,name text)") cr.execute("CREATE TABLE if not exists skills (name text,progress int, user_id int )") question = input("enter your name") for key, person in enumerate(question): cr.execute(f"insert into users(user_id, name) values({key + 1},'{person}')") db.commit() db.close() I want entire input as one like "kai", not "k", "a", "i".
[ "A string is an interable. Read the doc on enumerate. When it iterates a string it processes one character at a time, thus the result you see.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "sql", "sqlite" ]
stackoverflow_0074463890_python_python_3.x_sql_sqlite.txt
Q: How to assign identical random IDs conditionally to "related" rows in pandas? New to Python I'm struggling with the problem to assign some random IDs to "related" rows where the relation is simply their proximity (within 14 days) in consecutive days grouped by user. In that example I chose uuidwithout any specific intention. It could be any other random IDs uniquely indentifying conceptually related rows. import pandas as pd import uuid import numpy as np Here is a dummy dataframe: dummy_df = pd.DataFrame({"transactionid": [1, 2, 3, 4, 5, 6, 7, 8], "user": ["michael", "michael", "michael", "tom", "tom", "tom", "tom", "tom"], "transactiontime": pd.to_datetime(["2022-01-01", "2022-01-02", "2022-01-03", "2022-09-01", "2022-09-13", "2022-10-17", "2022-10-20", "2022-11-17"])}) dummy_df.head(10) transactionid user transactiontime 0 1 michael 2022-01-01 1 2 michael 2022-01-02 2 3 michael 2022-01-03 3 4 tom 2022-09-01 4 5 tom 2022-09-13 5 6 tom 2022-10-17 6 7 tom 2022-10-20 7 8 tom 2022-11-17 Here I sort transactions and calculate their difference in days: dummy_df = dummy_df.assign( timediff = dummy_df .sort_values('transactiontime') .groupby(["user"])['transactiontime'].diff() / np.timedelta64(1, 'D') ).fillna(0) dummy_df.head(10) transactionid user transactiontime timediff 0 1 michael 2022-01-01 0.0 1 2 michael 2022-01-02 1.0 2 3 michael 2022-01-03 1.0 3 4 tom 2022-09-01 0.0 4 5 tom 2022-09-13 12.0 5 6 tom 2022-10-17 34.0 6 7 tom 2022-10-20 3.0 7 8 tom 2022-11-17 28.0 Here I create a new column with a random IDs for each related transaction - though it does not work as expected: dummy_df.assign(related_transaction = np.where((dummy_df.timediff >= 0) & (dummy_df.timediff < 15), uuid.uuid4(), dummy_df.transactionid)) transactionid user transactiontime timediff related_transaction 0 1 michael 2022-01-01 0.0 fd630f07-6564-4773-aff9-44ecb1e4211d 1 2 michael 2022-01-02 1.0 fd630f07-6564-4773-aff9-44ecb1e4211d 2 3 michael 2022-01-03 1.0 fd630f07-6564-4773-aff9-44ecb1e4211d 3 4 tom 2022-09-01 0.0 fd630f07-6564-4773-aff9-44ecb1e4211d 4 5 tom 2022-09-13 12.0 fd630f07-6564-4773-aff9-44ecb1e4211d 5 6 tom 2022-10-17 34.0 6 6 7 tom 2022-10-20 3.0 fd630f07-6564-4773-aff9-44ecb1e4211d 7 8 tom 2022-11-17 28.0 8 What I would expect is something like given that the user group difference between transactions is within 14 days: transactionid user transactiontime timediff related_transaction 0 1 michael 2022-01-01 0.0 ad2a8f23-05a5-49b1-b45e-cbf3f0ba23ff 1 2 michael 2022-01-02 1.0 ad2a8f23-05a5-49b1-b45e-cbf3f0ba23ff 2 3 michael 2022-01-03 1.0 ad2a8f23-05a5-49b1-b45e-cbf3f0ba23ff 3 4 tom 2022-09-01 0.0 b1da2251-7770-4756-8863-c82f90657542 4 5 tom 2022-09-13 12.0 b1da2251-7770-4756-8863-c82f90657542 5 6 tom 2022-10-17 34.0 485a8d97-80d1-4184-8fc8-99523f471527 6 7 tom 2022-10-20 3.0 485a8d97-80d1-4184-8fc8-99523f471527 7 8 tom 2022-11-17 28.0 8 A: Taking the idea from Luise, we start with an empty column for related_transaction. Then, we iterate through each row. For each date, we check if it is already part of a transaction. If so, continue. 
Otherwise, assign a new transaction to that date and all other dates within 15 following days for the same user: import datetime df = dummy_df df['related_transaction'] = None for i, row in dummy_df.iterrows(): if df.loc[i].related_transaction is not None: # We already assigned that row continue df.loc[ # Select where: (df.transactiontime <= row.transactiontime + datetime.timedelta(days=15)) & # Current row + 15 days (df.user == row.user) & # Same user (pd.isna(df.related_transaction)), # Don't overwrite anything already assigned 'related_transaction' # Set this column to: ] = uuid.uuid4() # Assign new UUID This gives the output: transactionid user transactiontime related_transaction 0 1 michael 2022-01-01 82d28e10-149b-481e-ba41-f5833662ba99 1 2 michael 2022-01-02 82d28e10-149b-481e-ba41-f5833662ba99 2 3 michael 2022-01-03 82d28e10-149b-481e-ba41-f5833662ba99 3 4 tom 2022-09-01 fa253663-8615-419a-afda-7646906024f0 4 5 tom 2022-09-13 fa253663-8615-419a-afda-7646906024f0 5 6 tom 2022-10-17 d6152d4b-1560-40e0-8589-bd8e3da363db 6 7 tom 2022-10-20 d6152d4b-1560-40e0-8589-bd8e3da363db 7 8 tom 2022-11-17 2a93d78d-b6f6-4f0f-bb09-1bc18361aa21 In your example, the dates are already sorted, that's an important assumption I'm making here! A: The mismatch between your code and your desired result is that uuid.uuid4() creates an ID a single time and assigns it to all the relevant rows defined by np.where(). Instead, you need to generate the IDs in a vectorized way. Try the following approach: df.loc[ROW_CONDITIONs, COLUMNS] = VECTORIZED_ID_GENERATOR which for your example would be dummy_df.loc[(dummy_df['timediff'] >= 0) & (dummy_df['timediff'] < 15), 'related_transaction'] = dummy_df.apply(lambda _: uuid.uuid4(), axis=1) Take into account that this only solves your question of how to assign random IDs using uuid conditionally in Pandas. It looks to me that you also need to generate the same ID for the same user and for transactions every 15 days. My advice for that would be to generate a dataframe where every row is a combination of two transactions and add a condition saying that the users from both transactions need to be the same.
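One more hedged, vectorised sketch that avoids the row loop: start a new group whenever the within-user gap exceeds 14 days, then map one UUID per group. It reuses dummy_df with the timediff column from the question; note it chains consecutive transactions rather than measuring every gap from the first transaction of a window, so it can differ from the loop above for long chains: import uuid
df = dummy_df.sort_values(["user", "transactiontime"]).copy()
new_group = df["timediff"] > 14                      # True marks the start of a new related group
group_id = new_group.groupby(df["user"]).cumsum()    # per-user running group number
keys = list(zip(df["user"], group_id))
id_map = {k: uuid.uuid4() for k in set(keys)}        # one UUID per (user, group)
df["related_transaction"] = [id_map[k] for k in keys]
print(df)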
How to assign identical random IDs conditionally to "related" rows in pandas?
New to Python I'm struggling with the problem to assign some random IDs to "related" rows where the relation is simply their proximity (within 14 days) in consecutive days grouped by user. In that example I chose uuidwithout any specific intention. It could be any other random IDs uniquely indentifying conceptually related rows. import pandas as pd import uuid import numpy as np Here is a dummy dataframe: dummy_df = pd.DataFrame({"transactionid": [1, 2, 3, 4, 5, 6, 7, 8], "user": ["michael", "michael", "michael", "tom", "tom", "tom", "tom", "tom"], "transactiontime": pd.to_datetime(["2022-01-01", "2022-01-02", "2022-01-03", "2022-09-01", "2022-09-13", "2022-10-17", "2022-10-20", "2022-11-17"])}) dummy_df.head(10) transactionid user transactiontime 0 1 michael 2022-01-01 1 2 michael 2022-01-02 2 3 michael 2022-01-03 3 4 tom 2022-09-01 4 5 tom 2022-09-13 5 6 tom 2022-10-17 6 7 tom 2022-10-20 7 8 tom 2022-11-17 Here I sort transactions and calculate their difference in days: dummy_df = dummy_df.assign( timediff = dummy_df .sort_values('transactiontime') .groupby(["user"])['transactiontime'].diff() / np.timedelta64(1, 'D') ).fillna(0) dummy_df.head(10) transactionid user transactiontime timediff 0 1 michael 2022-01-01 0.0 1 2 michael 2022-01-02 1.0 2 3 michael 2022-01-03 1.0 3 4 tom 2022-09-01 0.0 4 5 tom 2022-09-13 12.0 5 6 tom 2022-10-17 34.0 6 7 tom 2022-10-20 3.0 7 8 tom 2022-11-17 28.0 Here I create a new column with a random IDs for each related transaction - though it does not work as expected: dummy_df.assign(related_transaction = np.where((dummy_df.timediff >= 0) & (dummy_df.timediff < 15), uuid.uuid4(), dummy_df.transactionid)) transactionid user transactiontime timediff related_transaction 0 1 michael 2022-01-01 0.0 fd630f07-6564-4773-aff9-44ecb1e4211d 1 2 michael 2022-01-02 1.0 fd630f07-6564-4773-aff9-44ecb1e4211d 2 3 michael 2022-01-03 1.0 fd630f07-6564-4773-aff9-44ecb1e4211d 3 4 tom 2022-09-01 0.0 fd630f07-6564-4773-aff9-44ecb1e4211d 4 5 tom 2022-09-13 12.0 fd630f07-6564-4773-aff9-44ecb1e4211d 5 6 tom 2022-10-17 34.0 6 6 7 tom 2022-10-20 3.0 fd630f07-6564-4773-aff9-44ecb1e4211d 7 8 tom 2022-11-17 28.0 8 What I would expect is something like given that the user group difference between transactions is within 14 days: transactionid user transactiontime timediff related_transaction 0 1 michael 2022-01-01 0.0 ad2a8f23-05a5-49b1-b45e-cbf3f0ba23ff 1 2 michael 2022-01-02 1.0 ad2a8f23-05a5-49b1-b45e-cbf3f0ba23ff 2 3 michael 2022-01-03 1.0 ad2a8f23-05a5-49b1-b45e-cbf3f0ba23ff 3 4 tom 2022-09-01 0.0 b1da2251-7770-4756-8863-c82f90657542 4 5 tom 2022-09-13 12.0 b1da2251-7770-4756-8863-c82f90657542 5 6 tom 2022-10-17 34.0 485a8d97-80d1-4184-8fc8-99523f471527 6 7 tom 2022-10-20 3.0 485a8d97-80d1-4184-8fc8-99523f471527 7 8 tom 2022-11-17 28.0 8
[ "Taking the idea from Luise, we start with an empty column for related_transaction. Then, we iterate through each row. For each date, we check if it is already part of a transaction. If so, continue. Otherwise, assign a new transaction to that date and all other dates within 15 following days for the same user:\nimport datetime\ndf = dummy_df\ndf['related_transaction'] = None\nfor i, row in dummy_df.iterrows():\n if df.loc[i].related_transaction is not None:\n # We already assigned that row\n continue\n df.loc[ # Select where:\n (df.transactiontime <= row.transactiontime + datetime.timedelta(days=15)) & # Current row + 15 days\n (df.user == row.user) & # Same user\n (pd.isna(df.related_transaction)), # Don't overwrite anything already assigned\n 'related_transaction' # Set this column to:\n ] = uuid.uuid4() # Assign new UUID\n\nThis gives the output:\n\n transactionid user transactiontime related_transaction\n0 1 michael 2022-01-01 82d28e10-149b-481e-ba41-f5833662ba99\n1 2 michael 2022-01-02 82d28e10-149b-481e-ba41-f5833662ba99\n2 3 michael 2022-01-03 82d28e10-149b-481e-ba41-f5833662ba99\n3 4 tom 2022-09-01 fa253663-8615-419a-afda-7646906024f0\n4 5 tom 2022-09-13 fa253663-8615-419a-afda-7646906024f0\n5 6 tom 2022-10-17 d6152d4b-1560-40e0-8589-bd8e3da363db\n6 7 tom 2022-10-20 d6152d4b-1560-40e0-8589-bd8e3da363db\n7 8 tom 2022-11-17 2a93d78d-b6f6-4f0f-bb09-1bc18361aa21\n\nIn your example, the dates are already sorted, that's an important assumption I'm making here!\n", "The mismatch between your code and your desired result is that uuid.uuid4() creates an ID a single time and assigns it to all the relevant rows defined by np.where(). Instead, you need to generate the IDs in a vectorized way.\nTry the following approach:\ndf.loc[ROW_CONDITIONs, COLUMNS] = VECTORIZED_ID_GENERATOR\n\nwhich for your example would be\ndummy_df.loc[(dummy_df['timediff'] >= 0) & (dummy_df['timediff'] < 15), 'related_transaction'] = dummy_df.apply(lambda _: uuid.uuid4(), axis=1)\n\nTake into account that this only solves your question of how to assign random IDs using uuid conditionally in Pandas. It looks to me that you also need to generate the same ID for the same user and for transactions every 15 days. My advice for that would be to generate a dataframe where every row is a combination of two transactions and add a condition saying that the users from both transactions need to be the same.\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074466504_pandas_python.txt
Q: Print a specific word when a number is div. by 5 , and another specific word when its div.by 10 On a range 1 to 100, I want to print a specific word when the number is divisible by 5, for example "Good", and another specific word when it's divisible by 10, for example "morning" 1,2,3,4,good,6,7,8,9,morning .... etc I made this code but it's only working when the number is divisible by 5 for z in range (0, 101 , 1): if z%5==0: print("good") elif z%10==0: print("morning") else: print(z) A: Reverse the conditions for divisible by 10 or 5. If it is divisible by 10, it prints "morning"; otherwise it checks for 5, and otherwise prints the value. As currently written, every multiple of 10 will match the 5 condition first and never reach the 10 check, because you're using elif. If you used if twice, it would print both "good" and "morning", which doesn't match your expected output.
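A sketch of the loop with the conditions reordered as the answer describes (checking 10 before 5), over the 1 to 100 range from the question: for z in range(1, 101):
    if z % 10 == 0:
        print("morning")
    elif z % 5 == 0:
        print("good")
    else:
        print(z)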
Print a specific word when a number is div. by 5 , and another specific word when its div.by 10
On a range 1 to 100, I want to a specific word when the number is divisible by 5 for example "Good" , and another specific word when its divisible by 10 for example "morning" 1,2,3,4,good,6,7,8,9,morning .... etc i made this code but its only working when its div. by 5 for z in range (0, 101 , 1): if z%5==0: print("good") elif z%10==0: print("morning") else: print(z)
[ "Reverse the conditions for divisible by 10 or 5. If it is divisible by 10, you get \"morning\", then checks for 5, otherwise prints that value.\nAs currently written, all 10 digits will match the 5 condition, and not print further because you're using elif. If you used if twice, it would print both \"good\" and \"morning\", which doesn't match your expected output.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074467092_python.txt
Q: How do I write the lark grammar for First Order Logic with Equality? According to AIMA (Russell & Norvig, 2010) this is the BNF grammar for FOL with Equality: How do I convert this to a lark grammar? Specifically, how do I represent n-ary predicates using lark grammar? A: I'm going to take this question as asking how to specify the syntax of an application of an identifier to a parenthesised, comma-separated list of terms. In syntactic terms, that's similar enough to JSON list syntax to make it worthwhile looking at the first sample grammar (for JSON) in the Lark documentation site. Functions and predicates in your FOL grammar differ from JSON lists: they use round parentheses (()) instead of square brackets ([]) and they need to additionally specify the name of a function or predicate, but the JSON grammar shows how to write a comma-separated list of things, and we can easily apply that exact same grammar syntax: AtomicSentence : Predicate ['(' Term (',' Term)* ')'] | Term '=' Term Term: Function '(' Term (',' Term)* ')' | Constant | Variable (I left out the rest of the supplied grammar, since it isn't relevant to the question about Predicates.) In that grammar syntax, parentheses are used for grouping and * is a postfix operator indicating the Kleene star; that is, "any number of repetitions, including zero, of the operand". Square brackets ([]) are used to enclose an optional syntactic sequence. That's not BNF, since BNF doesn't have syntactic operators like optionality or Kleene Star. It's an example of what's often called "Extended BNF" (EBNF), which comes in a huge number of varieties with subtly different syntaxes. But it can be mechanically desugared into BNF; one BNF equivalent for the above would be: AtomicSentence : Predicate '(' TermList ')' | Predicate | Term '=' Term Term: Function '(' TermList ')' | Constant | Variable TermList : Term | TermList ',' Term That grammar does not provide a way to specify the arity (that is, the number of arguments) of each function and predicate, and consequently will generate invalid sentences. The abstract FOL grammar has no evident way of defining new predicates or functions, presumably because functions and predicates are not first-order objects. So every function, predicate and constant must be individually defined in some extra-grammatic way. Thus, the grammar might be considered short-hand for a class of more specific grammars, each with an associated finite set of defined symbols. Those definitions presumably specify the arity of each symbol, as well as indicating which are predicates and which are functions. In order for the concrete grammar to restrict itself to predicates and functions written with correct arity (that is, with the correct number of arguments), it would need to be modified (for each specific collection of predicates and functions), according to a model like this: AtomicSentence : Predicate/0 | Predicate/1 '(' TermList/1 ')' | Predicate/2 '(' TermList/2 ')' | Predicate/3 '(' TermList/3 ')' | Predicate/4 '(' TermList/4 ')' ... | Term '=' Term Term: Constant | Function/1 '(' TermList/1 ')' | Function/2 '(' TermList/2 ')' | Function/3 '(' TermList/3 ')' | Function/4 '(' TermList/4 ')' ... | Variable TermList/1 : Term TermList/2 : TermList/1 ',' Term TermList/3 : TermList/2 ',' Term TermList/4 : TermList/3 ',' Term ... Predicate/0 : "True" | "False" Predicate/2 : "After" | "Loves" ... Function/1 : "Mother" | "LeftLeg" Function/2 : "Sum" | "Product" ... 
(The /n suffixes are part of the individual names; their semantic significance is external to the grammar. I take that particular sylistic convention from Prolog; it's also used by Erlang and some ML derivatives.) The ellipses represent other concrete definitions, which I didn't happen to write out; they are not intended to be thought of as lists of arbitrary length. The number of aritys actually defined by the grammar will be limited to the aritys actually used by the specific functions and predicates for which the concrete grammar is being defined. So each concrete grammar produced according to that model will have a finite number of productions. Note that a context-free grammar is not able to accurately represent a language in which functions are defined with a specific arity and elsewhere used only with exactly the same arity, unless there is a prespecified maximum arity. Grammatical concordance of that form (as with mandatory declaration of used symbols) requires a context-sensitive grammar formalism. This answer deliberately does not discuss operator precedence (for the operators defined in ComplexSentence), because it's not part of the original question. Without that specification, the grammar is ambiguous, but there is certainly an operator binding precedence hierarchy, presumably defined in the narrative surrounding the FOL grammar. A: Here is my lark grammar for a LaTeX parser that only accepts valid WFFs of First-order logic with equality and functions. I've left out operator precedence because it causes ambiguity. Instead it is handled by required explicit parenthesis. This rules are adapted from the definitions given in a logic textbook (J., 2012). wff: atomic_wff | compound_wff atomic_wff: predicate [left_parenthesis term (comma term)* right_parenthesis] | term equal_to term compound_wff: left_parenthesis wff right_parenthesis | not space wff | wff space and space wff | wff space nand space wff | wff space or space wff | wff space xor space wff | wff space nor space wff | wff space implied_by space? wff | wff space implies space wff | wff space iff space wff | (quantifier left_curly_brace variable right_curly_brace)* space? left_parenthesis wff right_parenthesis term: function left_parenthesis term (comma term)* right_parenthesis | name | variable space: /\s+/ comma: "," equal_to: "=" left_parenthesis: "(" right_parenthesis: ")" left_curly_brace: "{" right_curly_brace: "}" quantifier: universal_quantifier | existential_quantifier | uniqueness_quantifier universal_quantifier: "\\forall" existential_quantifier: "\\exists" uniqueness_quantifier: "\\exists!" name: /[a-t]/ | /[a-t]_[1-9]\d*/ variable: /[u-z]/ | /[u-z]_[1-9]\d*/ predicate: /[A-HJ-Z]/ | /[A-HJ-Z]_[1-9]\d*/ function: /[a-z]/ | /[a-z]_[1-9]\d*/ not: "\\neg" and: "\\wedge" nand: "\\uparrow" | "\\barwedge" or: "\\vee" xor: "\\veebar" nor: "\\downarrow" implies: "\\rightarrow" | "\\Rightarrow" | "\\Longrightarrow" | "\\implies" implied_by: "\\leftarrow" | "\\Leftarrow" | "\\Longleftarrow" iff: "\\leftrightarrow" | "\\iff" Sources J., S. N. J. (2012). Syntax of GPL. In Logic: The laws of truth (pp. 267–268). essay, Princeton University Press.
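For orientation only, a small sketch of wiring a grammar like the one above into Lark; the file name, start symbol, and the sample formula are illustrative assumptions and have not been tested against this exact grammar: from lark import Lark
from lark.exceptions import LarkError
with open("fol.lark") as f:                      # hypothetical file holding the grammar text
    parser = Lark(f.read(), start="wff")
try:
    tree = parser.parse(r"\forall{x} (P(x) \rightarrow Q(x))")
    print(tree.pretty())
except LarkError as err:
    print("not accepted as a wff:", err)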
How do I write the lark grammar for First Order Logic with Equality?
According to AIMA (Russell & Norvig, 2010) this is the BNF grammar for FOL with Equality: How do I convert this to a lark grammar? Specifically, how do I represent n-ary predicates using lark grammar?
[ "I'm going to take this question as asking how to specify the syntax of an application of an identifier to a parenthesised, comma-separated list of terms.\nIn syntactic terms, that's similar enough to JSON list syntax to make it worthwhile looking at the first sample grammar (for JSON) in the Lark documentation site. Functions and predicates in your FOL grammar differ from JSON lists: they use round parentheses (()) instead of square brackets ([]) and they need to additionally specify the name of a function or predicate, but the JSON grammar shows how to write a comma-separated list of things, and we can easily apply that exact same grammar syntax:\nAtomicSentence\n : Predicate ['(' Term (',' Term)* ')']\n | Term '=' Term\n\nTerm: Function '(' Term (',' Term)* ')'\n | Constant\n | Variable\n\n(I left out the rest of the supplied grammar, since it isn't relevant to the question about Predicates.)\nIn that grammar syntax, parentheses are used for grouping and * is a postfix operator indicating the Kleene star; that is, \"any number of repetitions, including zero, of the operand\". Square brackets ([]) are used to enclose an optional syntactic sequence.\nThat's not BNF, since BNF doesn't have syntactic operators like optionality or Kleene Star. It's an example of what's often called \"Extended BNF\" (EBNF), which comes in a huge number of varieties with subtly different syntaxes. But it can be mechanically desugared into BNF; one BNF equivalent for the above would be:\nAtomicSentence\n : Predicate '(' TermList ')'\n | Predicate\n | Term '=' Term\n\nTerm: Function '(' TermList ')'\n | Constant\n | Variable\n\nTermList\n : Term\n | TermList ',' Term\n\nThat grammar does not provide a way to specify the arity (that is, the number of arguments) of each function and predicate, and consequently will generate invalid sentences. The abstract FOL grammar has no evident way of defining new predicates or functions, presumably because functions and predicates are not first-order objects. So every function, predicate and constant must be individually defined in some extra-grammatic way. Thus, the grammar might be considered short-hand for a class of more specific grammars, each with an associated finite set of defined symbols. Those definitions presumably specify the arity of each symbol, as well as indicating which are predicates and which are functions.\nIn order for the concrete grammar to restrict itself to predicates and functions written with correct arity (that is, with the correct number of arguments), it would need to be modified (for each specific collection of predicates and functions), according to a model like this:\nAtomicSentence\n : Predicate/0\n | Predicate/1 '(' TermList/1 ')'\n | Predicate/2 '(' TermList/2 ')'\n | Predicate/3 '(' TermList/3 ')'\n | Predicate/4 '(' TermList/4 ')'\n ...\n | Term '=' Term\n\nTerm: Constant\n | Function/1 '(' TermList/1 ')'\n | Function/2 '(' TermList/2 ')'\n | Function/3 '(' TermList/3 ')'\n | Function/4 '(' TermList/4 ')'\n ...\n | Variable\n\nTermList/1 : Term\nTermList/2 : TermList/1 ',' Term\nTermList/3 : TermList/2 ',' Term\nTermList/4 : TermList/3 ',' Term\n...\nPredicate/0 : \"True\" | \"False\"\nPredicate/2 : \"After\" | \"Loves\"\n...\nFunction/1 : \"Mother\" | \"LeftLeg\"\nFunction/2 : \"Sum\" | \"Product\"\n...\n\n(The /n suffixes are part of the individual names; their semantic significance is external to the grammar. 
I take that particular sylistic convention from Prolog; it's also used by Erlang and some ML derivatives.)\nThe ellipses represent other concrete definitions, which I didn't happen to write out; they are not intended to be thought of as lists of arbitrary length. The number of aritys actually defined by the grammar will be limited to the aritys actually used by the specific functions and predicates for which the concrete grammar is being defined. So each concrete grammar produced according to that model will have a finite number of productions.\nNote that a context-free grammar is not able to accurately represent a language in which functions are defined with a specific arity and elsewhere used only with exactly the same arity, unless there is a prespecified maximum arity. Grammatical concordance of that form (as with mandatory declaration of used symbols) requires a context-sensitive grammar formalism.\nThis answer deliberately does not discuss operator precedence (for the operators defined in ComplexSentence), because it's not part of the original question. Without that specification, the grammar is ambiguous, but there is certainly an operator binding precedence hierarchy, presumably defined in the narrative surrounding the FOL grammar.\n", "Here is my lark grammar for a LaTeX parser that only accepts valid WFFs of First-order logic with equality and functions. I've left out operator precedence because it causes ambiguity. Instead it is handled by required explicit parenthesis. This rules are adapted from the definitions given in a logic textbook (J., 2012).\nwff: atomic_wff | compound_wff\natomic_wff: predicate [left_parenthesis term (comma term)* right_parenthesis] | term equal_to term\ncompound_wff: left_parenthesis wff right_parenthesis\n | not space wff\n | wff space and space wff\n | wff space nand space wff\n | wff space or space wff\n | wff space xor space wff\n | wff space nor space wff\n | wff space implied_by space? wff\n | wff space implies space wff\n | wff space iff space wff\n | (quantifier left_curly_brace variable right_curly_brace)* space? left_parenthesis wff right_parenthesis\nterm: function left_parenthesis term (comma term)* right_parenthesis\n | name\n | variable\n\nspace: /\\s+/\ncomma: \",\"\nequal_to: \"=\"\nleft_parenthesis: \"(\"\nright_parenthesis: \")\"\nleft_curly_brace: \"{\"\nright_curly_brace: \"}\"\nquantifier: universal_quantifier | existential_quantifier | uniqueness_quantifier\nuniversal_quantifier: \"\\\\forall\"\nexistential_quantifier: \"\\\\exists\"\nuniqueness_quantifier: \"\\\\exists!\"\nname: /[a-t]/ | /[a-t]_[1-9]\\d*/\nvariable: /[u-z]/ | /[u-z]_[1-9]\\d*/\npredicate: /[A-HJ-Z]/ | /[A-HJ-Z]_[1-9]\\d*/\nfunction: /[a-z]/ | /[a-z]_[1-9]\\d*/\nnot: \"\\\\neg\"\nand: \"\\\\wedge\"\nnand: \"\\\\uparrow\" | \"\\\\barwedge\"\nor: \"\\\\vee\"\nxor: \"\\\\veebar\"\nnor: \"\\\\downarrow\"\nimplies: \"\\\\rightarrow\" | \"\\\\Rightarrow\" | \"\\\\Longrightarrow\" | \"\\\\implies\"\nimplied_by: \"\\\\leftarrow\" | \"\\\\Leftarrow\" | \"\\\\Longleftarrow\"\niff: \"\\\\leftrightarrow\" | \"\\\\iff\"\n\nSources\n\nJ., S. N. J. (2012). Syntax of GPL. In Logic: The laws of truth (pp. 267–268). essay, Princeton University Press.\n\n" ]
[ 1, 0 ]
[]
[]
[ "bnf", "first_order_logic", "lark_parser", "parsing", "python" ]
stackoverflow_0074420733_bnf_first_order_logic_lark_parser_parsing_python.txt
Q: Position frequency matrix for Pandas column with strings I have a pandas Dataframe with a column of peptide sequences and I want to know how many times each each amino acid appears at each position. I have written the following code to create the position frequency matrix: import pandas as pd from itertools import chain def frequency_matrix(df): # Empty position frequency matrix freq_matrix_df = pd.DataFrame( columns = sorted(set(chain.from_iterable(df.peptide_alpha))), index=range(df.peptide_len.max()), ).fillna(0) for _, row in df.iterrows(): for idx, aa in enumerate(row["peptide_alpha"]): freq_matrix_df.loc[idx, aa] += 1 return freq_matrix_df which for the following sample DataFrame: mini_df = pd.DataFrame(["YTEGDALDALGLKRY", "LTEIYGERLYETSY", "PVEEFNELLSKY", "TVDIQNPDITSSRY", "ASDKETYELRY"], columns=["peptide_alpha"]) mini_df["peptide_len"] = mini_df["peptide_alpha"].str.len() peptide_alpha peptide_len 0 YTEGDALDALGLKRY 15 1 LTEIYGERLYETSY 14 2 PVEEFNELLSKY 12 3 TVDIQNPDITSSRY 14 4 ASDKETYELRY 11 gives the following output: A D E F G I K L N P Q R S T V Y 0 1 0 0 0 0 0 0 1 0 1 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 0 2 0 2 3 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 1 0 1 2 1 0 0 0 0 0 0 0 0 0 4 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 1 5 1 0 0 0 1 0 0 0 2 0 0 0 0 1 0 0 6 0 0 2 0 0 0 0 1 0 1 0 0 0 0 0 1 7 0 2 1 0 0 0 0 1 0 0 0 1 0 0 0 0 8 1 0 0 0 0 1 0 3 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 1 0 0 0 1 1 1 0 1 10 0 0 1 0 1 0 1 0 0 0 0 0 1 0 0 1 11 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 1 12 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 13 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2 14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 This works for small DataFrames but because of the for loop becomes too slow for bigger datasets. Is there a way to rewrite this in a faster/vectorized way? A: Solution mini_df['peptide_len'] = mini_df.peptide_len.map(lambda x: range(x)) mini_df['peptide_alpha'] = mini_df.peptide_alpha.map(list) mini_df = mini_df.explode(["peptide_alpha", "peptide_len"]) pd.crosstab(mini_df.peptide_len, mini_df.peptide_alpha) Performance With the dataframe mini_df = pd.concat([mini_df] * 10000) On my machine, my solution solves the problem within 0.5s, whereas the solution of the OP takes 1m8.6s. Consequently, I believe that my solution can be useful for him. Output peptide_alpha A D E F G I K L N P Q R S T V Y peptide_len 0 1 0 0 0 0 0 0 1 0 1 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 0 2 0 2 3 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 1 0 1 2 1 0 0 0 0 0 0 0 0 0 4 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 1 5 1 0 0 0 1 0 0 0 2 0 0 0 0 1 0 0 6 0 0 2 0 0 0 0 1 0 1 0 0 0 0 0 1 7 0 2 1 0 0 0 0 1 0 0 0 1 0 0 0 0 8 1 0 0 0 0 1 0 3 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 1 0 0 0 1 1 1 0 1 10 0 0 1 0 1 0 1 0 0 0 0 0 1 0 0 1 11 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 1 12 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 13 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2 14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
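If relative frequencies rather than raw counts are wanted (a position probability matrix), the crosstab result above can be normalised row-wise; this is a small optional follow-up, not part of the original answer, and mini_df here is the exploded frame from the answer: counts = pd.crosstab(mini_df.peptide_len, mini_df.peptide_alpha)
freqs = counts.div(counts.sum(axis=1), axis=0)   # each position (row) now sums to 1
print(freqs.round(2))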
Position frequency matrix for Pandas column with strings
I have a pandas Dataframe with a column of peptide sequences and I want to know how many times each each amino acid appears at each position. I have written the following code to create the position frequency matrix: import pandas as pd from itertools import chain def frequency_matrix(df): # Empty position frequency matrix freq_matrix_df = pd.DataFrame( columns = sorted(set(chain.from_iterable(df.peptide_alpha))), index=range(df.peptide_len.max()), ).fillna(0) for _, row in df.iterrows(): for idx, aa in enumerate(row["peptide_alpha"]): freq_matrix_df.loc[idx, aa] += 1 return freq_matrix_df which for the following sample DataFrame: mini_df = pd.DataFrame(["YTEGDALDALGLKRY", "LTEIYGERLYETSY", "PVEEFNELLSKY", "TVDIQNPDITSSRY", "ASDKETYELRY"], columns=["peptide_alpha"]) mini_df["peptide_len"] = mini_df["peptide_alpha"].str.len() peptide_alpha peptide_len 0 YTEGDALDALGLKRY 15 1 LTEIYGERLYETSY 14 2 PVEEFNELLSKY 12 3 TVDIQNPDITSSRY 14 4 ASDKETYELRY 11 gives the following output: A D E F G I K L N P Q R S T V Y 0 1 0 0 0 0 0 0 1 0 1 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 0 2 0 2 3 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 1 0 1 2 1 0 0 0 0 0 0 0 0 0 4 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 1 5 1 0 0 0 1 0 0 0 2 0 0 0 0 1 0 0 6 0 0 2 0 0 0 0 1 0 1 0 0 0 0 0 1 7 0 2 1 0 0 0 0 1 0 0 0 1 0 0 0 0 8 1 0 0 0 0 1 0 3 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 1 0 0 0 1 1 1 0 1 10 0 0 1 0 1 0 1 0 0 0 0 0 1 0 0 1 11 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 1 12 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 13 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2 14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 This works for small DataFrames but because of the for loop becomes too slow for bigger datasets. Is there a way to rewrite this in a faster/vectorized way?
[ "Solution\nmini_df['peptide_len'] = mini_df.peptide_len.map(lambda x: range(x))\nmini_df['peptide_alpha'] = mini_df.peptide_alpha.map(list)\nmini_df = mini_df.explode([\"peptide_alpha\", \"peptide_len\"])\n\npd.crosstab(mini_df.peptide_len, mini_df.peptide_alpha)\n\nPerformance\nWith the dataframe\nmini_df = pd.concat([mini_df] * 10000)\n\nOn my machine, my solution solves the problem within 0.5s, whereas the solution of the OP takes 1m8.6s. Consequently, I believe that my solution can be useful for him.\nOutput\npeptide_alpha A D E F G I K L N P Q R S T V Y\npeptide_len \n0 1 0 0 0 0 0 0 1 0 1 0 0 0 1 0 1\n1 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 0\n2 0 2 3 0 0 0 0 0 0 0 0 0 0 0 0 0\n3 0 0 1 0 1 2 1 0 0 0 0 0 0 0 0 0\n4 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 1\n5 1 0 0 0 1 0 0 0 2 0 0 0 0 1 0 0\n6 0 0 2 0 0 0 0 1 0 1 0 0 0 0 0 1\n7 0 2 1 0 0 0 0 1 0 0 0 1 0 0 0 0\n8 1 0 0 0 0 1 0 3 0 0 0 0 0 0 0 0\n9 0 0 0 0 0 0 0 1 0 0 0 1 1 1 0 1\n10 0 0 1 0 1 0 1 0 0 0 0 0 1 0 0 1\n11 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 1\n12 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0\n13 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2\n14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1\n\n" ]
[ 2 ]
[]
[]
[ "frequency", "pandas", "position", "python", "python_3.x" ]
stackoverflow_0074466989_frequency_pandas_position_python_python_3.x.txt
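A compact, runnable sketch of the explode-plus-crosstab idea from the answer above (multi-column explode needs pandas 1.3 or newer; the peptide_alpha column name is taken from the question):

import pandas as pd

df = pd.DataFrame({"peptide_alpha": ["YTEGDALDALGLKRY", "LTEIYGERLYETSY", "PVEEFNELLSKY"]})

# One row per (position, amino acid) pair, then cross-tabulate the counts
exploded = (
    df.assign(pos=df["peptide_alpha"].str.len().map(range),
              aa=df["peptide_alpha"].map(list))
      .explode(["pos", "aa"])
)
freq_matrix = pd.crosstab(exploded["pos"], exploded["aa"])
print(freq_matrix)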
Q: How to edit a message in discord.py I would like to have my bot edit a message if it detects a keyword, i'm not sure how to edit the message though. I've looked through the documentation but can't seem to figure it out. I'm using discord.py with python 3.6. This is the code: @bot.event async def on_message(message): if 'test' in message.content: await edit(message, "testtest") This is the error: File "testthing.py", line 67, in on_message await edit(message, "test") NameError: name 'edit' is not defined I would like the bot to edit a message to "testtest" if the message contains the word test, but i just get an error. A: You can use the Message.edit coroutine. The arguments must be passed as keyword arguments content, embed, or delete_after. You may only edit messages that you have sent. await message.edit(content="newcontent") A: Here's a solution that worked for me. @client.command() async def test(ctx): message = await ctx.send("hello") await asyncio.sleep(1) await message.edit(content="newcontent") A: Did you do this: from discord import edit or this: from discord import * before using message.edit function? If you did it, maybe the problem is with your discord.py version. Try this: print(discord.__version__) A: If you are wanting to update responses in discord.py you have to use: @tree.command(name = 'foobar', description = 'Send the word foo and update it to say bar') async def self(interaction: discord.Interaction): await interaction.response.send_message(f'foo', ephemeral = True) time.sleep(1) await interaction.edit_original_response(content=f'bar') A: Assign the original message to a variable. Reference the variable with .edit(content='content'). (You need "content=" in there). @bot.command() async def test(ctx): msg = await ctx.send('test') await msg.edit(content='this message has been edited')
How to edit a message in discord.py
I would like to have my bot edit a message if it detects a keyword, but I'm not sure how to edit the message. I've looked through the documentation but can't seem to figure it out. I'm using discord.py with Python 3.6. This is the code: @bot.event async def on_message(message): if 'test' in message.content: await edit(message, "testtest") This is the error: File "testthing.py", line 67, in on_message await edit(message, "test") NameError: name 'edit' is not defined I would like the bot to edit a message to "testtest" if the message contains the word test, but I just get an error.
[ "You can use the Message.edit coroutine. The arguments must be passed as keyword arguments content, embed, or delete_after. You may only edit messages that you have sent.\nawait message.edit(content=\"newcontent\")\n\n", "Here's a solution that worked for me.\n@client.command()\nasync def test(ctx):\n message = await ctx.send(\"hello\")\n await asyncio.sleep(1)\n await message.edit(content=\"newcontent\")\n\n", "Did you do this:\nfrom discord import edit\n\nor this:\nfrom discord import *\n\nbefore using message.edit function?\nIf you did it, maybe the problem is with your discord.py version.\nTry this:\nprint(discord.__version__)\n\n", "If you are wanting to update responses in discord.py you have to use:\n@tree.command(name = 'foobar', description = 'Send the word foo and update it to say bar')\nasync def self(interaction: discord.Interaction):\n await interaction.response.send_message(f'foo', ephemeral = True)\n time.sleep(1)\n await interaction.edit_original_response(content=f'bar')\n\n", "Assign the original message to a variable. Reference the variable with .edit(content='content').\n(You need \"content=\" in there).\n@bot.command()\nasync def test(ctx):\n msg = await ctx.send('test')\n await msg.edit(content='this message has been edited')\n\n" ]
[ 23, 10, 0, 0, 0 ]
[ "Please try to add def to your code like this:\n@bot.event\nasync def on_message(message):\n if 'test' in message.content:\n await edit(message, \"edited !\")\n\n", "This is what I did:\n@bot.event\nasync def on_message(message):\n if message.content == 'test':\n await message.channel.send('Hello World!')\n await message.edit(content='testtest')\n\nI don't know if this will work for you, but try and see.\n" ]
[ -1, -1 ]
[ "discord.py", "python" ]
stackoverflow_0055711572_discord.py_python.txt
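Pulling the accepted answer into the question's on_message handler, a minimal sketch of what the bot can actually do — it cannot edit another user's message, so here it sends its own reply and edits that. "YOUR_TOKEN" is a placeholder, and enabling the message-content intent is assumed:

import asyncio
import discord

client = discord.Client(intents=discord.Intents.all())

@client.event
async def on_message(message):
    # Ignore the bot's own messages to avoid an endless loop
    if message.author == client.user:
        return
    if "test" in message.content:
        sent = await message.channel.send("test detected")
        await asyncio.sleep(1)
        await sent.edit(content="testtest")

client.run("YOUR_TOKEN")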
Q: Having trouble with def functions I have been taking this class for a bit with python for a bit and I have stumbled into a problem where any time I try to "def" a function, it says that it is not defined, I have no idea what I am doing wrong and this has become so frustrating. # Define main def main(): MIN = -100 MAX = 100 LIST_SIZE = 10 #Create empty list named scores scores = [] # Create a loop to fill the score list for i in range(LIST_SIZE): scores.append(random.randint(MIN, MAX)) #Print the score list print(scores) print("Highest Value: " + str(findHighest(scores))) Every time I try to test run this, I get "builtins.NameError" name 'LIST SIZE' is not defined. I cant take out the main function! It's required for the assignment, and even if I take it out I still run into errors. A: Your MIN, MAX, and LIST_SIZE variables are all being defined locally within def main(): By the looks of it, you want the code below those lines to be part of main, so fix the indentation to properly declare it as part of main. def main(): MIN = -100 MAX = 100 LIST_SIZE = 10 #Create empty list named scores scores = [] # Create a loop to fill the score list for i in range(LIST_SIZE): scores.append(random.randint(MIN, MAX)) #Print the score list print(scores) print("Highest Value: " + str(findHighest(scores))) A: import random # Define main def main(): MIN = -100 MAX = 100 LIST_SIZE = 10 #Create empty list named scores scores = [] # Create a loop to fill the score list for i in range(LIST_SIZE): scores.append(random.randint(MIN, MAX)) #Print the score list print(scores) print("Highest Value: " + str(findHighest(scores))) main() Output: [79] NOTE: You will get another error message: NameError: name 'findHighest' is not defined Which I think findHighest should be a function in some part of your code.
Having trouble with def functions
I have been taking this Python class for a bit and I have stumbled into a problem where any time I try to "def" a function, it says that it is not defined. I have no idea what I am doing wrong and this has become so frustrating. # Define main def main(): MIN = -100 MAX = 100 LIST_SIZE = 10 #Create empty list named scores scores = [] # Create a loop to fill the score list for i in range(LIST_SIZE): scores.append(random.randint(MIN, MAX)) #Print the score list print(scores) print("Highest Value: " + str(findHighest(scores))) Every time I try to test run this, I get "builtins.NameError" name 'LIST SIZE' is not defined. I can't take out the main function! It's required for the assignment, and even if I take it out I still run into errors.
[ "Your MIN, MAX, and LIST_SIZE variables are all being defined locally within def main():\nBy the looks of it, you want the code below those lines to be part of main, so fix the indentation to properly declare it as part of main.\ndef main():\n MIN = -100\n MAX = 100\n LIST_SIZE = 10\n\n #Create empty list named scores\n scores = []\n\n # Create a loop to fill the score list\n for i in range(LIST_SIZE): \n scores.append(random.randint(MIN, MAX))\n #Print the score list\n print(scores) \n print(\"Highest Value: \" + str(findHighest(scores)))\n\n", "import random\n\n# Define main\ndef main():\n MIN = -100\n MAX = 100\n LIST_SIZE = 10\n #Create empty list named scores\n scores = []\n # Create a loop to fill the score list\n for i in range(LIST_SIZE): \n scores.append(random.randint(MIN, MAX))\n #Print the score list\n print(scores) \n print(\"Highest Value: \" + str(findHighest(scores)))\nmain()\n\nOutput:\n[79]\n\nNOTE: You will get another error message:\nNameError: name 'findHighest' is not defined\n\nWhich I think findHighest should be a function in some part of your code.\n" ]
[ 2, 1 ]
[]
[]
[ "function", "nameerror", "python" ]
stackoverflow_0074467137_function_nameerror_python.txt
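Combining the two answers into one runnable script — a simple find_highest helper is supplied here because the original findHighest is not shown in the question:

import random

def find_highest(scores):
    return max(scores)

def main():
    MIN = -100
    MAX = 100
    LIST_SIZE = 10
    # Fill the score list with LIST_SIZE random values
    scores = [random.randint(MIN, MAX) for _ in range(LIST_SIZE)]
    print(scores)
    print("Highest Value: " + str(find_highest(scores)))

main()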
Q: QQ Plot for Poisson Distribution in Python I've been trying to make a QQ plot in python for a poisson distribution. Here is what I have so far: import numpy as np import statsmodels.api as sm import scipy.stats as stats pois = np.random.poisson(2.5, 100) #creates random Poisson distribution with mean = 2.5 fig =sm.qqplot(pois, stats.poisson, line = 's') plt.show() Whenever I do this, I get "AttributeError: 'poisson_gen' object has no attribute 'fit'" When googling that error, I found a lot of people saying that there is no Poisson.fit available. I'm pretty sure that the qqplot function is calling Poisson.fit. Does this mean that the qqplot function will not work with the Poisson distribution? If the qqplot function does not work with Poisson distributions, how would you recommend generating this plot? Any recommendations would be appreciated. A: I had the same error. The following seemed to work for me: import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats data=np.random.poisson(2.5, 100) stats.probplot(data, dist='poisson', sparams=(2.5,), plot=plt) plt.show() A: It is the end of 2022, and this is still a thing. I noticed that the statsmodels qqplots can accept frozen scipy distributions, which are not fit and thus do not throw the error for discrete distributions. from scipy import stats import statsmodels.api as sm import numpy as np import matplotlib.pyplot as plt mu = 10 test_array = stats.poisson.rvs(mu=mu, size=10000) fig, ax = plt.subplots(figsize=(7, 5)) ax.set_title("Poisson vs Poisson Example Q-Q Plot", fontsize=14) test_mu = np.mean(test_array) qdist = stats.poisson(test_mu) sm.qqplot(test_array, dist=qdist, line="45", ax=ax) fig.set_tight_layout(True) plt.savefig('poisson_qq_ex.png') plt.close() Example Q-Q plot using StatsModels with discrete Poisson distribution
QQ Plot for Poisson Distribution in Python
I've been trying to make a QQ plot in python for a poisson distribution. Here is what I have so far: import numpy as np import statsmodels.api as sm import scipy.stats as stats pois = np.random.poisson(2.5, 100) #creates random Poisson distribution with mean = 2.5 fig =sm.qqplot(pois, stats.poisson, line = 's') plt.show() Whenever I do this, I get "AttributeError: 'poisson_gen' object has no attribute 'fit'" When googling that error, I found a lot of people saying that there is no Poisson.fit available. I'm pretty sure that the qqplot function is calling Poisson.fit. Does this mean that the qqplot function will not work with the Poisson distribution? If the qqplot function does not work with Poisson distributions, how would you recommend generating this plot? Any recommendations would be appreciated.
[ "I had the same error. The following seemed to work for me:\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats as stats\ndata=np.random.poisson(2.5, 100)\nstats.probplot(data, dist='poisson', sparams=(2.5,), plot=plt)\nplt.show()\n\n", "It is the end of 2022, and this is still a thing. I noticed that the statsmodels qqplots can accept frozen scipy distributions, which are not fit and thus do not throw the error for discrete distributions.\nfrom scipy import stats\nimport statsmodels.api as sm\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nmu = 10\ntest_array = stats.poisson.rvs(mu=mu, size=10000)\nfig, ax = plt.subplots(figsize=(7, 5))\nax.set_title(\"Poisson vs Poisson Example Q-Q Plot\", fontsize=14)\ntest_mu = np.mean(test_array)\nqdist = stats.poisson(test_mu)\nsm.qqplot(test_array, dist=qdist, line=\"45\", ax=ax)\n\nfig.set_tight_layout(True)\nplt.savefig('poisson_qq_ex.png')\nplt.close()\n\nExample Q-Q plot using StatsModels with discrete Poisson distribution\n" ]
[ 5, 1 ]
[]
[]
[ "numpy", "python", "scipy" ]
stackoverflow_0032983664_numpy_python_scipy.txt
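A small variation on the first answer above, estimating the Poisson parameter from the sample instead of hard-coding 2.5:

import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

data = np.random.poisson(2.5, 100)
# Pass the sample mean as the Poisson shape parameter via sparams
stats.probplot(data, dist='poisson', sparams=(data.mean(),), plot=plt)
plt.show()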
Q: Logic behind Pylint error E1128 (assignment-from-none) Consider the following use case (minimum example): def get_func(param): if param is None: def func(): return None else: def func(): return param return func def process_val(param): func = get_func(param) val = func() # Do stuff with 'val'; *None* is an useful case. return val Here, func() can return None or not, depending on the value of param, but Pylint triggers E1128 for this, with the following explanation: Used when an assignment is done on a function call but the inferred function returns nothing but None. I am tempted to just disable the warning for this code, but it is actually classified as an Error, which makes me think this has actually produced bugs in the past, so I would like to understand: is this a Pylint error, that doesn't see that sometimes the function created will return something else than None? Or is it considered too bad practice to possibly have a function that always returns None? Maybe some other explanation that I cannot see? In case this seems like a too convoluted, the actual use case is more like this: def get_func(source): if source is None: def func(): return None elif source is "webcam": # Open webcam... def func(): # Capture frame from webcam return frame elif source is "server": # Open connection to server... def func(): # Read data from server. return data # Other cases... return func def process_val(source): data_func = get_func(source) # Here, do stuff in a loop, or pass *data_func* to other functions... # The code that uses the *data_func* knows that *None* means that # data could not be read and that's OK. For the code that uses data_func, it's simpler like this than to having to consider the value of source to decide if the data will always be None. To me this seems a valid functional-style approach (maybe I'm wrong and this is not the Pythonic way). (I'm using Pylint 2.12.2) A: If the function does not always return None, then it's a false positive from pylint not understanding your code well. If the function always return None you have no reason to assign it in a variable, and it means the code is at best is doing a useless assignment, not doing what you think it does, or at worst completely wrong. Not sure why it's an error message and not a warning though.
Logic behind Pylint error E1128 (assignment-from-none)
Consider the following use case (minimum example): def get_func(param): if param is None: def func(): return None else: def func(): return param return func def process_val(param): func = get_func(param) val = func() # Do stuff with 'val'; *None* is an useful case. return val Here, func() can return None or not, depending on the value of param, but Pylint triggers E1128 for this, with the following explanation: Used when an assignment is done on a function call but the inferred function returns nothing but None. I am tempted to just disable the warning for this code, but it is actually classified as an Error, which makes me think this has actually produced bugs in the past, so I would like to understand: is this a Pylint error, that doesn't see that sometimes the function created will return something else than None? Or is it considered too bad practice to possibly have a function that always returns None? Maybe some other explanation that I cannot see? In case this seems like a too convoluted, the actual use case is more like this: def get_func(source): if source is None: def func(): return None elif source is "webcam": # Open webcam... def func(): # Capture frame from webcam return frame elif source is "server": # Open connection to server... def func(): # Read data from server. return data # Other cases... return func def process_val(source): data_func = get_func(source) # Here, do stuff in a loop, or pass *data_func* to other functions... # The code that uses the *data_func* knows that *None* means that # data could not be read and that's OK. For the code that uses data_func, it's simpler like this than to having to consider the value of source to decide if the data will always be None. To me this seems a valid functional-style approach (maybe I'm wrong and this is not the Pythonic way). (I'm using Pylint 2.12.2)
[ "If the function does not always return None, then it's a false positive from pylint not understanding your code well. If the function always return None you have no reason to assign it in a variable, and it means the code is at best is doing a useless assignment, not doing what you think it does, or at worst completely wrong. Not sure why it's an error message and not a warning though.\n" ]
[ 1 ]
[]
[]
[ "pylint", "python" ]
stackoverflow_0074467217_pylint_python.txt
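If, as in the question, the None-returning branch is intentional, another option is to silence the check only on the line where it fires (E1128's symbolic name is assignment-from-none):

def process_val(param):
    func = get_func(param)
    val = func()  # pylint: disable=assignment-from-none
    return val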
Q: How to calculate the number of charging sessions in my data? I have a data set that looks like this: Timestamp Cumulative Energy (kWh) Charging? 2022-08-19 05:45:00 24.9 1 2022-08-19 06:00:00 44.7 1 2022-08-19 06:15:00 53.1 1 2022-08-19 06:30:00 0 0 And so on. The data set represents the usage of an EV charger for a couple weeks. I want to be able to calculate the number of sessions total and the average energy withdrawn per charging session. Each charging session varies, some are an hour long, some less, some more. Since the dataset provides the cumulative energy, I thought that ways to go about this would be to group consecutive sessions (Charging = 1) identify the largest value for Cumulative Energy (kWh) and commit these values to a dictionary which I can then use to calculate the total number of sessions and the average cum. energy of each session. I'm unsure of how to go about writing this in Python though. Any help would be greatly appreciated! Update: I did the following: result = ( evdata.groupby(["Charging?", (evdata['Charging?'] != evdata['Charging?'].shift()).cumsum()], sort=False) .size() .reset_index(level=1, drop=True) ) - - 0 1707 1 1 0 43 1 3 0 38 1 4 And so on. So we've managed to get the number of charging and non-charging sessions. But on the right-hand column we see the number of 15-minute charging sessions when I would ideally like to see the maximum cumulative energy (kWh) for that group? A: I copied the first three rows at the bottom to check the solution. hene two rows in the result Please note I'm still not clear on how you like the dictionary to look like, i.e, what will be the key, I understand the value # identify the consecutive charging session # take diff of two consecutive rows, first row will be Nan, so make it -1 # and take absolution value to do a cumsum (see intermediate result below) # drop duplicates based on seq while keeping last df2=df.assign(seq=df['Charging?'].diff().fillna(-1).abs().cumsum()).drop_duplicates(subset=['seq'], keep='last') # keep only rows where charging is 1 out=df2.loc[df2['Charging?'].eq(1)]['Cumulative Energy (kWh)'] out # RESULT 2 53.1 6 53.1 Name: Cumulative Energy (kWh), dtype: float64 Intermediate result df['Charging?'].diff().fillna(-1).abs().cumsum() 0 1.0 1 1.0 2 1.0 3 2.0 4 3.0 5 3.0 6 3.0 Name: Charging?, dtype: float64 A: Not my favorite solution, since it utilizes looping, but I believe this works for you import numpy as np import pandas as pd df = pd.DataFrame( # sample df [ ['2022-08-19 05:45:00', 24.9, 1], ['2022-08-19 06:00:00', 44.7, 1], ['2022-08-19 06:15:00', 53.1, 1], ['2022-08-19 06:30:00' ,0, 0], ['2022-08-19 05:45:00', 10, 1], ['2022-08-19 06:00:00', 20, 1], ['2022-08-19 06:15:00', 10, 1], ['2022-08-19 06:30:00' ,0, 0], ['2022-08-19 05:45:00', 30, 1], ['2022-08-19 06:00:00', 30, 1], ['2022-08-19 06:15:00', 30, 1], ['2022-08-19 06:30:00' ,0, 0] ] ) sessionid=1 # init session id df[3] = 0 # set default for i in np.arange(0,df.shape[0]-1): if i == 0: # first session id df.iloc[i,3] = sessionid if df.iloc[i,2] ==0: # if we are at end of session sessionid +=1 df.iloc[i+1,3] = sessionid # set the session id of the next record to current print(df.loc[df[1]!=0].groupby([3])[1].mean()) # exclude all 0 values print(df.loc[df[1]!=0].groupby([3])[1].max()) print(df.loc[df[1]!=0].groupby([3])[1].min()) print(df.loc[df[1]!=0].groupby([3])[1].std()) Here is your output 3 1 40.900000 2 13.333333 3 30.000000 Name: 1, dtype: float64 3 1 53.1 2 20.0 3 30.0 Name: 1, dtype: float64 3 1 24.9 2 10.0 3 30.0 Name: 1, dtype: 
float64 3 1 14.478950 2 5.773503 3 0.000000 Name: 1, dtype: float64
How to calculate the number of charging sessions in my data?
I have a data set that looks like this: Timestamp Cumulative Energy (kWh) Charging? 2022-08-19 05:45:00 24.9 1 2022-08-19 06:00:00 44.7 1 2022-08-19 06:15:00 53.1 1 2022-08-19 06:30:00 0 0 And so on. The data set represents the usage of an EV charger for a couple weeks. I want to be able to calculate the number of sessions total and the average energy withdrawn per charging session. Each charging session varies, some are an hour long, some less, some more. Since the dataset provides the cumulative energy, I thought that ways to go about this would be to group consecutive sessions (Charging = 1) identify the largest value for Cumulative Energy (kWh) and commit these values to a dictionary which I can then use to calculate the total number of sessions and the average cum. energy of each session. I'm unsure of how to go about writing this in Python though. Any help would be greatly appreciated! Update: I did the following: result = ( evdata.groupby(["Charging?", (evdata['Charging?'] != evdata['Charging?'].shift()).cumsum()], sort=False) .size() .reset_index(level=1, drop=True) ) - - 0 1707 1 1 0 43 1 3 0 38 1 4 And so on. So we've managed to get the number of charging and non-charging sessions. But on the right-hand column we see the number of 15-minute charging sessions when I would ideally like to see the maximum cumulative energy (kWh) for that group?
[ "I copied the first three rows at the bottom to check the solution. hene two rows in the result\nPlease note I'm still not clear on how you like the dictionary to look like, i.e, what will be the key, I understand the value\n# identify the consecutive charging session\n# take diff of two consecutive rows, first row will be Nan, so make it -1\n# and take absolution value to do a cumsum (see intermediate result below)\n\n# drop duplicates based on seq while keeping last\n\ndf2=df.assign(seq=df['Charging?'].diff().fillna(-1).abs().cumsum()).drop_duplicates(subset=['seq'], keep='last')\n\n\n# keep only rows where charging is 1\nout=df2.loc[df2['Charging?'].eq(1)]['Cumulative Energy (kWh)']\n\nout\n\n\n# RESULT\n\n2 53.1\n6 53.1\nName: Cumulative Energy (kWh), dtype: float64\n\nIntermediate result\ndf['Charging?'].diff().fillna(-1).abs().cumsum()\n\n0 1.0\n1 1.0\n2 1.0\n3 2.0\n4 3.0\n5 3.0\n6 3.0\nName: Charging?, dtype: float64\n\n", "Not my favorite solution, since it utilizes looping, but I believe this works for you\n\nimport numpy as np\nimport pandas as pd\n\ndf = pd.DataFrame( # sample df\n[\n['2022-08-19 05:45:00', 24.9, 1],\n['2022-08-19 06:00:00', 44.7, 1],\n['2022-08-19 06:15:00', 53.1, 1],\n['2022-08-19 06:30:00' ,0, 0],\n ['2022-08-19 05:45:00', 10, 1],\n['2022-08-19 06:00:00', 20, 1],\n['2022-08-19 06:15:00', 10, 1],\n['2022-08-19 06:30:00' ,0, 0],\n ['2022-08-19 05:45:00', 30, 1],\n['2022-08-19 06:00:00', 30, 1],\n['2022-08-19 06:15:00', 30, 1],\n['2022-08-19 06:30:00' ,0, 0]\n]\n)\nsessionid=1 # init session id\ndf[3] = 0 # set default\nfor i in np.arange(0,df.shape[0]-1):\n \n if i == 0: # first session id\n df.iloc[i,3] = sessionid\n \n if df.iloc[i,2] ==0: # if we are at end of session\n sessionid +=1\n\n df.iloc[i+1,3] = sessionid # set the session id of the next record to current\n\nprint(df.loc[df[1]!=0].groupby([3])[1].mean()) # exclude all 0 values\nprint(df.loc[df[1]!=0].groupby([3])[1].max())\nprint(df.loc[df[1]!=0].groupby([3])[1].min())\nprint(df.loc[df[1]!=0].groupby([3])[1].std())\n\n\nHere is your output\n3\n1 40.900000\n2 13.333333\n3 30.000000\nName: 1, dtype: float64\n3\n1 53.1\n2 20.0\n3 30.0\nName: 1, dtype: float64\n3\n1 24.9\n2 10.0\n3 30.0\nName: 1, dtype: float64\n3\n1 14.478950\n2 5.773503\n3 0.000000\nName: 1, dtype: float64\n\n" ]
[ 0, 0 ]
[]
[]
[ "data_analysis", "dataframe", "python" ]
stackoverflow_0074465768_data_analysis_dataframe_python.txt
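Building on the grouping idea in the question's update, a sketch (reusing the evdata frame and column names from the question) that labels each run of consecutive Charging? values and then takes the peak cumulative energy per charging session:

# A new session id starts whenever the Charging? flag changes value
session = evdata['Charging?'].diff().ne(0).cumsum()

# Keep only charging rows, then take each session's highest meter reading
charging = evdata[evdata['Charging?'].eq(1)]
per_session_max = charging.groupby(session)['Cumulative Energy (kWh)'].max()

print(per_session_max.size)    # number of charging sessions
print(per_session_max.mean())  # average energy drawn per session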
Q: finding sum of fractions n/1 to 1/n I am trying to find the sum n/1 + (n-1)/2 + (n-2)/3 ... + 1/n. I am not getting the correct output This is what I have n = int(input("Please enter a positive integer: ")) sum2 = 0.0 for i in range(1, n-1): sum2 = sum2 + (i/1) print("For n =", n, "the sum n/1 + (n-1)/2 + ... 1/n is", sum2) My expected output for sum2 is 11.15 when 6 is entered as n but it's not correct. What am I doing wrong? A: When talking about the 2nd summation, besides the numerator decreasing one by one, the denominator also needs to increase one by one. n = int(input("Please enter a positive integer: ")) sum2 = 0 for i in range(0, n): sum2 = sum2 + (n-i)/(i+1) print("For n =", n, "the sum n/1 + (n-1)/2 + ... 1/n is", sum2)
finding sum of fractions n/1 to 1/n
I am trying to find the sum n/1 + (n-1)/2 + (n-2)/3 ... + 1/n. I am not getting the correct output This is what I have n = int(input("Please enter a positive integer: ")) sum2 = 0.0 for i in range(1, n-1): sum2 = sum2 + (i/1) print("For n =", n, "the sum n/1 + (n-1)/2 + ... 1/n is", sum2) My expected output for sum2 is 11.15 when 6 is entered as n but it's not correct. What am I doing wrong?
[ "When talking about the 2nd summation, besides the numerator decreasing one by one, the denominator also needs to increase one by one.\nn = int(input(\"Please enter a positive integer: \"))\n\n\nsum2 = 0\n\nfor i in range(0, n):\n sum2 = sum2 + (n-i)/(i+1)\n\nprint(\"For n =\", n, \"the sum n/1 + (n-1)/2 + ... 1/n is\", sum2)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074467227_python.txt
Q: How to annotate each lmplot facet by hue group or combined data I'm trying to add annotations to lmplots in a FacetGrid (r and p values for each regression) but the plots have two regression lines because I'm using "hue", and therefore I get two annotations that are stacked on top of each other. I'd like to either specify that they are displayed in different locations or ideally to use the complete dataset, not separated by the argument passed to hue I assume for that I need to modify "data" in the annotate function but I cannot figure out how. I did manage to do it by creating a dataframe that contains all r and p values and looping through g.axes_dict.items(), but I would like a more elegant solution where the values can be calculated and displayed directly import pandas as pd import seaborn as sns import scipy as sp dict = { 'ID': ['A','B','C','D','A','B','C','D','A','B','C','D','A','B','C','D'], 'SCORE': [18,20,37,40,34,21,24,12,34,54,23,43,23,31,65,78], 'AGE': [34,54,46,65,43,23,54,23,43,54,23,32,56,42,12,43], 'GENDER': [1,1,1,1,2,2,2,2,1,1,1,1,2,2,2,2] } df = pd.DataFrame(dict) g = sns.lmplot(x='SCORE', y='AGE', data=df,hue='GENDER', col='ID', height=3, aspect=1) def annotate(data, **kws): r, p = sp.stats.pearsonr(data['SCORE'], data['AGE']) ax = plt.gca() ax.text(.05, .8, 'r={:.2f}, p={:.2g}'.format(r, p), transform=ax.transAxes) g.map_dataframe(annotate) A: The tips dataset is being used because the sample data in the OP causes scipy to generate ConstantInputWarning: An input array is constant; the correlation coefficient is not defined. Use a dict to define the y-position for each hue category ideally to use the complete dataset When using .map_dataframe, for each facet, each hue group is plotted separately, which can be seen by displaying data in def annotate. If you are separating the data by using hue, then separate statistics should be plotted. import seaborn as sns import scipy # function def annotate(data, **kws): # display data; see that for each Facet, hue groups are annotated separately - uncomment the following two lines # print(data.sex.unique()) # display(data) # get the hue group; there will be one g = data.sex.unique()[0] # get the y-position from the dict y = yg[g] r, p = scipy.stats.pearsonr(data['total_bill'], data['tip']) ax = plt.gca() ax.text(1, y, f'{g}: r={r:.2f}, p={p:.2f}') # sample data tips = sns.load_dataset('tips') # define a y-position for each annotation in the hue group yg = {'Male': 8, 'Female': 9} # plot g = sns.lmplot(x='total_bill', y='tip', col='time', data=tips, hue='sex', height=5, aspect=1) # annotate _ = g.map_dataframe(annotate) Iterate through g.axes.flat Alternative, do not use .map_dataframe. Flatten and iterate through each axes, which easily allows for calculations and annotations to be made with all the data for each facet. g = sns.lmplot(x='total_bill', y='tip', col='time', data=tips, hue='sex', height=5, aspect=1) # flatten the axes for all the facets axes = g.axes.flat # iterate through each axes for ax in axes: # get the title which can be used to filter the data by col col, group = ax.get_title().split(' = ') # select data from dataframe data = tips[tips[col].eq(group)] # get statistics r, p = scipy.stats.pearsonr(data['total_bill'], data['tip']) # annotate ax.text(2, 8, f'Combined: r={r:.2f}, p={p:.2f}') Iterate through g.axes_dict.items() This option has the col= groups as keys, but then hard coding 'time' is required for creating data. 
g = sns.lmplot(x='total_bill', y='tip', col='time', data=tips, hue='sex', height=5, aspect=1) # iterate through g.axes_dict for group, ax in g.axes_dict.items(): # select data from dataframe data = tips[tips['time'].eq(group)] # get statistics r, p = scipy.stats.pearsonr(data['total_bill'], data['tip']) # annotate ax.text(2, 8, f'Combined: r={r:.2f}, p={p:.2f}') Plot Result
How to annotate each lmplot facet by hue group or combined data
I'm trying to add annotations to lmplots in a FacetGrid (r and p values for each regression) but the plots have two regression lines because I'm using "hue", and therefore I get two annotations that are stacked on top of each other. I'd like to either specify that they are displayed in different locations or ideally to use the complete dataset, not separated by the argument passed to hue I assume for that I need to modify "data" in the annotate function but I cannot figure out how. I did manage to do it by creating a dataframe that contains all r and p values and looping through g.axes_dict.items(), but I would like a more elegant solution where the values can be calculated and displayed directly import pandas as pd import seaborn as sns import scipy as sp dict = { 'ID': ['A','B','C','D','A','B','C','D','A','B','C','D','A','B','C','D'], 'SCORE': [18,20,37,40,34,21,24,12,34,54,23,43,23,31,65,78], 'AGE': [34,54,46,65,43,23,54,23,43,54,23,32,56,42,12,43], 'GENDER': [1,1,1,1,2,2,2,2,1,1,1,1,2,2,2,2] } df = pd.DataFrame(dict) g = sns.lmplot(x='SCORE', y='AGE', data=df,hue='GENDER', col='ID', height=3, aspect=1) def annotate(data, **kws): r, p = sp.stats.pearsonr(data['SCORE'], data['AGE']) ax = plt.gca() ax.text(.05, .8, 'r={:.2f}, p={:.2g}'.format(r, p), transform=ax.transAxes) g.map_dataframe(annotate)
[ "\nThe tips dataset is being used because the sample data in the OP causes scipy to generate ConstantInputWarning: An input array is constant; the correlation coefficient is not defined.\nUse a dict to define the y-position for each hue category\nideally to use the complete dataset\n\nWhen using .map_dataframe, for each facet, each hue group is plotted separately, which can be seen by displaying data in def annotate.\nIf you are separating the data by using hue, then separate statistics should be plotted.\n\n\n\nimport seaborn as sns\nimport scipy\n\n\n# function \ndef annotate(data, **kws):\n\n # display data; see that for each Facet, hue groups are annotated separately - uncomment the following two lines\n # print(data.sex.unique())\n # display(data) \n\n # get the hue group; there will be one\n g = data.sex.unique()[0]\n\n # get the y-position from the dict\n y = yg[g]\n\n r, p = scipy.stats.pearsonr(data['total_bill'], data['tip'])\n ax = plt.gca()\n ax.text(1, y, f'{g}: r={r:.2f}, p={p:.2f}')\n\n\n\n# sample data\ntips = sns.load_dataset('tips')\n\n# define a y-position for each annotation in the hue group\nyg = {'Male': 8, 'Female': 9}\n\n# plot\ng = sns.lmplot(x='total_bill', y='tip', col='time', data=tips, hue='sex', height=5, aspect=1)\n\n# annotate\n_ = g.map_dataframe(annotate)\n\n\n\nIterate through g.axes.flat\n\nAlternative, do not use .map_dataframe.\nFlatten and iterate through each axes, which easily allows for calculations and annotations to be made with all the data for each facet.\n\ng = sns.lmplot(x='total_bill', y='tip', col='time', data=tips, hue='sex', height=5, aspect=1)\n\n# flatten the axes for all the facets\naxes = g.axes.flat\n\n# iterate through each axes\nfor ax in axes:\n \n # get the title which can be used to filter the data by col\n col, group = ax.get_title().split(' = ')\n \n # select data from dataframe\n data = tips[tips[col].eq(group)]\n \n # get statistics\n r, p = scipy.stats.pearsonr(data['total_bill'], data['tip'])\n \n # annotate\n ax.text(2, 8, f'Combined: r={r:.2f}, p={p:.2f}')\n\nIterate through g.axes_dict.items()\n\nThis option has the col= groups as keys, but then hard coding 'time' is required for creating data.\n\ng = sns.lmplot(x='total_bill', y='tip', col='time', data=tips, hue='sex', height=5, aspect=1)\n\n# iterate through g.axes_dict\nfor group, ax in g.axes_dict.items():\n\n # select data from dataframe\n data = tips[tips['time'].eq(group)]\n \n # get statistics\n r, p = scipy.stats.pearsonr(data['total_bill'], data['tip'])\n \n # annotate\n ax.text(2, 8, f'Combined: r={r:.2f}, p={p:.2f}')\n\nPlot Result\n\n" ]
[ 1 ]
[]
[]
[ "facet_grid", "lmplot", "plot_annotations", "python", "seaborn" ]
stackoverflow_0074465966_facet_grid_lmplot_plot_annotations_python_seaborn.txt
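A small variation on the axes_dict loop in the answer above: placing the label in axes coordinates (as the question's own ax.transAxes call does) keeps it visible regardless of the data limits:

import scipy.stats
import seaborn as sns

tips = sns.load_dataset('tips')
g = sns.lmplot(x='total_bill', y='tip', col='time', data=tips, hue='sex', height=5, aspect=1)

for group, ax in g.axes_dict.items():
    data = tips[tips['time'].eq(group)]
    r, p = scipy.stats.pearsonr(data['total_bill'], data['tip'])
    # Axes-relative placement: (0.05, 0.95) is the upper-left corner
    ax.text(0.05, 0.95, f'Combined: r={r:.2f}, p={p:.2f}',
            transform=ax.transAxes, va='top')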
Q: How to parse SOAP XML with Python? Goal: Get the values inside <Name> tags and print them out. Simplified XML below. <?xml version="1.0" encoding="UTF-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soap:Body> <GetStartEndPointResponse xmlns="http://www.etis.fskab.se/v1.0/ETISws"> <GetStartEndPointResult> <Code>0</Code> <Message /> <StartPoints> <Point> <Id>545</Id> <Name>Get Me</Name> <Type>sometype</Type> <X>333</X> <Y>222</Y> </Point> <Point> <Id>634</Id> <Name>Get me too</Name> <Type>sometype</Type> <X>555</X> <Y>777</Y> </Point> </StartPoints> </GetStartEndPointResult> </GetStartEndPointResponse> </soap:Body> </soap:Envelope> Attempt: import requests from xml.etree import ElementTree response = requests.get('http://www.labs.skanetrafiken.se/v2.2/querystation.asp?inpPointfr=yst') # XML parsing here dom = ElementTree.fromstring(response.text) names = dom.findall('*/Name') for name in names: print(name.text) I have read other people recommending zeep to parse soap xml but I found it hard to get my head around. A: The issue here is dealing with the XML namespaces: import requests from xml.etree import ElementTree response = requests.get('http://www.labs.skanetrafiken.se/v2.2/querystation.asp?inpPointfr=yst') # define namespace mappings to use as shorthand below namespaces = { 'soap': 'http://schemas.xmlsoap.org/soap/envelope/', 'a': 'http://www.etis.fskab.se/v1.0/ETISws', } dom = ElementTree.fromstring(response.content) # reference the namespace mappings here by `<name>:` names = dom.findall( './soap:Body' '/a:GetStartEndPointResponse' '/a:GetStartEndPointResult' '/a:StartPoints' '/a:Point' '/a:Name', namespaces, ) for name in names: print(name.text) The namespaces come from the xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" and xmlns="http://www.etis.fskab.se/v1.0/ETISws" attributes on the Envelope and GetStartEndPointResponse nodes respectively. Keep in mind, a namespace is inherited by all children nodes of a parent even if the namespace isn't explicitly specified on the child's tag as <namespace:tag>. Note: I had to use response.content rather than response.body. A: An old question but worth to mention another option for this task. I like to use xmltodict (Github) a lightweight converter of XML to python dictionary. 
Take your soap response in a variable named stack Parse it with xmltodict.parse In [48]: stack_d = xmltodict.parse(stack) Check the result: In [49]: stack_d Out[49]: OrderedDict([('soap:Envelope', OrderedDict([('@xmlns:soap', 'http://schemas.xmlsoap.org/soap/envelope/'), ('@xmlns:xsd', 'http://www.w3.org/2001/XMLSchema'), ('@xmlns:xsi', 'http://www.w3.org/2001/XMLSchema-instance'), ('soap:Body', OrderedDict([('GetStartEndPointResponse', OrderedDict([('@xmlns', 'http://www.etis.fskab.se/v1.0/ETISws'), ('GetStartEndPointResult', OrderedDict([('Code', '0'), ('Message', None), ('StartPoints', OrderedDict([('Point', [OrderedDict([('Id', '545'), ('Name', 'Get Me'), ('Type', 'sometype'), ('X', '333'), ('Y', '222')]), OrderedDict([('Id', '634'), ('Name', 'Get me too'), ('Type', 'sometype'), ('X', '555'), ('Y', '777')])])]))]))]))]))]))]) At this point it become as easy as to browse a python dictionnary In [50]: stack_d['soap:Envelope']['soap:Body']['GetStartEndPointResponse']['GetStartEndPointResult']['StartPoints']['Point'] Out[50]: [OrderedDict([('Id', '545'), ('Name', 'Get Me'), ('Type', 'sometype'), ('X', '333'), ('Y', '222')]), OrderedDict([('Id', '634'), ('Name', 'Get me too'), ('Type', 'sometype'), ('X', '555'), ('Y', '777')])] A: Again, replying to an old question but I think this solution is worth sharing. Using BeautifulSoup was piece of cake for me. You can install BeautifulSoup form here. from bs4 import BeautifulSoup xml = BeautifulSoup(xml_string, 'xml') xml.find('soap:Body') # to get the soup:Body tag. xml.find('X') # for X tag A: Just replace all the 'soap:' and other namespace prefixes such as 'a:' with '' (just remove them an make it a non-SOAP xml file) new_response = response.text.replace('soap:', '').replace('a:', '') Then you can just proceed normally. A: try like this import requests from bs4 import BeautifulSoup response = requests.get('http://www.labs.skanetrafiken.se/v2.2/querystation.asp?inpPointfr=yst') xml = BeautifulSoup(response.text, 'xml') xml.find('soap:Body') # to get the soup:Body tag. xml.find('X') # for X tag
How to parse SOAP XML with Python?
Goal: Get the values inside <Name> tags and print them out. Simplified XML below. <?xml version="1.0" encoding="UTF-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soap:Body> <GetStartEndPointResponse xmlns="http://www.etis.fskab.se/v1.0/ETISws"> <GetStartEndPointResult> <Code>0</Code> <Message /> <StartPoints> <Point> <Id>545</Id> <Name>Get Me</Name> <Type>sometype</Type> <X>333</X> <Y>222</Y> </Point> <Point> <Id>634</Id> <Name>Get me too</Name> <Type>sometype</Type> <X>555</X> <Y>777</Y> </Point> </StartPoints> </GetStartEndPointResult> </GetStartEndPointResponse> </soap:Body> </soap:Envelope> Attempt: import requests from xml.etree import ElementTree response = requests.get('http://www.labs.skanetrafiken.se/v2.2/querystation.asp?inpPointfr=yst') # XML parsing here dom = ElementTree.fromstring(response.text) names = dom.findall('*/Name') for name in names: print(name.text) I have read other people recommending zeep to parse soap xml but I found it hard to get my head around.
[ "The issue here is dealing with the XML namespaces:\nimport requests\nfrom xml.etree import ElementTree\n\nresponse = requests.get('http://www.labs.skanetrafiken.se/v2.2/querystation.asp?inpPointfr=yst')\n\n# define namespace mappings to use as shorthand below\nnamespaces = {\n 'soap': 'http://schemas.xmlsoap.org/soap/envelope/',\n 'a': 'http://www.etis.fskab.se/v1.0/ETISws',\n}\ndom = ElementTree.fromstring(response.content)\n\n# reference the namespace mappings here by `<name>:`\nnames = dom.findall(\n './soap:Body'\n '/a:GetStartEndPointResponse'\n '/a:GetStartEndPointResult'\n '/a:StartPoints'\n '/a:Point'\n '/a:Name',\n namespaces,\n)\nfor name in names:\n print(name.text)\n\nThe namespaces come from the xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\" and xmlns=\"http://www.etis.fskab.se/v1.0/ETISws\" attributes on the Envelope and GetStartEndPointResponse nodes respectively.\nKeep in mind, a namespace is inherited by all children nodes of a parent even if the namespace isn't explicitly specified on the child's tag as <namespace:tag>.\nNote: I had to use response.content rather than response.body.\n", "An old question but worth to mention another option for this task.\nI like to use xmltodict (Github) a lightweight converter of XML to python dictionary.\nTake your soap response in a variable named stack\nParse it with xmltodict.parse\nIn [48]: stack_d = xmltodict.parse(stack)\n\nCheck the result:\nIn [49]: stack_d\nOut[49]:\nOrderedDict([('soap:Envelope',\n OrderedDict([('@xmlns:soap',\n 'http://schemas.xmlsoap.org/soap/envelope/'),\n ('@xmlns:xsd', 'http://www.w3.org/2001/XMLSchema'),\n ('@xmlns:xsi',\n 'http://www.w3.org/2001/XMLSchema-instance'),\n ('soap:Body',\n OrderedDict([('GetStartEndPointResponse',\n OrderedDict([('@xmlns',\n 'http://www.etis.fskab.se/v1.0/ETISws'),\n ('GetStartEndPointResult',\n OrderedDict([('Code',\n '0'),\n ('Message',\n None),\n ('StartPoints',\n OrderedDict([('Point',\n [OrderedDict([('Id',\n '545'),\n ('Name',\n 'Get Me'),\n ('Type',\n 'sometype'),\n ('X',\n '333'),\n ('Y',\n '222')]),\n OrderedDict([('Id',\n '634'),\n ('Name',\n 'Get me too'),\n ('Type',\n 'sometype'),\n ('X',\n '555'),\n ('Y',\n '777')])])]))]))]))]))]))])\n\nAt this point it become as easy as to browse a python dictionnary\nIn [50]: stack_d['soap:Envelope']['soap:Body']['GetStartEndPointResponse']['GetStartEndPointResult']['StartPoints']['Point']\nOut[50]:\n[OrderedDict([('Id', '545'),\n ('Name', 'Get Me'),\n ('Type', 'sometype'),\n ('X', '333'),\n ('Y', '222')]),\nOrderedDict([('Id', '634'),\n ('Name', 'Get me too'),\n ('Type', 'sometype'),\n ('X', '555'),\n ('Y', '777')])]\n\n", "Again, replying to an old question but I think this solution is worth sharing.\nUsing BeautifulSoup was piece of cake for me. You can install BeautifulSoup form here.\nfrom bs4 import BeautifulSoup\nxml = BeautifulSoup(xml_string, 'xml')\nxml.find('soap:Body') # to get the soup:Body tag. \nxml.find('X') # for X tag\n\n", "Just replace all the 'soap:' and other namespace prefixes such as 'a:' with '' (just remove them an make it a non-SOAP xml file)\nnew_response = response.text.replace('soap:', '').replace('a:', '')\nThen you can just proceed normally.\n", "try like this\nimport requests\nfrom bs4 import BeautifulSoup\n \nresponse = requests.get('http://www.labs.skanetrafiken.se/v2.2/querystation.asp?inpPointfr=yst')\n \nxml = BeautifulSoup(response.text, 'xml')\nxml.find('soap:Body') # to get the soup:Body tag.\nxml.find('X') # for X tag\n\n" ]
[ 26, 9, 2, 0, 0 ]
[]
[]
[ "python", "python_3.x", "soap", "xml", "zeep" ]
stackoverflow_0045250626_python_python_3.x_soap_xml_zeep.txt
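One more namespace-agnostic option, not shown in the answers above: if lxml is available, an XPath match on the local tag name avoids spelling out the namespace map:

import requests
from lxml import etree

response = requests.get('http://www.labs.skanetrafiken.se/v2.2/querystation.asp?inpPointfr=yst')
root = etree.fromstring(response.content)

# local-name() matches the tag regardless of which namespace it lives in
for name in root.xpath('//*[local-name()="Name"]'):
    print(name.text)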
Q: Index pandas DataFrame by column numbers, when column names are integers I am trying to keep just certain columns of a DataFrame, and it works fine when column names are strings: In [2]: import numpy as np In [3]: import pandas as pd In [4]: a = np.arange(35).reshape(5,7) In [5]: df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], ['a', 'b', 'c', 'd', 'e', 'f', 'g']) In [6]: df Out[6]: a b c d e f g x 0 1 2 3 4 5 6 y 7 8 9 10 11 12 13 u 14 15 16 17 18 19 20 z 21 22 23 24 25 26 27 w 28 29 30 31 32 33 34 [5 rows x 7 columns] In [7]: df[[1,3]] #No problem Out[7]: b d x 1 3 y 8 10 u 15 17 z 22 24 w 29 31 However, when column names are integers, I am getting a key error: In [8]: df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], range(10, 17)) In [9]: df Out[9]: 10 11 12 13 14 15 16 x 0 1 2 3 4 5 6 y 7 8 9 10 11 12 13 u 14 15 16 17 18 19 20 z 21 22 23 24 25 26 27 w 28 29 30 31 32 33 34 [5 rows x 7 columns] In [10]: df[[1,3]] Results in: KeyError: '[1 3] not in index' I can see why pandas does not allow that -> to avoid mix up between indexing by column names and column numbers. However, is there a way to tell pandas that I want to index by column numbers? Of course, one solution is to convert column names to strings, but I am wondering if there is a better solution. A: This is exactly the purpose of iloc, see here In [37]: df Out[37]: 10 11 12 13 14 15 16 x 0 1 2 3 4 5 6 y 7 8 9 10 11 12 13 u 14 15 16 17 18 19 20 z 21 22 23 24 25 26 27 w 28 29 30 31 32 33 34 In [38]: df.iloc[:,[1,3]] Out[38]: 11 13 x 1 3 y 8 10 u 15 17 z 22 24 w 29 31 A: Just convert the headers from integer to string. This should be done almost always as a best practice when working with pandas datasets to avoid surprise df.columns = df.columns.map(str) A: This is certainly one of those things that feels like a bug but is really a design decision (I think). A few work around options: rename the columns with their positions as their name: df.columns = arange(0,len(df.columns)) Another way is to get names from df.columns: print df[ df.columns[[1,3]] ] 11 13 x 1 3 y 8 10 u 15 17 z 22 24 w 29 31 I suspect this is the most appealing as it just requires adding a wee bit of code and not changing any column names. A: import pandas as pd df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], range(10, 17)) #Let say you want to keep only columns 1,2 (these are locations not names) needed_columns = [1,2] df = df[df.columns[needed_columns] print(df) 11 12 x 1 2 y 8 9 u 15 16 z 22 23 w 29 30
Index pandas DataFrame by column numbers, when column names are integers
I am trying to keep just certain columns of a DataFrame, and it works fine when column names are strings: In [2]: import numpy as np In [3]: import pandas as pd In [4]: a = np.arange(35).reshape(5,7) In [5]: df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], ['a', 'b', 'c', 'd', 'e', 'f', 'g']) In [6]: df Out[6]: a b c d e f g x 0 1 2 3 4 5 6 y 7 8 9 10 11 12 13 u 14 15 16 17 18 19 20 z 21 22 23 24 25 26 27 w 28 29 30 31 32 33 34 [5 rows x 7 columns] In [7]: df[[1,3]] #No problem Out[7]: b d x 1 3 y 8 10 u 15 17 z 22 24 w 29 31 However, when column names are integers, I am getting a key error: In [8]: df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], range(10, 17)) In [9]: df Out[9]: 10 11 12 13 14 15 16 x 0 1 2 3 4 5 6 y 7 8 9 10 11 12 13 u 14 15 16 17 18 19 20 z 21 22 23 24 25 26 27 w 28 29 30 31 32 33 34 [5 rows x 7 columns] In [10]: df[[1,3]] Results in: KeyError: '[1 3] not in index' I can see why pandas does not allow that -> to avoid mix up between indexing by column names and column numbers. However, is there a way to tell pandas that I want to index by column numbers? Of course, one solution is to convert column names to strings, but I am wondering if there is a better solution.
[ "This is exactly the purpose of iloc, see here\nIn [37]: df\nOut[37]: \n 10 11 12 13 14 15 16\nx 0 1 2 3 4 5 6\ny 7 8 9 10 11 12 13\nu 14 15 16 17 18 19 20\nz 21 22 23 24 25 26 27\nw 28 29 30 31 32 33 34\n\nIn [38]: df.iloc[:,[1,3]]\nOut[38]: \n 11 13\nx 1 3\ny 8 10\nu 15 17\nz 22 24\nw 29 31\n\n", "Just convert the headers from integer to string. This should be done almost always as a best practice when working with pandas datasets to avoid surprise\ndf.columns = df.columns.map(str)\n\n", "This is certainly one of those things that feels like a bug but is really a design decision (I think).\nA few work around options:\nrename the columns with their positions as their name:\n df.columns = arange(0,len(df.columns))\n\nAnother way is to get names from df.columns:\nprint df[ df.columns[[1,3]] ]\n 11 13\nx 1 3\ny 8 10\nu 15 17\nz 22 24\nw 29 31\n\nI suspect this is the most appealing as it just requires adding a wee bit of code and not changing any column names. \n", "import pandas as pd\ndf = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], range(10, 17))\n\n#Let say you want to keep only columns 1,2 (these are locations not names)\nneeded_columns = [1,2]\n\ndf = df[df.columns[needed_columns]\n\nprint(df)\n\n11 12\nx 1 2\ny 8 9\nu 15 16\nz 22 23\nw 29 30\n\n\n" ]
[ 20, 10, 3, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0027156278_pandas_python.txt
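To make the position/label distinction explicit for the integer-named frame in the question:

df.iloc[:, [1, 3]]   # by position: columns 1 and 3 (their labels are 11 and 13)
df.loc[:, [11, 13]]  # by label: the same two columns via their integer names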
Q: How to slice list based on a condition that every element of another list must appear atleast once? I have two lists : a = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2, 1, 3, 5, 7, 0] key = [1, 2, 4, 6] I want to check if all elements in the key have atleast once appeared in the list a and remove the ones after that. desired output : a = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2] here is what i tried: if a[-1] not in key: indx = -1 while indx < 0: if a[indx] in k: ind = indx indx = 1 else: indx= indx-1 a = a[:ind+1] but this just check if the last element of a is in key. Idk how to check for the condition if all the key elements have appeared atleast once. Can some help ? A: This function slices the list based on the condition that every element of the key must appear at least once in a. def slice_list(a, key): for i in range(len(a)): # iterate over the list if a[i] in key: # check if the element is in the key key.remove(a[i]) # remove the element from the key if not key: # if the key is empty return a[: i + 1] # return the sliced list return a # if the key is not empty return the original list print(slice_list(a, key)) Output: [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2] A: Try: a = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2, 1, 3, 5, 7, 0] key = [1, 2, 4, 6] max_idx = max(a.index(k) for k in key) print(a[: max_idx + 1]) Prints: [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2] A: Another method to get the same result :) for i in range(len(a)): if all(x in a[:i] for x in key): b = a[:i] break print(b) Output: [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2] A: Here is an efficient solution that is O(n) and doesn't require slicing lists or a list.index operation in a loop: First, create a dictionary mapping each element of a to the index of its first occurrence. a = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2, 1, 3, 5, 7, 0] a_lookup = dict() for ind, val in enumerate(a): if val not in a_lookup: a_lookup[val] = ind This gives us a_lookup = {3: 0, 8: 1, 5: 2, 1: 3, 4: 4, 7: 5, 6: 8, 2: 10, 0: 15} Next, find the largest value in the dictionary, for all keys in the key list. If we use dict.get to get the keys, a non-existent key will return None, which will cause a TypeError in the max call. We can catch this and handle it appropriately. Once we've found the maximum index, slice the list until this index to get what we need. key = [1, 2, 4, 6] try: max_index = max(a_lookup.get(k) for k in key) sliced_list = a[:max_index+1] except TypeError: print("Error: all keys do not exist in a") which gives, sliced_list = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2]
How to slice list based on a condition that every element of another list must appear at least once?
I have two lists: a = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2, 1, 3, 5, 7, 0] key = [1, 2, 4, 6] I want to check if all elements in the key have appeared at least once in the list a and remove the ones after that. Desired output: a = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2] Here is what I tried: if a[-1] not in key: indx = -1 while indx < 0: if a[indx] in k: ind = indx indx = 1 else: indx= indx-1 a = a[:ind+1] but this only checks whether the last element of a is in key. I don't know how to check the condition that all the key elements have appeared at least once. Can someone help?
[ "This function slices the list based on the condition that every element of the key must appear at least once in a.\ndef slice_list(a, key):\n for i in range(len(a)): # iterate over the list\n if a[i] in key: # check if the element is in the key\n key.remove(a[i]) # remove the element from the key\n if not key: # if the key is empty\n return a[: i + 1] # return the sliced list\n return a # if the key is not empty return the original list\n\n\nprint(slice_list(a, key))\n\n\nOutput: [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2]\n\n", "Try:\na = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2, 1, 3, 5, 7, 0]\nkey = [1, 2, 4, 6]\n\nmax_idx = max(a.index(k) for k in key)\nprint(a[: max_idx + 1])\n\nPrints:\n[3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2]\n\n", "Another method to get the same result :)\nfor i in range(len(a)):\n if all(x in a[:i] for x in key):\n b = a[:i]\n break\nprint(b)\nOutput: [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2]\n\n", "Here is an efficient solution that is O(n) and doesn't require slicing lists or a list.index operation in a loop:\nFirst, create a dictionary mapping each element of a to the index of its first occurrence.\na = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2, 1, 3, 5, 7, 0]\n\na_lookup = dict()\nfor ind, val in enumerate(a):\n if val not in a_lookup: \n a_lookup[val] = ind\n\nThis gives us\na_lookup = {3: 0, 8: 1, 5: 2, 1: 3, 4: 4, 7: 5, 6: 8, 2: 10, 0: 15}\n\nNext, find the largest value in the dictionary, for all keys in the key list. If we use dict.get to get the keys, a non-existent key will return None, which will cause a TypeError in the max call. We can catch this and handle it appropriately. Once we've found the maximum index, slice the list until this index to get what we need.\nkey = [1, 2, 4, 6]\n\ntry:\n max_index = max(a_lookup.get(k) for k in key)\n sliced_list = a[:max_index+1]\nexcept TypeError:\n print(\"Error: all keys do not exist in a\")\n\nwhich gives,\nsliced_list = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2]\n\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "for_loop", "list", "python", "python_3.x", "slice" ]
stackoverflow_0074467118_for_loop_list_python_python_3.x_slice.txt
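A set-based variant of the first answer above; discarding seen values from a set avoids repeated list removals and also tolerates duplicate values in key:

def slice_until_all_seen(a, key):
    remaining = set(key)
    for i, x in enumerate(a):
        remaining.discard(x)
        if not remaining:
            return a[:i + 1]
    return a  # some key values never appeared

a = [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2, 1, 3, 5, 7, 0]
key = [1, 2, 4, 6]
print(slice_until_all_seen(a, key))  # [3, 8, 5, 1, 4, 7, 1, 3, 6, 8, 2]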
Q: How can I stop AWS lambda from recursive invocations I have a lambda function that will read an excel file and do some stuffs and then store the result in a different S3 bucket. def lambda_handler(event, context): try: status = int(event['status']) if status: Reading_Bucket_Name = 'read-data-s3' Writing_Bucket_Name = 'write-excel-file-bucket' rowDataFile = 'Analyse.xlsx' HTMLfileName = 'index.html' url = loading_function(Reading_Bucket_Name=Reading_Bucket_Name, Writing_Bucket_Name=Writing_Bucket_Name, rowDataFile=rowDataFile, HTMLfileName=HTMLfileName) status = 0 return {"statusCode": 200, "URL": url} else: return {"statusCode": 400, "Error": "The code could not be executed"} except Exception as e: print('#________ An error occurred while reading Status code int(event[status]) ________#') print(e) raise e return None The code is only supposed to work once! And that it returns the URL and then turns off and exit the Lambda function. But the problem is: I will get the first output, and then the lambda function will call itself again! And it will go to the exception and execute it at least many times! Because there is no event['status']. 'This must be received if I call this function by: { "status": "1" } How can I stop execution after getting the first output? Update: This will cause the problem by uploading a new file to an S3 bucket: s3_client = boto3.client('s3') fig.write_html('/tmp/' + HTMLfileName, auto_play=False) response = s3_client.upload_file('/tmp/' + HTMLfileName, Writing_Bucket_Name, HTMLfileName, ExtraArgs={'ACL':'public-read', 'ContentType':'text/html'}) return True A: Given that the Lambda function appears to be running when a new object is created in the Amazon S3 bucket, it would appear that the bucket has been configured with an Event Notification that is triggering the AWS Lambda function. To check this, go the to bucket in the S3 management console, go to the Properties tab and scroll down to Event notifications. Look for any configured events that trigger a Lambda function.
How can I stop AWS lambda from recursive invocations
I have a lambda function that will read an excel file and do some stuffs and then store the result in a different S3 bucket. def lambda_handler(event, context): try: status = int(event['status']) if status: Reading_Bucket_Name = 'read-data-s3' Writing_Bucket_Name = 'write-excel-file-bucket' rowDataFile = 'Analyse.xlsx' HTMLfileName = 'index.html' url = loading_function(Reading_Bucket_Name=Reading_Bucket_Name, Writing_Bucket_Name=Writing_Bucket_Name, rowDataFile=rowDataFile, HTMLfileName=HTMLfileName) status = 0 return {"statusCode": 200, "URL": url} else: return {"statusCode": 400, "Error": "The code could not be executed"} except Exception as e: print('#________ An error occurred while reading Status code int(event[status]) ________#') print(e) raise e return None The code is only supposed to work once! And that it returns the URL and then turns off and exit the Lambda function. But the problem is: I will get the first output, and then the lambda function will call itself again! And it will go to the exception and execute it at least many times! Because there is no event['status']. 'This must be received if I call this function by: { "status": "1" } How can I stop execution after getting the first output? Update: This will cause the problem by uploading a new file to an S3 bucket: s3_client = boto3.client('s3') fig.write_html('/tmp/' + HTMLfileName, auto_play=False) response = s3_client.upload_file('/tmp/' + HTMLfileName, Writing_Bucket_Name, HTMLfileName, ExtraArgs={'ACL':'public-read', 'ContentType':'text/html'}) return True
[ "Given that the Lambda function appears to be running when a new object is created in the Amazon S3 bucket, it would appear that the bucket has been configured with an Event Notification that is triggering the AWS Lambda function.\nTo check this, go the to bucket in the S3 management console, go to the Properties tab and scroll down to Event notifications. Look for any configured events that trigger a Lambda function.\n" ]
[ 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "aws_lambda", "python" ]
stackoverflow_0074458771_amazon_s3_amazon_web_services_aws_lambda_python.txt
Q: How to make an inset plot with mollweide projection? I want to make a skymap using the Mollweide projection for a main set of axes and for an inset axes. This is easy for the main axes but not for the inset. I've tried a few different things but it doesn't work for the inset. Please help! Here you can find the latitude and longitude data, and here you can find the sky location probability density data. First, I make the main plot: xmin = min(l) xmax = max(l) ymin = min(b) ymax = max(b) X, Y = np.meshgrid(np.linspace(xmin, xmax, 100), np.linspace(ymin, ymax, 100)) mpl.rcParams["text.usetex"] = True fig = plt.figure(1) fig.set_figheight(8) fig.set_figwidth(8) ax = plt.axes(projection='mollweide') ax.grid() # skypost is the sky location probability-density data accessible above plt.contour(X, Y, skypost, colors='blue', levels=[5, 50, 95]) which works fine. Next, I define the inset axes and plot the contours, however there seems to be no way that completely works for this. What I want is for the inset to zoom-in on the contours while keeping the mollweide projection. I've tried to do as the example on ligo.skymaps, i.e., axesinset = plt.axes( [0.0, 0.2, 0.25, 0.25], projection='astro degrees zoom', center='110d +20d', radius='10 deg' ) plt.sca(axesinset) axesinset.contour(X, Y, skypost, colors='blue', levels=[5, 50, 95]) axesinset.grid() but this doesn't work since the contours don't even appear! I don't understand why they don't appear. I also do not understand why the x-axis of the inset is backwards? Instead, I've tried just plotting a new mollweide projection in the inset, and restricting the xlim and ylim, but it says these options are not supported for the mollweide projection. Is there a way around this to restrict the axes limits? Lastly, I've tried just doing a regular inset without the mollweide, which works, however the shape of the contours are distorted relative to the contours on the main mollweide plot which is physically relevant for my case. So this is very sub-optimal. Any suggestions and advice are greatly appreciated. A: To have the axis in the correct way, you can rotate the subplot by using rotate. Concerning the fact that your contour are not shown, it is probably because you have to add the transform keyword. If you don't specify it, it is plotted in pixel coordinates by default (https://docs.astropy.org/en/stable/visualization/wcsaxes/overlays.html). The example below shows that the desired point (in blue) is obtained by adding ax.get_transform("world"). The blue and green points are in the lower right corner because of the rotate. I guess that it should be the same for contour. ax = plt.subplot(111, projection='geo degrees zoom', center="0d - 0d", radius='10 deg', rotate='180 deg') ax.grid() ax.set_xlabel(r"$\phi \, [deg]$") ax.set_ylabel(r"$\theta \, [deg]$") ax.scatter(0,0, color = "blue") ax.scatter(100,0, color = "green") ax.scatter(0,0, color = "red", transform = ax.get_transform("world")) A: I'm a bit late to the party, but I thought its worth mentioning that I've created a nice inset-map functionality for EOmaps... It lets you create inset-maps in arbitrary projections and you can add whatever features you want! 
from eomaps import Maps m = Maps(Maps.CRS.Mollweide()) m.add_feature.preset.coastline() # create a rectangular inset-map that shows a 5 degree rectangle # centered around a given point inset = m.new_inset_map(xy=(6, 43), xy_crs=4326, radius=5, radius_crs=4326, inset_crs=Maps.CRS.Mollweide(), shape="rectangles") inset.add_feature.preset.coastline() inset.add_feature.preset.ocean() inset.add_feature.cultural_10m.urban_areas(fc="r", ec="none") m.apply_layout( {'0_map': [0.01, 0.17333, 0.98, 0.65333], '1_map': [0.05, 0.11667, 0.43341, 0.76667]})
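Putting the first answer's advice directly into the inset code from the question gives a sketch like the one below. It is untested against the linked data and assumes X and Y are expressed in the units the axes' 'world' frame expects (degrees for these ligo.skymap zoom axes), so a radians-to-degrees conversion may be needed if the main Mollweide plot was built in radians.

# Sketch: same inset as in the question, but the contours are drawn through
# the world-coordinate transform instead of pixel coordinates.
axesinset = plt.axes(
    [0.0, 0.2, 0.25, 0.25],
    projection='astro degrees zoom',
    center='110d +20d',
    radius='10 deg',
)
axesinset.contour(
    X, Y, skypost,
    colors='blue', levels=[5, 50, 95],
    transform=axesinset.get_transform('world'),
)
axesinset.grid()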
How to make an inset plot with mollweide projection?
I want to make a skymap using the Mollweide projection for a main set of axes and for an inset axes. This is easy for the main axes but not for the inset. I've tried a few different things but it doesn't work for the inset. Please help! Here you can find the latitude and longitude data, and here you can find the sky location probability density data. First, I make the main plot: xmin = min(l) xmax = max(l) ymin = min(b) ymax = max(b) X, Y = np.meshgrid(np.linspace(xmin, xmax, 100), np.linspace(ymin, ymax, 100)) mpl.rcParams["text.usetex"] = True fig = plt.figure(1) fig.set_figheight(8) fig.set_figwidth(8) ax = plt.axes(projection='mollweide') ax.grid() # skypost is the sky location probability-density data accessible above plt.contour(X, Y, skypost, colors='blue', levels=[5, 50, 95]) which works fine. Next, I define the inset axes and plot the contours, however there seems to be no way that completely works for this. What I want is for the inset to zoom-in on the contours while keeping the mollweide projection. I've tried to do as the example on ligo.skymaps, i.e., axesinset = plt.axes( [0.0, 0.2, 0.25, 0.25], projection='astro degrees zoom', center='110d +20d', radius='10 deg' ) plt.sca(axesinset) axesinset.contour(X, Y, skypost, colors='blue', levels=[5, 50, 95]) axesinset.grid() but this doesn't work since the contours don't even appear! I don't understand why they don't appear. I also do not understand why the x-axis of the inset is backwards? Instead, I've tried just plotting a new mollweide projection in the inset, and restricting the xlim and ylim, but it says these options are not supported for the mollweide projection. Is there a way around this to restrict the axes limits? Lastly, I've tried just doing a regular inset without the mollweide, which works, however the shape of the contours are distorted relative to the contours on the main mollweide plot which is physically relevant for my case. So this is very sub-optimal. Any suggestions and advice are greatly appreciated.
[ "To have the axis in the correct way, you can rotate the subplot by using rotate.\nConcerning the fact that your contour are not shown, it is probably because you have to add the transform keyword. If you don't specify it, it is plotted in pixel coordinates by default (https://docs.astropy.org/en/stable/visualization/wcsaxes/overlays.html).\nThe example below shows that the desired point (in blue) is obtained by adding ax.get_transform(\"world\").\nThe blue and green points are in the lower right corner because of the rotate.\nI guess that it should be the same for contour.\nax = plt.subplot(111, projection='geo degrees zoom',\n center=\"0d - 0d\", radius='10 deg', rotate='180 deg')\nax.grid()\nax.set_xlabel(r\"$\\phi \\, [deg]$\")\nax.set_ylabel(r\"$\\theta \\, [deg]$\")\n\nax.scatter(0,0, color = \"blue\")\nax.scatter(100,0, color = \"green\")\nax.scatter(0,0, color = \"red\", transform = ax.get_transform(\"world\"))\n\n\n", "I'm a bit late to the party, but I thought its worth mentioning that I've created a nice inset-map functionality for EOmaps...\nIt lets you create inset-maps in arbitrary projections and you can add whatever features you want!\nfrom eomaps import Maps\n\nm = Maps(Maps.CRS.Mollweide())\nm.add_feature.preset.coastline()\n\n# create a rectangular inset-map that shows a 5 degree rectangle\n# centered around a given point\ninset = m.new_inset_map(xy=(6, 43), xy_crs=4326,\n radius=5, radius_crs=4326,\n inset_crs=Maps.CRS.Mollweide(),\n shape=\"rectangles\")\ninset.add_feature.preset.coastline()\ninset.add_feature.preset.ocean()\ninset.add_feature.cultural_10m.urban_areas(fc=\"r\", ec=\"none\")\n\nm.apply_layout(\n {'0_map': [0.01, 0.17333, 0.98, 0.65333],\n '1_map': [0.05, 0.11667, 0.43341, 0.76667]})\n\n\n" ]
[ 1, 0 ]
[]
[]
[ "insets", "map_projections", "matplotlib", "python", "python_3.x" ]
stackoverflow_0073415539_insets_map_projections_matplotlib_python_python_3.x.txt
Q: Aggregate daily data by month and an additional column I've got a DataFrame storing daily-based data which is as below: Date Product Number Description Revenue 2010-01-04 4219-057 Product A 39.299999 2010-01-04 4219-056 Product A 39.520000 2010-01-04 4219-100 Product B 39.520000 2010-01-04 4219-056 Product A 39.520000 2010-01-05 4219-059 Product A 39.520000 2010-01-05 4219-056 Product A 39.520000 2010-01-05 4219-056 Product B 39.520000 2010-02-08 4219-123 Product A 39.520000 2010-02-08 4219-345 Product A 39.520000 2010-02-08 4219-456 Product B 39.520000 2010-02-08 4219-567 Product C 39.520000 2010-02-08 4219-789 Product D 39.520000 (Product number is just to give an idea) What I intend to do is to merge it into Monthly-based data. Something like: Date Description Revenue 2010-01-01 Product A 157.85000 (Sum of all Product A in Month 01) Product B 79.040000 Product C 00.000000 Product D 00.000000 2010-02-01 Product A 39.299999 (Sum of all Product A in Month 02) Product B 39.520000 Product C 39.520000 Product D 39.520000 The problem is I have 500+ products for every month I am new to python and don't know how to implement it. Currently, I am using import pandas as pd import numpy as np import matplotlib %matplotlib inline data.groupby(['DATE','REVENUE']).sum().unstack() but not grouping it with the Products. How can I implement this? A: Convert "Date" to datetime, then use groupby and sum: # Do this first, if necessary. df['Date'] = pd.to_datetime(df['Date'], errors='coerce') (df.groupby([pd.Grouper(key='Date', freq='MS'), 'Description'])['Revenue'] .sum() .reset_index()) Date Description Revenue 0 2010-01-01 A 197.379999 1 2010-01-01 B 79.040000 2 2010-02-01 A 79.040000 3 2010-02-01 B 39.520000 4 2010-02-01 C 39.520000 5 2010-02-01 D 39.520000 The fréquency "MS" specifies to group on dates and set the offset to the start of each month.
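The desired output also lists products that had no sales in a given month (Product C and Product D show 0 in January), and the grouped sum above drops those missing combinations. One way to keep them, assuming df has 'Date' (datetime), 'Description' and 'Revenue' columns, is to unstack and restack with a fill value; note that products which never appear anywhere in the data still will not show up.

# Sketch: monthly totals that keep zero-revenue months for every product
# seen in the data.
monthly = (
    df.groupby([pd.Grouper(key='Date', freq='MS'), 'Description'])['Revenue']
      .sum()
      .unstack(fill_value=0)   # one column per product, missing months -> 0
      .stack()                 # back to long form, zeros preserved
      .rename('Revenue')
      .reset_index()
)
print(monthly)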
Aggregate daily data by month and an additional column
I've got a DataFrame storing daily-based data which is as below: Date Product Number Description Revenue 2010-01-04 4219-057 Product A 39.299999 2010-01-04 4219-056 Product A 39.520000 2010-01-04 4219-100 Product B 39.520000 2010-01-04 4219-056 Product A 39.520000 2010-01-05 4219-059 Product A 39.520000 2010-01-05 4219-056 Product A 39.520000 2010-01-05 4219-056 Product B 39.520000 2010-02-08 4219-123 Product A 39.520000 2010-02-08 4219-345 Product A 39.520000 2010-02-08 4219-456 Product B 39.520000 2010-02-08 4219-567 Product C 39.520000 2010-02-08 4219-789 Product D 39.520000 (Product number is just to give an idea) What I intend to do is to merge it into Monthly-based data. Something like: Date Description Revenue 2010-01-01 Product A 157.85000 (Sum of all Product A in Month 01) Product B 79.040000 Product C 00.000000 Product D 00.000000 2010-02-01 Product A 39.299999 (Sum of all Product A in Month 02) Product B 39.520000 Product C 39.520000 Product D 39.520000 The problem is I have 500+ products for every month I am new to python and don't know how to implement it. Currently, I am using import pandas as pd import numpy as np import matplotlib %matplotlib inline data.groupby(['DATE','REVENUE']).sum().unstack() but not grouping it with the Products. How can I implement this?
[ "Convert \"Date\" to datetime, then use groupby and sum:\n# Do this first, if necessary.\ndf['Date'] = pd.to_datetime(df['Date'], errors='coerce')\n\n(df.groupby([pd.Grouper(key='Date', freq='MS'), 'Description'])['Revenue']\n .sum()\n .reset_index())\n\n Date Description Revenue\n0 2010-01-01 A 197.379999\n1 2010-01-01 B 79.040000\n2 2010-02-01 A 79.040000\n3 2010-02-01 B 39.520000\n4 2010-02-01 C 39.520000\n5 2010-02-01 D 39.520000\n\nThe fréquency \"MS\" specifies to group on dates and set the offset to the start of each month. \n" ]
[ 0 ]
[ "This is a bit of a workaround but if you simply create a 'Month_Year' variable in a new column using -\ndf['Month_Year'] = df['Date'].dt.to_period('M')\n\nYou can then groupby that column and aggregate as needed, like so -\ndf_agg = df.groupby([\"Month_Year\", \"Description\"])['Revenue'].sum().reset_index()\n\n" ]
[ -1 ]
[ "group_by", "pandas", "pandas_groupby", "python" ]
stackoverflow_0056285925_group_by_pandas_pandas_groupby_python.txt
Q: I can't read date without time from CSV using pandas I have this dataframe: forecasts Out[15]: timestamp 1 2 0 2022-11-08 12:12:15 5679.658691 5400.217773 1 2022-11-08 12:38:49 5679.658691 5400.217773 2 2022-11-09 11:05:53 5863.616699 5619.101562 3 2022-11-10 10:46:27 6047.025391 5714.026367 4 2022-11-11 11:59:29 6147.197754 5750.312988 5 2022-11-12 11:56:45 6008.574707 5775.820312 And I'm trying to get the forecasts on a specific date without including the hour: forecasts = forecasts[forecasts['timestamp'] == pd.Timestamp(str(2022) + '-' + str(11) + '-' + str(11))] to read this date: 2022-11-11 11:59:29 But I receive an empty dataframe. How can I fix that? A: you can use: forecasts = forecasts[forecasts['timestamp'].dt.strftime('%Y-%m-%d') == '2022-11-11'] #or forecasts = forecasts[forecasts['timestamp'].dt.strftime('%Y-%m-%d') == (str(2022) + '-' + str(11) + '-' + str(11))]
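Another option, assuming the 'timestamp' column has already been parsed as datetimes (for example with pd.to_datetime or parse_dates in read_csv), is to compare on the date component directly instead of formatting to strings:

import datetime

target = datetime.date(2022, 11, 11)
forecasts = forecasts[forecasts['timestamp'].dt.date == target]

# or, staying entirely in pandas datetimes:
# forecasts = forecasts[forecasts['timestamp'].dt.normalize() == pd.Timestamp('2022-11-11')]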
I can't read date without time from CSV using pandas
I have this dataframe: forecasts Out[15]: timestamp 1 2 0 2022-11-08 12:12:15 5679.658691 5400.217773 1 2022-11-08 12:38:49 5679.658691 5400.217773 2 2022-11-09 11:05:53 5863.616699 5619.101562 3 2022-11-10 10:46:27 6047.025391 5714.026367 4 2022-11-11 11:59:29 6147.197754 5750.312988 5 2022-11-12 11:56:45 6008.574707 5775.820312 And I'm trying to get the forecasts on a specific date without including the hour: forecasts = forecasts[forecasts['timestamp'] == pd.Timestamp(str(2022) + '-' + str(11) + '-' + str(11))] to read this date: 2022-11-11 11:59:29 But I receive an empty dataframe. How can I fix that?
[ "you can use:\nforecasts = forecasts[forecasts['timestamp'].dt.strftime('%Y-%m-%d') == '2022-11-11']\n#or\nforecasts = forecasts[forecasts['timestamp'].dt.strftime('%Y-%m-%d') == (str(2022) + '-' + str(11) + '-' + str(11))]\n\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074467355_pandas_python.txt
Q: How to click all the fetched links from a search result in selenium using python? In selenium, I am grabbing some search result URLs by XPATH. Now I want to click them one by one, which will open them in the same browser where the base URL is opened, so that I can switch between them. How can I do that? I am giving my code below. import time from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By serv_obj = Service("F:\Softwares\Selenium WebDrivers\chromedriver.exe") driver = webdriver.Chrome(service=serv_obj) driver.maximize_window() driver.implicitly_wait(5) url = "https://testautomationpractice.blogspot.com/" driver.get(url) driver.find_element(By.XPATH, "//input[@id='Wikipedia1_wikipedia-search-input']").send_keys("selenium") driver.find_element(By.XPATH, "//input[@type='submit']").click() search_result = driver.find_elements(By.XPATH, "//div[@id='wikipedia-search-result-link']/a") links = [] for item in search_result: url_data = item.get_attribute("href") links.append(url_data) print(url_data) print(len(links)) print(links) I have grabbed all the links from the search result by using a customized XPATH. I am able to print them as well. But I want to open/click every resulting link one by one in the same browser.
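Since the question's loop already collects the href values into links, another sketch is to navigate to each URL with driver.get() instead of clicking the elements; nothing goes stale because no stored WebElement is reused after the page changes.

# Sketch: visit every collected result URL in the same tab.
for link_url in links:
    driver.get(link_url)   # open the result in the same browser window
    # ... do whatever scraping or checks are needed on the opened page ...
    time.sleep(1)          # stand-in for the real work

driver.get(url)            # optionally return to the search page afterwards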
How to click all the fetched links from a search result in selenium using python?
In selenium, I am grabbing some search result URL by XPATH. Now I want to click then one by one which will open then in the same browser one by one where the base URL is opened so that I can switch between then. How can I do that? I am giving my code below. import time from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By serv_obj = Service("F:\Softwares\Selenium WebDrivers\chromedriver.exe") driver = webdriver.Chrome(service=serv_obj) driver.maximize_window() driver.implicitly_wait(5) url = "https://testautomationpractice.blogspot.com/" driver.get(url) driver.find_element(By.XPATH, "//input[@id='Wikipedia1_wikipedia-search-input']").send_keys("selenium") driver.find_element(By.XPATH, "//input[@type='submit']").click() search_result = driver.find_elements(By.XPATH, "//div[@id='wikipedia-search-result-link']/a") links = [] for item in search_result: url_data = item.get_attribute("href") links.append(url_data) print(url_data) print(len(links)) print(links) I have grabbed all the links from the search result by using customized XPATH. I am being able yo print them also. But I want to open/click on the every resulted link one by one in the same browser.
[ "You can do that as following:\nGet the list of the links.\nIn a loop click on grabbed links.\nWhen link is opened in a new tab switch the driver to the new opened tab.\nDo there what you want to do (I simulated this by a simple delay of 1 second).\nClose the new tab.\nSwitch back to the first tab.\nCollect the list of links again since the previously collected links become Stale reference.\nThe following code works:\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 20)\n\n\nurl = \"https://testautomationpractice.blogspot.com/\"\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//input[@id='Wikipedia1_wikipedia-search-input']\"))).send_keys(\"selenium\")\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//input[@type='submit']\"))).click()\nlinks = wait.until(EC.presence_of_all_elements_located((By.XPATH, \"//div[@id='wikipedia-search-result-link']/a\")))\nfor index, link in enumerate(links):\n links[index].click()\n driver.switch_to.window(driver.window_handles[1])\n time.sleep(1)\n driver.close()\n driver.switch_to.window(driver.window_handles[0])\n links = wait.until(EC.presence_of_all_elements_located((By.XPATH, \"//div[@id='wikipedia-search-result-link']/a\")))\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "python", "selenium", "selenium_webdriver", "staleelementreferenceexception" ]
stackoverflow_0074466251_for_loop_python_selenium_selenium_webdriver_staleelementreferenceexception.txt
Q: No module named tensorflow in jupyter I have some imports in my jupyter notebook and among them is tensorflow: ImportError Traceback (most recent call last) <ipython-input-2-482704985f85> in <module>() 4 import numpy as np 5 import six.moves.copyreg as copyreg ----> 6 import tensorflow as tf 7 from six.moves import cPickle as pickle 8 from six.moves import range ImportError: No module named tensorflow I have it on my computer, in a special enviroment and all connected stuff also: Requirement already satisfied (use --upgrade to upgrade): tensorflow in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages Requirement already satisfied (use --upgrade to upgrade): six>=1.10.0 in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow) Requirement already satisfied (use --upgrade to upgrade): protobuf==3.0.0b2 in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow) Requirement already satisfied (use --upgrade to upgrade): numpy>=1.10.1 in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow) Requirement already satisfied (use --upgrade to upgrade): wheel in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow) Requirement already satisfied (use --upgrade to upgrade): setuptools in ./setuptools-23.0.0-py2.7.egg (from protobuf==3.0.0b2->tensorflow) I can import tensorflow on my computer: >>> import tensorflow as tf >>> So I'm confused why this is another situation in notebook? A: If you installed a TensorFlow as it said in official documentation: https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#overview I mean creating an environment called tensorflow and tested your installation in python, but TensorFlow can not be imported in jupyter, you have to install jupyter in your tensorflow environment too: conda install jupyter notebook After that I run a jupyter and it can import TensorFlow too: jupyter notebook A: Jupyter runs under the conda environment where as your tensorflow install lives outside conda. In order to install tensorflow under the conda virtual environment run the following command in your terminal: conda install -c conda-forge tensorflow A: I had the same problem, and solved it by looking at the output of: jupyter kernelspec list which outputs the kernel information: python2 /Users/Username/Library/Jupyter/kernels/python2 python3 /Users/Username/Library/Jupyter/kernels/python3 Notice that the path points to the Jupyter kernel for the user. To use it within the the Anaconda environment, it needs to point to the conda env you are using, and look something like Anaconda3\envs\Env_Name\share\jupyter\kernels\python3. So, to remove the Jupyter kernelspec, just use: jupyter kernelspec remove python3 or jupyter kernelspec remove python2 if you're using python 2 Now, the output of jupyter kernelspec list should point to the correct kernel. See https://github.com/jupyter/notebook/issues/397 for more information about this. A: Conda environment fetches the tensorflow package from the main system site-packages. Step 1: Just deactivate conda environment conda deactivate pip install tensorflow Step 2: Switch back to conda environment conda activate YOUR_ENV_NAME jupyter notebook Step 3: Run the cell with import tensorflow you should be able to import. Thanks A: I also had the same problem for a long time. I wanted to import tensorflow inside the jupyter notebook within windows 10. 
I followed all the instructions and commands that were suggested and it was not working from the command prompt. Finally, I tried this command with the Anaconda Prompt and it worked successfully. If you are using jupyter notebook within Anaconda then go goto the windows search terminal and type "Anaconda Prompt" and inside it type following command, It will install the tensorflow inside the jupyter notebook. conda install -c conda-forge tensorflow A: the problem may when the Jupyter notebook may launching from the default but for able to import tensorflow and keras libraries so you have to install jupyter notebook like what you have installed the libraries pip install jupyter A: Run python -m ipykernel install --user --name <Environment_Name>. This should add your environment to the jupyter kernel list. Change the kernel using Kernel->Change Kernel option or New-><Environment_Name>. Note : Replace <Environment_Name> with the actual name of the environment. A: run this command which will install tensorflow inside conda conda install -c conda-forge tensorflow A: This is what I did to fix this issue - I installed tensorflow for windows by using below link - https://www.tensorflow.org/install/install_windows Once done - I activated tensorflow by using below command - C:> activate tensorflow (tensorflow)C:> # Your prompt should change Once done I ran below command - (tensorflow)C:> conda install notebook Fetching package metadata ........... Solving package specifications: . Package plan for installation in environment The following NEW packages will be INSTALLED: bleach: 1.5.0-py35_0 colorama: 0.3.9-py35_0 decorator: 4.1.2-py35_0 entrypoints: 0.2.3-py35_0 html5lib: 0.9999999-py35_0 ipykernel: 4.6.1-py35_0 ---- --- jupyter_client 100% |###############################| Time: 0:00:00 6.77 MB/s nbformat-4.4.0 100% |###############################| Time: 0:00:00 8.10 MB/s ipykernel-4.6. 100% |###############################| Time: 0:00:00 9.54 MB/s nbconvert-5.2. 100% |###############################| Time: 0:00:00 9.59 MB/s notebook-5.0.0 100% |###############################| Time: 0:00:00 8.24 MB/s Once done I ran command (tensorflow)C:>jupyter notebook It opened new Juypter window and able to Run fine - import tensorflow as tf A: I was able to load tensorflow in Jupyter notebook on Windows by: first do conda create tensorflow install, then activate tensorflow at the command prompt , then execute "Jupyter notebook" from command line. Tensorflow imports at the notebook with no error. However, I was unable to import "Pandas" &"Matplotlib, ....etc" A: As suggested by @Jörg, if you have more than one kernel spec. You have to see the path it points to. In my case, it is actually the path that was to be corrected. When I created TensorFlow virtual env, the spec had the entry for python which was pointing to base env. Thus by changing W:\\miniconda\\python.exe to W:\\miniconda\\envs\\tensorflow\\python.exe solved the problem. So it is worth looking at your kernel spec. Delete that is not needed and keep those you want. Then look inside the JSON files where the path is given and change if needs be. I hope it helps. A: There are two ways to fix this issue. The foremost way is to create a new virtual environment and install all dependencies like jupyter notebook, tensorflow etc. conda install jupyter notebook conda install -c conda-forge tensorflow The other way around is to install tensorflow in the current environment (base or any activated environment). 
conda install -c conda-forge tensorflow Note: It is advisable to create a new virtual environment for every new project. The details how to create and manage virtual environment using conda can be find here: https://conda.io/docs/user-guide/tasks/manage-environments.html A: Probably there is a problem with the TensorFlow in your environment. In my case, After installing some libs, my TensorFlow stopped working. So I installed TensorFlow again using pip. like so: just run pip install tensorflow then I re-imported it into my jupyter notebook as : import tensorflow as ft In case you want to install jupyter and base libs try this: pip install jupyter tensorflow keras numpy scipy ipython pandas matplotlib sympy nose A: Other supported libraries are necessary to install with TensorFlow.Make sure if these libraries are installed: numpy scipy jupyter matplolib pillow scikit-learn tensorflow-addons, tensorflow.contrib This worked for me. I followed this: https://www.pythonpool.com/no-module-named-tensorflow-error-solved/ A: TensorFlow package doesn't come by default with the root environment in Jupyter, to install it do the following : Close Jupyter Notebook. Open Anaconda Navigator (In windows : you can find it using the search bar) On the sidebar, click on the Environments tab (by default you are using the root env). You can see the installed packages, on the top switch to not-installed packages and search for tensorflow, if it doesn't show, click on Update index and it will be displayed. The installation takes some time A: If you have installed TensorFlow globally then this issue should not be occurring. As you are saying you have installed it, maybe you did it in a virtual environment. Some background: By default, Jupyter will open with a global python interpreter kernel. Possible solutions: Change your jupyter notebook kernel to your virtual environment kernel. Please check here to see how to create a kernel out of your virtual environment. Troubleshooting: If the above solution dint work lets do some troubleshooting. When you add your new kernel to jupyter you might have got output like below Installed kernelspec thesis-venv in C:\Users\vishnunaik\AppData\Roaming\jupyter\kernels\venv Check the file kernel.json in this path, which might look something like below { "argv": [ "C:\\Users\\vishnunaik\\Desktop\\Demo\\CodeBase\\venv\\Scripts\\python.exe", "-m", "ipykernel_launcher", "-f", "{connection_file}" ], "display_name": "thesis-venv", "language": "python", "metadata": { "debugger": true } } Check the path to the python.exe is rightly pointing to your virtual environment python version or not. If not then update it accordingly. Now you should be able to use a virtual environment in your jupyter notebook. If your kernel takes a lot of time to respond see jupyter notebook server logs, sometimes you might get output like this [I 21:58:38.444 NotebookApp] Kernel started: adbd5551-cca3-4dad-a93f-974d7d25d53b, name: thesis-venv C:\\Users\\vishnunaik\\Desktop\\Demo\\CodeBase\\venv\\Scripts\\python.exe: No module named ipykernel_launcher This means your virtual environment doesnot have ipykernel installed. So install it in your virtual environment using below command. pip install ipykernel Now you have done everything possible, so I hope this will solve your issue.
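Whichever of the installs above is used, a quick sanity check inside a notebook cell shows which interpreter the running kernel actually points at, which is usually the whole story when a package installed in one conda environment cannot be imported from Jupyter:

import sys
print(sys.executable)   # should point into .../envs/tensorflow/... if the
                        # kernel really lives in the tensorflow environment

try:
    import tensorflow as tf
    print("TensorFlow", tf.__version__, "loaded from", tf.__file__)
except ImportError as err:
    print("TensorFlow not importable from this kernel:", err)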
No module named tensorflow in jupyter
I have some imports in my jupyter notebook and among them is tensorflow: ImportError Traceback (most recent call last) <ipython-input-2-482704985f85> in <module>() 4 import numpy as np 5 import six.moves.copyreg as copyreg ----> 6 import tensorflow as tf 7 from six.moves import cPickle as pickle 8 from six.moves import range ImportError: No module named tensorflow I have it on my computer, in a special enviroment and all connected stuff also: Requirement already satisfied (use --upgrade to upgrade): tensorflow in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages Requirement already satisfied (use --upgrade to upgrade): six>=1.10.0 in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow) Requirement already satisfied (use --upgrade to upgrade): protobuf==3.0.0b2 in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow) Requirement already satisfied (use --upgrade to upgrade): numpy>=1.10.1 in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow) Requirement already satisfied (use --upgrade to upgrade): wheel in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow) Requirement already satisfied (use --upgrade to upgrade): setuptools in ./setuptools-23.0.0-py2.7.egg (from protobuf==3.0.0b2->tensorflow) I can import tensorflow on my computer: >>> import tensorflow as tf >>> So I'm confused why this is another situation in notebook?
[ "If you installed a TensorFlow as it said in official documentation: https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#overview\nI mean creating an environment called tensorflow and tested your installation in python, but TensorFlow can not be imported in jupyter, you have to install jupyter in your tensorflow environment too:\nconda install jupyter notebook\n\nAfter that I run a jupyter and it can import TensorFlow too:\njupyter notebook\n\n", "Jupyter runs under the conda environment where as your tensorflow install lives outside conda. In order to install tensorflow under the conda virtual environment run the following command in your terminal:\n conda install -c conda-forge tensorflow \n\n", "I had the same problem, and solved it by looking at the output of:\njupyter kernelspec list\nwhich outputs the kernel information:\npython2 /Users/Username/Library/Jupyter/kernels/python2 \npython3 /Users/Username/Library/Jupyter/kernels/python3\n\nNotice that the path points to the Jupyter kernel for the user. To use it within the the Anaconda environment, it needs to point to the conda env you are using, and look something like Anaconda3\\envs\\Env_Name\\share\\jupyter\\kernels\\python3. \nSo, to remove the Jupyter kernelspec, just use:\njupyter kernelspec remove python3 \nor jupyter kernelspec remove python2 if you're using python 2\nNow, the output of jupyter kernelspec list should point to the correct kernel.\nSee https://github.com/jupyter/notebook/issues/397 for more information about this.\n", "Conda environment fetches the tensorflow package from the main system site-packages.\nStep 1: Just deactivate conda environment \nconda deactivate \n\npip install tensorflow \n\nStep 2: Switch back to conda environment \nconda activate YOUR_ENV_NAME\n\njupyter notebook\n\nStep 3: Run the cell with import tensorflow you should be able to import. \nThanks\n", "I also had the same problem for a long time. I wanted to import tensorflow inside the jupyter notebook within windows 10. I followed all the instructions and commands that were suggested and it was not working from the command prompt. Finally, I tried this command with the Anaconda Prompt and it worked successfully. If you are using jupyter notebook within Anaconda then go goto the windows search terminal and type \"Anaconda Prompt\" and inside it type following command, It will install the tensorflow inside the jupyter notebook. \nconda install -c conda-forge tensorflow\n\n", "the problem may when the Jupyter notebook may launching from the default but for able to import tensorflow and keras libraries so you have to install jupyter notebook like what you have installed the libraries\n\n\npip install jupyter\n\n\n", "Run python -m ipykernel install --user --name <Environment_Name>. 
This should add your environment to the jupyter kernel list.\nChange the kernel using Kernel->Change Kernel option or New-><Environment_Name>.\nNote : Replace <Environment_Name> with the actual name of the environment.\n", "run this command which will install tensorflow inside conda\nconda install -c conda-forge tensorflow\n\n", "This is what I did to fix this issue -\nI installed tensorflow for windows by using below link -\nhttps://www.tensorflow.org/install/install_windows\nOnce done - I activated tensorflow by using below command -\nC:> activate tensorflow\n (tensorflow)C:> # Your prompt should change \nOnce done I ran below command -\n(tensorflow)C:> conda install notebook\nFetching package metadata ...........\nSolving package specifications: .\nPackage plan for installation in environment \nThe following NEW packages will be INSTALLED:\nbleach: 1.5.0-py35_0\ncolorama: 0.3.9-py35_0\ndecorator: 4.1.2-py35_0\nentrypoints: 0.2.3-py35_0\nhtml5lib: 0.9999999-py35_0\nipykernel: 4.6.1-py35_0\n ----\n ---\n\njupyter_client 100% |###############################| Time: 0:00:00 6.77 MB/s\nnbformat-4.4.0 100% |###############################| Time: 0:00:00 8.10 MB/s\nipykernel-4.6. 100% |###############################| Time: 0:00:00 9.54 MB/s\nnbconvert-5.2. 100% |###############################| Time: 0:00:00 9.59 MB/s\nnotebook-5.0.0 100% |###############################| Time: 0:00:00 8.24 MB/s\nOnce done I ran command \n(tensorflow)C:>jupyter notebook\nIt opened new Juypter window and able to Run fine -\nimport tensorflow as tf\n", "I was able to load tensorflow in Jupyter notebook on Windows by: first do conda create tensorflow install, then activate tensorflow at the command prompt , then execute \"Jupyter notebook\" from command line. \nTensorflow imports at the notebook with no error. However, I was unable to import \"Pandas\" &\"Matplotlib, ....etc\" \n", "As suggested by @Jörg, if you have more than one kernel spec. You have to see the path it points to. In my case, it is actually the path that was to be corrected. \nWhen I created TensorFlow virtual env, the spec had the entry for python which was pointing to base env. Thus by changing W:\\\\miniconda\\\\python.exe to W:\\\\miniconda\\\\envs\\\\tensorflow\\\\python.exe solved the problem.\nSo it is worth looking at your kernel spec. Delete that is not needed and keep those you want. Then look inside the JSON files where the path is given and change if needs be. I hope it helps.\n", "There are two ways to fix this issue.\n\nThe foremost way is to create a new virtual environment and install all dependencies like jupyter notebook, tensorflow etc.\n\nconda install jupyter notebook\nconda install -c conda-forge tensorflow \n\nThe other way around is to install tensorflow in the current environment (base or any activated environment).\n\nconda install -c conda-forge tensorflow\nNote: It is advisable to create a new virtual environment for every new project. The details how to create and manage virtual environment using conda can be find here:\nhttps://conda.io/docs/user-guide/tasks/manage-environments.html\n", "Probably there is a problem with the TensorFlow in your environment. \nIn my case, After installing some libs, my TensorFlow stopped working. \nSo I installed TensorFlow again using pip. 
like so:\njust run \npip install tensorflow\n\nthen I re-imported it into my jupyter notebook as :\nimport tensorflow as ft\n\nIn case you want to install jupyter and base libs try this:\npip install jupyter tensorflow keras numpy scipy ipython pandas matplotlib sympy nose\n\n", "Other supported libraries are necessary to install with TensorFlow.Make sure if these libraries are installed:\n\nnumpy\nscipy\njupyter\nmatplolib\npillow\nscikit-learn\ntensorflow-addons,\ntensorflow.contrib\n\nThis worked for me. I followed this: https://www.pythonpool.com/no-module-named-tensorflow-error-solved/\n", "TensorFlow package doesn't come by default with the root environment in Jupyter, to install it do the following :\n\nClose Jupyter Notebook.\nOpen Anaconda Navigator (In windows : you can find it using the search bar)\nOn the sidebar, click on the Environments tab (by default you are using the root env).\nYou can see the installed packages, on the top switch to not-installed packages and search for tensorflow, if it doesn't show, click on Update index and it will be displayed.\n\nThe installation takes some time\n", "If you have installed TensorFlow globally then this issue should not be occurring. As you are saying you have installed it, maybe you did it in a virtual environment.\nSome background:\nBy default, Jupyter will open with a global python interpreter kernel.\nPossible solutions:\nChange your jupyter notebook kernel to your virtual environment kernel. Please check here to see how to create a kernel out of your virtual environment.\nTroubleshooting:\nIf the above solution dint work lets do some troubleshooting. When you add your new kernel to jupyter you might have got output like below\nInstalled kernelspec thesis-venv in C:\\Users\\vishnunaik\\AppData\\Roaming\\jupyter\\kernels\\venv\nCheck the file kernel.json in this path, which might look something like below\n{\n \"argv\": [\n \"C:\\\\Users\\\\vishnunaik\\\\Desktop\\\\Demo\\\\CodeBase\\\\venv\\\\Scripts\\\\python.exe\",\n \"-m\",\n \"ipykernel_launcher\",\n \"-f\",\n \"{connection_file}\"\n ],\n\"display_name\": \"thesis-venv\",\n\"language\": \"python\",\n\"metadata\": {\n \"debugger\": true\n}\n\n}\nCheck the path to the python.exe is rightly pointing to your virtual environment python version or not. If not then update it accordingly.\nNow you should be able to use a virtual environment in your jupyter notebook. If your kernel takes a lot of time to respond see jupyter notebook server logs, sometimes you might get output like this\n[I 21:58:38.444 NotebookApp] Kernel started: adbd5551-cca3-4dad-a93f-974d7d25d53b, name: thesis-venv C:\\\\Users\\\\vishnunaik\\\\Desktop\\\\Demo\\\\CodeBase\\\\venv\\\\Scripts\\\\python.exe: No module named ipykernel_launcher\n\nThis means your virtual environment doesnot have ipykernel installed. So install it in your virtual environment using below command.\npip install ipykernel\n\nNow you have done everything possible, so I hope this will solve your issue.\n" ]
[ 75, 23, 20, 5, 4, 3, 3, 3, 1, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "jupyter_notebook", "python", "tensorflow" ]
stackoverflow_0038221181_jupyter_notebook_python_tensorflow.txt
Q: Python Pydantic Get JSON Regardless of Validation I have a class in Pydantic that fails validation. I would like to fetch the JSON regardless of failure. Any ideas? from pydantic import BaseModel, Field, ValidationError class Model(BaseModel): a: float = Field(ge=1.0) try: m = Model(a=0.5) print(m.json()) except ValidationError as e: data = e.data() # fake method, would return '{"a": 0.5} data['errors'] = e.json() print(data) A: You can create a dict manually and then pass it further from pydantic import BaseModel, Field, ValidationError class Model(BaseModel): a: float = Field(ge=1.0) try: d = {'a': 0.5} m = Model.parse_obj(d) print(m.json()) except ValidationError as e: d['errors'] = e.json() print(d)
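If you are on pydantic v1 (as the Field/json() style in the question suggests), another sketch is to build the model without validation via Model.construct() and attach the structured errors from the ValidationError; note that construct() skips every validator, so the resulting object is only good for echoing the raw payload back.

from pydantic import BaseModel, Field, ValidationError

class Model(BaseModel):
    a: float = Field(ge=1.0)

raw = {"a": 0.5}
try:
    m = Model(**raw)
    print(m.json())
except ValidationError as e:
    unvalidated = Model.construct(**raw)   # no validation performed
    payload = unvalidated.dict()
    payload["errors"] = e.errors()         # list of dicts describing each failure
    print(payload)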
Python Pydantic Get JSON Regardless of Validation
I have a class in Pydantic that fails validation. I would like to fetch the JSON regardless of failure. Any ideas? from pydantic import BaseModel, Field, ValidationError class Model(BaseModel): a: float = Field(ge=1.0) try: m = Model(a=0.5) print(m.json()) except ValidationError as e: data = e.data() # fake method, would return '{"a": 0.5} data['errors'] = e.json() print(data)
[ "You can create a dict manually and then pass it further\nfrom pydantic import BaseModel, Field, ValidationError\n\nclass Model(BaseModel):\n a: float = Field(ge=1.0)\n\ntry:\n d = {'a': 0.5}\n m = Model.parse_obj(d)\n print(m.json())\nexcept ValidationError as e:\n d['errors'] = e.json()\n print(d)\n\n" ]
[ 0 ]
[]
[]
[ "pydantic", "python" ]
stackoverflow_0074467194_pydantic_python.txt
Q: How to Fix "AssertionError: CUDA unavailable, invalid device 0 requested" I'm trying to use my GPU to run the YOLOR model, and I keep getting the error: Traceback (most recent call last): File "D:\yolor\detect.py", line 198, in <module> detect() File "D:\yolor\detect.py", line 41, in detect device = select_device(opt.device) File "D:\yolor\utils\torch_utils.py", line 47, in select_device assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device # check availablity AssertionError: CUDA unavailable, invalid device 0 requested When I try to check if CUDA is available with the following: python3 >>import torch >>print(torch.cuda.is_available()) I get False, which explains the problem. I tried running the command py -m pip install torch1.9.0+cu111 torchvision0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html I get the error: ERROR: Invalid requirement: 'torch1.9.0+cu111' Running nvcc --version, I get: nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2021 NVIDIA Corporation Built on Mon_May__3_19:41:42_Pacific_Daylight_Time_2021 Cuda compilation tools, release 11.3, V11.3.109 Build cuda_11.3.r11.3/compiler.29920130_0 Thus, I'm not really sure what the issue is, or how to fix it. EDIT: As @Ivan pointed out, I added the == sign, but still get False when checking if CUDA is available. A: You forgot to put the == signs between the packages and the version number. According to the PyTorch installation page: py -m pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio===0.9.0 -f https://download.pytorch.org/whl/torch_stable.html A: Ok after 1 week of pain I have founded this solution 1- After download NVIDIA Driver: Go to your window and search for "NVIDIA Control Panel" Then at the bottom left there should be "System Information" Then look for "CUDA Cores" Mine is 384 (year my laptop is antique) (NVIDIA GeForce GT 750M) For CUDA Cores: 384 (corresponds to CUDA Toolkit 9.0) For CUDA Cores: 387 (corresponds to CUDA Toolkit 9.1) For other CUDA Cores you will need to do some more research yourself because I'm honestly don't know where to find this (if you are curious about where I found the one above, its on the second comments "https://github.com/pytorch/pytorch/issues/4546" 2- (Optional) Download Anaconda This is the system I use, the choice is your If you are using Anaconda and have been installing and uninstall to fix this problem. I recommend you to clean uninstall the environment since in my case my file got crash because of repeated install and un-install Here is the link to show you how to do it "https://www.youtube.com/watch?v=dcvdOuvWI-Q&t=107s" 3- After install the right CUDA toolkit for your system Go to "https://pytorch.org" Put in your system details and install the right PyTorch for your system (Optional) if you use Tensorflow as well, go here and install the right version for your CUDA 4- After all of that, in your Anaconda environment (or any environment you are using), type: import torch print(torch.cuda.is_available()) if return True then good job if not: good luck on your coming week hope it help and good luck on your journey with yolor (I'm learning it too) A: If you are working with VSCode dev containers maybe you forgot to add the GPU to the container. This can be fixed adding to .devcontainer/devcontainer.json "runArgs": ["--gpus", "all"] A: Just do this solution, I am here using TensorFlow 2.5.0, change it to what is suitable for you... 
!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/libcudnn8_8.1.0.77-1+cuda11.2_amd64.deb !dpkg -i libcudnn8_8.1.0.77-1+cuda11.2_amd64.deb !ls -l /usr/lib/x86_64-linux-gnu/libcudnn.so.* !pip install --upgrade tensorflow==2.5.0
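Before reinstalling anything, a short diagnostic makes the failure mode explicit: torch.version.cuda is None when a CPU-only wheel is installed, which is the most common reason torch.cuda.is_available() returns False even though nvcc and the driver are present.

import torch

print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)       # None means a CPU-only build
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device count:", torch.cuda.device_count())
    print("device 0:", torch.cuda.get_device_name(0))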
How to Fix "AssertionError: CUDA unavailable, invalid device 0 requested"
I'm trying to use my GPU to run the YOLOR model, and I keep getting the error: Traceback (most recent call last): File "D:\yolor\detect.py", line 198, in <module> detect() File "D:\yolor\detect.py", line 41, in detect device = select_device(opt.device) File "D:\yolor\utils\torch_utils.py", line 47, in select_device assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device # check availablity AssertionError: CUDA unavailable, invalid device 0 requested When I try to check if CUDA is available with the following: python3 >>import torch >>print(torch.cuda.is_available()) I get False, which explains the problem. I tried running the command py -m pip install torch1.9.0+cu111 torchvision0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html I get the error: ERROR: Invalid requirement: 'torch1.9.0+cu111' Running nvcc --version, I get: nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2021 NVIDIA Corporation Built on Mon_May__3_19:41:42_Pacific_Daylight_Time_2021 Cuda compilation tools, release 11.3, V11.3.109 Build cuda_11.3.r11.3/compiler.29920130_0 Thus, I'm not really sure what the issue is, or how to fix it. EDIT: As @Ivan pointed out, I added the == sign, but still get False when checking if CUDA is available.
[ "You forgot to put the == signs between the packages and the version number. According to the PyTorch installation page:\npy -m pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio===0.9.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n", "Ok after 1 week of pain I have founded this solution\n1- After download NVIDIA Driver:\n\nGo to your window and search for \"NVIDIA Control Panel\"\nThen at the bottom left there should be \"System Information\"\nThen look for \"CUDA Cores\"\nMine is 384 (year my laptop is antique) (NVIDIA GeForce GT 750M)\nFor CUDA Cores: 384 (corresponds to CUDA Toolkit 9.0)\nFor CUDA Cores: 387 (corresponds to CUDA Toolkit 9.1)\nFor other CUDA Cores you will need to do some more research yourself because I'm honestly don't know where to find this (if you are curious about where I found the one above, its on the second comments \"https://github.com/pytorch/pytorch/issues/4546\"\n\n2- (Optional) Download Anaconda\n\nThis is the system I use, the choice is your\nIf you are using Anaconda and have been installing and uninstall to\nfix this problem. I recommend you to clean uninstall the environment\nsince in my case my file got crash because of repeated install and\nun-install\nHere is the link to show you how to do it\n\"https://www.youtube.com/watch?v=dcvdOuvWI-Q&t=107s\"\n\n3- After install the right CUDA toolkit for your system\n\nGo to \"https://pytorch.org\"\nPut in your system details and install the right PyTorch for your\nsystem\n(Optional) if you use Tensorflow as well, go here and install the\nright version for your CUDA\n\n4- After all of that, in your Anaconda environment (or any environment you are using), type:\n\nimport torch\nprint(torch.cuda.is_available())\n\nif return True then good job\nif not: good luck on your coming week\nhope it help and good luck on your journey with yolor (I'm learning it too)\n", "If you are working with VSCode dev containers maybe you forgot to add the GPU to the container.\nThis can be fixed adding to .devcontainer/devcontainer.json\n\"runArgs\": [\"--gpus\", \"all\"]\n\n", "Just do this solution, I am here using TensorFlow 2.5.0, change it to what is suitable for you...\n!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/libcudnn8_8.1.0.77-1+cuda11.2_amd64.deb\n!dpkg -i libcudnn8_8.1.0.77-1+cuda11.2_amd64.deb\n!ls -l /usr/lib/x86_64-linux-gnu/libcudnn.so.*\n!pip install --upgrade tensorflow==2.5.0\n\n" ]
[ 2, 2, 0, 0 ]
[]
[]
[ "deep_learning", "python", "pytorch" ]
stackoverflow_0068562730_deep_learning_python_pytorch.txt
Q: use type error message in pytest parametrize I have a function which raises a TypeError when some conditions are met. def myfunc(..args here...): ... raise TypeError('Message') I want to test this message using pytest parametrize. But, because I am using other arguments also I want to have a setup like this: testdata = [ (..args here..., 'Message'), # Message is the expected output ] @pytest.mark.parametrize( "..args here..., expected_output", testdata) def test_myfunc( ..args here..., expected_output): obs = myfunc() assert obs == expected_output Simple putting the Message as the expected output in the parametrize testdata, gives me a failing test. A: You can't expect message error as a normal output of myfunc. There is a special context manager for this - pytest.raises. For example, if you want to expect some error and its message def test_raises(): with pytest.raises(Exception) as excinfo: raise Exception('some info') assert str(excinfo.value) == 'some info' So, in your case, this is going to be something like testdata = [ (..args here..., 'Message') ] @pytest.mark.parametrize("..args here..., expected_exception_message", testdata) def test_myfunc(..args here..., expected_exception_message): with pytest.raises(TypeError) as excinfo: obs = myfunc(..args here...) assert str(excinfo.value) == expected_exception_message A: The following is from the pytest docs here: Parametrizing conditional raising Use pytest.raises() with the pytest.mark.parametrize decorator to write parametrized tests in which some tests raise exceptions and others do not. It may be helpful to use nullcontext as a complement to raises. For example: from contextlib import nullcontext as does_not_raise import pytest @pytest.mark.parametrize( "example_input,expectation", [ (3, does_not_raise()), (2, does_not_raise()), (1, does_not_raise()), (0, pytest.raises(ZeroDivisionError)), ], ) def test_division(example_input, expectation): """Test how much I know division.""" with expectation: assert (6 / example_input) is not None In the example above, the first three test cases should run unexceptionally, while the fourth should raise ZeroDivisionError. But that didn't quite work for me... The example in the Pytest docs caused me to get the error AttributeError: __enter__. It seems that my Python's nullcontext doesn't have an __enter__ method implemented. Therefore I had to create my own version like this: class MyNullContext: def __enter__(self, *args, **kwargs): pass def __exit__(self, *args, **kwargs): pass does_not_raise = MyNullContext() and use that instead of importing the builtin nullcontext. You could throw that in a conftest.py file so that it's available for all your tests.
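pytest.raises also takes a match argument, a regular expression searched against the string form of the exception, which keeps the parametrized test compact. A sketch follows; myfunc and its arguments are placeholders from the question, and re.escape guards against regex metacharacters in the expected message.

import re
import pytest

testdata = [
    # (..args here..., expected message)
    ("some-arg", "Message"),
]

@pytest.mark.parametrize("arg, expected_message", testdata)
def test_myfunc_raises(arg, expected_message):
    with pytest.raises(TypeError, match=re.escape(expected_message)):
        myfunc(arg)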
use type error message in pytest parametrize
I have a function which raises a TypeError when some conditions are met. def myfunc(..args here...): ... raise TypeError('Message') I want to test this message using pytest parametrize. But, because I am using other arguments also I want to have a setup like this: testdata = [ (..args here..., 'Message'), # Message is the expected output ] @pytest.mark.parametrize( "..args here..., expected_output", testdata) def test_myfunc( ..args here..., expected_output): obs = myfunc() assert obs == expected_output Simple putting the Message as the expected output in the parametrize testdata, gives me a failing test.
[ "You can't expect message error as a normal output of myfunc. There is a special context manager for this - pytest.raises.\nFor example, if you want to expect some error and its message\n\ndef test_raises():\n with pytest.raises(Exception) as excinfo: \n raise Exception('some info') \n assert str(excinfo.value) == 'some info'\n\n\nSo, in your case, this is going to be something like\ntestdata = [\n (..args here..., 'Message')\n]\n\n@pytest.mark.parametrize(\"..args here..., expected_exception_message\", testdata)\n def test_myfunc(..args here..., expected_exception_message):\n with pytest.raises(TypeError) as excinfo: \n obs = myfunc(..args here...)\n assert str(excinfo.value) == expected_exception_message\n\n", "The following is from the pytest docs here:\nParametrizing conditional raising\nUse pytest.raises() with the pytest.mark.parametrize decorator to write parametrized tests in which some tests raise exceptions and others do not.\nIt may be helpful to use nullcontext as a complement to raises.\nFor example:\nfrom contextlib import nullcontext as does_not_raise\n\nimport pytest\n\n\n@pytest.mark.parametrize(\n \"example_input,expectation\",\n [\n (3, does_not_raise()),\n (2, does_not_raise()),\n (1, does_not_raise()),\n (0, pytest.raises(ZeroDivisionError)),\n ],\n)\ndef test_division(example_input, expectation):\n \"\"\"Test how much I know division.\"\"\"\n with expectation:\n assert (6 / example_input) is not None\n\nIn the example above, the first three test cases should run unexceptionally, while the fourth should raise ZeroDivisionError.\nBut that didn't quite work for me...\nThe example in the Pytest docs caused me to get the error AttributeError: __enter__.\nIt seems that my Python's nullcontext doesn't have an __enter__ method implemented. Therefore I had to create my own version like this:\nclass MyNullContext:\n def __enter__(self, *args, **kwargs):\n pass\n def __exit__(self, *args, **kwargs):\n pass\ndoes_not_raise = MyNullContext()\n\nand use that instead of importing the builtin nullcontext. You could throw that in a conftest.py file so that it's available for all your tests.\n" ]
[ 2, 1 ]
[]
[]
[ "pytest", "python" ]
stackoverflow_0041936456_pytest_python.txt
Q: If there is no way to put a timeout in pandas read_csv, how to proceed? The CSV files are linked to Google Sheets, and if by any chance there is a problem, the import can't finish and stays in the same place for eternity, so I need to add a timeout to the attempt to import the CSV. I am currently testing the situation with func-timeout: from func_timeout import func_timeout, FunctionTimedOut import pandas as pd try: csv_file = 'https://docs.google.com/spreadsheets/d/e/XXXX/pub?gid=0&single=true&output=csv' df = func_timeout(30, pd.read_csv, args=(csv_file)) except FunctionTimedOut: print('timeout') except Exception as e: print(e) But it returns this error (which, apart from not working now, will apparently become unusable in the future because of the FutureWarning alert): FutureWarning: In a future version of pandas all arguments of read_csv except for the argument 'filepath_or_buffer' will be keyword-only. self._target(*self._args, **self._kwargs) read_csv() takes from 1 to 52 positional arguments but 168 were given When my expected output is: SS_Id SS_Match xGoals_Id xGoals_Match Bf_Id Bf_Match 0 10341056 3219 x 65668 NaN x 31539043 194508 x 5408226 1 10340808 3217 x 3205 NaN x 31537759 220949 x 1213581 2 10114414 2022 x 1972 NaN x 31535268 4525642 x 200603 3 10114275 1974 x 39634 NaN x 31535452 198124 x 6219238 I would like some assistance in finding the best solution for my current situation and need. A: The problem is here: args=(csv_file) is not a tuple (the parentheses alone do nothing), which leads to the FutureWarning down the line. You want a singlet (a tuple with 1 value) like this: args=(csv_file, ) The comma makes the tuple! (Riddle: Why did it say you passed 168 arguments?) # it should work with a proper argument tuple. df = func_timeout(30, pd.read_csv, args=(csv_file, )) A: Using the library func_timeout is not strictly necessary. Pandas uses urllib to fetch URLs, and this library wraps the lower-level socket library, which has a timeout parameter. However, Pandas doesn't expose that timeout parameter to the user, but you can set it through socket.setdefaulttimeout before launching the main program. So at the beginning define: TIMEOUT_SEC = 10 # default timeout in seconds import socket socket.setdefaulttimeout(TIMEOUT_SEC) import pandas as pd and then your code: try: csv_file = 'https://docs.google.com/spreadsheets/d/e/XXXX/pub?gid=0&single=true&output=csv' df = pd.read_csv(csv_file) except Exception as e: print(e)
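A further option, sketched below, is to fetch the CSV yourself with an explicit network timeout and hand the text to pandas; requests.get(timeout=...) raises requests.Timeout if Google Sheets stops responding, so nothing hangs forever.

import io
import pandas as pd
import requests

csv_file = 'https://docs.google.com/spreadsheets/d/e/XXXX/pub?gid=0&single=true&output=csv'
try:
    resp = requests.get(csv_file, timeout=30)   # seconds
    resp.raise_for_status()
    df = pd.read_csv(io.StringIO(resp.text))
except requests.Timeout:
    print('timeout')
except Exception as e:
    print(e)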
If there is no way to put a timeout in pandas read_csv, how to proceed?
The CSV files linked to Google Sheets if by any chance there is a problem, it can't finish executing the task and stays in the same place for eternity, so I need to add a timeout in the attempt to import the CSV. I am currently test the situation with func-timeout: from func_timeout import func_timeout, FunctionTimedOut import pandas as pd try: csv_file = 'https://docs.google.com/spreadsheets/d/e/XXXX/pub?gid=0&single=true&output=csv' df = func_timeout(30, pd.read_csv, args=(csv_file)) except FunctionTimedOut: print('timeout') except Exception as e: print(e) But return this error (which apparently besides not having worked, in the future it will become unusable because there is the FutureWarning alert): FutureWarning: In a future version of pandas all arguments of read_csv except for the argument 'filepath_or_buffer' will be keyword-only. self._target(*self._args, **self._kwargs) read_csv() takes from 1 to 52 positional arguments but 168 were given When my expected output is: SS_Id SS_Match xGoals_Id xGoals_Match Bf_Id Bf_Match 0 10341056 3219 x 65668 NaN x 31539043 194508 x 5408226 1 10340808 3217 x 3205 NaN x 31537759 220949 x 1213581 2 10114414 2022 x 1972 NaN x 31535268 4525642 x 200603 3 10114275 1974 x 39634 NaN x 31535452 198124 x 6219238 I would like some assistance in finding the best solution for my current situation and need.
[ "There's a syntax error here: args=(csv_file) which leads to the FutureWarning down the line. You want a singlet (tuple with 1 value) like this: args=(csv_file, )\nThe comma makes the tuple!\n(Riddle: Why did it say you passed 168 arguments?)\n# it should work with a proper argument tuple.\ndf = func_timeout(30, pd.read_csv, args=(csv_file, ))\n\n", "Using the library func_timeout is not strictly necessary.\nPandas uses urllib to fetch urls and this library wraps the lower level socket library which has a timeout parameter. However Pandas doesn't expose that timeout parameter to the user, but you can set it through socket.setdefaulttimeout before launching the main program.\nSo at the beginning define:\nTIMEOUT_SEC = 10 # default timeount in seconds\nimport socket\nsocket.setdefaulttimeout(TIMEOUT_SEC)\nimport pandas as pd\n\nand then your code:\ntry:\n csv_file = 'https://docs.google.com/spreadsheets/d/e/XXXX/pub?gid=0&single=true&output=csv'\n df = pd.read_csv(csv_file)\nexcept Exception as e:\n print(e)\n\n" ]
[ 2, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0072750327_pandas_python.txt
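A note on the riddle in the first answer: args=(csv_file) is still just the string (parentheses alone do not make a tuple), so the function ends up being called as pd.read_csv(*csv_file) and every character of the URL becomes its own positional argument - hence the odd argument count in the error. A minimal sketch of both working variants, using the question's placeholder URL:

import socket
import pandas as pd
from func_timeout import func_timeout, FunctionTimedOut

# placeholder URL from the question - the real published-sheet link goes here
csv_file = 'https://docs.google.com/spreadsheets/d/e/XXXX/pub?gid=0&single=true&output=csv'

# Variant 1: func_timeout with a real one-element tuple (note the trailing comma)
try:
    df = func_timeout(30, pd.read_csv, args=(csv_file,))
except FunctionTimedOut:
    print('timeout')

# Variant 2: no extra dependency - urllib (which pandas uses for URLs) honours
# the socket-level default timeout set before the request
socket.setdefaulttimeout(30)
df = pd.read_csv(csv_file)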
Q: Python Django - delaying ValidationError until for loop completes I'm working on an app that simulates a social media site. I currently have a form where users can enter in their friends' emails so they can be invited to join the app. Let's say we have a user who enters in three email addresses to the email form which are then saved as a list of strings: emails_to_invite = ["jen@website.com", "mike@website.com", "joe@website.com"] In the database, we already have a list of users who have already been invited to the site: current_users = ["jen@website.com", "mike@website.com", "dan@website.com", "kim@website.com"] So we have two users who have already been invited: jen@website.com and mike@website.com. I'm trying to write some code that returns a ValidationError and can list both matched users in the message. Here's what I have so far: for email in emails_to_invite: if email in current_users: raise forms.ValidationError(f"{email} is already in the database.") Here's how I want this error to display: jen@website.com is already in the database. mike@website.com is already in the database. But right now, the error only displays the first email: jen@website.com is already in the database. I also need mike@website.com to display too. It appears that the for loop stops once it recognizes one match, but I need it to keep going until it recognizes all matches. Can anyone offer some suggestions? A: If you don't want an Exception in a code-block to halt your execution (and hide further exceptions, as you've found), put the susceptible code in a a try/except block to handle the error as you see fit. To later raise the exception, consider using something like: raised_exceptions = [] <loop that might raise exceptions> try: <loop that might raise exceptions> except Exception as e: raised_exceptions.append(e) <do something with the exceptions you saved> That being said, IMO you shouldn't be using exceptions in this way - consider returning a series of lists, one per possible outcome, instead: email sent and already invited (and/or joined user)
Python Django - delaying ValidationError until for loop completes
I'm working on an app that simulates a social media site. I currently have a form where users can enter in their friends' emails so they can be invited to join the app. Let's say we have a user who enters in three email addresses to the email form which are then saved as a list of strings: emails_to_invite = ["jen@website.com", "mike@website.com", "joe@website.com"] In the database, we already have a list of users who have already been invited to the site: current_users = ["jen@website.com", "mike@website.com", "dan@website.com", "kim@website.com"] So we have two users who have already been invited: jen@website.com and mike@website.com. I'm trying to write some code that returns a ValidationError and can list both matched users in the message. Here's what I have so far: for email in emails_to_invite: if email in current_users: raise forms.ValidationError(f"{email} is already in the database.") Here's how I want this error to display: jen@website.com is already in the database. mike@website.com is already in the database. But right now, the error only displays the first email: jen@website.com is already in the database. I also need mike@website.com to display too. It appears that the for loop stops once it recognizes one match, but I need it to keep going until it recognizes all matches. Can anyone offer some suggestions?
[ "If you don't want an Exception in a code-block to halt your execution (and hide further exceptions, as you've found), put the susceptible code in a a try/except block to handle the error as you see fit.\nTo later raise the exception, consider using something like:\nraised_exceptions = []\n<loop that might raise exceptions>\n try:\n <loop that might raise exceptions>\n except Exception as e:\n raised_exceptions.append(e)\n\n<do something with the exceptions you saved>\n\nThat being said, IMO you shouldn't be using exceptions in this way - consider returning a series of lists, one per possible outcome, instead: email sent and already invited (and/or joined user)\n" ]
[ 1 ]
[]
[]
[ "error_handling", "for_loop", "if_statement", "python", "validation" ]
stackoverflow_0074467510_error_handling_for_loop_if_statement_python_validation.txt
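One idiom worth adding here: Django's forms.ValidationError accepts a list of errors, so the matches can be collected first and raised together from a clean_<field> method. A minimal sketch along those lines - the field name, the comma-separated input format, and the hard-coded user list are assumptions, not taken from the original form:

from django import forms

# stand-in for the database lookup; in the real form this would come from a queryset
CURRENT_USERS = {"jen@website.com", "mike@website.com", "dan@website.com", "kim@website.com"}

class InviteForm(forms.Form):
    emails = forms.CharField()  # hypothetical comma-separated email input

    def clean_emails(self):
        emails_to_invite = [e.strip() for e in self.cleaned_data["emails"].split(",")]
        errors = [
            forms.ValidationError("%(email)s is already in the database.", params={"email": e})
            for e in emails_to_invite
            if e in CURRENT_USERS
        ]
        if errors:
            # raising a list makes Django display every message, not just the first match
            raise forms.ValidationError(errors)
        return self.cleaned_data["emails"]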
Q: 'DateField' object has no attribute 'value_from_datadict' I've been researching everywhere for an answer for this, but I'm just trying to add a widget to my DateField created on my models.py, where you can see the actual calendar, as if you were doing it directly through html with an input type=date. Since I have a few date fields, this has become a problem because they all need to have the same format as the rest of the form and the widget. Feel like the question has been answered but none of the answers or things I've found have returned a correct answer. models.py class InfoPersonal(models.Model): Fecha = models.DateField() cargo_act = models.CharField(max_length=100) Nombres_y_Apellidos_completos = models.CharField(max_length=100) Lugar = models.CharField(max_length=100) Fecha_de_Nacimiento = models.DateField(null=True) Discapacidad= models.BooleanField() grado = models.CharField(max_length=100, blank=True) Edad = models.IntegerField(validators=[MinValueValidator(18), MaxValueValidator(80)]) Tipo_de_Sangre = models.CharField(max_length=50, choices=sangre_choice) Estatura = models.FloatField(validators=[MaxValueValidator(3.0), MinValueValidator(0.5)]) Direccion_Domicilio_actual = models.CharField(max_length=100) Manzana = models.CharField(max_length=100) Villa = models.CharField(max_length=100) parroquia = models.CharField(max_length=100) Telefono_Domicilio = models.IntegerField(blank=True, null=True) Telefono_Celular = models.IntegerField(blank=True, null=True) Telefono_Familiar = models.IntegerField(blank=True, null=True) cedula = models.IntegerField() estado_civil = models.CharField(max_length=50, choices=list_estado_civil) #Conyuge Nombre_completo_del_conyuge= models.CharField(max_length=100,blank=True) Direccion_Domiciliaria=models.CharField(max_length=100,blank=True) Telefono=models.IntegerField(blank=True, null=True) Cedula_de_Identidad=models.IntegerField(blank=True, null=True) Fecha_de_NacimientoC=models.DateField(blank=True, null=True) Direccion_Trabajo=models.CharField(max_length=100,blank=True) Telefono_del_trabajo=models.IntegerField(blank=True,null=True) #Hijos Nombres= models.CharField(max_length=100,blank=True) Lugar_y_Fecha_de_NacimientoH = models.CharField(max_length=100,blank=True) Esposo_con_Discapacidad = models.BooleanField(blank=True) Hijos_con_Discapacidad= models.BooleanField(blank=True) #InfoFamiliares Apellidos_y_Nombres_1= models.CharField(max_length=100,blank=True) Telefono_Familiar_1 = models.IntegerField(blank=True,null=True) Fecha_Nacimiento_1 = models.DateField(blank=True,null=True) Relacion_de_Parentesco_1 = models.CharField(max_length=100,blank=True) Apellidos_y_Nombres_2= models.CharField(max_length=100,blank=True) Telefono_Familiar_2 = models.IntegerField(blank=True,null=True) Fecha_Nacimiento_2 = models.DateField(blank=True,null=True) Relacion_de_Parentesco_2 = models.CharField(max_length=100,blank=True) Apellidos_y_Nombres_3= models.CharField(max_length=100,blank=True) Telefono_Familiar_3 = models.IntegerField(blank=True,null=True) Fecha_Nacimiento_3 = models.DateField(blank=True,null=True) Relacion_de_Parentesco_3 = models.CharField(max_length=100,blank=True) Apellidos_y_Nombres_4= models.CharField(max_length=100,blank=True) Telefono_Familiar_4 = models.IntegerField(blank=True, null=True) Fecha_Nacimiento_4 = models.DateField(blank=True,null=True) Relacion_de_Parentesco_4 = models.CharField(max_length=100,blank=True) Trabajan_familiares = models.BooleanField(blank=True) Trabajan_Amistades = models.BooleanField(blank=True) #estudiosRealizados 
Primaria=models.CharField(max_length=100) Lugar_Primaria= models.CharField(max_length=100) Curso_Primaria= models.CharField(max_length=100) Año_Primaria=models.IntegerField() Titulo_Primaria=models.CharField(max_length=100) Secunadaria=models.CharField(max_length=100) Lugar_Secundaria=models.CharField(max_length=100) Curso_Secundaria=models.CharField(max_length=100) Año_Secundaria=models.IntegerField() Titulo_Secundaria=models.CharField(max_length=100) Superior=models.CharField(max_length=100) Lugar_Superior=models.CharField(max_length=100) Curso_Superior=models.CharField(max_length=100) Año_Superior=models.IntegerField() Titulo_Superior=models.CharField(max_length=100) Otros=models.CharField(max_length=100,blank=True) Lugar_Otros=models.CharField(max_length=100,blank=True) Curso_Otros=models.CharField(max_length=100,blank=True) Año_Otros=models.IntegerField(blank=True, null=True) Titulo_Otros=models.CharField(max_length=100,blank=True) idioma=models.CharField(max_length=100) forms.py class Form_InfoPersonal(ModelForm): class Meta: model = InfoPersonal fields = '__all__' widgets = { 'Nombres_y_Apellidos_completos': forms.TextInput(attrs={'class':'form-control form-control-lg mt-3 ml-4 w-75','required':True}), 'cargo_act': forms.TextInput(attrs={'class':'form-control form-control-lg mt-3 ml-4 w-75','required':True}), 'Lugar': forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Fecha': forms.DateField(widget=NumberInput(attrs={'type':'date'})), 'grado': forms.TextInput(attrs={'class':'form-control form-control-lg mt-3 ml-4 w-75','required':False}), 'Edad' : forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Tipo_de_Sangre' : forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75','required':True}), 'Estatura': forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Direccion_Domicilio_actual': forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Manzana':forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Villa': forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'parroquia': forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Telefono_Domicilio' : forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75' }), 'Telefono_Celular': forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Telefono_Familiar': forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'cedula': forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), } views.py def formu_view(request): if request.method == 'POST': form = Form_InfoPersonal(request.POST) if form.is_valid(): form.save() messages.success(request, 'Su formulario ha sido llenado y guardado correctamente') return render(request, '') else: form= Form_InfoPersonal() return render(request, 'users/formu.html', context={'form':form}) error message Internal Server Error: /formu Traceback (most recent call last): File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\core\handlers\exception.py", line 55, in inner response = get_response(request) File 
"C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\core\handlers\base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "D:\Descargas\RP3 trabajo\RP3 trabajo\users\views.py", line 247, in formu_view if form.is_valid(): File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\forms.py", line 205, in is_valid return self.is_bound and not self.errors File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\forms.py", line 200, in errors self.full_clean() File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\forms.py", line 437, in full_clean self._clean_fields() File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\forms.py", line 444, in _clean_fields value = bf.initial if field.disabled else bf.data File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\boundfield.py", line 127, in data return self.form._widget_data_value(self.field.widget, self.html_name) File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\forms.py", line 224, in _widget_data_value return widget.value_from_datadict(self.data, self.files, html_name) AttributeError: 'DateField' object has no attribute 'value_from_datadict' A: You are using the wrong class for the widget. Change 'Fecha': forms.DateField(widget=NumberInput(attrs={'type':'date'})), to 'Fecha': forms.DateInput(widget=NumberInput(attrs={'type':'date'})), ^^^^^^^^^ forms.DateField is intended for declaring the field of a form. It doesn't have a method value_from_datadict() as required by validation. forms.DateInput is the widget used for a date input. It has the desired method.
'DateField' object has no attribute 'value_from_datadict'
I've been researching everywhere for an answer for this, but I'm just trying to add a widget to my DateField created on my models.py, where you can see the actual calendar, as if you were doing it directly through html with an input type=date. Since I have a few date fields, this has become a problem because they all need to have the same format as the rest of the form and the widget. Feel like the question has been answered but none of the answers or things I've found have returned a correct answer. models.py class InfoPersonal(models.Model): Fecha = models.DateField() cargo_act = models.CharField(max_length=100) Nombres_y_Apellidos_completos = models.CharField(max_length=100) Lugar = models.CharField(max_length=100) Fecha_de_Nacimiento = models.DateField(null=True) Discapacidad= models.BooleanField() grado = models.CharField(max_length=100, blank=True) Edad = models.IntegerField(validators=[MinValueValidator(18), MaxValueValidator(80)]) Tipo_de_Sangre = models.CharField(max_length=50, choices=sangre_choice) Estatura = models.FloatField(validators=[MaxValueValidator(3.0), MinValueValidator(0.5)]) Direccion_Domicilio_actual = models.CharField(max_length=100) Manzana = models.CharField(max_length=100) Villa = models.CharField(max_length=100) parroquia = models.CharField(max_length=100) Telefono_Domicilio = models.IntegerField(blank=True, null=True) Telefono_Celular = models.IntegerField(blank=True, null=True) Telefono_Familiar = models.IntegerField(blank=True, null=True) cedula = models.IntegerField() estado_civil = models.CharField(max_length=50, choices=list_estado_civil) #Conyuge Nombre_completo_del_conyuge= models.CharField(max_length=100,blank=True) Direccion_Domiciliaria=models.CharField(max_length=100,blank=True) Telefono=models.IntegerField(blank=True, null=True) Cedula_de_Identidad=models.IntegerField(blank=True, null=True) Fecha_de_NacimientoC=models.DateField(blank=True, null=True) Direccion_Trabajo=models.CharField(max_length=100,blank=True) Telefono_del_trabajo=models.IntegerField(blank=True,null=True) #Hijos Nombres= models.CharField(max_length=100,blank=True) Lugar_y_Fecha_de_NacimientoH = models.CharField(max_length=100,blank=True) Esposo_con_Discapacidad = models.BooleanField(blank=True) Hijos_con_Discapacidad= models.BooleanField(blank=True) #InfoFamiliares Apellidos_y_Nombres_1= models.CharField(max_length=100,blank=True) Telefono_Familiar_1 = models.IntegerField(blank=True,null=True) Fecha_Nacimiento_1 = models.DateField(blank=True,null=True) Relacion_de_Parentesco_1 = models.CharField(max_length=100,blank=True) Apellidos_y_Nombres_2= models.CharField(max_length=100,blank=True) Telefono_Familiar_2 = models.IntegerField(blank=True,null=True) Fecha_Nacimiento_2 = models.DateField(blank=True,null=True) Relacion_de_Parentesco_2 = models.CharField(max_length=100,blank=True) Apellidos_y_Nombres_3= models.CharField(max_length=100,blank=True) Telefono_Familiar_3 = models.IntegerField(blank=True,null=True) Fecha_Nacimiento_3 = models.DateField(blank=True,null=True) Relacion_de_Parentesco_3 = models.CharField(max_length=100,blank=True) Apellidos_y_Nombres_4= models.CharField(max_length=100,blank=True) Telefono_Familiar_4 = models.IntegerField(blank=True, null=True) Fecha_Nacimiento_4 = models.DateField(blank=True,null=True) Relacion_de_Parentesco_4 = models.CharField(max_length=100,blank=True) Trabajan_familiares = models.BooleanField(blank=True) Trabajan_Amistades = models.BooleanField(blank=True) #estudiosRealizados Primaria=models.CharField(max_length=100) Lugar_Primaria= 
models.CharField(max_length=100) Curso_Primaria= models.CharField(max_length=100) Año_Primaria=models.IntegerField() Titulo_Primaria=models.CharField(max_length=100) Secunadaria=models.CharField(max_length=100) Lugar_Secundaria=models.CharField(max_length=100) Curso_Secundaria=models.CharField(max_length=100) Año_Secundaria=models.IntegerField() Titulo_Secundaria=models.CharField(max_length=100) Superior=models.CharField(max_length=100) Lugar_Superior=models.CharField(max_length=100) Curso_Superior=models.CharField(max_length=100) Año_Superior=models.IntegerField() Titulo_Superior=models.CharField(max_length=100) Otros=models.CharField(max_length=100,blank=True) Lugar_Otros=models.CharField(max_length=100,blank=True) Curso_Otros=models.CharField(max_length=100,blank=True) Año_Otros=models.IntegerField(blank=True, null=True) Titulo_Otros=models.CharField(max_length=100,blank=True) idioma=models.CharField(max_length=100) forms.py class Form_InfoPersonal(ModelForm): class Meta: model = InfoPersonal fields = '__all__' widgets = { 'Nombres_y_Apellidos_completos': forms.TextInput(attrs={'class':'form-control form-control-lg mt-3 ml-4 w-75','required':True}), 'cargo_act': forms.TextInput(attrs={'class':'form-control form-control-lg mt-3 ml-4 w-75','required':True}), 'Lugar': forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Fecha': forms.DateField(widget=NumberInput(attrs={'type':'date'})), 'grado': forms.TextInput(attrs={'class':'form-control form-control-lg mt-3 ml-4 w-75','required':False}), 'Edad' : forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Tipo_de_Sangre' : forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75','required':True}), 'Estatura': forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Direccion_Domicilio_actual': forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Manzana':forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Villa': forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'parroquia': forms.TextInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Telefono_Domicilio' : forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75' }), 'Telefono_Celular': forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'Telefono_Familiar': forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), 'cedula': forms.NumberInput(attrs={'class': 'form-control form-control-lg mt-3 ml-4 w-75', 'required':True}), } views.py def formu_view(request): if request.method == 'POST': form = Form_InfoPersonal(request.POST) if form.is_valid(): form.save() messages.success(request, 'Su formulario ha sido llenado y guardado correctamente') return render(request, '') else: form= Form_InfoPersonal() return render(request, 'users/formu.html', context={'form':form}) error message Internal Server Error: /formu Traceback (most recent call last): File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\core\handlers\exception.py", line 55, in inner response = get_response(request) File 
"C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\core\handlers\base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "D:\Descargas\RP3 trabajo\RP3 trabajo\users\views.py", line 247, in formu_view if form.is_valid(): File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\forms.py", line 205, in is_valid return self.is_bound and not self.errors File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\forms.py", line 200, in errors self.full_clean() File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\forms.py", line 437, in full_clean self._clean_fields() File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\forms.py", line 444, in _clean_fields value = bf.initial if field.disabled else bf.data File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\boundfield.py", line 127, in data return self.form._widget_data_value(self.field.widget, self.html_name) File "C:\Users\ricar\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\django\forms\forms.py", line 224, in _widget_data_value return widget.value_from_datadict(self.data, self.files, html_name) AttributeError: 'DateField' object has no attribute 'value_from_datadict'
[ "You are using the wrong class for the widget. Change\n 'Fecha': forms.DateField(widget=NumberInput(attrs={'type':'date'})),\n\nto\n 'Fecha': forms.DateInput(widget=NumberInput(attrs={'type':'date'})),\n ^^^^^^^^^\n\nforms.DateField is intended for declaring the field of a form. It doesn't have a method value_from_datadict() as required by validation. forms.DateInput is the widget used for a date input. It has the desired method.\n" ]
[ 0 ]
[]
[]
[ "backend", "django", "python" ]
stackoverflow_0074463462_backend_django_python.txt
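A small follow-up on the widget line: forms.DateInput is itself a widget, so it takes attrs (and optionally format) directly rather than a widget= argument. A sketch of how the Meta.widgets entries for the date fields would typically look - it assumes the InfoPersonal model from the question is importable and leaves the other widgets unchanged:

from django import forms

class Form_InfoPersonal(forms.ModelForm):
    class Meta:
        model = InfoPersonal  # assumed import from the app's models.py
        fields = '__all__'
        widgets = {
            # DateInput takes attrs directly; type="date" makes the browser show its calendar
            # format='%Y-%m-%d' keeps initial values in the ISO form HTML date inputs expect
            'Fecha': forms.DateInput(format='%Y-%m-%d', attrs={'type': 'date', 'class': 'form-control'}),
            'Fecha_de_Nacimiento': forms.DateInput(format='%Y-%m-%d', attrs={'type': 'date', 'class': 'form-control'}),
        }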
Q: How to continuously copy new S3 files to another S3 bucket How can I continuously copy one S3 bucket to another? I want to copy the files every time a new file has been added. I've tried using the boto3 copy_object however I require the key each time which won't work if I'm getting a new file each time. A: From Replicating objects - Amazon Simple Storage Service: To automatically replicate new objects as they are written to the bucket use live replication, such as Same-Region Replication (SRR) or Cross-Region Replication (CRR). S3 Replication will automatically create new objects in another bucket as soon as they are created. (Well, it can take a few seconds.) Alternatively, you could configure the S3 bucket to trigger an AWS Lambda function that uses the CopyObject() command to copy the object to another location. This method is useful if you want to selectively copy files, by having the Lambda function perform some logic before performing the copy (such as checking the file type). A: Please look at this: https://aws.amazon.com/premiumsupport/knowledge-center/move-objects-s3-bucket/ You can use the aws cli s3 sync command to achieve this.
How to continuously copy new S3 files to another S3 bucket
How can I continuously copy one S3 bucket to another? I want to copy the files every time a new file has been added. I've tried using the boto3 copy_object, but it requires the key each time, which won't work when I'm getting a new file each time.
[ "From Replicating objects - Amazon Simple Storage Service:\n\nTo automatically replicate new objects as they are written to the bucket use live replication, such as Same-Region Replication (SRR) or Cross-Region Replication (CRR).\n\nS3 Replication will automatically create new objects in another bucket as soon as they are created. (Well, it can take a few seconds.)\nAlternatively, you could configure the S3 bucket to trigger an AWS Lambda function that uses the CopyObject() command to copy the object to another location. This method is useful if you want to selectively copy files, by having the Lambda function perform some logic before performing the copy (such as checking the file type).\n", "Please look at this: https://aws.amazon.com/premiumsupport/knowledge-center/move-objects-s3-bucket/\nYou can use the aws cli s3 sync command to achieve this.\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "aws_lambda", "python" ]
stackoverflow_0074463875_amazon_s3_amazon_web_services_aws_lambda_python.txt
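For the Lambda route mentioned in the first answer, a minimal handler sketch triggered by an S3 "ObjectCreated" event notification - the destination bucket name and any filtering logic are assumptions:

import boto3
from urllib.parse import unquote_plus

s3 = boto3.client("s3")
DEST_BUCKET = "my-destination-bucket"  # hypothetical target bucket

def lambda_handler(event, context):
    # each record describes one newly created object in the source bucket
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        # optional place for selective logic, e.g. only copy .csv keys
        s3.copy_object(
            Bucket=DEST_BUCKET,
            Key=key,
            CopySource={"Bucket": src_bucket, "Key": key},
        )
    return {"copied": len(event["Records"])}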
Q: Extra line generated when writing serial data to file using pyserial I am reading a string from the serial port using pySerial and then writing data to a file with a time stamp. For some reason a new line is written with an empty data entry ( with the time stamp) every time I connect the serial port. I have set the write to file as append so that every time I read data from the port I can use the same file. Is there anything fundamental that I am missing in setting up the serial port? I have attached the code and the output written in the file. Thanks a lot! sPrt = serial.Serial( port = 5, baudrate = 9600 , bytesize = 8 ) sPrt.flushInput() while True: data = sPrt.readline().decode('utf-8')[:-2] print(data) dateTimeObj = datetime.now() timeStamp = dateTimeObj.strftime("%d-%b-%Y %H:%M") with open(fileName,"ab") as f: writer = csv.writer(f,delimiter=",") writer.writerow([timeStamp,data]) and the output in the file is: Here I started data logging at 14:43, disconnected the port after two data points and then connected it again at 14.44. Each time a new connection was made a line without any data got added to the saved file 16-Nov-2022 14:43, 16-Nov-2022 14:43,"A" 16-Nov-2022 14:43,"B" 16-Nov-2022 14:44, 16-Nov-2022 14:44,"A" The output of the print(data) line is: A B A I tried to check if the data variable is a "\n" and to only write to file if it isnt but that did not seem to do anything . A: I don't see the problem here, apparently data is either something like 'A' (no newline) or '' (no newline, just an empty string). In either case, .writerow() will write a full row, followed by a newline. If you don't want newlines written to the output file: with open(fileName, "a", newline='') as f: ... I don't see it working with "ab" anyway, since the csv.writer expects to be writing to a text file, not a binary one. As do you apparently, since you called .decode()
Extra line generated when writing serial data to file using pyserial
I am reading a string from the serial port using pySerial and then writing data to a file with a time stamp. For some reason a new line is written with an empty data entry ( with the time stamp) every time I connect the serial port. I have set the write to file as append so that every time I read data from the port I can use the same file. Is there anything fundamental that I am missing in setting up the serial port? I have attached the code and the output written in the file. Thanks a lot! sPrt = serial.Serial( port = 5, baudrate = 9600 , bytesize = 8 ) sPrt.flushInput() while True: data = sPrt.readline().decode('utf-8')[:-2] print(data) dateTimeObj = datetime.now() timeStamp = dateTimeObj.strftime("%d-%b-%Y %H:%M") with open(fileName,"ab") as f: writer = csv.writer(f,delimiter=",") writer.writerow([timeStamp,data]) and the output in the file is: Here I started data logging at 14:43, disconnected the port after two data points and then connected it again at 14.44. Each time a new connection was made a line without any data got added to the saved file 16-Nov-2022 14:43, 16-Nov-2022 14:43,"A" 16-Nov-2022 14:43,"B" 16-Nov-2022 14:44, 16-Nov-2022 14:44,"A" The output of the print(data) line is: A B A I tried to check if the data variable is a "\n" and to only write to file if it isnt but that did not seem to do anything .
[ "I don't see the problem here, apparently data is either something like 'A' (no newline) or '' (no newline, just an empty string). In either case, .writerow() will write a full row, followed by a newline.\nIf you don't want newlines written to the output file:\nwith open(fileName, \"a\", newline='') as f:\n ...\n\nI don't see it working with \"ab\" anyway, since the csv.writer expects to be writing to a text file, not a binary one. As do you apparently, since you called .decode()\n" ]
[ 0 ]
[]
[]
[ "csvwriter", "pyserial", "python" ]
stackoverflow_0074467096_csvwriter_pyserial_python.txt
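Putting the answer's points together: open the CSV in text mode with newline='' and skip empty reads, so a reconnect no longer logs a blank row. A sketch under those assumptions, keeping the question's port settings (a read timeout is added so readline() cannot block forever; on pyserial 3.x a device-name string such as 'COM5' may be needed instead of the integer port):

import csv
import serial
from datetime import datetime

fileName = "log.csv"  # assumed output path
sPrt = serial.Serial(port=5, baudrate=9600, bytesize=8, timeout=1)
sPrt.flushInput()

while True:
    data = sPrt.readline().decode("utf-8").strip()
    if not data:
        # nothing useful arrived (stray newline or timeout) - don't write a blank row
        continue
    timeStamp = datetime.now().strftime("%d-%b-%Y %H:%M")
    # text mode with newline='' is what csv.writer expects
    with open(fileName, "a", newline="") as f:
        csv.writer(f, delimiter=",").writerow([timeStamp, data])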
Q: Best Way to Count Occurences of Each Character in a Large Dataset I am trying to count the number of occurrences of each character within a large dateset. For example, if the data was the numpy array ['A', 'AB', 'ABC'] then I would want {'A': 3, 'B': 2, 'C': 1} as the output. I currently have an implementation that looks like this: char_count = {} for c in string.printable: char_count[c] = np.char.count(data, c).sum() The issue I am having is that this takes too long for my data. I have ~14,000,000 different strings that I would like to count and this implementation is not efficient for that amount of data. Any help is appreciated! A: Another way. import collections c = collections.Counter() for thing in data: c.update(thing) Same basic advantage - only iterates the data once. A: One approach: import numpy as np from collections import defaultdict data = np.array(['A', 'AB', 'ABC']) counts = defaultdict(int) for e in data: for c in e: counts[c] += 1 print(counts) Output defaultdict(<class 'int'>, {'A': 3, 'B': 2, 'C': 1}) Note that your code iterates len(string.printable) times over data in contrast my proposal iterates one time. One alternative using a dictionary: data = np.array(['A', 'AB', 'ABC']) counts = dict() for e in data: for c in e: counts[c] = counts.get(c, 0) + 1 print(counts)
Best Way to Count Occurrences of Each Character in a Large Dataset
I am trying to count the number of occurrences of each character within a large dataset. For example, if the data was the numpy array ['A', 'AB', 'ABC'] then I would want {'A': 3, 'B': 2, 'C': 1} as the output. I currently have an implementation that looks like this: char_count = {} for c in string.printable: char_count[c] = np.char.count(data, c).sum() The issue I am having is that this takes too long for my data. I have ~14,000,000 different strings that I would like to count, and this implementation is not efficient for that amount of data. Any help is appreciated!
[ "Another way.\nimport collections\nc = collections.Counter()\nfor thing in data:\n c.update(thing)\n\nSame basic advantage - only iterates the data once.\n", "One approach:\nimport numpy as np\nfrom collections import defaultdict\n\ndata = np.array(['A', 'AB', 'ABC'])\n\ncounts = defaultdict(int)\nfor e in data:\n for c in e:\n counts[c] += 1\n\nprint(counts)\n\nOutput\ndefaultdict(<class 'int'>, {'A': 3, 'B': 2, 'C': 1})\n\nNote that your code iterates len(string.printable) times over data in contrast my proposal iterates one time.\nOne alternative using a dictionary:\ndata = np.array(['A', 'AB', 'ABC'])\n\ncounts = dict()\nfor e in data:\n for c in e:\n counts[c] = counts.get(c, 0) + 1\n\nprint(counts)\n\n" ]
[ 2, 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074467540_numpy_python.txt
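One more variant in the same spirit as the Counter answer: feeding a single chained iterator to Counter lets it build the counts in one pass over a stream of characters instead of one update call per string, which tends to help at the ~14,000,000-string scale (that speed-up is an expectation, not measured here):

from collections import Counter
from itertools import chain
import numpy as np

data = np.array(['A', 'AB', 'ABC'])

# chain.from_iterable yields every character of every string in turn
char_count = Counter(chain.from_iterable(data))
print(char_count)  # Counter({'A': 3, 'B': 2, 'C': 1})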
Q: how to access object properties in json using beautifulsoup? python from bs4 import BeautifulSoup import fake_useragent import requests ua = fake_useragent.UserAgent() import soupsieve as sv url = "https://search-maps.yandex.ru/v1/?text=%D0%9F%D0%BE%D1%87%D1%82%D0%B0%20%D0%A0%D0%BE%D1%81%D1%81%D0%B8%D0%B8,%20%D0%9A%D1%80%D0%B0%D1%81%D0%BD%D0%BE%D0%B4%D0%B0%D1%80&results=500&type=biz&lang=ru_RU&apikey=d9168899-cf24-452a-95cf-06d7ac5a982b" r = requests.get(url, headers={"User-Agent": ua.random}) soup = BeautifulSoup(r.text, 'lxml') print(soup.find("p")) i want to choose from this list only two properties like "boundedBy" and "coordinates" How can i do it?I ve checked the whole bs documentation, but didnt find a solution A: The result from the server is in Json format, so use json parser or .json() method to decode it: import json import requests url = "https://search-maps.yandex.ru/v1/?text=%D0%9F%D0%BE%D1%87%D1%82%D0%B0%20%D0%A0%D0%BE%D1%81%D1%81%D0%B8%D0%B8,%20%D0%9A%D1%80%D0%B0%D1%81%D0%BD%D0%BE%D0%B4%D0%B0%D1%80&results=500&type=biz&lang=ru_RU&apikey=d9168899-cf24-452a-95cf-06d7ac5a982b" data = requests.get(url).json() # uncomment this to print all data: # print(json.dumps(data, indent=4)) print(data["properties"]["ResponseMetaData"]["SearchRequest"]["boundedBy"]) Prints: [[37.048427, 55.43644866], [38.175903, 56.04690174]] A: Use the .json() method of the response, since the data is JSON. You can then iterate over the features in the response. Note you can set the parameters separate from the URL so they are readable and easier to change: import requests import json url = 'https://search-maps.yandex.ru/v1' params = {'text': 'Почта России, Краснодар', 'results': 500, 'type': 'biz', 'lang': 'ru_RU', 'apikey': 'd9168899-cf24-452a-95cf-06d7ac5a982b'} r = requests.get(url, params=params) if r.ok: data = r.json() for feature in data['features']: x,y = feature["geometry"]["coordinates"] (x1,y1),(x2,y2) = feature["properties"]["boundedBy"] print(f'coordinates ({x:.6f}, {y:.6f}): bounds ({x1:.6f}, {y1:.6f})-({x2:.6f}, {y2:.6f})') Output: coordinates (38.969711, 45.028356): bounds (38.965656, 45.025508)-(38.973867, 45.031330) coordinates (38.969660, 45.028365): bounds (38.965656, 45.025508)-(38.973867, 45.031330) coordinates (38.993199, 45.063675): bounds (38.989012, 45.060879)-(38.997223, 45.066698) coordinates (39.029821, 45.048676): bounds (39.025753, 45.045304)-(39.033964, 45.051124) coordinates (38.992736, 45.034352): bounds (38.988590, 45.031496)-(38.996801, 45.037318) ...
How to access object properties in JSON using BeautifulSoup in Python?
from bs4 import BeautifulSoup import fake_useragent import requests ua = fake_useragent.UserAgent() import soupsieve as sv url = "https://search-maps.yandex.ru/v1/?text=%D0%9F%D0%BE%D1%87%D1%82%D0%B0%20%D0%A0%D0%BE%D1%81%D1%81%D0%B8%D0%B8,%20%D0%9A%D1%80%D0%B0%D1%81%D0%BD%D0%BE%D0%B4%D0%B0%D1%80&results=500&type=biz&lang=ru_RU&apikey=d9168899-cf24-452a-95cf-06d7ac5a982b" r = requests.get(url, headers={"User-Agent": ua.random}) soup = BeautifulSoup(r.text, 'lxml') print(soup.find("p")) I want to choose from this list only two properties, like "boundedBy" and "coordinates". How can I do it? I've checked the whole BeautifulSoup documentation, but didn't find a solution.
[ "The result from the server is in Json format, so use json parser or .json() method to decode it:\nimport json\nimport requests\n\n\nurl = \"https://search-maps.yandex.ru/v1/?text=%D0%9F%D0%BE%D1%87%D1%82%D0%B0%20%D0%A0%D0%BE%D1%81%D1%81%D0%B8%D0%B8,%20%D0%9A%D1%80%D0%B0%D1%81%D0%BD%D0%BE%D0%B4%D0%B0%D1%80&results=500&type=biz&lang=ru_RU&apikey=d9168899-cf24-452a-95cf-06d7ac5a982b\"\ndata = requests.get(url).json()\n\n# uncomment this to print all data:\n# print(json.dumps(data, indent=4))\n\nprint(data[\"properties\"][\"ResponseMetaData\"][\"SearchRequest\"][\"boundedBy\"])\n\nPrints:\n[[37.048427, 55.43644866], [38.175903, 56.04690174]]\n\n", "Use the .json() method of the response, since the data is JSON. You can then iterate over the features in the response. Note you can set the parameters separate from the URL so they are readable and easier to change:\nimport requests\nimport json\n\nurl = 'https://search-maps.yandex.ru/v1'\n\nparams = {'text': 'Почта России, Краснодар',\n 'results': 500,\n 'type': 'biz',\n 'lang': 'ru_RU',\n 'apikey': 'd9168899-cf24-452a-95cf-06d7ac5a982b'}\n\nr = requests.get(url, params=params)\nif r.ok:\n data = r.json()\n for feature in data['features']:\n x,y = feature[\"geometry\"][\"coordinates\"]\n (x1,y1),(x2,y2) = feature[\"properties\"][\"boundedBy\"]\n print(f'coordinates ({x:.6f}, {y:.6f}): bounds ({x1:.6f}, {y1:.6f})-({x2:.6f}, {y2:.6f})')\n\nOutput:\ncoordinates (38.969711, 45.028356): bounds (38.965656, 45.025508)-(38.973867, 45.031330)\ncoordinates (38.969660, 45.028365): bounds (38.965656, 45.025508)-(38.973867, 45.031330)\ncoordinates (38.993199, 45.063675): bounds (38.989012, 45.060879)-(38.997223, 45.066698)\ncoordinates (39.029821, 45.048676): bounds (39.025753, 45.045304)-(39.033964, 45.051124)\ncoordinates (38.992736, 45.034352): bounds (38.988590, 45.031496)-(38.996801, 45.037318)\n...\n\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "beautifulsoup", "json", "python", "python_requests" ]
stackoverflow_0074467316_arrays_beautifulsoup_json_python_python_requests.txt
Q: What am I doing wrong here (trying to print employee class) Traceback (most recent call last): File "C:/Users/cenni/OneDrive/Desktop/Computer science work and notes/Chapter 11 #1.py", line 20, in <module> main() File "C:/Users/cenni/OneDrive/Desktop/Computer science work and notes/Chapter 11 #1.py", line 18, in main print('Your name is ' + self.name(), + ' your employee number is ' + self.number(), + 'your shift number is ', + self.Snumber(), + ' your pay is ', + self.pay(), ' an hour.') NameError: name 'self' is not defined class Employee: def __init__(self, name, number, Snumber, Pay): self.name = name self.number = number def ProductionWorker(self, Snumber, pay): self.Snumber = Snumber self.pay = pay def main(): employee_name = input("Please enter your name: ") employee_number = input("Please enter your employee number: ") employee_Snumber = input("Please enter your shift number: ") employee_pay = input("Please enter your hourly wage: ") employee_info = Employee(employee_name, employee_number, employee_Snumber, employee_pay) print('Your name is ' + self.name(), + ' your employee number is ' + self.number(), + 'your shift number is ', + self.Snumber(), + ' your pay is ', + self.pay(), ' an hour.') main() I am unsure of how to fix this issue. i am trying to design a program that prints all of the classes and subclasses values. A: self is a local variable in the class methods. Outside the methods, the variable that contains the employee is employee_info, so use that in the print() call. __init__() needs to call self.productionWorker() to set self.Snumber and self.pay. You shouldn't have () after employee_info.name, in the print() call. These are data attributes, not methods, so you don't call them. class Employee: def __init__(self, name, number, Snumber, Pay): self.name = name self.number = number self.productionWorker(Snumber, Pay) def ProductionWorker(self, Snumber, pay): self.Snumber = Snumber self.pay = pay def main(): employee_name = input("Please enter your name: ") employee_number = input("Please enter your employee number: ") employee_Snumber = input("Please enter your shift number: ") employee_pay = input("Please enter your hourly wage: ") employee_info = Employee(employee_name, employee_number, employee_Snumber, employee_pay) print('Your name is ' + employee_info.name, + ' your employee number is ' + employee_info.number, + 'your shift number is ', + employee_info.Snumber, + ' your pay is ', + employee_info.pay, ' an hour.') main()
What am I doing wrong here (trying to print employee class)
Traceback (most recent call last): File "C:/Users/cenni/OneDrive/Desktop/Computer science work and notes/Chapter 11 #1.py", line 20, in <module> main() File "C:/Users/cenni/OneDrive/Desktop/Computer science work and notes/Chapter 11 #1.py", line 18, in main print('Your name is ' + self.name(), + ' your employee number is ' + self.number(), + 'your shift number is ', + self.Snumber(), + ' your pay is ', + self.pay(), ' an hour.') NameError: name 'self' is not defined class Employee: def __init__(self, name, number, Snumber, Pay): self.name = name self.number = number def ProductionWorker(self, Snumber, pay): self.Snumber = Snumber self.pay = pay def main(): employee_name = input("Please enter your name: ") employee_number = input("Please enter your employee number: ") employee_Snumber = input("Please enter your shift number: ") employee_pay = input("Please enter your hourly wage: ") employee_info = Employee(employee_name, employee_number, employee_Snumber, employee_pay) print('Your name is ' + self.name(), + ' your employee number is ' + self.number(), + 'your shift number is ', + self.Snumber(), + ' your pay is ', + self.pay(), ' an hour.') main() I am unsure of how to fix this issue. i am trying to design a program that prints all of the classes and subclasses values.
[ "self is a local variable in the class methods. Outside the methods, the variable that contains the employee is employee_info, so use that in the print() call.\n__init__() needs to call self.productionWorker() to set self.Snumber and self.pay.\nYou shouldn't have () after employee_info.name, in the print() call. These are data attributes, not methods, so you don't call them.\nclass Employee:\n def __init__(self, name, number, Snumber, Pay):\n self.name = name\n self.number = number\n self.productionWorker(Snumber, Pay)\n \n def ProductionWorker(self, Snumber, pay):\n self.Snumber = Snumber\n self.pay = pay\n \n def main():\n employee_name = input(\"Please enter your name: \")\n employee_number = input(\"Please enter your employee number: \")\n employee_Snumber = input(\"Please enter your shift number: \")\n employee_pay = input(\"Please enter your hourly wage: \")\n employee_info = Employee(employee_name, employee_number, employee_Snumber, employee_pay)\n \n print('Your name is ' + employee_info.name, + ' your employee number is ' + employee_info.number, + 'your shift number is ', + employee_info.Snumber, + ' your pay is ', + employee_info.pay, ' an hour.')\n \nmain()\n\n" ]
[ 1 ]
[]
[]
[ "class", "python" ]
stackoverflow_0074467632_class_python.txt
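The answer's sketch still has a couple of rough edges (the productionWorker/ProductionWorker case mismatch, main() indented inside the class, and unary + applied to strings inside print). A corrected, runnable version of the same idea, with the message built as an f-string - the prompts are kept from the question, everything else is just one possible layout:

class Employee:
    def __init__(self, name, number, snumber, pay):
        self.name = name
        self.number = number
        self.production_worker(snumber, pay)  # same spelling as the method below

    def production_worker(self, snumber, pay):
        self.snumber = snumber
        self.pay = pay


def main():
    name = input("Please enter your name: ")
    number = input("Please enter your employee number: ")
    snumber = input("Please enter your shift number: ")
    pay = input("Please enter your hourly wage: ")

    employee = Employee(name, number, snumber, pay)

    # attributes are plain data, so no parentheses after them
    print(f"Your name is {employee.name}, your employee number is {employee.number}, "
          f"your shift number is {employee.snumber} and your pay is {employee.pay} an hour.")


main()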
Q: Converting Multiple .xlsx Files to .csv - Pandas reading only 1 column `` Hello everyone, I am working on a deep learning project. The data I will use for the project consists of multiple excel files. Since I will be using the pd.read_csv command of the Pandas library, I used a VBA code that automatically converts all excel files to csv format. Here is the VBA CODE: (xlsx to csv) Sub WorkbooksSaveAsCsvToFolder() 'UpdatebyExtendoffice20181031 Dim xObjWB As Workbook Dim xObjWS As Worksheet Dim xStrEFPath As String Dim xStrEFFile As String Dim xObjFD As FileDialog Dim xObjSFD As FileDialog Dim xStrSPath As String Dim xStrCSVFName As String Dim xS As String Application.ScreenUpdating = False Application.EnableEvents = False Application.Calculation = xlCalculationManual Application.DisplayAlerts = False On Error Resume Next Set xObjFD = Application.FileDialog(msoFileDialogFolderPicker) xObjFD.AllowMultiSelect = False xObjFD.Title = "Kutools for Excel - Select a folder which contains Excel files" If xObjFD.Show <> -1 Then Exit Sub xStrEFPath = xObjFD.SelectedItems(1) & "\" Set xObjSFD = Application.FileDialog(msoFileDialogFolderPicker) xObjSFD.AllowMultiSelect = False xObjSFD.Title = "Kutools for Excel - Select a folder to locate CSV files" If xObjSFD.Show <> -1 Then Exit Sub xStrSPath = xObjSFD.SelectedItems(1) & "\" xStrEFFile = Dir(xStrEFPath & "*.xlsx*") Do While xStrEFFile <> "" xS = xStrEFPath & xStrEFFile Set xObjWB = Application.Workbooks.Open(xS) xStrCSVFName = xStrSPath & Left(xStrEFFile, InStr(1, xStrEFFile, ".") - 1) & ".csv" xObjWB.SaveAs Filename:=xStrCSVFName, FileFormat:=xlCSV xObjWB.Close savechanges:=False xStrEFFile = Dir Loop Application.Calculation = xlCalculationAutomatic Application.EnableEvents = True Application.ScreenUpdating = True Application.DisplayAlerts = True End Sub With this code, thousands of .xlsx files become .csv. The problem here is that although the conversion happens correctly, when I use the pd.read_csv command, it only reads 1 column. As it seems: 0 0 PlatformData,2,0.020000,43.000000,33.000000,32... 1 PlatformData,1,0.020000,42.730087,33.000000,25... 2 PlatformData,2,0.040000,43.000000,33.000000,32... 3 PlatformData,1,0.040000,42.730141,33.000006,25... 4 PlatformData,2,0.060000,43.000000,33.000000,32... ... ... 9520 PlatformData,1,119.520000,42.931132,33.056849,... 9521 PlatformData,1,119.540000,42.931184,33.056868,... 9522 PlatformData,1,119.560000,42.931184,33.056868,... 9523 PlatformData,1,119.580000,42.931237,33.056887,... 9524 PlatformData,1,119.600000,42.931237,33.056887,... Because the column part is not correct, it combines the data and prevents me from training the model. Afterwards, in order to understand what the problem was, I saw that the problem disappeared when I converted only 1 excel file to .csv format manually using the "Save as" command and read it using the pandas library. Which looks like this: 0 1 2 3 4 5 6 7 8 9 10 11 0 PlatformData 2 0.02 43.000000 33.000000 3200.0 0.000000 0.0 0.0 0.000000 0.000000 -0.0 1 PlatformData 1 0.02 42.730087 33.000000 3050.0 60.000029 0.0 0.0 74.999931 129.903854 -0.0 2 PlatformData 2 0.04 43.000000 33.000000 3200.0 0.000000 -0.0 0.0 0.000000 0.000000 -0.0 3 PlatformData 1 0.04 42.730114 33.000064 3050.0 60.000029 0.0 0.0 74.999931 129.903854 -0.0 4 PlatformData 2 0.06 43.000000 33.000000 3200.0 0.000000 -0.0 0.0 0.000000 0.000000 -0.0 ... ... ... ... ... ... ... ... ... ... ... ... ... 
57867 PlatformData 1 119.72 42.891333 33.019166 2550.0 5.000000 0.0 0.0 149.429214 13.073360 -0.0 57868 PlatformData 1 119.74 42.891333 33.019166 2550.0 5.000000 0.0 0.0 149.429214 13.073360 -0.0 57869 PlatformData 1 119.76 42.891387 33.019172 2550.0 5.000000 0.0 0.0 149.429214 13.073360 -0.0 57870 PlatformData 1 119.78 42.891387 33.019172 2550.0 5.000000 0.0 0.0 149.429214 13.073360 -0.0 57871 PlatformData 1 119.80 42.891441 33.019178 2550.0 5.000000 0.0 0.0 149.429214 13.073360 -0.0 As seen here, each comma is separated as a separate column. I need to convert multiple files using VBA or some other convert technique because I have so many excel files. But as you can see, even though the format of the files is translated correctly, it is read incorrectly by pandas. I've tried converting with a bunch of different VBA codes so far. Then I tried to read it with the read_excel command on python and then convert it with to_csv, but I encountered the same problem again. (Reading only 1 column) What do I need to do to make it look like it was when I changed the format manually? Is there an error in the VBA code or do I need to implement another method for this operation? Thank you for your interest. Thanks in advance for any help A: Dealing with CSV is a tricky thing (not only in Excel). "CSV" stands for "comma separated values", and Excel takes this literally: When you use SaveAs FileFormat:=xlCSV, it will put a comma between your values. Except if you are using local setting on your computer that have a different separator defined, then Excel is using that separator (on my computer, for example, a semicolon). Your Pandas seems to expect tab characters as separator. You could try SaveAs FileFormat:=xlText or xlTextWindows - on my computer that generated tab separated files, but I couldn't find a documentation telling that this is always the case. The alternative is to use a small routine that writes the file manually - see for example VBA code to save Excel sheet as tab-delimited text file However, I doubt that you cannot bring Pandas to read comma separated files. According to https://pandas.pydata.org/docs/user_guide/io.html#io-read-csv-table, you should be able to define the separation character. A: I'm not sure how to change your OS separator like @FunThomas suggested, perhaps you could instead specify the delimiter used for read_csv() or writing out to_csv() Have you tried specifying a delimiter? i.e. import pandas as pd df = pd.read_csv('Book1.csv', sep='\t') print(df) See more here: https://www.geeksforgeeks.org/pandas-dataframe-to-csv-file-using-tab-separator/ Note the link above shows to_csv, but the param sep exists for read_csv too. See docs here.
Converting Multiple .xlsx Files to .csv - Pandas reading only 1 column
`` Hello everyone, I am working on a deep learning project. The data I will use for the project consists of multiple excel files. Since I will be using the pd.read_csv command of the Pandas library, I used a VBA code that automatically converts all excel files to csv format. Here is the VBA CODE: (xlsx to csv) Sub WorkbooksSaveAsCsvToFolder() 'UpdatebyExtendoffice20181031 Dim xObjWB As Workbook Dim xObjWS As Worksheet Dim xStrEFPath As String Dim xStrEFFile As String Dim xObjFD As FileDialog Dim xObjSFD As FileDialog Dim xStrSPath As String Dim xStrCSVFName As String Dim xS As String Application.ScreenUpdating = False Application.EnableEvents = False Application.Calculation = xlCalculationManual Application.DisplayAlerts = False On Error Resume Next Set xObjFD = Application.FileDialog(msoFileDialogFolderPicker) xObjFD.AllowMultiSelect = False xObjFD.Title = "Kutools for Excel - Select a folder which contains Excel files" If xObjFD.Show <> -1 Then Exit Sub xStrEFPath = xObjFD.SelectedItems(1) & "\" Set xObjSFD = Application.FileDialog(msoFileDialogFolderPicker) xObjSFD.AllowMultiSelect = False xObjSFD.Title = "Kutools for Excel - Select a folder to locate CSV files" If xObjSFD.Show <> -1 Then Exit Sub xStrSPath = xObjSFD.SelectedItems(1) & "\" xStrEFFile = Dir(xStrEFPath & "*.xlsx*") Do While xStrEFFile <> "" xS = xStrEFPath & xStrEFFile Set xObjWB = Application.Workbooks.Open(xS) xStrCSVFName = xStrSPath & Left(xStrEFFile, InStr(1, xStrEFFile, ".") - 1) & ".csv" xObjWB.SaveAs Filename:=xStrCSVFName, FileFormat:=xlCSV xObjWB.Close savechanges:=False xStrEFFile = Dir Loop Application.Calculation = xlCalculationAutomatic Application.EnableEvents = True Application.ScreenUpdating = True Application.DisplayAlerts = True End Sub With this code, thousands of .xlsx files become .csv. The problem here is that although the conversion happens correctly, when I use the pd.read_csv command, it only reads 1 column. As it seems: 0 0 PlatformData,2,0.020000,43.000000,33.000000,32... 1 PlatformData,1,0.020000,42.730087,33.000000,25... 2 PlatformData,2,0.040000,43.000000,33.000000,32... 3 PlatformData,1,0.040000,42.730141,33.000006,25... 4 PlatformData,2,0.060000,43.000000,33.000000,32... ... ... 9520 PlatformData,1,119.520000,42.931132,33.056849,... 9521 PlatformData,1,119.540000,42.931184,33.056868,... 9522 PlatformData,1,119.560000,42.931184,33.056868,... 9523 PlatformData,1,119.580000,42.931237,33.056887,... 9524 PlatformData,1,119.600000,42.931237,33.056887,... Because the column part is not correct, it combines the data and prevents me from training the model. Afterwards, in order to understand what the problem was, I saw that the problem disappeared when I converted only 1 excel file to .csv format manually using the "Save as" command and read it using the pandas library. Which looks like this: 0 1 2 3 4 5 6 7 8 9 10 11 0 PlatformData 2 0.02 43.000000 33.000000 3200.0 0.000000 0.0 0.0 0.000000 0.000000 -0.0 1 PlatformData 1 0.02 42.730087 33.000000 3050.0 60.000029 0.0 0.0 74.999931 129.903854 -0.0 2 PlatformData 2 0.04 43.000000 33.000000 3200.0 0.000000 -0.0 0.0 0.000000 0.000000 -0.0 3 PlatformData 1 0.04 42.730114 33.000064 3050.0 60.000029 0.0 0.0 74.999931 129.903854 -0.0 4 PlatformData 2 0.06 43.000000 33.000000 3200.0 0.000000 -0.0 0.0 0.000000 0.000000 -0.0 ... ... ... ... ... ... ... ... ... ... ... ... ... 
57867 PlatformData 1 119.72 42.891333 33.019166 2550.0 5.000000 0.0 0.0 149.429214 13.073360 -0.0 57868 PlatformData 1 119.74 42.891333 33.019166 2550.0 5.000000 0.0 0.0 149.429214 13.073360 -0.0 57869 PlatformData 1 119.76 42.891387 33.019172 2550.0 5.000000 0.0 0.0 149.429214 13.073360 -0.0 57870 PlatformData 1 119.78 42.891387 33.019172 2550.0 5.000000 0.0 0.0 149.429214 13.073360 -0.0 57871 PlatformData 1 119.80 42.891441 33.019178 2550.0 5.000000 0.0 0.0 149.429214 13.073360 -0.0 As seen here, each comma is separated as a separate column. I need to convert multiple files using VBA or some other convert technique because I have so many excel files. But as you can see, even though the format of the files is translated correctly, it is read incorrectly by pandas. I've tried converting with a bunch of different VBA codes so far. Then I tried to read it with the read_excel command on python and then convert it with to_csv, but I encountered the same problem again. (Reading only 1 column) What do I need to do to make it look like it was when I changed the format manually? Is there an error in the VBA code or do I need to implement another method for this operation? Thank you for your interest. Thanks in advance for any help
[ "Dealing with CSV is a tricky thing (not only in Excel). \"CSV\" stands for \"comma separated values\", and Excel takes this literally: When you use SaveAs FileFormat:=xlCSV, it will put a comma between your values. Except if you are using local setting on your computer that have a different separator defined, then Excel is using that separator (on my computer, for example, a semicolon).\nYour Pandas seems to expect tab characters as separator. You could try SaveAs FileFormat:=xlText or xlTextWindows - on my computer that generated tab separated files, but I couldn't find a documentation telling that this is always the case. The alternative is to use a small routine that writes the file manually - see for example VBA code to save Excel sheet as tab-delimited text file\nHowever, I doubt that you cannot bring Pandas to read comma separated files. According to https://pandas.pydata.org/docs/user_guide/io.html#io-read-csv-table, you should be able to define the separation character.\n", "I'm not sure how to change your OS separator like @FunThomas suggested, perhaps you could instead specify the delimiter used for read_csv() or writing out to_csv()\nHave you tried specifying a delimiter? i.e.\nimport pandas as pd\ndf = pd.read_csv('Book1.csv', sep='\\t')\nprint(df)\n\nSee more here: https://www.geeksforgeeks.org/pandas-dataframe-to-csv-file-using-tab-separator/\nNote the link above shows to_csv, but the param sep exists for read_csv too. See docs here.\n" ]
[ 1, 1 ]
[]
[]
[ "excel", "file_conversion", "pandas", "python", "vba" ]
stackoverflow_0074431920_excel_file_conversion_pandas_python_vba.txt
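A minimal sketch related to the entry above (not part of the original thread): the fix the answers point to is to tell pandas which separator the exported files actually use, or to let it sniff the separator itself. The file name and the ';' guess below are assumptions for illustration; the real separator depends on the locale settings Excel used when saving.
import pandas as pd

# Hypothetical file produced by the VBA export - adjust the path to your own data.
path = "PlatformData_01.csv"

# Peek at the raw first line to see which separator the file really uses.
with open(path) as f:
    print(repr(f.readline()))

# Option 1: state the separator explicitly (',' is the default; many locales write ';').
df = pd.read_csv(path, sep=";", header=None)

# Option 2: let pandas sniff the separator (needs the slower python engine).
df = pd.read_csv(path, sep=None, engine="python", header=None)

print(df.shape)   # should now report 12 columns instead of 1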
Q: Python indexing question - 'IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed' Please can someone tell me why the following code does not work, and what the best work arounds for this are? Choices # variable containing True or False in each element. Choices.shape = (18978,) BestOption # variable containing 1 or 2 in each element. BestOption.shape = (18978, 1) Choices[BestOption==1] # I want to look up the values in choices for all instances where BestOption is 1. I get the following error: IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed A: BestOption is a 1-D "column vector" that's actually made up of many rows and is treated like a 2-D matrix. You can simply reshape it back to a 1-D "row vector": Choices[BestOption.reshape(-1)==1]
Python indexing question - 'IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed'
Please can someone tell me why the following code does not work, and what the best work arounds for this are? Choices # variable containing True or False in each element. Choices.shape = (18978,) BestOption # variable containing 1 or 2 in each element. BestOption.shape = (18978, 1) Choices[BestOption==1] # I want to look up the values in choices for all instances where BestOption is 1. I get the following error: IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
[ "BestOption is a 1-D \"column vector\" that's actually made up of many rows and is treated like a 2-D matrix. You can simply reshape it back to a 1-D \"row vector\":\nChoices[BestOption.reshape(-1)==1]\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "indexing", "numpy", "python" ]
stackoverflow_0074467609_arrays_indexing_numpy_python.txt
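A small self-contained illustration of the entry above, using made-up arrays rather than the asker's data: the boolean mask must have the same number of dimensions as the array being indexed, so the (N, 1) column vector has to be flattened first (reshape(-1), ravel(), or selecting its single column are equivalent).
import numpy as np

choices = np.array([True, False, True, False])   # shape (4,)
best_option = np.array([[1], [2], [1], [2]])     # shape (4, 1)

# choices[best_option == 1] raises IndexError: the mask is 2-D but choices is 1-D.
mask = (best_option == 1).ravel()                # flatten the mask to shape (4,)
print(choices[mask])                             # [ True  True]

# Equivalent: drop the singleton dimension before comparing.
print(choices[best_option[:, 0] == 1])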
Q: Using multiplication in a pulp constraint I'm trying to solve a problem similar to this simpler example. Target Constraint 12 25 15 50 14 10 8 2 etc I'm trying to maximize the sum of a selection of the target column while keeping the product of the constraint column < a certain number. So for example, if the constraint was 500, one possible solution would be 34, and another would be 29. How would I code that constraint? A: As @AirSquid has pointed out multiplication of variables is not allowed in the objective or constraints of a linear program (this would make it non-linear). However, the problem you have described can be straight-forwardly and exactly linearised by taking logs. The log of a product of some numbers is equal to the sum of the logs of those numbers. So somthing like: import pulp import numpy as np targets = [12, 15, 14, 8] constrs = [25, 50, 10, 2] max_prod = 500 row_idxs = range(len(targets)) log_constrs = [np.log(i) for i in constrs] log_max_prod = np.log(max_prod) prob = pulp.LpProblem('so_74304315', pulp.LpMaximize) z = pulp.LpVariable.dicts('z', indexs=row_idxs, cat='Binary') # Objective prob += pulp.lpSum([targets[i]*z[i] for i in row_idxs]) # Constraint (linearised from product to sum of logs) prob += pulp.lpSum([log_constrs[i]*z[i] for i in row_idxs]) <= log_max_prod # Solve & print results: prob.solve() print("Status:", pulp.LpStatus[prob.status]) print("Objective value: ", pulp.value(prob.objective)) print ("Decision variables: ") for v in prob.variables(): print(v.name, "=", v.varValue) Which gives me: Status: Optimal Objective value: 34.0 Decision variables: z_0 = 1.0 z_1 = 0.0 z_2 = 1.0 z_3 = 1.0
Using multiplication in a pulp constraint
I'm trying to solve a problem similar to this simpler example. Target Constraint 12 25 15 50 14 10 8 2 etc I'm trying to maximize the sum of a selection of the target column while keeping the product of the constraint column < a certain number. So for example, if the constraint was 500, one possible solution would be 34, and another would be 29. How would I code that constraint?
[ "As @AirSquid has pointed out multiplication of variables is not allowed in the objective or constraints of a linear program (this would make it non-linear).\nHowever, the problem you have described can be straight-forwardly and exactly linearised by taking logs. The log of a product of some numbers is equal to the sum of the logs of those numbers. So somthing like:\nimport pulp\nimport numpy as np\n\ntargets = [12, 15, 14, 8]\nconstrs = [25, 50, 10, 2]\nmax_prod = 500\nrow_idxs = range(len(targets))\n\nlog_constrs = [np.log(i) for i in constrs]\nlog_max_prod = np.log(max_prod)\n\nprob = pulp.LpProblem('so_74304315', pulp.LpMaximize)\nz = pulp.LpVariable.dicts('z', indexs=row_idxs, cat='Binary')\n\n# Objective\nprob += pulp.lpSum([targets[i]*z[i] for i in row_idxs])\n\n# Constraint (linearised from product to sum of logs)\nprob += pulp.lpSum([log_constrs[i]*z[i] for i in row_idxs]) <= log_max_prod\n\n# Solve & print results:\nprob.solve()\nprint(\"Status:\", pulp.LpStatus[prob.status])\nprint(\"Objective value: \", pulp.value(prob.objective))\nprint (\"Decision variables: \")\nfor v in prob.variables():\n print(v.name, \"=\", v.varValue)\n\nWhich gives me:\nStatus: Optimal\nObjective value: 34.0\nDecision variables:\nz_0 = 1.0\nz_1 = 0.0\nz_2 = 1.0\nz_3 = 1.0\n\n" ]
[ 0 ]
[]
[]
[ "pulp", "python" ]
stackoverflow_0074304315_pulp_python.txt
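A short verification snippet for the entry above, not part of the original answer: appended after prob.solve() in the answer's script (so it reuses z, row_idxs, targets, constrs and numpy from that code), it confirms that the log-linearised constraint really keeps the product of the chosen rows under the cap.
# Continuation of the answer's script, after prob.solve():
chosen = [i for i in row_idxs if z[i].varValue > 0.5]        # rows selected by the solver
product = np.prod([constrs[i] for i in chosen])
total = sum(targets[i] for i in chosen)
print(f"chosen rows: {chosen}, target sum: {total}, constraint product: {product}")
# For the sample data this prints rows [0, 2, 3], sum 34, product 500 <= 500.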
Q: How to control number of cores of a method I have the following code: from sklearn_extra.clusters import KMedoids def _compute_medoids(df, k): k_medoids = KMedoids(n_clusters=k, metric='precomputed', init='k-medoids++').fit(df) medoid_index=k_medoids.medoid_indices_ labels=k_medoids.labels_ return medoid_index, labels for k in range(1, 6): medoid_ids, labels = _compute_medoids(df, n_clusters=k) Executing the code this way, I get a bad performance. Unlike sklearn's models, sklearn_extra.cluster.KMedoids doesn't have a n_jobs parameter, and checking the core usage, most of the time the process is using just one core. I tried to use joblib: Parallel(n_jobs=os.cpu_count())(delayed(_compute_medoids)(df, k) for k in range(1, 6)) I got some performance improvement, but not enough for my task. And also, increasing the number of cores from 4 to 8 or 16 did not return a proportional amount of performance improvement. As I understand, these multiprocessing libs like joblib or multiprocessing can control the number of workers in parallel, but not the core usage of the processing function. Am I right? I was wondering if there exists a way to force _compute_medoids to be executed on a fixed number of cores, so that I can process as many workers I can (Example - Using 16 cores to set 4 workers to execute 4 compute_medoids method, each one using 4 cores). Is it possible? A: The kmedoids package has faster algorithms, including a parallel version of FasterPAM. https://python-kmedoids.readthedocs.io/en/latest/#kmedoids.fasterpam def _compute_medoids(df, k): import kmedoids km = kmedoids.fasterpam(df, k) return km.medoids, km.labels
How to control number of cores of a method
I have the following code: from sklearn_extra.cluster import KMedoids def _compute_medoids(df, k): k_medoids = KMedoids(n_clusters=k, metric='precomputed', init='k-medoids++').fit(df) medoid_index=k_medoids.medoid_indices_ labels=k_medoids.labels_ return medoid_index, labels for k in range(1, 6): medoid_ids, labels = _compute_medoids(df, k) Executing the code this way, I get poor performance. Unlike sklearn's models, sklearn_extra.cluster.KMedoids doesn't have an n_jobs parameter, and checking the core usage, most of the time the process is using just one core. I tried to use joblib: Parallel(n_jobs=os.cpu_count())(delayed(_compute_medoids)(df, k) for k in range(1, 6)) I got some performance improvement, but not enough for my task. Also, increasing the number of cores from 4 to 8 or 16 did not give a proportional performance improvement. As I understand it, multiprocessing libraries like joblib or multiprocessing can control the number of workers running in parallel, but not the core usage of the processing function. Am I right? I was wondering if there is a way to force _compute_medoids to be executed on a fixed number of cores, so that I can run as many workers as I can (example: using 16 cores to run 4 workers, each executing _compute_medoids on 4 cores). Is it possible?
[ "The kmedoids package has faster algorithms, including a parallel version of FasterPAM.\nhttps://python-kmedoids.readthedocs.io/en/latest/#kmedoids.fasterpam\ndef _compute_medoids(df, k):\n import kmedoids\n km = kmedoids.fasterpam(df, k)\n return km.medoids, km.labels\n\n" ]
[ 0 ]
[]
[]
[ "joblib", "multiprocessing", "python" ]
stackoverflow_0073977052_joblib_multiprocessing_python.txt
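On the entry above: one common way to give each worker a fixed core budget - an assumption-laden sketch, not something from the original answer - is to cap the native BLAS/OpenMP thread pools inside each joblib worker with threadpoolctl. How much this helps depends on how much of KMedoids' time is spent in native threaded code; the dummy distance matrix below just makes the sketch runnable.
import numpy as np
from joblib import Parallel, delayed
from threadpoolctl import threadpool_limits

# Dummy precomputed distance matrix standing in for the asker's df.
rng = np.random.default_rng(0)
pts = rng.random((100, 2))
df = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

def _compute_medoids_limited(df, k, cores_per_worker=4):
    # Cap the threads used by native code (BLAS/OpenMP) inside this worker process.
    with threadpool_limits(limits=cores_per_worker):
        from sklearn_extra.cluster import KMedoids
        km = KMedoids(n_clusters=k, metric='precomputed', init='k-medoids++').fit(df)
        return km.medoid_indices_, km.labels_

# 4 worker processes, each limited to roughly 4 cores' worth of native threads.
results = Parallel(n_jobs=4)(delayed(_compute_medoids_limited)(df, k) for k in range(1, 6))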
Q: I have two Excel lists with different PDFs that need to be merged. Is there any way to merge them using code rather than doing it manually (takes hours)? I have two Excel lists indicating the paths of PDF files that I need to merge. Is there any way to do this using code? The manual process takes hours. I've tried using VBA, but I don't have access to the Adobe API, so that's been ruled out. I am thinking Python, any thoughts? A: Check out PyPDF2 Example from pypdf2.readthedocs.io from PyPDF2 import PdfMerger merger = PdfMerger() for pdf in ["file1.pdf", "file2.pdf", "file3.pdf"]: merger.append(pdf) merger.write("merged-pdf.pdf") merger.close() A: Python is the way to go. You can do this quite easily by using the pandas and PyMuPDF libraries. # pip install PyMuPDF import pandas as pd import fitz PDFs = pd.read_excel("pdfs.xlsx") new_pdf = fitz.open() for _, row in PDFs.iterrows(): filename = row["pdf_column"] in_pdf = fitz.open(filename) new_pdf.insert_pdf(in_pdf) new_pdf.save("merged.pdf")
I have two Excel lists with different PDFs that need to be merged. Is there any way to merge them using code rather than doing it manually (takes hours)?
I have two Excel lists indicating the paths of PDF files that I need to merge. Is there any way to do this using code? The manual process takes hours. I've tried using VBA, but I don't have access to the Adobe API, so that's been ruled out. I am thinking Python, any thoughts?
[ "Check out PyPDF2\nExample from pypdf2.readthedocs.io\nfrom PyPDF2 import PdfMerger\n\nmerger = PdfMerger()\n\nfor pdf in [\"file1.pdf\", \"file2.pdf\", \"file3.pdf\"]:\n merger.append(pdf)\n\nmerger.write(\"merged-pdf.pdf\")\nmerger.close()\n\n", "Python is the way to go.\nYou can do this quite easily by using the pandas and PyMuPDF libraries.\n# pip install PyMuPDF\nimport pandas as pd\nimport fitz\n\nPDFs = pd.read_excel(«pdfs.xlsx»)\nnew_pdf = fitz.open() \n\nfor row in PDFs.iterrows():\n filename = row[«pdf_column»]\n in_pdf = fitz.open(filename)\n\n new_pdf.insert_pdf(in_pdf)\n\nnew_pdf.save(\"merged.pdf\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "excel", "pdf", "python" ]
stackoverflow_0074467349_excel_pdf_python.txt
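A hedged sketch tying the two answers above to the actual question: read the PDF paths out of the two Excel lists and feed them to PyPDF2's PdfMerger. The file names and the 'pdf_path' column name are assumptions; replace them with whatever your spreadsheets actually contain.
import pandas as pd
from PyPDF2 import PdfMerger

# Hypothetical workbook and column names - adjust to your two Excel lists.
paths = pd.concat([
    pd.read_excel("list1.xlsx"),
    pd.read_excel("list2.xlsx"),
])["pdf_path"]

merger = PdfMerger()
for pdf in paths:
    merger.append(pdf)       # each entry is the path of one PDF to append

merger.write("merged.pdf")
merger.close()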
Q: Using pandas I need to create a new column that takes a value from a previous row I have many rows of data and one of the columns is a flag. I have 3 identifiers that need to match between rows. What I have: partnumber, datetime1, previousdatetime1, datetime2, previousdatetime2, flag What I need: partnumber, datetime1, previousdatetime1, datetime2, previousdatetime2, flag, previous_flag I need to find flag from the row where partnumber matches, and where the previousdatetime1(current row*) == datetime1(other row)*, and the previousdatetime2(current row) == datetime2(other row). *To note, the rows are not necessarily in order so the previous row may come later in the dataframe I'm not quite sure where to start. I got this logic working in PBI using a LookUpValue and basically finding where partnumber = Value(partnumber), datetime1 = Value(datetime1), datetime2 = Value(datetime2). Thanks for the help! A: Okay, so assuming you've read this in as a pandas dataframe df1: (1) Make a copy of the dataframe: df2=df1.copy() (2) For sanity, drop some columns in df2 df2.drop(['previousdatetime1','previousdatetime2'],axis=1,inplace=True) Now you have a df2 that has columns: ['partnumber','datetime1','datetime2','flag'] (3) Merge the two dataframes newdf=df1.merge(df2,how='left',left_on=['partnumber','previousdatetime1'],right_on=['partnumber','datetime1'],suffixes=('','_previous')) Now you have a newdf that has columns: ['partnumber','datetime1','previousdatetime1','datetime2','previousdatetime2','flag','partnumber_previous','datetime1_previous','datetime2_previous','flag_previous'] (4) Drop the unnecessary columns newdf.drop(['partnumber_previous', 'datetime1_previous', 'datetime2_previous'],axis=1,inplace=True) Now you have a newdf that has columns: ['partnumber','datetime1','previousdatetime1','datetime2','previousdatetime2','flag','flag_previous']
Using pandas I need to create a new column that takes a value from a previous row
I have many rows of data and one of the columns is a flag. I have 3 identifiers that need to match between rows. What I have: partnumber, datetime1, previousdatetime1, datetime2, previousdatetime2, flag What I need: partnumber, datetime1, previousdatetime1, datetime2, previousdatetime2, flag, previous_flag I need to find the flag from the row where partnumber matches, and where the previousdatetime1 (current row)* == datetime1 (other row), and the previousdatetime2 (current row) == datetime2 (other row). *To note, the rows are not necessarily in order, so the "previous" row may come later in the dataframe. I'm not quite sure where to start. I got this logic working in PBI using a LookUpValue, basically finding where partnumber = Value(partnumber), datetime1 = Value(datetime1), datetime2 = Value(datetime2). Thanks for the help!
[ "Okay, so assuming you've read this in as a pandas dataframe df1:\n(1) Make a copy of the dataframe:\ndf2=df1.copy()\n\n(2) For sanity, drop some columns in df2\ndf2.drop(['previousdatetime1','previousdatetime2'],axis=1,inplace=True) \n\nNow you have a df2 that has columns:\n['partnumber','datetime1','datetime2','flag']\n\n(3) Merge the two dataframes\nnewdf=df1.merge(df2,how='left',left_on=['partnumber','previousdatetime1'],right_on=['partnumber','datetime1'],suffixes=('','_previous')) \n\nNow you have a newdf that has columns:\n['partnumber','datetime1','previousdatetime1','datetime2','previousdatetime2','flag','partnumber_previous','datetime1_previous','datetime2_previous','flag_previous']\n\n(4) Drop the unnecessary columns\nnewdf.drop(['partnumber_previous', 'datetime1_previous', 'datetime2_previous'],axis=1,inplace=True)\n\nNow you have a newdf that has columns:\n['partnumber','datetime1','previousdatetime1','datetime2','previousdatetime2','flag','flag_previous']\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074466651_dataframe_pandas_python.txt
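A note on the entry above: the answer's merge pairs only previousdatetime1 with datetime1, while the question asks for both datetime columns to match. A possible extension (an assumption about the intended logic, shown on tiny synthetic data with the question's column names) is to put both pairs in the merge keys.
import pandas as pd

df1 = pd.DataFrame({
    "partnumber": ["A", "A"],
    "datetime1": pd.to_datetime(["2022-01-02", "2022-01-03"]),
    "previousdatetime1": pd.to_datetime(["2022-01-01", "2022-01-02"]),
    "datetime2": pd.to_datetime(["2022-02-02", "2022-02-03"]),
    "previousdatetime2": pd.to_datetime(["2022-02-01", "2022-02-02"]),
    "flag": [1, 0],
})
df2 = df1[["partnumber", "datetime1", "datetime2", "flag"]].copy()

# Match partnumber plus BOTH previous datetimes against the other row's datetimes.
newdf = df1.merge(
    df2,
    how="left",
    left_on=["partnumber", "previousdatetime1", "previousdatetime2"],
    right_on=["partnumber", "datetime1", "datetime2"],
    suffixes=("", "_previous"),
)
newdf = newdf.drop(columns=["datetime1_previous", "datetime2_previous"])
print(newdf)   # flag_previous is NaN where no earlier row matches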
Q: Solve optimization problem with python library which has a logarithmic objective function How can I solve optimization problem: subject to: (I am looking for a library that its objective function can accept logarithms.) I found glpk and gurobipy but they don't seem to be able to do it. A: Based on your comments under the question, I am just going to refer you to one of the more standard libraries to solve this problem. Note the your objective concave and its a maximization problem. So, it is straightforward to rewrite it as a convex minimization problem and your constraints are linear. For such problems, you can use CVXOPT(https://cvxopt.org/index.html). In particular, look at some of the examples for how to use the library: https://cvxopt.org/examples/index.html#book-examples
Solve optimization problem with python library which has a logarithmic objective function
How can I solve this optimization problem: subject to: (I am looking for a library whose objective function can accept logarithms.) I found glpk and gurobipy, but they don't seem to be able to do it.
[ "Based on your comments under the question, I am just going to refer you to one of the more standard libraries to solve this problem. Note the your objective concave and its a maximization problem. So, it is straightforward to rewrite it as a convex minimization problem and your constraints are linear. For such problems, you can use CVXOPT(https://cvxopt.org/index.html). In particular, look at some of the examples for how to use the library: https://cvxopt.org/examples/index.html#book-examples\n" ]
[ 0 ]
[]
[]
[ "mathematical_optimization", "python" ]
stackoverflow_0074467453_mathematical_optimization_python.txt
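Since the objective and constraints in the entry above are only shown as images, here is a generic, hedged sketch of the kind of problem the answer describes - maximizing a concave sum of logarithms under linear constraints - written with CVXPY (a modelling layer that, like CVXOPT, accepts log terms). The data A and b are made up purely for illustration.
import numpy as np
import cvxpy as cp

# Made-up data: maximize sum(log(x)) subject to A x <= b, x >= 0.
rng = np.random.default_rng(0)
A = rng.random((3, 5))
b = np.ones(3)

x = cp.Variable(5)
objective = cp.Maximize(cp.sum(cp.log(x)))   # concave objective -> convex problem
constraints = [A @ x <= b, x >= 0]

prob = cp.Problem(objective, constraints)
prob.solve()
print("status:", prob.status)
print("optimal x:", x.value)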
Q: Does order of methods within a class matter? My problems or rather my misunderstanding are next. First one: Basically i made my linked list class, and now as you can see in following code in constructor i called append method before it was actually created and the code run without an error, so i am really interested to know why i didn't encountered any error there. class Node: def __init__(self, value): self.value = value self.next = None class Linkedlist: def __init__(self, *value): if len(value) == 1: new_node = Node(value[0]) self.head = new_node self.tail = new_node self.lenght = 1 else: self.__init__(value[0]) other_values = value[1::] for i in other_values: self.append(i) print('test1') def append(self, *value): for i in value: new_node = Node(i) if self.head == None: self.head = new_node self.tail = new_node else: self.tail.next = new_node self.tail = new_node self.lenght += 1 print('test2') return True Second one: As you can see i left print function in both constructor and append method in order to see how things are going. when i execute next code: my_linked_list = Linkedlist(3, 2, 7, 9) i get the output as following: test1, test2, test2, test2, test1 and i was expecting only test2, test2, test2, test1, i am curious why does it print test1 first. Sorry if my question was too long. I am quite new to programming and really curious about a lot of things. Answer would be greatly appreciated. A: Functions being defined is different from them being run; in your code, you define __init__ before you define append, but you don't actually call append until later. By the time you call it, it's been defined. For the order of prints, __init__ is called implicitly when you create the LinkedList. A: Your definition of Linkedlist.__init__ includes a recursive call to Likedlist.__init__. You don't need that, nor do you need to treat a single argument as a special case. You can simply write def __init__(self, *values): self.length = 0 for v in values: self.append(v) You'll need to adjust append slightly to ensure that self.length is always incremented, even if the list was originally empty. append becomes even simpler if you define your list to always include a dummy node that self.head references. You can store the length in this node. def __init__(self, *values): self.head = self.tail = Node(0) for v in values: self.append(v) I leave it as an exercise to write append under this assumption.
Does order of methods within a class matter?
My problems, or rather my misunderstandings, are as follows. First one: Basically I made my linked list class, and as you can see in the following code, in the constructor I called the append method before it was actually defined, yet the code ran without an error, so I am really interested to know why I didn't encounter any error there. class Node: def __init__(self, value): self.value = value self.next = None class Linkedlist: def __init__(self, *value): if len(value) == 1: new_node = Node(value[0]) self.head = new_node self.tail = new_node self.lenght = 1 else: self.__init__(value[0]) other_values = value[1::] for i in other_values: self.append(i) print('test1') def append(self, *value): for i in value: new_node = Node(i) if self.head == None: self.head = new_node self.tail = new_node else: self.tail.next = new_node self.tail = new_node self.lenght += 1 print('test2') return True Second one: As you can see, I left print calls in both the constructor and the append method in order to see how things are going. When I execute the following code: my_linked_list = Linkedlist(3, 2, 7, 9) I get the output test1, test2, test2, test2, test1, but I was expecting only test2, test2, test2, test1, so I am curious why it prints test1 first. Sorry if my question was too long. I am quite new to programming and really curious about a lot of things. An answer would be greatly appreciated.
[ "Functions being defined is different from them being run; in your code, you define __init__ before you define append, but you don't actually call append until later. By the time you call it, it's been defined.\nFor the order of prints, __init__ is called implicitly when you create the LinkedList.\n", "Your definition of Linkedlist.__init__ includes a recursive call to Likedlist.__init__. You don't need that, nor do you need to treat a single argument as a special case. You can simply write\ndef __init__(self, *values):\n self.length = 0\n\n for v in values:\n self.append(v)\n\nYou'll need to adjust append slightly to ensure that self.length is always incremented, even if the list was originally empty.\nappend becomes even simpler if you define your list to always include a dummy node that self.head references. You can store the length in this node.\ndef __init__(self, *values):\n self.head = self.tail = Node(0)\n for v in values:\n self.append(v)\n\nI leave it as an exercise to write append under this assumption.\n" ]
[ 1, 0 ]
[]
[]
[ "constructor", "methods", "oop", "python" ]
stackoverflow_0074467567_constructor_methods_oop_python.txt
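The second answer in the entry above leaves append as an exercise under its dummy-head-node design. One possible completion - my guess at the intended shape, not the answer author's code - keeps the length in the dummy node so append never needs a special case for an empty list.
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class Linkedlist:
    def __init__(self, *values):
        # Dummy node: self.head always exists, and its value stores the length.
        self.head = self.tail = Node(0)
        for v in values:
            self.append(v)

    def append(self, *values):
        for v in values:
            new_node = Node(v)
            self.tail.next = new_node   # works even when the list is empty,
            self.tail = new_node        # because the dummy node is always there
            self.head.value += 1        # length lives in the dummy node
        return True

lst = Linkedlist(3, 2, 7, 9)
print(lst.head.value)   # 4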
Q: How do I screenshot a single monitor using OpenCV? I am trying to devleope a device that changes the RGB led strips according to the colour of my display. To this I am planning on screnshotiing the screen an normalising/taking the mean of the colours of individual pixels in the display. I have figured out how to screenshot a single monitor but want to make it work with a multi monitor setup. Here's my basic code. Any help would be greatly appreciated. import numpy as np import cv2 import pyautogui # take screenshot using pyautogui image = pyautogui.screenshot() # since the pyautogui takes as a # PIL(pillow) and in RGB we need to # convert it to numpy array and BGR # so we can write it to the disk image = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR) I tried this using the mss module but it isn't working. It's having an issue where the secondary display is just clipping in the final image. import numpy as np import cv2 import pyautogui import mss with mss.mss() as sct: # Get information of monitor 2 monitor_number = 1 mon = sct.monitors[monitor_number] # The screen part to capture monitor = { "top": mon["top"], "left": mon["left"], "width": mon["width"], "height": mon["height"], "mon": monitor_number, } output = "sct-mon{mon}_{top}x{left}_{width}x{height}.png".format(**monitor) # Grab the data sct_img = sct.grab(monitor) img = np.array(sct.grab(monitor)) # BGR Image A: Using python-mss, we may iterate the list of monitors, and grab a frame from each monitor in a loop (we may place that loop in an endless loop). Example for iterating the monitors: for monitor_number, mon in enumerate(sct.monitors[1:]): We are ignoring index 0 (it looks like sct.monitors[0] applies a large combined monitor). enumerate is used as shortcut for iterating the list, and also getting an index. The following code sample grabs a frame from each monitor (in an endless loop), down-scale the frame, and shows it for testing (using cv2.imshow). Each monitor has a separate window, with monitor index shown at the title. Code sample: import numpy as np import cv2 import mss with mss.mss() as sct: # Grab frames in an endless lopp until q key is pressed while True: # Iterate over the list of monitors, and grab one frame from each monitor (ignore index 0) for monitor_number, mon in enumerate(sct.monitors[1:]): monitor = {"top": mon["top"], "left": mon["left"], "width": mon["width"], "height": mon["height"], "mon": monitor_number} # Not used in the example # Grab the data img = np.array(sct.grab(mon)) # BGRA Image (the format BGRA, at leat in Windows 10). # Show down-scaled image for testing # The window name is img0, img1... applying different monitors. cv2.imshow(f'img{monitor_number}', cv2.resize(img, (img.shape[1]//4, img.shape[0]//4))) key = cv2.waitKey(1) if key == ord('q'): break cv2.destroyAllWindows() Sample output (using two monitors): The above sample demonstrates the grabbing process. You still have to compute the normalized mean of colour... Note the the pixel format is BGRA and not BGR (the last channel is alpha (transparency) channel, that may be ignored).
How do I screenshot a single monitor using OpenCV?
I am trying to develop a device that changes RGB LED strips according to the colour of my display. To do this I am planning on screenshotting the screen and normalising/taking the mean of the colours of individual pixels in the display. I have figured out how to screenshot a single monitor but want to make it work with a multi monitor setup. Here's my basic code. Any help would be greatly appreciated. import numpy as np import cv2 import pyautogui # take screenshot using pyautogui image = pyautogui.screenshot() # since the pyautogui takes as a # PIL(pillow) and in RGB we need to # convert it to numpy array and BGR # so we can write it to the disk image = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR) I tried this using the mss module but it isn't working. It's having an issue where the secondary display is just clipping in the final image. import numpy as np import cv2 import pyautogui import mss with mss.mss() as sct: # Get information of monitor 2 monitor_number = 1 mon = sct.monitors[monitor_number] # The screen part to capture monitor = { "top": mon["top"], "left": mon["left"], "width": mon["width"], "height": mon["height"], "mon": monitor_number, } output = "sct-mon{mon}_{top}x{left}_{width}x{height}.png".format(**monitor) # Grab the data sct_img = sct.grab(monitor) img = np.array(sct.grab(monitor)) # BGR Image
[ "Using python-mss, we may iterate the list of monitors, and grab a frame from each monitor in a loop (we may place that loop in an endless loop).\n\nExample for iterating the monitors:\nfor monitor_number, mon in enumerate(sct.monitors[1:]):\n\n\nWe are ignoring index 0 (it looks like sct.monitors[0] applies a large combined monitor).\nenumerate is used as shortcut for iterating the list, and also getting an index.\n\n\nThe following code sample grabs a frame from each monitor (in an endless loop), down-scale the frame, and shows it for testing (using cv2.imshow).\nEach monitor has a separate window, with monitor index shown at the title.\nCode sample:\nimport numpy as np\nimport cv2\nimport mss \n\nwith mss.mss() as sct:\n # Grab frames in an endless lopp until q key is pressed\n while True:\n # Iterate over the list of monitors, and grab one frame from each monitor (ignore index 0)\n for monitor_number, mon in enumerate(sct.monitors[1:]):\n monitor = {\"top\": mon[\"top\"], \"left\": mon[\"left\"], \"width\": mon[\"width\"], \"height\": mon[\"height\"], \"mon\": monitor_number} # Not used in the example\n\n # Grab the data\n img = np.array(sct.grab(mon)) # BGRA Image (the format BGRA, at leat in Windows 10).\n\n # Show down-scaled image for testing\n # The window name is img0, img1... applying different monitors.\n cv2.imshow(f'img{monitor_number}', cv2.resize(img, (img.shape[1]//4, img.shape[0]//4)))\n key = cv2.waitKey(1)\n if key == ord('q'):\n break\n\ncv2.destroyAllWindows()\n\n\nSample output (using two monitors):\n\n\nThe above sample demonstrates the grabbing process.\nYou still have to compute the normalized mean of colour...\nNote the the pixel format is BGRA and not BGR (the last channel is alpha (transparency) channel, that may be ignored).\n" ]
[ 2 ]
[]
[]
[ "image_processing", "python", "python_3.x" ]
stackoverflow_0074462726_image_processing_python_python_3.x.txt
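The answer in the entry above stops at grabbing frames and notes that the mean colour still has to be computed. A small hedged sketch of that last step (per monitor, dropping the alpha channel of the BGRA frame) might look like this; how the result is sent to the LED strip is left out.
import numpy as np
import mss

def mean_rgb_per_monitor():
    """Return the average RGB colour of each physical monitor (a sketch)."""
    means = []
    with mss.mss() as sct:
        for mon in sct.monitors[1:]:              # index 0 is the combined virtual screen
            img = np.array(sct.grab(mon))         # BGRA frame
            mean_bgr = img[:, :, :3].reshape(-1, 3).mean(axis=0)
            means.append(mean_bgr[::-1])          # BGR -> RGB for the LED controller
    return means

print(mean_rgb_per_monitor())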