Python Requests/Selenium hard scraping tables

The website is: https://www.jao.eu/auctions#

I need to scrape and save the tables ('AUCTION SPECIFICATIONS & RESULTS') on jao.eu/auctions# that appear after I make selections in the OUT AREA, IN AREA, TYPE, AUCTION ID, etc. fields. For example, I need the table that displays after selecting: OUT AREA = 'AT', IN AREA = 'CH', TYPE = 'Daily', Date = '23/05/2021', AUCTION ID = ...

Thank you very much
To get the data you can simulate the Ajax request with the requests module. For example:

    import json
    import requests

    url = "https://www.jao.eu/api/v1/auction/calls/getauctions"
    payload = {
        "corridor": "AT-CH",
        "fromdate": "2021-05-22-22:00:00",
        "horizon": "Daily",
        "todate": "2021-05-23-21:59:59",
    }

    data = requests.post(url, json=payload).json()

    # uncomment this to print all data:
    # print(json.dumps(data, indent=4))

    for r in sorted(data[0]["results"], key=lambda k: k["productHour"]):
        print(
            r["productHour"],
            r["offeredCapacity"],
            r["requestedCapacity"],
            r["allocatedCapacity"],
            r["auctionPrice"],
        )

Prints:

    00:00-01:00 403 2379 403 14.23
    01:00-02:00 403 2440 403 14.9
    02:00-03:00 403 2290 403 8.45
    03:00-04:00 403 2290 403 5.34
    04:00-05:00 403 2215 403 2
    05:00-06:00 403 2240 403 1.98
    06:00-07:00 403 2125 403 0.56
    07:00-08:00 403 2102 403 0.53
    08:00-09:00 403 2100 403 0.31
    09:00-10:00 403 2100 403 0.31
    10:00-11:00 403 2106 403 0.31
    11:00-12:00 403 2189 403 0.88
    12:00-13:00 403 2230 403 1
    13:00-14:00 403 2240 403 1
    14:00-15:00 403 2242 403 1
    15:00-16:00 403 2148 403 0.68
    16:00-17:00 403 1990 403 0.33
    17:00-18:00 403 2007 403 0.34
    18:00-19:00 403 1935 403 0.3
    19:00-20:00 403 1935 402 0.15
    20:00-21:00 403 1935 402 0.15
    21:00-22:00 403 1935 402 0.15
    22:00-23:00 403 1935 402 0.15
    23:00-24:00 403 2036 403 0.25
Rename key field in all documents: duplicate key error

I have 4000 documents and I need to rename one key across all of them. What I have tried is:

    db.qa_opportunities.updateMany({}, {$rename: {"tx_date": "review_date"}})

but it created two keys: some values moved to review_date while others stayed in tx_date, and the error is:

    WriteResult({
        "nMatched" : 0,
        "nUpserted" : 0,
        "nModified" : 0,
        "writeError" : {
            "code" : 11000,
            "errmsg" : "E11000 duplicate key error index: fielding.qa_opportunities.$tx_date_1_emp_no_1_chat_id_1 dup key: { : null, : \"P111993\", : 4343675 }"
        }
    })

I need all values to end up under review_date only. Can anyone help?
You currently have an index which includes the tx_date field (among others), and because the index is configured to be unique, when you remove the tx_date field you end up with duplicate index keys (every renamed document indexes tx_date as null).

I would try the following:

1. Analyse all your indexes, and make a note of any which reference the tx_date field
2. Remove those indexes
3. Rename the key from tx_date to review_date
4. Recreate each index, referencing the new key name review_date in place of the old key name tx_date
Call a python script with parser arguments in .NET

Sorry if this is a duplicate question, but I could not find a clear answer. I am trying to call a python script (dnstwist.py) from within .NET using IronPython.

    public static void runDNSTwistPython(string url)
    {
        var ipy = Python.CreateRuntime();
        dynamic test = ipy.UseFile("dnstwist-master\\dnstwist.py");
        test.main();
    }

I need a way to call this and put the output in a .csv file. I can easily do this from the command line but can't figure out how to do it in my C# method.

    python dnstwist.py --csv google.com > output.csv

Thanks
@jacob-hall you can use the C# Process class and pass it the command to execute; the class documentation can be found here: Process. Note that FileName must be only the executable, the arguments go in Arguments, and shell redirection (>) does not work without a shell, so redirect the standard output stream yourself. Your code will be something like this:

    Process myProcess = new Process();
    myProcess.StartInfo.UseShellExecute = false;
    myProcess.StartInfo.FileName = "python";
    myProcess.StartInfo.Arguments = "dnstwist.py --csv google.com";
    myProcess.StartInfo.RedirectStandardOutput = true;
    myProcess.StartInfo.CreateNoWindow = true;
    myProcess.Start();
    File.WriteAllText("output.csv", myProcess.StandardOutput.ReadToEnd());
    myProcess.WaitForExit();
bin data depending on values of a separate column

I have a dataset which looks somewhat like this toy example:

    s1 = pd.Series(np.random.rand(5))
    s2 = pd.Series(np.random.rand(5) * 10)
    cat1 = pd.Series(['s1'] * 5)
    cat2 = pd.Series(['s2'] * 5)
    s = s1.append(s2).reset_index(drop=True)
    c = cat1.append(cat2).reset_index(drop=True)
    data = pd.DataFrame({'cat': c, 's': s})
    print data

      cat     s
    0  s1  0.68
    1  s1  0.61
    2  s1  0.43
    3  s1  0.68
    4  s1  0.11
    5  s2  4.82
    6  s2  8.19
    7  s2  3.88
    8  s2  5.51
    9  s2  1.20

I would like to bin the data, using a different binning range depending on the values in the column cat. This is what I tried:

    def bucketing_fun(x, cat):
        if cat == 's1':
            return np.digitize([x], s1_buckets)[0]
        else:
            return np.digitize([x], s2_buckets)[0]

    data['Buckets'] = data[['s', 'cat']].apply(lambda x: bucketing_fun(x[0], x[1]), axis=1)
    print data

This works, but I have performance issues on the real dataset, which is about 0.5m rows.
You're probably losing out on the vectorization speedup by calling np.digitize once per row. Call it once per group instead:

    buckets = dict(s1=s1_buckets, s2=s2_buckets)
    data['Buckets'] = data.groupby('cat', group_keys=False).apply(
        lambda df: pd.Series(np.digitize(df.s, buckets[df.cat.iloc[0]]), index=df.index)
    )

(Note: irow has been removed from pandas; use iloc[0] instead.)
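As a runnable sketch of the per-group idea: the question never shows s1_buckets/s2_buckets, so the bucket edges and sample rows below are invented, and a plain loop over groups is used so each category still gets exactly one vectorized np.digitize call.

```python
import numpy as np
import pandas as pd

# Hypothetical bucket edges per category (not given in the question)
buckets = {"s1": np.array([0.0, 0.25, 0.5, 0.75, 1.0]),
           "s2": np.array([0.0, 2.5, 5.0, 7.5, 10.0])}

data = pd.DataFrame({"cat": ["s1", "s1", "s2", "s2"],
                     "s":   [0.1, 0.6, 3.0, 8.0]})

# One vectorized np.digitize call per category instead of one per row
out = pd.Series(0, index=data.index)
for cat, grp in data.groupby("cat"):
    out[grp.index] = np.digitize(grp["s"], buckets[cat])
data["Buckets"] = out

print(data["Buckets"].tolist())  # [1, 3, 2, 4]
```

With only two categories this does two digitize calls over ~0.25m values each, rather than 0.5m Python-level function calls.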
How to return void in python

This is the famous coin change DP problem: given some coins, print the ways to make an amount, e.g. 11 = 2+2+2+5.

    arr = [2, 5]

    def Recur(amount, seq):
        if amount == 0:
            print(seq)
            return
        if amount < 0:
            return
        for coin in arr:
            seq += str(coin)
            Recur(amount - coin, seq)

    Recur(11, "")

Whatever I try, the function returns one 2 more; that is, it continues after the amount has reached 0. I tried returning 0, None, a bare return; nothing works. It always continues after < 0.
The error is in the line seq += str(coin): you are mutating seq, which persists across the for loop's iterations, so each subsequent recursive call receives a seq with extra coins already appended.

    arr = [2, 5]

    def Recur(amount, seq):
        if amount == 0:
            print(seq)
            return
        if amount < 0:
            return
        for coin in arr:
            seq += str(coin)  # bug: seq keeps growing across loop iterations
            Recur(amount - coin, seq)

    Recur(11, "")

You need to pass the updated seq only into the recursive call, hence do the following modification:

    arr = [2, 5]

    def Recur(amount, seq):
        if amount == 0:
            print(seq)
            return
        if amount < 0:
            return
        for coin in arr:
            Recur(amount - coin, seq + str(coin))

    Recur(11, "")
How do I get this command to work in different places at the same time? (discord.py)

I want this command to work in different places at the same time. When I run it in one channel and my friend runs it in another channel, the command starts to be duplicated when one of us presses the button. I don't click anything, but if my friend clicks the button in his channel, the message is sent in both channels.

    import random
    import string
    import discord
    from discord_components import DiscordComponents, Button, ButtonStyle

    @bot.command()
    async def random_screenshot(ctx):
        while True:
            letters = string.ascii_lowercase + string.digits
            rand_string = ''.join(random.choice(letters) for i in range(6))
            link = "https://prnt.sc/" + rand_string
            await ctx.send(link, components=[
                Button(style=ButtonStyle.blue, label="Next", emoji="➡")])
            await bot.wait_for('button_click')

This usually happens with all commands when I use a while loop.
The while loop isn't the problem here (though it's a separate problem).

What's happening is that await bot.wait_for("button_click") doesn't care which button is pressed. This means that when the command is run twice and a button is clicked, both messages respond.

You'll want to make sure that the wait_for only continues if the button clicked is our message's button (instead of another message's button). To do that, you'll need to do two things.

First, we need to know our button's custom ID so we can tell which button was pressed. The discord_components library already generates an ID for us when we create the Button object, so we can just remember it like so:

    button = Button(style=ButtonStyle.blue, label="Next", emoji="➡")
    button_id = button.custom_id
    await ctx.send(link, components=[button])

Second, we'll need to pass a function to wait_for as the check keyword argument that only returns True if the button clicked was our button. Something like so:

    def check_interaction(interaction):
        return interaction.custom_id == button_id

    await bot.wait_for("button_click", check=check_interaction)

Now your command should only respond to its own button presses :D
ORM works, Declarative doesn't. Why?

This should be self-explanatory. I am able to expose a database through the Object Relational approach (ORM), but not through the Declarative approach. Am I failing to instantiate the class? What is the missing step here?

    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy import Table, Column, Integer, String

    engine = create_engine('my connection details', echo=False)
    Session = sessionmaker(bind=engine)
    session = Session()
    Base = declarative_base()

    my_table = Table('my_table', Base.metadata, autoload=True,
                     autoload_with=engine, schema='my_schema')

    class MyClass(Base):
        __tablename__ = 'my_table'
        int_col = Column(Integer, primary_key=True)
        str_col = Column(String)

    >>> for stuff in session.query(my_table).all():
    ...     print stuff  # Works perfectly

    >>> for stuff in session.query(MyClass).all():
    ...     print stuff  # DatabaseError: table or view does not exist
Try this:

    class MyClass(Base):
        __table__ = my_table
        int_col = Column(Integer, primary_key=True)
        str_col = Column(String)

Declarative creates a table for each mapper, so the 'my_table' declared via __tablename__ is a different table. You may also use database reflection:

    metadata = sqlalchemy.MetaData(bind=engine)
    metadata.reflect()

    class MyClass(Base):
        __table__ = metadata.tables['my_table']
        int_col = Column(Integer, primary_key=True)
        str_col = Column(String)
numpy function IOError

On my MacBook Air running OSX Mavericks (I'm almost certain this wasn't happening the other day on a PC running Windows 7 with virtually identical code), the following code gives me the following error:

    import numpy as np

    massFile = 'Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt'
    print massFile
    sampleInfo = np.genfromtxt(fname=massFile, skip_header=2, usecols=(2,3,4), dtype=float)

massFile is printed out as expected as 'Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt', but I get the error:

    Traceback (most recent call last):
      File "test.py", line 7, in <module>
        sampleInfo = np.genfromtxt(fname=massFile, skip_header=2, usecols=(2,3,4), dtype=float)
      File "//anaconda/lib/python2.7/site-packages/numpy/lib/npyio.py", line 1317, in genfromtxt
        fhd = iter(np.lib._datasource.open(fname, 'rbU'))
      File "//anaconda/lib/python2.7/site-packages/numpy/lib/_datasource.py", line 145, in open
        return ds.open(path, mode)
      File "//anaconda/lib/python2.7/site-packages/numpy/lib/_datasource.py", line 477, in open
        return _file_openers[ext](found, mode=mode)
    IOError: [Errno 2] No such file or directory: '/Users/BigD/Dropbox/PhD/PPMS/Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt'

It appears to be using half of the path and then appending the full path to the end of it. Does anyone know why this is happening, or can suggest a workaround?
The path you're supplying in massFile is relative, so it is resolved against the directory you're executing the script in.

To see where you are, just type pwd in your shell. In your case, it will return /Users/BigD/Dropbox/PhD/PPMS/, so this value is silently prepended to your path:

    massFile = '/Users/BigD/Dropbox/PhD/PPMS/Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt'

This is also the value you're seeing in your traceback.

There are two ways to fix this. To mark a path as absolute, prefix it with a /:

    massFile = '/Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt'

Or, to keep it relative, remove the unneeded bits:

    massFile = 'DATA/DB/HeatCap/HeatCapMass.txt'

I would suggest picking the latter; that way you can move the project around without breaking all your paths.
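The absolute-vs-relative rule above can be checked directly with the stdlib os.path module (the paths are the ones from the question; the join simply illustrates how the OS resolves a relative path against the working directory):

```python
import os

# An absolute path starts with "/" and is used as-is;
# a relative path is resolved against the current working directory.
abs_path = "/Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt"
rel_path = "DATA/DB/HeatCap/HeatCapMass.txt"

print(os.path.isabs(abs_path))   # True
print(os.path.isabs(rel_path))   # False

# Roughly what resolution the OS performs when opening rel_path:
print(os.path.join(os.getcwd(), rel_path))
```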
update python module while it is active

Is it possible to update a python module while it is used in a running script? The situation is the following:

1) I have a script running using pandas 0.15.2. It is a long data-processing task and should continue running for at least another week.
2) I would like to run, on the same machine, another script which requires pandas 0.16.

Is it possible for me to do 2) without interrupting 1)?
If the script is still running, it's likely that replacing the dependency will not affect it at all - the code will already be in memory.Still, it's better to be safe than sorry. I would install the other script inside a virtualenv, in which you can install whichever versions of modules you want without affecting anything else.
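A minimal sketch of the virtualenv approach using the stdlib venv module; the environment path and the pandas version pin are just examples from the question:

```shell
# Create an isolated environment (example path)
python3 -m venv /tmp/pandas016-env

# Activate it and install the newer pandas inside it only
# (left commented here because the install needs network access):
# source /tmp/pandas016-env/bin/activate
# pip install pandas==0.16
# python second_script.py
```

The long-running script keeps using the system-wide pandas 0.15.2, while anything launched from inside the environment sees 0.16.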
Can't successfully convert to int/float within average calculation in python

Here is my code; without the last part it splits the name and score. I'm trying to work out the average using sum/len. I need to convert the score to float somewhere; whenever I try, I get an error:

    for name in sorted(user_scores):
        # get the highest score in the list
        average = sum(user_scores[name])/len[name]
        print(name, average)
Your average calculation is wrong. It should be:

    average = sum(user_scores[name]) / len(user_scores[name])

(Probably this was some kind of copy-paste error. With len[name] you are using name as an index into len, not as a parameter (hence the "not subscriptable" error), and with len(name) you would divide by the number of characters in the name.)
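A runnable sketch with made-up scores, including the float() conversion the question asks about (scores read from a file arrive as strings; the names and values here are invented):

```python
# Hypothetical raw data: scores parsed from a file are strings
raw_scores = {"alice": ["10", "20", "30"], "bob": ["5", "15"]}

# Convert every score to float once, up front
user_scores = {name: [float(s) for s in scores]
               for name, scores in raw_scores.items()}

for name in sorted(user_scores):
    average = sum(user_scores[name]) / len(user_scores[name])
    print(name, average)  # alice 20.0, then bob 10.0
```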
Error using python built-in function abs in pyspark-2.3

I was trying to convert a negative number to positive using the Python built-in abs function in the pyspark shell (2.3):

    numb = -2
    print(abs(numb))

It's throwing me a weird error:

    py4j.protocol.Py4JError: An error occurred while calling z:org.apache.spark.sql.functions.abs. Trace:
    py4j.Py4JException: Method abs([class java.lang.Integer]) does not exist
        at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
        at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:339)
        at py4j.Gateway.invoke(Gateway.java:276)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:745)

Could you please help me?
That's not how you use pyspark.sql.functions. They are not designed to be evaluated outside a DataFrame context; they operate on Columns. You could use a literal Column:

    from pyspark.sql.functions import abs, lit

    abs(lit(numb))

but it'll give you yet another Column:

    Column<b'abs(-2)'>

While in theory such objects can be evaluated locally, that is not intended for public usage.

If you want to operate on plain Python numerics, just stick to Python's built-in abs. If you've shadowed the built-in functions (for example with a star import from pyspark.sql.functions), you can express the function from the comments as:

    def math_result(current_val, value):
        result = ((value - current_val) / value) * 100
        return __builtins__.abs(__builtins__.round(result, 2))

    math_result(1, 3)
    ## 66.67
problems in read/write program with python

I have a file containing a number, say 1, and I want to write a program that reads that 1, adds 1 (so 1+1=2), and then writes 2 back to the file, so that afterwards there is only a 2 in the file. What I did is this:

    outfile = open('text.txt', 'r')
    m = outfile.readline()
    g = m + 1
    outfile.close()

    outfile = open('text.txt', 'w')
    outfile.write(str(g))
    outfile.close()

but it keeps saying:

    TypeError: can't concatenate 'str' and 'int' objects

I know the error that I am making... but I can't solve it. Mind helping me with this one? Thank you, cleverer people!
When you read m, it is a string. You need to cast it to int before adding 1.

    with open("text.txt") as inf:
        m = inf.read()

    g = int(m) + 1

    with open("text.txt", "w") as outf:
        outf.write(str(g))
Are there any pros to having a convolution layer using a filter the same size as the input data? Are there any pros to having a convolution layer using a filter the same size as the input data (i.e. the filter can only fit over the input one way)?
A filter the same size as the input data will collapse the output dimensions to 1 x 1 x n_filters, which could be useful towards the end of a network that has a low dimensional output like a single number for example.One place this is used is in sliding window object detection, where redundant computation is saved by making only one forward pass to compute the output on all windows.However, it is more typical to add one or more dense layers that give the desired output dimension instead of fully collapsing your data with convolution layers.
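The collapse to a single value can be seen without any deep-learning framework: a full-size filter leaves exactly one valid position, so the "convolution" reduces to one dot product per filter. A sketch with an invented 4x4 input and an averaging filter:

```python
import numpy as np

x = np.arange(16.0).reshape(4, 4)   # 4x4 input
w = np.ones((4, 4)) / 16.0          # 4x4 filter, same size as the input

# Only one valid filter position, so the output map is 1x1:
# a single dot product between the filter and the whole input.
out = float(np.sum(x * w))
print(out)  # 7.5 here, since this particular filter computes the mean
```

With n_filters such filters, stacking the n_filters scalar outputs gives the 1 x 1 x n_filters shape described above.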
How to use Emmet for Python in VS Code

How can I use Emmet for Python in VS Code? What item and value should I enter to add Emmet for Python and other languages?
In the .vscode folder > settings.json, add this:

    {
        "emmet.includeLanguages": {
            "javascript": "javascriptreact",
            "razor": "html",
            "plaintext": "pug",
            "django-html": "html"
        }
    }

For more info follow this link: Emmet in Visual Studio Code
Label not updating in tkinter

I asked a very similar question two days ago and it got closed; a mod told me to look at other similar questions, but none of the solutions worked. Does anyone have any idea how to fix the error? Here is the code; at the moment all it does is print and display the first number in the sequence:

    import tkinter as tk
    #import time

    window = tk.Tk()
    window.title("Hello wold")
    window.geometry("300x300")

    timer = int(input("time in seconds "))

    def update():
        global timer
        timer -= 1
        print(timer)
        hello = tk.Label(window, textvariable = timer)
        hello.pack()

    for i in range(timer):
        window.after(1000, update)
        tk.mainloop()
There are a few issues in your code:

- It is not recommended to use console input() in a GUI application.
- The for loop will be blocked by tk.mainloop() until the root window is closed, and the next iteration will then raise an exception since the root window has been destroyed. Actually the for loop is not necessary.

Below is a modified example:

    import tkinter as tk

    window = tk.Tk()
    window.title("Hello World")
    window.geometry("300x300")

    # it is not recommended to use console input() in a GUI app
    #timer = int(input("time in seconds: "))
    timer = 10  # use sample input value

    def update(timer=timer):
        hello['text'] = timer
        if timer > 0:
            # schedule to run update() again one second later
            hello.after(1000, update, timer-1)

    # create label once and update its text inside update()
    hello = tk.Label(window)
    hello.pack()

    update()  # start the after loop

    # should run mainloop() only once
    window.mainloop()
assigning a json dictionary key value to a variable

I have a JSON with the following structure:

    {
        "source": {
            "excelsheetname": {
                "convert_to_csv": "True",
                "tgt_folder": "archieve/datetime=",
                "brand": "chicken",
                "sheets": {
                    "sheet1": {"headers": 5, "columns": 1},
                    "sheet2": {"headers": 3, "columns": 2},
                    "sheet3": {"headers": 5, "columns": 3},
                    "sheet4": {"headers": 3, "columns": 2}
                }
            }
        }
    }

What I am trying to do is create a for loop that assigns each sheet's headers and columns to a variable. This is what my code looks like:

    for i in config['source']['excelsheetname']['sheets'][i]:
        if i == "headers":
            value = i['headers']

But this code doesn't work; I keep getting "TypeError: string indices must be integers".
First, you want to parse the JSON into a Python dictionary:

    import json

    config_str = """
    {
        "source": {
            "excelsheetname": {
                "convert_to_csv": "True",
                "tgt_folder": "archieve/datetime=",
                "brand": "chicken",
                "sheets": {
                    "sheet1": {"headers": 5, "columns": 1},
                    "sheet2": {"headers": 3, "columns": 2},
                    "sheet3": {"headers": 5, "columns": 3},
                    "sheet4": {"headers": 3, "columns": 2}
                }
            }
        }
    }
    """
    config = json.loads(config_str)

From your question, I understood that you want the headers and the columns of each sheet. When you iterate over a dictionary in Python, you iterate over its keys; use the items() method to get key/value pairs. Your code iterates over an expression that includes the loop variable, which doesn't make sense, as the variable doesn't exist until then. You can use a simple for loop to get what you want:

    for sheetname, sheet in config["source"]["excelsheetname"]["sheets"].items():
        headers = sheet["headers"]
        columns = sheet["columns"]
        # Processing
        print(headers + columns, sheetname)  # Just an example of processing
Convert Pandas Dataframe Strings to Decimal (Inc. Empty Strings)

I have written a utility function that converts strings to decimals; it also returns a zero decimal if the string is empty:

    from decimal import *

    def convert_string_to_decimal(some_string):
        return Decimal('0.00') if (some_string == '' or some_string.isspace()) else Decimal(some_string)

I have a pandas dataframe of a bank statement with two columns, called debit and credit, that I would now like to convert to decimals. How best should I go about using the function above? Secondly, is this even recommended? I read somewhere that one should use Decimal for currency.
There is no need for a new function; you can do it with astype:

    import pandas as pd

    data = {'id': ['A', 'B', 'C', 'D', 'E'],
            'debit': ['1.11', '', '2.22', '3.33', ' '],
            'credit': ['1.2345', '2.3456', '3.00', '4', '5']}
    df = pd.DataFrame(data)
    print(df)
    '''
      id debit  credit
    0  A  1.11  1.2345
    1  B        2.3456
    2  C  2.22    3.00
    3  D  3.33       4
    4  E             5
    '''

    df['debit'] = df['debit'].replace(' ', '').replace('', '0.00').astype(float)
    df['credit'] = df['credit'].replace(' ', '').replace('', '0.00').astype(float)
    print(df)
    '''
      id  debit  credit
    0  A   1.11  1.2345
    1  B   0.00  2.3456
    2  C   2.22  3.0000
    3  D   3.33  4.0000
    4  E   0.00  5.0000
    '''
Schedule an event for a certain unix timestamp in Python

How do I implement the following?

    def thirty_seconds_in():
        print('meow')

    at_time(time() + 30, thirty_seconds_in)

Do I need my own thread/runloop with a sleep(.01) in it?
The first solution is, like you said, to implement your own loop. The second solution is to use existing library functions, like the standard sched module. But in fact, either way, there is a runtime loop performing the check.
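A minimal sketch of the sched variant of the question's at_time, using the callback from the question (with a 0.1-second delay instead of 30 seconds so it finishes quickly):

```python
import sched
import time

def thirty_seconds_in():
    print('meow')

s = sched.scheduler(time.time, time.sleep)
# enterabs takes an absolute timestamp, mirroring at_time(time() + 30, ...)
s.enterabs(time.time() + 0.1, 1, thirty_seconds_in)
s.run()  # blocks (sleeping) until every scheduled event has fired
```

Internally sched does exactly what the answer describes: it sleeps until the next event's timestamp, fires it, and repeats.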
How to convert a datetime ('2019-06-05T10:37:29.353+0100') to UTC timestamp using Python3?

I want to convert a datetime, i.e. 2019-06-05T10:37:29.353+0100, to a UTC timestamp in Python 3. I understand that +0100 represents the timezone. Why do +0100, +0200, and +0300 all convert to the same timestamp? How can I convert a datetime containing a timezone to a UTC timestamp?

    >>> d = datetime.datetime.strptime('2019-06-05T10:37:29.353+0100', '%Y-%m-%dT%H:%M:%S.%f%z')
    >>> unixtime = time.mktime(d.timetuple())
    >>> unixtime
    1559723849.0
    >>> d = datetime.datetime.strptime('2019-06-05T10:37:29.353+0200', '%Y-%m-%dT%H:%M:%S.%f%z')
    >>> unixtime = time.mktime(d.timetuple())
    >>> unixtime
    1559723849.0
    >>> d = datetime.datetime.strptime('2019-06-05T10:37:29.353+0300', '%Y-%m-%dT%H:%M:%S.%f%z')
    >>> unixtime = time.mktime(d.timetuple())
    >>> unixtime
    1559723849.0
Here are some more explanations (see comments) of how to convert back and forth between timestamps as strings with a UTC offset and POSIX timestamps.

    from datetime import datetime, timezone

    s = '2019-06-05T10:37:29.353+0100'

    # to datetime object
    dt = datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f%z')

    # note that the object has tzinfo set to a specific timedelta:
    print(repr(dt))
    >>> datetime.datetime(2019, 6, 5, 10, 37, 29, 353000, tzinfo=datetime.timezone(datetime.timedelta(seconds=3600)))

    # you could store this info
    dt_UTCoffset = dt.utcoffset()  # datetime.timedelta(seconds=3600)

    # to get POSIX seconds since the epoch:
    ts = dt.timestamp()

    # and back to datetime:
    dt_from_ts = datetime.fromtimestamp(ts, tz=timezone.utc)

    # note that this is a UTC timestamp; the UTC offset is zero:
    print(dt_from_ts.isoformat())
    >>> 2019-06-05T09:37:29.353000+00:00

    # instead of UTC, you could also set a UTC offset:
    dt_from_ts = datetime.fromtimestamp(ts, tz=timezone(dt_UTCoffset))
    print(dt_from_ts.isoformat())
    >>> 2019-06-05T10:37:29.353000+01:00

And a note on a pitfall when working with datetime in Python: if you convert from timestamp to datetime and don't set the tz property, local time is returned (and the same applies the other way 'round):

    print(datetime.fromtimestamp(ts))  # I'm on CEST at the moment, so UTC+2
    >>> 2019-06-05 11:37:29.353000
How does Tensorflow or Keras handle model weight inititialization and when does it happen?

After reading the answer to this question I am a bit confused as to when exactly TensorFlow initializes the weight and bias variables. As per the answers, compile defines the loss function, the optimizer and the metrics. That's all. Since the compile() method doesn't initialize them, that would suggest that it happens during the fit() method run. However, the issue with that is, in the case of loading models or loading weights, how would fit() know that the weights it is presented with are actually useful and should not be thrown away and replaced with random values?

We pass the type of initializer in the kernel_initializer argument while declaring the layer. For example:

    dense02 = tf.keras.layers.Dense(units=10,
                    kernel_initializer='glorot_uniform',
                    bias_initializer='zeros')

So an obvious question is whether the weights are initialized layer by layer during the first-epoch forward pass, or whether all layers are initialized before the first epoch. (What I am trying to say is that if there are, say, 5 Dense layers in the model, does the initialization happen one layer at a time, i.e. the first Dense layer gets initialized, then the forward pass happens for that layer, then the second layer is initialized and the forward pass for the second Dense layer happens, and so on?)

Another aspect concerns transfer learning: when stacking custom layers on top of a trained model, the trained model's layers have their weights, while the layers that I added wouldn't have any useful weights. So how would TensorFlow know to initialize only the variables of the layers I added and not mess up the layers of the transferred model (provided I don't have trainable=False)?

How does TensorFlow or Keras handle weight initialization?
The weights are initialized when the model is created (when each layer in the model is initialized), i.e. before compile() and fit():

    import tensorflow as tf
    from tensorflow.keras import models, layers

    inputs = layers.Input((3, ))
    outputs = layers.Dense(units=10,
                           kernel_initializer='glorot_uniform',
                           bias_initializer='zeros')(inputs)
    model = models.Model(inputs=inputs, outputs=outputs)

    for layer in model.layers:
        print("Config:\n{}\nWeights:\n{}\n".format(layer.get_config(), layer.get_weights()))

Outputs:

    Config:
    {'batch_input_shape': (None, 3), 'dtype': 'float32', 'sparse': False, 'ragged': False, 'name': 'input_1'}
    Weights:
    []

    Config:
    {'name': 'dense', 'trainable': True, 'dtype': 'float32', 'units': 10, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}
    Weights:
    [array([[-0.60352975,  0.08275259, -0.6521113 , -0.5860774 , -0.42276743,
             -0.3142944 , -0.28118378,  0.07770532, -0.5644444 , -0.47069687],
            [ 0.4611913 ,  0.35170448, -0.62191975,  0.5837332 , -0.3390234 ,
             -0.4033073 ,  0.03493106, -0.06078851, -0.53159714,  0.49872506],
            [ 0.43685734,  0.6160207 ,  0.01610583, -0.3673877 , -0.14144647,
             -0.3792309 ,  0.05478126,  0.602067  , -0.47438127,  0.36463356]],
           dtype=float32),
     array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)]
How to apply average pooling at each time step of lstm output?

I'm trying to apply average pooling at each time step of the LSTM output. Please find my architecture below:

    X_input = tf.keras.layers.Input(shape=(64, 35))
    X = tf.keras.layers.LSTM(512, activation="tanh", return_sequences=True,
            kernel_initializer=tf.keras.initializers.he_uniform(seed=45),
            kernel_regularizer=tf.keras.regularizers.l2(0.1))(X_input)
    X = tf.keras.layers.LSTM(256, activation="tanh", return_sequences=True,
            kernel_initializer=tf.keras.initializers.he_uniform(seed=45),
            kernel_regularizer=tf.keras.regularizers.l2(0.1))(X)
    X = tf.keras.layers.GlobalAvgPool1D()(X)
    X = tf.keras.layers.Dense(128, activation="relu",
            kernel_initializer=tf.keras.initializers.he_uniform(seed=45),
            kernel_regularizer=tf.keras.regularizers.l2(0.1))(X)
    X = tf.keras.layers.Dense(64, activation="relu",
            kernel_initializer=tf.keras.initializers.he_uniform(seed=45),
            kernel_regularizer=tf.keras.regularizers.l2(0.1))(X)
    X = tf.keras.layers.Dense(32, activation="relu",
            kernel_initializer=tf.keras.initializers.he_uniform(seed=45),
            kernel_regularizer=tf.keras.regularizers.l2(0.1))(X)
    # X = tf.keras.layers.Dense(16, activation="relu",
    #         kernel_initializer=tf.keras.initializers.he_uniform(seed=45),
    #         kernel_regularizer=tf.keras.regularizers.l2(0.1))(X)
    output_layer = tf.keras.layers.Dense(10, activation='softmax',
            kernel_initializer=tf.keras.initializers.he_uniform(seed=45))(X)
    model2 = tf.keras.Model(inputs=X_input, outputs=output_layer)

I want to take the average at each time step, not over each unit. For example, right now I'm getting the shape (None, 256), but I want to get the shape (None, 64) from the global average pooling layer. What do I need to do for that?
I am not sure this is the most efficient way, but you can try this:

    X = tf.keras.layers.Reshape(target_shape=(64, 256, 1))(X)
    X = tf.keras.layers.TimeDistributed(tf.keras.layers.GlobalAveragePooling1D())(X)
    X = tf.keras.layers.Reshape(target_shape=(64,))(X)

instead of:

    X = tf.keras.layers.GlobalAvgPool1D()(X)

The summary is now:

    Model: "functional_13"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    input_14 (InputLayer)        [(None, 64, 35)]          0
    _________________________________________________________________
    lstm_26 (LSTM)               (None, 64, 512)           1122304
    _________________________________________________________________
    lstm_27 (LSTM)               (None, 64, 256)           787456
    _________________________________________________________________
    reshape_2 (Reshape)          (None, 64, 256, 1)        0
    _________________________________________________________________
    time_distributed_8 (TimeDist (None, 64, 1)             0
    _________________________________________________________________
    reshape_3 (Reshape)          (None, 64)                0
    _________________________________________________________________
    dense_61 (Dense)             (None, 128)               8320
    _________________________________________________________________
    dense_62 (Dense)             (None, 64)                8256
    _________________________________________________________________
    dense_63 (Dense)             (None, 32)                2080
    _________________________________________________________________
    dense_64 (Dense)             (None, 10)                330
    =================================================================
    Total params: 1,928,746
    Trainable params: 1,928,746
    Non-trainable params: 0
How to unify date format from HTML timestamps

I am scraping the publish date of articles from a number of publishers' websites using a Python script. This data is found in HTML attributes or tags identified variously by "time", "timestamp", and "published_date", among others, and provides the time in, for example, the following formats:

    <time class="timestamp article__timestamp flexbox__flex--1"> Updated Aug. 18, 2021 3:54 pm ET </time>
    <time class="css-x7rtpa e16638kd0" datetime="2021-08-18T19:10:54-04:00">Aug. 18, 2021</time>
    <time datetime="2021-08-18T15:45:33-04:00"><span class="date">August 18, 2021</span><span class="time">3:45 PM ET</span></time>
    <div class="timestamp"><span aria-label="Published on August 19, 2021 12:36 AM ET" class="timestamp__date--published"><span aria-hidden="true">08/19/2021 12:36 am ET</span></span></div>
    <div class="article-date"><strong>Published</strong> <time> 8 hours ago</time></div>
    'published_time': '2021-08-18T05:33:59Z'

This is what the text of those dates typically looks like after I grab it from those HTML tags:

    Aug. 18, 2021 6:56 am ET
    Aug. 18, 2021
    Updated Aug. 18, 2021 3:54 pm ET
    Published 6 hours ago
    2021-08-18T08:00:00Z

I plan to scrape additional publishers' sites in the future, so before I write my own script, I'm curious whether there's an existing solution or framework that unifies this format. The above tags and resulting text aren't shown in a 1:1 relationship because there's enough variation that that's somewhat irrelevant for a solution beyond writing my own script. The solutions I've found so far reference unifying dates in JavaScript, but not when extracting from HTML tags.

These dates will ultimately be consumed by a server app written in Swift.
The dateparser python library looks like the best solution to my needs:

- Support for almost every existing date format: absolute dates, relative dates ("two weeks ago" or "tomorrow"), timestamps, etc.
- Support for more than 200 language locales.
- Language autodetection.
- Customizable behavior through settings.
- Support for non-Gregorian calendar systems.
- Support for dates with timezone abbreviations or UTC offsets ("August 14, 2015 EST", "21 July 2013 10:15 pm +0500"...)
- Searching for dates in longer texts.

And for long-term stability, always a consideration for production deployments:

- Actively supported
- Over 7.4k users
- 90+ contributors
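For comparison, if taking on a dependency isn't an option, the fixed formats from the question can be covered with a small stdlib fallback. This is only a sketch tuned to the sample strings shown above (the format list and the "Updated"/"Published"/"ET" stripping are assumptions about this particular corpus); dateparser handles far more, including relative phrases in other locales:

```python
import re
from datetime import datetime, timedelta

# Candidate strptime patterns for the absolute formats seen in the question.
FORMATS = [
    "%b. %d, %Y %I:%M %p",   # "Aug. 18, 2021 6:56 am"
    "%b. %d, %Y",            # "Aug. 18, 2021"
    "%B %d, %Y",             # "August 18, 2021"
    "%Y-%m-%dT%H:%M:%SZ",    # "2021-08-18T08:00:00Z"
    "%m/%d/%Y %I:%M %p",     # "08/19/2021 12:36 am"
]

def parse_date(text):
    """Best-effort parse of one scraped timestamp string; returns None on failure."""
    # Strip prefixes/suffixes like "Updated", "Published", trailing "ET".
    cleaned = re.sub(r"^(Updated|Published)\s+|\s+ET$", "", text.strip())
    # Relative form: "6 hours ago".
    m = re.match(r"(\d+)\s+hours?\s+ago$", cleaned)
    if m:
        return datetime.now() - timedelta(hours=int(m.group(1)))
    for fmt in FORMATS:
        try:
            return datetime.strptime(cleaned, fmt)
        except ValueError:
            continue
    return None
```

Every new publisher tends to break a hand-rolled list like this, which is exactly the maintenance burden dateparser removes.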
400 Bad Request POST request

I'm programming an API application in Python, using POSTMAN and a Bearer token. I already receive the token and can do some GETs with a successful response. But when doing an insert of a record I get a 400 Bad Request error. This is the code I'm using for adding the record:

def add_identity(token, accountid, newIdentity):
    end_point = f"https://identityservice-demo.clearid.io/api/v2/accounts/{accountid}/identities/"
    headers = CaseInsensitiveDict()
    headers["Content-type"] = "application/json; charset=utf-8"
    headers["Authorization"] = f"Bearer {token}"
    response = requests.request("POST", end_point, data=newIdentity, headers=headers)
    print(f"{response.reason} - {response.status_code}")

The variable newIdentity has the following data:

nID = {
    "privateData": {
        "birthday": "1985-30-11T18:23:27.955Z",
        "employeeNumber": "99999999",
        "secondaryEmail": "",
        "cityOfResidence": "Wakanda",
        "stateOfResidence": "Florida",
        "zipCode": "102837",
        "phoneNumberPrimary": "(999)-999-999)",
        "phoneNumberSecondary": "+5-(999)-999-9999"
    },
    "companyData": {
        "approvers": [
            {"approverId": ""}
        ],
        "supervisorName": "Roger Rabbit",
        "departmentName": "Presidency",
        "jobTitle": "President",
        "siteId": "string",
        "companyName": "ACME Inc",
        "workerTypeDescription": "",
        "workerTypeCode": ""
    },
    "systemData": {
        "hasExtendedTime": "true",
        "activationDateUtc": "2022-03-16T18:23:27.955Z",
        "expirationDateUtc": "2022-03-16T18:23:27.955Z",
        "externalId": "999999",
        "externalSyncTimeUtc": "2022-03-16T18:23:27.955Z",
        "provisioningAttributes": [
            {"name": ""}
        ],
        "customFields": [
            {
                "customFieldType": "string",
                "customFieldName": "SSNO",
                "customFieldValue": "9999999"
            }
        ]
    },
    "nationalIdentities": [
        {
            "nationalIdentityNumber": "0914356777",
            "name": "Passport",
            "issuer": "Wakanda"
        }
    ],
    "description": "1st Record ever",
    "status": "Active",
    "firstName": "Bruce",
    "lastName": "Wayne",
    "middleName": "Covid",
    "displayName": "Bruce Wayne",
    "countryCode": "WK",
    "email": "bruce.wayne@wakanda.com",
    "creationOnBehalf": "ACME"
}

What could solve the problem? The swagger for the API is:

https://identityservice-demo.clearid.io/swagger/index.html#/Identities/get_api_v2_accounts__accountId__identities

Thanks for your help in advance.
data has to be a JSON string, not a dict. You can try import json and data=json.dumps(newIdentity). If it keeps returning 400, check that all the parameters are accepted by the API by recreating the request in Postman or any request editor, and if the API has a web interface, check what it reports as the reason for the 400. (This was translated by Google, so apologies if something reads oddly. :)
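A minimal sketch of the fix (the payload here is a trimmed placeholder from the question, and the `requests.post` calls are shown commented since `end_point`/`headers` come from the asker's code):

```python
import json

# Trimmed placeholder payload from the question.
payload = {"firstName": "Bruce", "lastName": "Wayne"}

# Option 1: let requests serialize the dict and set the Content-Type header:
#   response = requests.post(end_point, json=payload, headers=headers)

# Option 2: serialize explicitly; `data` must then be a JSON string, not a dict:
body = json.dumps(payload)
#   response = requests.post(end_point, data=body, headers=headers)
```

Passing a raw dict to `data=` makes requests form-encode it, which is why the API rejects it with a 400.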
Unable to consume a .NET WCF service in Python

I am trying to access a .NET WCF web service from Python and am getting the following error - would anyone be able to let me know what I am doing wrong:

File "C:\temp\anaconda3\lib\site-packages\suds\mx\literal.py", line 87, in start
    raise TypeNotFound(content.tag)
suds.TypeNotFound: Type not found: 'authToken'

Below is the Python code that I have:

import uuid
from suds.client import Client
from suds.xsd.doctor import Import, ImportDoctor

url = 'http://something/something?wsdl'
imp = Import('http://www.w3.org/2001/XMLSchema', location='http://www.w3.org/2001/XMLSchema.xsd')
imp = Import('http://schemas.xmlsoap.org/soap/encoding/')
imp = Import('http://schemas.xmlsoap.org/soap/encoding/')
imp.filter.add('http://tempuri.org/')
doctor = ImportDoctor(imp)

client = Client(url, doctor=doctor, headers={'Content-Type': 'application/soap+xml'})

logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.client').setLevel(logging.DEBUG)
logging.getLogger('suds.transport').setLevel(logging.DEBUG)
logging.getLogger('suds.xsd.schema').setLevel(logging.DEBUG)
logging.getLogger('suds.wsdl').setLevel(logging.DEBUG)

client.set_options

myMethod = client.factory.create('myMethod')
myMethod.authToken = uuid.UUID('xxxxxxxx-35f4-4b7b-accf-yyyyyyyyyyyy')

print(f'CLIENT: {client}')
print(f'myMethod: {myMethod}')

ls_Token = client.service.myMethod(myMethod)
print(f'ACCESSTOKEN: {ls_Token}')
Create the ResponseData object; the type is defined in the WSDL. If there are multiple schemas, you need to add a prefix, such as ns0, ns1, etc.:

ResponseData = client.factory.create('ns1:ResponseData')
ResponseData.token = "Test"

Make sure that the properties of the object you created exist. You can view the properties of the object after successfully creating it:

ResponseData = client.factory.create('ns1:ResponseData')
ResponseData.token = "Test"
print(ResponseData)

(The original answer included a screenshot of the ResponseData object's properties here.)

If I use the following code I get the same error as you:

ResponseData = client.factory.create('ns1:ResponseData')
ResponseData.authToken = "Test"

So you need to check whether the myMethod object has an authToken property.
How to split row in multiple rows and add new column also in pandas?

I am trying to split a row into multiple rows, but I need to add one more column when the split happens. Can you please help me how to do this?

Example:

df1:

rule_id  priority_order  comb_fld_order
R162     2.3             1
R162     2.3.1           1
R162     2.6             2
R162     2.6.1           2
R162     3.0.4           3.2,3.1,3

Expected output:

df2:

rule_id  priority_order  comb_fld_order  comb_fld_order_1
R162     2.3             1
R162     2.3.1           1
R162     2.6             2
R162     2.6.1           2
R162     3.0.4           3.2             dummy
R162     3.0.4           3.1             dummy
R162     3.0.4           3               dummy

For splitting I am using the code below, but I don't know how to add the extra column:

df1 = (df.set_index(['rule_id', 'priority_order'])
         .apply(lambda x: x.str.split(',').explode())
         .reset_index())
No need for the lambda function. You can just explode and name the column:

import pandas as pd

df1 = pd.DataFrame({
    'rule_id': ['R162', 'R162', 'R162', 'R162', 'R162'],
    'priority_order': ['2.3', '2.3.1', '2.6', '2.6.1', '3.0.4'],
    'comb_fld_order': ['1', '1', '2', '2', ('3.2', '3.1', '3')]
})

df2 = df1.set_index(['rule_id', 'priority_order']).explode('comb_fld_order')

rule_id  priority_order  comb_fld_order
R162     2.3             1
         2.3.1           1
         2.6             2
         2.6.1           2
         3.0.4           3.2
         3.0.4           3.1
         3.0.4           3

EDIT:

To address your 2nd requirement, you can do something like this:

exploded = df2.groupby('priority_order').count().to_dict()['comb_fld_order']
df2['was_exploded'] = df2.apply(lambda x: 'exploded' if int(exploded[x.name[1]]) > 1 else '', axis=1).values

It's not pretty, and I'm certain there's a more concise way, but the idea is that you perform a groupby on the original dataframe and take the values associated with each 'priority_order'. Then you go through your newly constructed dataframe and access the 2nd level of the multi-index to pass that as a key into a dictionary of counts associated with the 'priority_order' values. x.name[1] in a lambda function accesses the 2nd level of the index associated with the row value of x. Useful little feature, that one.
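A simpler alternative for the flag column (a sketch using the same toy frame; the column name `comb_fld_order_1` follows the question's expected output): mark the rows whose `comb_fld_order` value is list-like *before* exploding, so no groupby lookup is needed afterwards.

```python
import pandas as pd

df1 = pd.DataFrame({
    'rule_id': ['R162'] * 5,
    'priority_order': ['2.3', '2.3.1', '2.6', '2.6.1', '3.0.4'],
    'comb_fld_order': ['1', '1', '2', '2', ('3.2', '3.1', '3')],
})

# Flag rows that hold a list-like value before exploding; explode then
# duplicates the flag alongside each split-out row.
df1['comb_fld_order_1'] = df1['comb_fld_order'].map(
    lambda v: 'dummy' if isinstance(v, (list, tuple)) else ''
)
df2 = df1.explode('comb_fld_order').reset_index(drop=True)
```

This avoids the count-based lookup entirely, which also sidesteps any ambiguity when two different rows happen to share a `priority_order` value.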
Reverse for 'new_entry' with arguments '('',)' not found. 1 pattern(s) tried: ['new_entry/(?P<topic_id>[0-9]+)/$']

I'm new to learning Django and I want to build a study tracker web app with Django. The error came up when I was creating the function and template for new entries - this will allow a user to write a detailed note about a topic they are currently learning about - but whenever I render the template I get an error message: Reverse for 'new_entry' with arguments '('',)' not found. 1 pattern(s) tried: ['new_entry/(?P<topic_id>[0-9]+)/$']. I've tried looking at the latest new_entry() function I wrote and tweaking the topic variable name, but that did not solve the issue. I also cross-checked my URL paths for any misspellings or whitespace, but there aren't any. Here are my project files.

urls.py:

from django.urls import path
from . import views

app_name = 'django_apps'

urlpatterns = [
    # Home page
    path('', views.index, name='index'),
    # Page that shows all topics.
    path('topics/', views.topics, name='topics'),
    # Detail page for a single topic.
    path('topics/<int:topic_id>/', views.topic, name='topic'),
    # Page for adding a new topic.
    path('new_topic/', views.new_topic, name='new_topic'),
    # Page for adding a new entry.
    path('new_entry/<int:topic_id>/', views.new_entry, name='new_entry'),
]

views.py:

from django.shortcuts import render, redirect
from .models import Topic
from .forms import TopicForm, EntryForm

# Create your views here.
def index(request):
    """The home page for django app."""
    return render(request, 'django_apps/index.html')

def topics(request):
    """Show all topics."""
    topics_list = Topic.objects.order_by('id')
    context = {'topics_list': topics_list}
    return render(request, 'django_apps/topics.html', context)

def topic(request, topic_id):
    """Get topic and all entries associated with it."""
    topic_list = Topic.objects.get(id=topic_id)
    entries = topic_list.entry_set.order_by('-date_added')
    context = {'topic_list': topic_list, 'entries': entries}
    return render(request, 'django_apps/topic.html', context)

def new_topic(request):
    """Add a new topic."""
    if request.method != 'POST':
        # No data submitted; create a blank form.
        form = TopicForm()
    else:
        # POST data submitted; process data.
        form = TopicForm(data=request.POST)
        if form.is_valid():
            form.save()
            return redirect('django_apps:topics')
    # Display a blank or invalid form.
    context = {'form': form}
    return render(request, 'django_apps/new_topic.html', context)

def new_entry(request, topic_id):
    """Add a new entry for a topic."""
    topic_list = Topic.objects.get(id=topic_id)
    if request.method != 'POST':
        # No data submitted; create a blank form.
        form = EntryForm()
    else:
        # POST data submitted; process data.
        form = EntryForm(data=request.POST)
        if form.is_valid():
            latest_entry = form.save(commit=False)
            latest_entry.topic = topic_list
            latest_entry.save()
            return redirect('django_apps:topic', topic_id=topic_id)
    # Display a blank or invalid form.
    context = {'topic_list': topic_list, 'form': form}
    return render(request, 'django_apps/new_entry.html', context)

new_entry.html (updated!):

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Study Tracker - Entry</title>
</head>
<body>
{% extends 'django_apps/base.html' %}

{% block content %}
    {% for topic in topic_list %}
        <p><a href="{% url 'django_apps:topic' topic_id=topic.id %}">{{ topic }}</a></p>
        <p>Add a new entry:</p>
        <form action="{% url 'django_apps:new_entry' topic_id=topic.id %}" method="post">
            {% csrf_token %}
            {{ form.as_p }}
            <button name="submit">Add entry</button>
        </form>
    {% endfor %}
{% endblock content %}
</body>
</html>

base.html:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Base template</title>
</head>
<body>
<p>
    <a href="{% url 'django_apps:index' %}">Study Tracker</a> -
    <a href="{% url 'django_apps:topics' %}">Topics</a>
</p>

{% block content %}{% endblock content %}
</body>
</html>

forms.py:

from django import forms
from .models import Topic, Entry

class TopicForm(forms.ModelForm):
    """A class that defines the form in which a user can enter in a topic."""
    class Meta:
        """This class tells django which model to base the form on and the fields to include in the form."""
        model = Topic
        fields = ['text']
        labels = {'text': ''}

class EntryForm(forms.ModelForm):
    """A class that defines the form in which a user can fill in an entry to a topic."""
    class Meta:
        """This meta class tells django which model to base the form for entries on and the fields to include in the form."""
        model = Entry
        fields = ['text']
        labels = {'text': 'Entry:'}
        widgets = {'text': forms.Textarea(attrs={'cols': 80})}

I do not know how to go about fixing this issue. I've checked all my inherited urls and templates for errors, but I can't figure out what the problem is.

Update: Here is my topic.html template:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Study Tracker - Topic</title>
</head>
<body>
{% extends 'django_apps/base.html' %}

{% block content %}
    <p>Topic: {{ topic }}</p>
    <p>Entries:</p>
    <p>
        <a href="{% url 'django_apps:new_entry' topic.id %}">Add new entry</a>
    </p>
    <ul>
        {% for entry in entries %}
            <li>
                <p>{{ entry.date_added|date:'M d, Y H:i' }}</p>
                <p>{{ entry.text|linebreaks }}</p>
            </li>
        {% empty %}
            <li>There are currently no entries for this topic.</li>
        {% endfor %}
    </ul>
{% endblock content %}
</body>
</html>

I also checked the topic and entry ids in the shell too.
You passed your template the variable topic_list, but in your template you used topic. Since there is no variable named topic in the context, topic.id resolves to an empty string, which is exactly what produces Reverse for 'new_entry' with arguments '('',)'. If you make the names match (e.g. rename topic_list to topic in the view, or use topic_list in the template), it will work.
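One way to line the names up (a sketch against the question's topic.html; either side can be renamed, as long as both sides agree) is to keep the view's topic_list key and reference it consistently:

```html
<!-- topic.html: the context key is topic_list, so use it everywhere -->
<p>Topic: {{ topic_list }}</p>
<p>
    <a href="{% url 'django_apps:new_entry' topic_list.id %}">Add new entry</a>
</p>
```

With a non-empty id, the {% url %} tag can reverse new_entry/<int:topic_id>/ and the error disappears.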
Inheriting from BaseException vs Exception

I know what the difference between Exception and BaseException is in Python. I wonder what is good practice and more pythonic: should my exceptions inherit from BaseException or Exception?
By default, all user-defined exceptions should inherit from Exception. This is recommended in the documentation:

    exception Exception
    All built-in, non-system-exiting exceptions are derived from this class. All user-defined exceptions should also be derived from this class.

This is also recommended by and motivated in PEP 8:

    Derive exceptions from Exception rather than BaseException. Direct inheritance from BaseException is reserved for exceptions where catching them is almost always the wrong thing to do.

In general, exceptions deriving from Exception are intended to be handled by regular code. In contrast, exceptions deriving directly from BaseException are associated with special situations; handling them like normal exceptions can lead to unexpected behaviour. This is why an idiomatic "catch all" handler only handles Exception:

def retry(func):
    while True:
        try:
            return func()
        except Exception as err:
            print(f"retrying after {type(err)}: {err}")

Builtin exceptions inheriting directly from BaseException currently are KeyboardInterrupt, SystemExit, and GeneratorExit, which are associated with shutdown of the program, thread, or generator/coroutine. Incorrectly handling them will prevent a graceful shutdown.

Note that while the default should be to inherit from Exception, it is fine to inherit from BaseException if there is a good reason to do so. For example, asyncio.CancelledError also inherits from BaseException since it represents shutdown of asyncio's thread equivalent, the Task.
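To see the difference concretely, here is a small sketch (the exception class names are made up for illustration): an exception derived directly from BaseException slips past an `except Exception` handler, while an Exception subclass does not.

```python
class AppError(Exception):
    """Normal user-defined exception: caught by `except Exception`."""

class AppShutdown(BaseException):
    """Special-situation exception: NOT caught by `except Exception`."""

def classify(exc):
    """Report which handler catches `exc`."""
    try:
        raise exc
    except Exception:
        return "caught by except Exception"
    except BaseException:
        return "only caught by except BaseException"

print(classify(AppError()))     # caught by except Exception
print(classify(AppShutdown()))  # only caught by except BaseException
```

The same mechanics explain why pressing Ctrl-C still stops a program whose code is wrapped in `except Exception`: KeyboardInterrupt derives from BaseException, not Exception.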
Dimension in Tensorflow / keras and sparse_categorical_crossentropy

I cannot understand how to use a tensorflow dataset as input for my model. I have an X of shape (n_sample, max_sentence_size) and a y of shape (n_sample), but I cannot match the dimensions, and I am not sure what tensorflow does internally. Below you can find a reproducible example with empty matrices, but my data is not empty - it is an integer representation of text.

X_train = np.zeros((16, 6760))
y_train = np.zeros((16))
train = tf.data.Dataset.from_tensor_slices((X_train, y_train))

# Prepare for tensorflow
BUFFER_SIZE = 10000
BATCH_SIZE = 64
VOCAB_SIZE = 5354

train = train.shuffle(BUFFER_SIZE)  #.batch(BATCH_SIZE)

# Select index of interest in text
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=VOCAB_SIZE, output_dim=64, mask_zero=False),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(VOCAB_SIZE, activation='softmax'),
])

model.compile(loss="sparse_categorical_crossentropy",
              # loss=tf.keras.losses.MeanAbsoluteError(),
              optimizer=tf.keras.optimizers.Adam(1e-4),
              metrics=['sparse_categorical_accuracy'])

history = model.fit(train, epochs=3)

ValueError                                Traceback (most recent call last)
<ipython-input-74-3a160a5713dd> in <module>
----> 1 history = model.fit(train, epochs=3)

[... stack trace truncated: the call passes through keras/engine/training.py, training_v2.py, training_v2_utils.py, eager/def_function.py, eager/function.py, framework/func_graph.py, distribute/distribute_lib.py, keras/engine/training_eager.py, keras/losses.py, keras/backend.py, and ops/nn_ops.py ...]

ValueError: Shape mismatch: The shape of labels (received (1,)) should equal the shape of logits except for the last dimension (received (6760, 5354)).
This works for me in Tensorflow 2.0:

import numpy as np
import tensorflow as tf

# Prepare for tensorflow
BUFFER_SIZE = 10000
BATCH_SIZE = 64
VOCAB_SIZE = 5354

X_train = np.zeros((16, 6760))
y_train = np.zeros((16, 1))  # This is changed

train = tf.data.Dataset.from_tensor_slices((X_train, y_train))
train = train.shuffle(BUFFER_SIZE).batch(8)  # This is changed

# Select index of interest in text
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=VOCAB_SIZE, output_dim=64, input_length=6760, mask_zero=False),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(VOCAB_SIZE, activation='softmax'),
])
print(model.summary())

model.compile(loss="sparse_categorical_crossentropy",
              # loss=tf.keras.losses.MeanAbsoluteError(),
              optimizer=tf.keras.optimizers.Adam(1e-4),
              metrics=['sparse_categorical_accuracy'])

history = model.fit(train, epochs=3)
how to import django variable to html?

I have to send data from the rs232 port and display it in a graph, using javascript: https://canvasjs.com/html5-javascript-dynamic-chart/

I need help importing the views.py variable in my html.

views.py:

import io
from django.http import HttpResponse
from django.shortcuts import render
from random import sample

def about(read):
    import serial, time
    arduino = serial.Serial('COM3', 115200)
    time.sleep(2)
    read = arduino.readline()
    arduino.close()
    read = int(read)
    return HttpResponse(str(read), '')

Html:

var updateChart = function (count) {
    count = count || 1;
    for (var j = 0; j < count; j++) {
        <!-- this is where I try to read the variable -->
        yVal = href="{% 'about' %}"
        dps.push({ x: xVal, y: yVal });
        xVal++;
    }
Write this in your views and point a URL to it:

from django.shortcuts import render

def indexw(request):
    context_variable = ''
    context = {"context_variable": context_variable}
    return render(request, 'index.html', context)

In the template you can then read the value with {{ context_variable }}.
K-fold cross-validation in keras with single output for binary class

I am working with a convolutional neural network that I am using to classify cats and dogs; it has just one output for two classes. I need to use k-fold cross-validation to find which set of pet breeds gives the best validation accuracy. The closest answer to my problem is in this question: K fold cross validation using keras, but it doesn't use the original network model apparently and doesn't work for groups of pets with different breeds.

Inside Groups 1, 2 and 3, I have 2 folders called Pets, and inside each Pets folder I have two folders that are my classes: Cats and Dogs. For example:

Group 1/
    Pets 1/
        cats/
            breeds_1_cats001.jpeg
            breeds_1_cats002.jpeg
        dogs/
            breeds_1_dogs001.jpeg
            breeds_1_dogs002.jpeg
    Pets 2/
        cats/
            breeds_2_cats001.jpeg
            breeds_2_cats002.jpeg
        dogs/
            breeds_2_dogs001.jpeg
            breeds_2_dogs002.jpeg
Group 2/
    Pets 1/
        cats/
            breeds_3_cats001.jpeg
            breeds_3_cats002.jpeg
        dogs/
            breeds_3_dogs001.jpeg
            breeds_3_dogs002.jpeg
    Pets 2/
        cats/
            breeds_4_cats001.jpeg
            breeds_4_cats002.jpeg
        dogs/
            breeds_4_dogs001.jpeg
            breeds_4_dogs002.jpeg
Group 3/
    Pets 1/
        cats/
            breeds_5_cats001.jpeg
            breeds_5_cats002.jpeg
        dogs/
            breeds_5_dogs001.jpeg
            breeds_5_dogs002.jpeg
    Pets 2/
        cats/
            breeds_6_cats001.jpeg
            breeds_6_cats002.jpeg
        dogs/
            breeds_6_dogs001.jpeg
            breeds_6_dogs002.jpeg

What I want to do is use k-fold and have my groups as the fold indices. For example: use group 1 and group 2 as training and group 3 as validation; then group 1 and 3 as training and group 2 as validation; and lastly group 2 and group 3 as training and group 1 as validation. I've separated a dummy dataset to help explain my goals.

My problem is that I don't know how to use k-fold for multiple groups within a nested folder that has binary classes, where I'm using data generators for training and testing with binary output. I need to use k-fold for my convolutional neural network, without having to modify my data augmentation or destroy my layers, in order to find the best validation accuracy and save the weights. Here's my neural network:

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
import numpy as np
from keras.preprocessing import image

img_width, img_height = 128, 160

train_data_dir = '../input/pets/pets train'
validation_data_dir = '../input/pets/pets testing'
nb_train_samples = 4850
nb_validation_Samples = 3000
epochs = 100
batch_size = 16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

train_datagen = ImageDataGenerator(
    zoom_range=0.2,
    rotation_range=40,
    horizontal_flip=True,
)

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode="binary")

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.25))
model.add(Dense(64))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.summary()

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['mse', 'accuracy'])

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_Samples // batch_size)

model.save_weights('pets-weights.npy')
You won't be able to use ImageDataGenerator because, according to the documentation, cross_val_score expects an array of shape (n_samples, features).

What you can do is load your images in memory and create a custom CV splitter. I have a folder like this:

group1/
    cats/
        breeds_5_cats001.jpeg
        breeds_5_cats002.jpeg
    dogs/
        breeds_4_dogs001.jpeg
        breeds_4_dogs002.jpeg
group2/
    cats/
        breeds_5_cats001.jpeg
        breeds_5_cats002.jpeg
    dogs/
        breeds_4_dogs001.jpeg
        breeds_4_dogs002.jpeg
group3/
    cats/
        breeds_5_cats001.jpeg
        breeds_5_cats002.jpeg
    dogs/
        breeds_4_dogs001.jpeg
        breeds_4_dogs002.jpeg

I started by globbing the filenames and grouping them. You will need to change the glob pattern a little since my directory structure is slightly different. All it needs to do is get ALL the pictures, no matter the order.

from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
import numpy as np
from tensorflow.keras.layers import *
from tensorflow.keras import Sequential
import os
from glob2 import glob
from itertools import groupby
from itertools import accumulate
import cv2

os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
import tensorflow as tf
tf.config.experimental.list_physical_devices('GPU')

os.chdir('c:/users/nicol/documents/datasets/catsanddogs')

filenames = glob('*/*/*.jpg')

groups = [list(v) for k, v in groupby(sorted(filenames), key=lambda x: x.split(os.sep)[0])]
lengths = [0] + list(accumulate(map(len, groups)))
groups = [i for s in groups for i in s]

['group1\\cats\\cat.4001.jpg',
 'group1\\cats\\cat.4002.jpg',
 'group1\\cats\\cat.4003.jpg',
 'group1\\cats\\cat.4004.jpg',
 'group1\\cats\\cat.4005.jpg',
 'group1\\cats\\cat.4006.jpg',
 'group1\\cats\\cat.4007.jpg',
 'group1\\cats\\cat.4008.jpg',
 'group1\\cats\\cat.4009.jpg',
 'group1\\cats\\cat.4010.jpg']

Then I loaded all the pictures in an array and made an array of 0 and 1 for the categories. You will need to customize this according to your directory structure.

images = list()
for image in filenames:
    array = cv2.imread(image)/255
    resized = cv2.resize(array, (32, 32))
    images.append(resized)

X = np.array(images).astype(np.float32)
y = np.array(list(map(lambda x: x.split(os.sep)[1] == 'cats', groups))).astype(int)

Then I built a KerasClassifier:

def build_model():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=(32, 32, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dropout(0.25))
    model.add(Dense(64))
    model.add(Dropout(0.5))
    model.add(Dense(1))
    model.add(Activation('sigmoid'))
    model.summary()
    model.compile(loss='binary_crossentropy',
                  optimizer='rmsprop',
                  metrics=['mse', 'accuracy'])
    return model

keras_clf = KerasClassifier(build_fn=build_model, epochs=1, batch_size=16, verbose=0)

Then I made a custom CV splitter, as explained here:

def three_fold_cv():
    i = 1
    while i <= 3:
        min_length = lengths[i - 1]
        max_length = lengths[i]
        idx = np.arange(min_length, max_length, dtype=int)
        yield idx, idx
        i += 1

Then I instantiated the custom CV splitter and ran the training:

tfc = three_fold_cv()
accuracies = cross_val_score(estimator=keras_clf, scoring="accuracy", X=X, y=y, cv=tfc)
print(accuracies)

Output:

[0.648 0.666 0.73 ]

Full code:

from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
import numpy as np
from tensorflow.keras.layers import *
from tensorflow.keras import Sequential
import os
from glob2 import glob
from itertools import groupby
from itertools import accumulate
import cv2

os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
import tensorflow as tf
tf.config.experimental.list_physical_devices('GPU')

os.chdir('c:/users/nicol/documents/datasets/catsanddogs')

filenames = glob('*/*/*.jpg')

groups = [list(v) for k, v in groupby(sorted(filenames), key=lambda x: x.split(os.sep)[0])]
lengths = [0] + list(accumulate(map(len, groups)))
groups = [i for s in groups for i
in s]images = list()for image in filenames: array = cv2.imread(image)/255 resized = cv2.resize(array, (32, 32)) images.append(resized)X = np.array(images).astype(np.float32)y = np.array(list(map(lambda x: x.split(os.sep)[1] == 'cats', groups))).astype(int)def build_model(): model = Sequential() model.add(Conv2D(32, (3, 3), input_shape=(32, 32, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dropout(0.25)) model.add(Dense(64)) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.summary() model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['mse', 'accuracy']) return modelkeras_clf = KerasClassifier(build_fn=build_model, epochs=1, batch_size=16, verbose=0)def three_fold_cv(): i = 1 while i <= 3: min_length = lengths[i - 1] max_length = lengths[i] idx = np.arange(min_length, max_length, dtype=int) yield idx, idx i += 1tfc = three_fold_cv()accuracies = cross_val_score(estimator=keras_clf, scoring="accuracy", X=X, y=y, cv=tfc)print(accuracies)[0.648 0.666 0.73 ]Here's a copy/pastable example with the MNIST dataset:from tensorflow.keras.wrappers.scikit_learn import KerasClassifierfrom sklearn.model_selection import cross_val_scoreimport numpy as npfrom tensorflow.keras.layers import *from tensorflow.keras import Sequentialfrom itertools import accumulateimport tensorflow as tf# Here's your dataset.(xtrain, ytrain), (_, _) = tf.keras.datasets.mnist.load_data()# You have three groups, as you wanted. 
They are 20,000 each.x_group1, y_group1 = xtrain[:20_000], ytrain[:20_000]x_group2, y_group2 = xtrain[20_000:40_000:], ytrain[20_000:40_000:]x_group3, y_group3 = xtrain[40_000:60_000], ytrain[40_000:60_000]# You need the accumulated lengths of the datasets: [0, 20000, 40000, 60000]lengths = [0] + list(accumulate(map(len, [y_group1, y_group2, y_group3])))# Now you need all three in a single dataset.X = np.concatenate([x_group1, x_group2, x_group3], axis=0)[..., np.newaxis]y = np.concatenate([y_group1, y_group2, y_group3], axis=0)# KerasClassifier needs a model building function.def build_model(): model = Sequential() model.add(Conv2D(32, (3, 3), input_shape=(28, 28, 1))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dropout(0.25)) model.add(Dense(64)) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.summary() model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['mse', 'accuracy']) return model# Creating the KerasClassifier.keras_clf = KerasClassifier(build_fn=build_model, epochs=1, batch_size=16, verbose=0)# Creating the custom Cross-validation splitter. Splits are based on `lengths`.def three_fold_cv(): i = 1 while i <= 3: min_length = lengths[i - 1] max_length = lengths[i] idx = np.arange(min_length, max_length, dtype=int) yield idx, idx i += 1accuracies = cross_val_score(estimator=keras_clf, scoring="accuracy", X=X, y=y, cv=three_fold_cv())print(accuracies)
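The custom three_fold_cv splitter above can be sanity-checked without Keras or any image data at all; given the accumulated lengths, it should yield one contiguous index block per group. A minimal sketch (the 20,000-element group sizes mirror the MNIST example):

```python
import numpy as np
from itertools import accumulate

# Accumulated group lengths, as in the answer: [0, 20000, 40000, 60000]
lengths = [0] + list(accumulate([20_000, 20_000, 20_000]))

def three_fold_cv():
    # Yields (train_idx, test_idx) pairs; each group serves as both,
    # matching the answer's one-group-per-fold setup.
    for i in range(1, len(lengths)):
        idx = np.arange(lengths[i - 1], lengths[i], dtype=int)
        yield idx, idx

folds = list(three_fold_cv())
print([(f[0][0], f[0][-1]) for f in folds])
```

Each fold covers exactly one group's index range, which is what makes cross_val_score train and score on the intended groups.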
Converting pandas groupby values to numpy array

I tried multiple solutions but none of them gives the desired output. I have a DataFrame:

    tag   value
    'A'     3.7
    'A'     1.5
    'E'     9.7
    'E'     2.9
    'B'    -1.2
    'B'     0.8

My expected output is a NumPy array:

    array([[ 3.7,  1.5],
           [ 9.7,  2.9],
           [-1.2,  0.8]])

I tried using groupby and converting to a numpy array:

    df.groupby(['tag']).value.apply(np.array).values

But I get the output as:

    array([array([3.7, 1.5]), array([9.7, 2.9]), array([-1.2, 0.8])], dtype=object)
If there is always the same number of values per group, it is possible to create nested lists and pass them to np.array; to keep the original order of groups, add the sort=False parameter to DataFrame.groupby:

    arr = np.array(df.groupby(['tag'], sort=False).value.apply(list).tolist())
    print(arr)

Output:

    [[ 3.7  1.5]
     [ 9.7  2.9]
     [-1.2  0.8]]
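If the rows of each group are already contiguous, as in the sample data, a reshape is an even shorter alternative. This is a sketch rather than a general solution: it assumes equal group sizes and contiguous groups.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "tag": ["A", "A", "E", "E", "B", "B"],
    "value": [3.7, 1.5, 9.7, 2.9, -1.2, 0.8],
})

# One row per tag, as many columns as values per tag; no groupby needed.
arr = df["value"].to_numpy().reshape(df["tag"].nunique(), -1)
print(arr)
```

Because no grouping actually happens, the original row order is preserved automatically.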
Cross validation inconsistent numbers of samples error (Python)

I am trying to make a classification using the cross validation method and an SVM classifier. In my data file, the last column contains my classes (which are 0, 1, 2, 3, 4, 5) and the rest (except the first column) is the numeric data that I want to use to predict these classes.

    from sklearn import svm
    from sklearn import metrics
    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.model_selection import cross_val_score

    filename = "Features.csv"
    dataset = np.loadtxt(filename, delimiter=',', skiprows=1, usecols=range(1, 39))
    x = dataset[:, 0:36]
    y = dataset[:, 36]
    print("len(x): " + str(len(x)))
    print("len(y): " + str(len(x)))

    skf = StratifiedKFold(n_splits=10, shuffle=False, random_state=42)
    modelsvm = svm.SVC()
    expected = y
    print("len(expected): " + str(len(expected)))
    predictedsvm = cross_val_score(modelsvm, x, y, cv=skf)
    print("len(predictedsvm): " + str(len(predictedsvm)))
    svm_results = metrics.classification_report(expected, predictedsvm)
    print(svm_results)

And I am getting such an error:

    len(x): 2069
    len(y): 2069
    len(expected): 2069
    C:\Python\Python37\lib\site-packages\sklearn\model_selection\_split.py:297: FutureWarning: Setting a random_state has no effect since shuffle is False. This will raise an error in 0.24. You should leave random_state to its default (None), or set shuffle=True.
      FutureWarning
    len(predictedsvm): 10
    Traceback (most recent call last):
      File "C:/Users/MyComp/PycharmProjects/GG/AR.py", line 54, in <module>
        svm_results = metrics.classification_report(expected, predictedsvm)
      File "C:\Python\Python37\lib\site-packages\sklearn\utils\validation.py", line 73, in inner_f
        return f(**kwargs)
      File "C:\Python\Python37\lib\site-packages\sklearn\metrics\_classification.py", line 1929, in classification_report
        y_type, y_true, y_pred = _check_targets(y_true, y_pred)
      File "C:\Python\Python37\lib\site-packages\sklearn\metrics\_classification.py", line 81, in _check_targets
        check_consistent_length(y_true, y_pred)
      File "C:\Python\Python37\lib\site-packages\sklearn\utils\validation.py", line 257, in check_consistent_length
        " samples: %r" % [int(l) for l in lengths])
    ValueError: Found input variables with inconsistent numbers of samples: [2069, 10]

    Process finished with exit code 1

I don't understand how my data count in y goes down to 10 when I am trying to predict it using CV. Can anyone help me with this, please?
You are misunderstanding the output from cross_val_score. As per the documentation it returns "array of scores of the estimator for each run of the cross validation," not actual predictions. Because you have 10 folds, you get 10 values.classification_report expects the true values and the predicted values. To use this, you'll want to predict with a model. To do this, you'll need to fit the model on the data. If you're happy with the results from cross_val_score you can train that model on the data. Or, you can use GridSearchCV to do this all in one sweep.
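To get per-sample predictions that classification_report can consume, cross_val_predict is the drop-in replacement for cross_val_score here. A sketch, with the iris dataset standing in for the Features.csv data:

```python
from sklearn import svm, metrics
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_predict

X, y = load_iris(return_X_y=True)  # stand-in for the CSV features
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

# cross_val_predict returns one out-of-fold prediction per sample,
# so its length matches y and the report's length check passes.
predicted = cross_val_predict(svm.SVC(), X, y, cv=skf)
print(len(predicted) == len(y))
print(metrics.classification_report(y, predicted))
```

Note that every prediction comes from a fold in which that sample was held out, so the report still reflects out-of-sample performance.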
Apply median from a subset to entire column, Python/Pandas

First time posting here. I have a credit risk model data set that has 38K accounts. 25K accounts are training data; the other 13K are OOT (out-of-time validation). All 200 columns have the same definitions between training and OOT; just the data has two parts.

I need to impute missing data. 37 columns of the 200 are qualified for median imputation. Here is my code that works fine. (Due to company confidentiality, I use general variable names.)

    Interval37 = whole[['interval1', 'interval2', ..., 'interval37']].apply(pd.to_numeric, errors='coerce')
    Interval37 = Interval37.fillna(Interval37.median())

I must modify this because training is not supposed to see the OOT portion during training, even if it is just calculating the median. So I tried the code below:

    Traindata = whole.query('partx==1')  # partx==1 indicates the 25K training accounts
    Train37 = Traindata[['interval1', 'interval2', ..., 'interval37']].apply(pd.to_numeric, errors='coerce')
    trainmedian = Train37.median()
    whole = whole(trainmedian)  # Whole is the entire set of accounts

This code runs with no error; it just is not imputing — same data in, same data out. I read several posts that apply a subset median individually to each subset using groupby. My problem is the opposite: I need to spread the median from a subset to the entire data frame, or apply/transform the entire data using the median from the training data. Please help. Jia
Consider converting the trainmedian (i.e., a Series) to a data frame with the same dimensions, using the DataFrame() constructor on a list of Series:

    whole_median = pd.DataFrame([trainmedian for _ in range(whole.shape[0])])

To demonstrate with random 500-row data and training data sampled at 75% of rows:

    ### DATA BUILD
    np.random.seed(892020)

    whole = pd.DataFrame([np.random.uniform(1, 10, 37) for _ in range(500)],
                         columns=['interval'+str(i) for i in range(37)])

    training_data = whole.sample(frac=0.75)
    trainmedian = training_data.median()
    trainmedian
    # interval1     5.452105
    # interval2     5.497201
    # interval3     5.516642
    # ...
    # interval35    5.760942
    # interval36    5.319309
    # interval37    5.388019
    # dtype: float64

    whole_median = pd.DataFrame([trainmedian for _ in range(whole.shape[0])])
    whole_median
    #      interval1  interval2  interval3  ...  interval35  interval36  interval37
    # 0     5.452105   5.497201   5.516642  ...    5.760942    5.319309    5.388019
    # 1     5.452105   5.497201   5.516642  ...    5.760942    5.319309    5.388019
    # 2     5.452105   5.497201   5.516642  ...    5.760942    5.319309    5.388019
    # 3     5.452105   5.497201   5.516642  ...    5.760942    5.319309    5.388019
    # 4     5.452105   5.497201   5.516642  ...    5.760942    5.319309    5.388019
    # ..         ...        ...        ...  ...         ...         ...         ...
    # 495   5.452105   5.497201   5.516642  ...    5.760942    5.319309    5.388019
    # 496   5.452105   5.497201   5.516642  ...    5.760942    5.319309    5.388019
    # 497   5.452105   5.497201   5.516642  ...    5.760942    5.319309    5.388019
    # 498   5.452105   5.497201   5.516642  ...    5.760942    5.319309    5.388019
    # 499   5.452105   5.497201   5.516642  ...    5.760942    5.319309    5.388019
    #
    # [500 rows x 37 columns]
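A shorter variant avoids building the full-size median frame entirely: DataFrame.fillna accepts a Series and broadcasts it by column label. A sketch with made-up column names and a partx flag as in the question:

```python
import numpy as np
import pandas as pd

cols = ["interval1", "interval2"]  # stand-in for the 37 interval columns
whole = pd.DataFrame({
    "partx": [1, 1, 1, 0, 0],  # 1 = training rows, 0 = OOT rows
    "interval1": [1.0, np.nan, 3.0, np.nan, 5.0],
    "interval2": [10.0, 20.0, np.nan, 40.0, np.nan],
})

# Medians computed on the training subset only...
train_median = whole.loc[whole["partx"] == 1, cols].median()
# ...then applied to every row, training and OOT alike.
whole[cols] = whole[cols].fillna(train_median)
print(whole)
```

Assigning back with whole[cols] = ... is what makes the imputation stick; fillna on its own returns a new frame rather than modifying in place.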
amixer: invalid command

I am trying to change the volume of my Raspberry Pi using this small code snippet:

    import os

    def setVolume(vol, prefix):
        cmd = "amixer -q -M set PCM " + vol + "%"
        print(prefix + "Changing volume to " + vol + "%")
        print(prefix + str(os.system(cmd)))

I am using this function in two different Python scripts, but it works in only one of them (this function is just for testing; please ignore the prefix and such). The one where it fails gives the error message: amixer: Invalid command! (Python 2.7.13)
This should be very easy for you to narrow down, as the problem ultimately has little to do with Python. Your Python code is just constructing a command string that is then executed by the operating system. First of all, I'd suggest printing or logging the full command you are executing so that you know just what system call you're making.

Your problem could very well have to do with the current working directory that's in effect when you run your command, so I'd call os.system("pwd") prior to calling your actual cmd. This will show you what the current working directory is at the time your command is run. Here's the modified version of your code that I suggest you run to troubleshoot:

    def setVolume(vol, prefix):
        cmd = "amixer -q -M set PCM " + vol + "%"
        print(prefix + "Changing volume to " + vol + "%")
        os.system("pwd")
        print("Executing command: >" + cmd + "<")
        print(prefix + str(os.system(cmd)))

Putting '>' and '<' in there will make sure you see any whitespace in your command. Often, just doing this will show you what your problem is, as you'll notice a problem in the way you've constructed your command. In your case, the vol parameter would be the interesting factor.

Once you have the exact command you're passing to os.system(), try running that command at a shell prompt via copy/paste. Ideally, you can do this at the same shell prompt you were using to run your Python script. "cd" into the directory indicated by the "pwd" call before you try to run the command. This should isolate the problem away from Python. Hopefully you'll see matching pass/fail behavior, and you can troubleshoot at the level of the system command rather than in your code. Only when you fully understand how the system call works, and just what it has to look like, would you return to Python.

If that doesn't get you to the goal, I'd suggest using the subprocess module rather than os.system(), assuming that's available on your RasPi version of Python. I've heard of problems being solved in the past simply by switching away from os.system(), although I don't know the details.
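If Python 3 is available on the Pi, a subprocess-based sketch of the same call looks like this; passing the arguments as a list sidesteps the shell-quoting and whitespace problems discussed above (the helper split into volume_cmd is just for illustration):

```python
import subprocess

def volume_cmd(vol):
    # Each list element is one argv entry, so stray whitespace in
    # `vol` cannot split the command into extra arguments.
    return ["amixer", "-q", "-M", "set", "PCM", "%s%%" % vol]

def set_volume(vol):
    # capture_output requires Python 3.7+; on failure the stderr text
    # is printed instead of being lost the way os.system() loses it.
    result = subprocess.run(volume_cmd(vol), capture_output=True, text=True)
    if result.returncode != 0:
        print("amixer failed:", result.stderr.strip())
    return result.returncode
```

Capturing stderr this way also surfaces the actual "Invalid command" message alongside the exact argument list that produced it.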
How to extract link from href using BeautifulSoup

I am trying to extract the URL from the href, but I get an empty list:

    import requests
    from bs4 import BeautifulSoup

    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36'
    }
    r = requests.get('https://www.redfin.com/city/5357/WA/Edmonds')
    soup = BeautifulSoup(r.content, 'html.parser')
    tra = soup.find_all('div', class_='bottomV2')
    for links in tra:
        for link in links.find_all('a', href=True):
            comp = link['href']
            print(comp)
Just one alternative approach: you can use Selenium.

Example:

    from bs4 import BeautifulSoup
    from selenium import webdriver

    driver = webdriver.Chrome('YOUR PATH TO CHROMEDRIVER')
    driver.get('https://www.redfin.com/city/5357/WA/Edmonds')
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    tra = soup.find_all('div', class_='bottomV2')
    for links in tra:
        for link in links.find_all('a', href=True):
            comp = link['href']
            print(comp)
Example using geospatial indexing with pymongo

I have been looking at using MongoDB instead of a custom geospatial database; however, I am having difficulty understanding how pymongo works with spherical coordinates. Specifically, I am not sure the $maxDistance (and similar options) have any effect. For example, if I execute this code:

    db = pymongo.MongoClient().geo_example
    db.places.create_index([("location", pymongo.GEOSPHERE)])
    cities = [{"location": {'type': 'Point', 'coordinates': [57, 2]}, "name": "Aberdeen"},
              {"location": {'type': 'Point', 'coordinates': [52, 13]}, "name": "Berlin"},
              {"location": {'type': 'Point', 'coordinates': [44, 26]}, "name": "Bucharest"},
              {"location": {'type': 'Point', 'coordinates': [40, 14]}, "name": "Napoli"},
              {"location": {'type': 'Point', 'coordinates': [48, 2]}, "name": "Paris"},
              {"location": {'type': 'Point', 'coordinates': [35, -70]}, "name": "Tokyo"},
              {"location": {'type': 'Point', 'coordinates': [47, 8]}, "name": "Zurich"}]

    try:
        result = db.places.insert_many(cities)
        for doc in db.places.find(
                {"location": {'$nearSphere': [57, 2], "$maxDistance": 1}}).limit(3):
            pprint.pprint(doc)
    except BulkWriteError as bwe:
        print(bwe.details)

I would expect as the answer just Aberdeen; instead I still get the 3 closest cities, even though the maximum distance is larger than 1 meter. I guess I am doing something wrong. Any help, as well as better examples of pymongo usage (better than the documentation), would really help. Thanks
You are using the older form of the query, where the distance is specified in radians. If you change to the new format, maxDistance is specified in meters. Also, for reasons lost in the mists of time, geospatial data is stored in long/lat order, so all your coordinates need to be reversed.

I've fixed your code up to use the correct coordinate orientation and put in the new query format. I also put in a drop command for the collection (as this is example code). This confused me and it may be confusing you: multiple runs of the program will insert the same points over and over, so without the drop, each query will return as many results as the number of times you have run the program. Finally, I added a GEOSPHERE index, which the documentation says you need even though your program runs fine without it. I suspect that without it you will see a geometric decline in performance as the number of locations increases.

    import pymongo
    import pprint
    from pymongo.errors import BulkWriteError

    db = pymongo.MongoClient().geo_example
    db.places.drop()
    db.places.create_index([("location", pymongo.GEOSPHERE)])
    cities = [{"location": {'type': 'Point', 'coordinates': [2, 57]}, "name": "Aberdeen"},
              {"location": {'type': 'Point', 'coordinates': [13, 52]}, "name": "Berlin"},
              {"location": {'type': 'Point', 'coordinates': [26, 44]}, "name": "Bucharest"},
              {"location": {'type': 'Point', 'coordinates': [14, 40]}, "name": "Napoli"},
              {"location": {'type': 'Point', 'coordinates': [2, 48]}, "name": "Paris"},
              {"location": {'type': 'Point', 'coordinates': [-70, 35]}, "name": "Tokyo"},
              {"location": {'type': 'Point', 'coordinates': [8, 47]}, "name": "Zurich"}]

    try:
        result = db.places.insert_many(cities)
        db.places.create_index([("location", pymongo.GEOSPHERE)])
        for doc in db.places.find({"location": {
                "$nearSphere": {
                    "$geometry": {
                        "type": "Point",
                        "coordinates": [2, 57]
                    },
                    "$maxDistance": 1}}}):
            pprint.pprint(doc)
    except BulkWriteError as bwe:
        print(bwe.details)
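To sanity-check what a meter-based $maxDistance should return, the great-circle distance can be computed in plain Python, no MongoDB required. The haversine sketch below uses an earth radius close to the ~6378.1 km figure in MongoDB's documentation (the exact constant is an assumption here):

```python
import math

EARTH_RADIUS_M = 6_378_100  # approximate equatorial radius, in meters

def haversine_m(lon1, lat1, lon2, lat2):
    # Great-circle distance in meters between two (long, lat) points.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Aberdeen [2, 57] vs Paris [2, 48]: roughly 1,000 km apart, so a
# "$maxDistance" of 1 (one meter) correctly returns only Aberdeen.
print(haversine_m(2, 57, 2, 48))
```

Comparing these distances against the query results makes it obvious whether maxDistance is being interpreted in meters or radians.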
Can't install Scrapy

I would like to install the Scrapy package. I tried pip install Scrapy but it didn't work; a lot of errors are displayed, but I think this is the main one: Building wheel for Twisted (setup.py) ... error. I searched on this forum and found a suggestion: try to install it with the Anaconda Prompt. I use PyCharm, so I thought it would be useless, but I tried anyway. I did this: conda install -c anaconda twisted. No error: # All requested packages already installed.. The problem is that it is obviously not installed, since if I repeat the command pip install Scrapy, I always get the same errors. If someone has an idea, that would be awesome!
try this. Works for me in pycharm.pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org Scrapy
Stuck in loop help - Python

The second 'if' statement midway through this code uses an 'or' between two conditions, and this is causing the issue; I just don't know how to get around it. The code goes through a data file and turns on the given relay number at a specific time, and I need it to do this only once per given relay. If I use an 'and' between the conditions, it will only turn on the first relay that matches the current time, wait for the next hour, and then turn on the next given relay. Could someone suggest something to fix this issue? Thank you!

    def schedule():
        metadata, sched = dbx.files_download(path=RELAYSCHEDULE)
        if not sched.content:
            pass  # If file is empty then exit routine
        else:
            relaySchedule = str(sched.content)
            commaNum = relaySchedule.count(',')
            data1 = relaySchedule.split(',')
            for i in range(commaNum):
                data2 = data1[i].split('-')
                Time1 = data2[1]
                currentRN = data2[0]
                currentDT = datetime.datetime.now()
                currentHR = currentDT.hour
                global RN
                global T
                if str(currentHR) == str(Time1):
                    if T != currentHR or RN != currentRN:
                        relaynum = int(data2[0])
                        relaytime = int(data2[2])
                        T = currentHR
                        RN = currentRN
                        k = threading.Thread(target=SendToRelay(relaynum, relaytime)).start()
                else:
                    print("Pass")

Desired inputs:

    sched.content = '1-19-10,3-9-20,4-9-10,'
    T = ' '
    RN = ' '

T and RN are global variables; because the loop runs indefinitely, they are there to let the loop know whether the specific Time (T) and Relay Number (RN) have already been used.

Desired outputs: if the time is 9 AM, then T = 9, and RN should be whatever the given relay number is, so RN = 3, but I'm not sure this is the right thing to use.

Sorry if this is confusing. I basically need the program to read a set of scheduled times for specific relays to turn on, read the current time, and, if it matches a time in the schedule, check which relay is within that time and turn it on for however long. Once it has completed that, I need it to go over that same set of data again in case there is another relay within the same time that also needs to turn on. The issue is that if I don't use the T and RN variables to check whether a previous relay has been set, it will read the file and turn on the same relay over and over.
Try printing all the variables used, and check that everything is what you think it is. On top of that, whitespace characters sometimes cause problems with comparisons.
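Concretely, repr() exposes exactly the whitespace and type differences that plain print() hides; the variable names below just echo the question, and the values are made up for illustration:

```python
T = ' '          # the initial placeholder value from the question
currentHR = 9
currentRN = '3'

# repr() shows quotes and spaces, so ' ', '', and 9 are distinguishable,
# where print() would render them near-identically.
print(repr(T), repr(currentHR), repr(currentRN))

# A comparison that silently fails because of a trailing space:
print('9 ' == str(currentHR))
print('9 '.strip() == str(currentHR))
```

Stripping (or normalizing types) on both sides of a comparison is usually the fix once repr() reveals the mismatch.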
How do I get Pillow ImageOps.contain to work when inside VS Code?

I have VS Code on a Mac. Pillow is installed, and the version was verified as 8.3.2 via pip list in the terminal window of VS Code. I have confirmed via the Pillow docs that ImageOps.contain() is part of 8.3. My problem is that when I use the terminal, type python, and run the following, it works perfectly:

    from PIL import Image, ImageOps

    im = Image.open("images/Barcelona.jpg")
    print(im.format, im.size, im.mode)
    im = ImageOps.contain(im, (800, 800), method=3)
    im.show()

Preview pops right up and shows me the picture. When I put the exact code into VS Code or build a .py file with Nano, I get an error message, which is shown in this image:

I've verified the right version of Python, Pillow, and such. Any help or pointers would be greatly appreciated.
This turns out to be a problem with what I can only call nested environments, and/or a conflict between Anaconda and workspaces. I'm not really sure, but when I used the __version__ import to figure out (a) what VS Code thought and then (b) which environment pip was reporting on, I deactivated up one level, upgraded the (base) environment to the 8.3 version, and everything ran fine. The tip about importing the version variable to see precisely what the code is importing came from the question asked below and was invaluable.
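For anyone hitting the same thing, a quick check run from inside VS Code shows which interpreter (and hence which environment's Pillow) is actually being used; the commented Pillow line assumes Pillow is importable in that environment:

```python
import sys

print(sys.executable)  # the interpreter VS Code actually launched
print(sys.version)
# With Pillow importable in this environment, its version can be
# checked the same way:
# from PIL import __version__ as pillow_version; print(pillow_version)
```

If the path printed here differs from the one where pip reported Pillow 8.3.2, the two are different environments, which is exactly the nested-environment conflict described above.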
Why does my graph not plot the points generated by linspace? (animation)

When I remove linspace and plot points by typing them into a list by hand, they are plotted just fine. However, when I switch to linspace, the points on the graph come up blank. What am I missing here? Printing the linspace lists shows they are generating the values, but the values don't seem to make it onto the graph.

    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation
    import numpy as np
    %matplotlib qt

    fig = plt.figure(figsize=(6, 4))
    axes = fig.add_subplot(1, 1, 1)
    plt.title("Image That's Moving")

    P = np.linspace(1, 50, 100)
    T = np.linspace(1, 50, 100)
    Position = [P]
    Time = [T]
    p2 = [P]
    t2 = [T]
    x, y = [], []
    x2, y2 = [], []

    def animate(i):
        x.append(Time[i])
        y.append((Position[i]))
        x2.append(t2[i])
        y2.append((p2[i]))
        plt.xlim(0, 100)
        plt.ylim(0, 100)
        plt.plot(x, y, color="blue")
        plt.plot(x2, y2, color="red")

    anim = FuncAnimation(fig, animate, interval=300)
It seems like you are facing this problem because of Position = [P] and Time = [T]. Because numpy.linspace already returns an array, you don't need the additional []. Here is a working example, referenced from the matplotlib tutorial:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    def init():
        ax.set_xlim(0, 100)
        ax.set_ylim(0, 100)
        return ln,

    def update(i):
        xdata.append(T[i])
        ydata.append(P[i])
        ln.set_data(xdata, ydata)
        return ln,

    P = np.linspace(1, 50, 99)
    T = np.linspace(1, 50, 99)

    fig, ax = plt.subplots()
    xdata, ydata = [], []
    ln, = plt.plot([], [], 'ro')

    ani = FuncAnimation(fig, update, frames=np.arange(len(T)),
                        init_func=init, blit=True)
    plt.show()
After chrome update, Message: target frame detached I have this error :selenium.common.exceptions.WebDriverException: Message: target frame detached (Session info: chrome=102.0.5005.63)which is often displayed after running my program, since the update of chrome. I suppose it's due to the fact that the version of chrome and chromedriver don't match exactly. Indeed I have the version 102.0.5005.63 of chrome (up to date) and I have the version 102.0.5005.61 of chromedriver (its last version is 103.0.5060.24 but it's obviously not compatible with my chrome version and the version 102.0.5005.63 of chromedriver does not exist).How can I solve the problem? I saw that one of the solutions was to re-download an older version of chrome but if possible I would like to keep an updated version.
I recently faced the same issue; please check the accepted answer here: This version of ChromeDriver only supports Chrome version 99 Current browser version is 98.0.4758.102. In a few words, you can download the Chrome Dev or Beta channel so that your Chrome and ChromeDriver versions match.
Out of memory error while training Rasa/LaBSE

I want to train rasa/LaBSE from the LanguageModelFeaturizer. I have followed the steps in the docs and did not change the default training data. My config file looks like:

    # The config recipe.
    # https://rasa.com/docs/rasa/model-configuration/
    recipe: default.v1

    # Configuration for Rasa NLU.
    # https://rasa.com/docs/rasa/nlu/components/
    language: en

    pipeline:
    # # No configuration for the NLU pipeline was provided. The following default pipeline was used to train your model.
    # # If you'd like to customize it, uncomment and adjust the pipeline.
    # # See https://rasa.com/docs/rasa/tuning-your-model for more information.
      - name: WhitespaceTokenizer
    # - name: RegexFeaturizer
    # - name: LexicalSyntacticFeaturizer
      - name: LanguageModelFeaturizer
        # Name of the language model to use
        model_name: "bert"
        # Pre-Trained weights to be loaded
        model_weights: "rasa/LaBSE"
        cache_dir: null
      - name: CountVectorsFeaturizer
      - name: CountVectorsFeaturizer
        analyzer: char_wb
        min_ngram: 1
        max_ngram: 4
      - name: DIETClassifier
        epochs: 100
        constrain_similarities: true
        batch_size: 8
      - name: EntitySynonymMapper
      - name: ResponseSelector
        epochs: 100
        constrain_similarities: true
      - name: FallbackClassifier
        threshold: 0.3
        ambiguity_threshold: 0.1

After running rasa train I get:

    tensorflow.python.framework.errors_impl.ResourceExhaustedError: failed to allocate memory [Op:AddV2]

I am using a GTX 1660 Ti with 6 GB of memory. My system specifications are:

    Rasa
    ----------------------
    rasa                      3.0.8
    rasa-sdk                  3.0.5

    System
    ----------------------
    OS: Ubuntu 18.04.6 LTS x86_64
    Kernel: 5.4.0-113-generic
    CUDA Version: 11.4
    Driver Version: 470.57.02

    Tensorflow
    ----------------------
    tensorboard               2.8.0
    tensorboard-data-server   0.6.1
    tensorboard-plugin-wit    1.8.1
    tensorflow                2.6.1
    tensorflow-addons         0.14.0
    tensorflow-estimator      2.6.0
    tensorflow-hub            0.12.0
    tensorflow-probability    0.13.0
    tensorflow-text           2.6.0

Regular training works fine and I can run the model. I tried to reduce the batch_size, but the error persists.
Running the same code using google colab (Using 16GB GPU memory) works fine. The model uses around 6.5-7GB of memory.
Installing pyarrow: can't copy 'build/lib.macosx-11-arm64-3.9/pyarrow/include/arrow': doesn't exist or not a regular file

I am trying to install pyarrow with the following command*:

    OPENSSL_ROOT_DIR=/opt/homebrew/opt/openssl@1.1/ pip install "pyarrow==4.0.1" --no-use-pep517

However, it looks like the compilation fails, as I get the following message at the end:

    Moving built C-extension release/_compute.cpython-39-darwin.so to build path /private/var/folders/kw/dnbqlh8n5zb9b1529n95j91m0000gr/T/pip-install-9qbgfsx2/pyarrow_3accc74c8d1a4153894730bae7812439/build/lib.macosx-11-arm64-3.9/pyarrow/_compute.cpython-39-darwin.so
    Did not find release/_cuda.cpython-39-darwin.so
    Cython module _cuda failure permitted
    Did not find release/_flight.cpython-39-darwin.so
    Cython module _flight failure permitted
    Did not find release/_dataset.cpython-39-darwin.so
    Cython module _dataset failure permitted
    Did not find release/_parquet.cpython-39-darwin.so
    Cython module _parquet failure permitted
    Did not find release/_orc.cpython-39-darwin.so
    Cython module _orc failure permitted
    Did not find release/_plasma.cpython-39-darwin.so
    Cython module _plasma failure permitted
    Did not find release/_s3fs.cpython-39-darwin.so
    Cython module _s3fs failure permitted
    Did not find release/_hdfs.cpython-39-darwin.so
    Cython module _hdfs failure permitted
    Did not find release/gandiva.cpython-39-darwin.so
    Cython module gandiva failure permitted
    running install_lib
    copying build/lib.macosx-11-arm64-3.9/pyarrow/_generated_version.py -> [*my python path*]/lib/python3.9/site-packages/pyarrow
    error: can't copy 'build/lib.macosx-11-arm64-3.9/pyarrow/include/arrow': doesn't exist or not a regular file

What is the problem here? Why does the file not exist, and how do I fix it?

*Explanation of the command:

- OPENSSL_ROOT_DIR is required, otherwise CMake complains about not finding OpenSSL
- 4.0.1 is the most recent version, and I get the same error for older versions as well
- no-use-pep517 because apparently the build also involves building a wheel for numpy, which doesn't work with PEP 517 on M1 Macs
UPDATEYou can use the nightly install of pyarrow which now supports M1pip install --extra-index-url https://pypi.fury.io/arrow-nightlies/ --prefer-binary --pre pyarrowPRE-UPDATEThe bad news is I think this is the fault of pyarrow rather than yourself.The good news is that I think it is about to be fixed!If you download the wheel mentioned in this comment and then do pip install ~/Downloads/pyarrow-5.0.0.dev471-cp39-cp39-macosx_11_0_arm64.whl I think it will install.Hopefully this will make its way into a proper release very soon.
Return sources in Bokeh JavaScript callback

Imagine this example from the Bokeh docs:

    # modified to be used in a notebook
    from bokeh.layouts import column
    from bokeh.models import CustomJS, ColumnDataSource, Slider
    from bokeh.plotting import Figure, output_notebook, show

    x = [x*0.005 for x in range(0, 200)]
    y = x
    source = ColumnDataSource(data=dict(x=x, y=y))

    plot = Figure(plot_width=400, plot_height=400)
    plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)

    callback = CustomJS(args=dict(source=source), code="""
        var data = source.data;
        var f = cb_obj.value
        var x = data['x']
        var y = data['y']
        for (var i = 0; i < x.length; i++) {
            y[i] = Math.pow(x[i], f)
        }
        source.change.emit();
    """)

    slider = Slider(start=0.1, end=4, value=1, step=.1, title="power")
    slider.js_on_change('value', callback)

    layout = column(slider, plot)
    show(layout)

I was wondering if there's a way to use the (most recent) value of y, after changing the widget, in other computations in the next cells of a notebook. What occurred to me was returning y in the callback, but I'm open to any other ideas.
If you want full synchronization between the Python runtime and the JS output, you would have to embed a Bokeh server application in the notebook. (The Bokeh server is specifically the thing whose job is to keeps things in sync.) Otherwise all output in a Jupyter noteboook is uni-directional only, Python -> JavaScript, with no updates coming back to the Python kernel. You can see an example notebook with a Bokeh server app embedded here:https://github.com/bokeh/bokeh/blob/master/examples/howto/server_embed/notebook_embed.ipynb
Python Q-learning implementation not working I implemented a small simulation based on the sugarscape model in python. I have three classes in the program and when I try run the Q-learning algorithm I wrote the model converges into a single state and never changes where as when I run the model without the q-learning algorithm then the states are always different. sugarscape_env.pyimport gymfrom gym import error, spaces, utilsfrom gym.utils import seedingimport loggingimport numpyimport sysimport randomfrom six import StringIOfrom agents import Agentfrom IPython.display import Markdown, displayfrom pandas import *numpy.set_printoptions(threshold=sys.maxsize)logger = logging.getLogger(__name__)ACTIONS = ["N", "E", "S", "W", "EAT"]list_of_agents = []list_of_agents_shuffled = {}number_of_agents_in_list = 0size_of_environment = 0agents_dead = 0initial_number_of_agents = 0P = {state: {action: [] for action in range(5)} for state in range(2500)}# 50 * 50 = 2500 positions on the map any agent can be in, then 5 actions that can occur so 2500 * 5 = 12,500 states/actionsstate = Nonenew_state = Nonenew_row = Nonenew_col = Nonereward = Nonedone = Noneaction_performed = Nonerandom.seed(9001)class SugarscapeEnv(gym.Env): metadata = {'render.modes': ['human']} def __init__(self): super(SugarscapeEnv, self).__init__() self.action_space = spaces.Discrete(5) #Number of applicable actions self.observation_space = spaces.Discrete(50 * 50) # state space on 50 by 50 grid self.current_step = 0 def step(self, action): global reward, done, state """ Parameters ---------- action : Returns ------- ob, reward, episode_over, info : tuple ob (object) : an environment-specific object representing your observation of the environment. reward (float) : amount of reward achieved by the previous action. The scale varies between environments, but the goal is always to increase your total reward. episode_over (bool) : whether it's time to reset the environment again. 
Most (but not all) tasks are divided up into well-defined episodes, and done being True indicates the episode has terminated. (For example, perhaps the pole tipped too far, or you lost your last life.) info (dict) : diagnostic information useful for debugging. It can sometimes be useful for learning (for example, it might contain the raw probabilities behind the environment's last state change). However, official evaluations of your agent are not allowed to use this for learning. """ self._take_action(action) # Perform one action (N, E, S or W) #self._agent_s_wealth() # Return agents sugar wealth and information self._regeneration() # The agents_die method doesn't work properly due to list indexing, need to work on that. #self._agents_die() # Have any agents died? If so replace the dead ones with new ones. self.current_step += 1 #self.status = self._get_status() # Are all agents still alive or have they all died? #episode_over = self.status == 'ALL AGENTS DEAD' # Have all the agents died? return state, reward, done, {} # Return the ob, reward, episode_over and {} def _regeneration(self): global size_of_environment random_sugar = random.randrange(0, 3) """ 1. Iterate over all 0 sugar cells of the environment 2. change the 0 to a random number between 0 - 5 (so at least some sugar is created) """ for x in range(size_of_environment): for y in range(size_of_environment): if(self.environment[x, y] == 0): self.environment[x, y] = random_sugar def _get_P(self): global P return P def _take_action(self, action): """ One action is performed if action is N then agent will consider moving North if the sugar in the north cell (distance measured by vision of agent) is greater than or equal all other moves W, E or S. If moving North is not lucractive enough then agent will randomly move to the next highest paying cell. 
""" global list_of_agents, ACTIONS, list_of_agents_shuffled, number_of_agents_in_list, size_of_environment, P, state, new_row, new_col, reward, done, action_performed, new_state agents_iteration = 0 #while (number_of_agents != 10): #CHANGE TO 250 for x in range(size_of_environment): for y in range(size_of_environment): #while number_of_agents in range(10): # FOR EACH CELL, CHECK IF AN AGENT OUT OF THE 250 IS STANDING IN THAT CELL. if agents_iteration < number_of_agents_in_list: if(self.environment[x, y] == "\033[1mX\033[0m" and list_of_agents_shuffled[agents_iteration].get_ID() == agents_iteration): #print(f"agend ID: {list_of_agents_shuffled[agents_iteration].get_ID()} and iteration {agents_iteration}") #current_cell_sugar = self.environment[x, y] #DEFAULTS state = self.encode(x, y) new_row = x new_col = y self._agents_die() reward = self._get_reward() self.status = self._get_status() done = self.status == 'ALL AGENTS DEAD' # Once the agent has been identified in the environment we set the applicable moves and vision variables vision_of_agent = list_of_agents_shuffled[agents_iteration].get_vision() move_south = self.environment[(x - vision_of_agent) % size_of_environment, y] move_north = self.environment[(x + vision_of_agent) % size_of_environment, y] move_east = self.environment[x, (y + vision_of_agent) % size_of_environment] move_west = self.environment[x, (y - vision_of_agent) % size_of_environment] # If moving south, north, east or west means coming into contact with another agent # Set that locations sugar to 0 if(isinstance(self.environment[(x - vision_of_agent) % size_of_environment, y], str)): move_south = int(0) if(isinstance(self.environment[(x + vision_of_agent) % size_of_environment, y], str)): move_north = int(0) if(isinstance(self.environment[x, (y + vision_of_agent) % size_of_environment], str)): move_east = int(0) if(isinstance(self.environment[x, (y - vision_of_agent) % size_of_environment], str)): move_west = int(0) #print(move_north, move_east, 
move_south, move_west) # MOVE UP (N) if(action == ACTIONS[0]): if((move_north >= move_south) and (move_north >= move_east) and (move_north >= move_west)): # AGENT COLLECTS SUGAR. list_of_agents_shuffled[agents_iteration].collect_sugar(move_north) # CALCULATE AGENT SUGAR HEALTH list_of_agents_shuffled[agents_iteration].calculate_s_wealth() # SUGAR AT LOCATION NOW SET TO 0 self.environment[(x + vision_of_agent) % size_of_environment, y] = 0 self.environment_duplicate[(x + vision_of_agent) % size_of_environment, y] = 0 #MOVE AGENT TO NEW LOCATION. self.environment[(x + vision_of_agent) % size_of_environment, y] = list_of_agents_shuffled[agents_iteration].get_visual() self.environment_duplicate[(x + vision_of_agent) % size_of_environment, y] = list_of_agents_shuffled[agents_iteration] # SET PREVIOUS POSITION CELL TO 0 sugar self.environment[x, y] = 0 self.environment_duplicate[x, y] = 0 # ADD ACTIONS TO ENV ACT action_performed = 0 new_row = (x + vision_of_agent) % size_of_environment new_col = y else: self._random_move(agents_iteration, move_south, move_east, move_north, move_west, x, y, vision_of_agent) # MOVE DOWN (S) if(action == ACTIONS[2]): if((move_south >= move_north) and (move_south >= move_east) and (move_south >= move_west)): # AGENT COLLECTS SUGAR. list_of_agents_shuffled[agents_iteration].collect_sugar(move_south) # CALCULATE AGENT SUGAR HEALTH list_of_agents_shuffled[agents_iteration].calculate_s_wealth() # SUGAR AT LOCATION NOW SET TO 0 self.environment[(x - vision_of_agent) % size_of_environment, y] = 0 self.environment_duplicate[(x - vision_of_agent) % size_of_environment, y] = 0 #MOVE AGENT TO NEW LOCATION. 
self.environment[(x - vision_of_agent) % size_of_environment, y] = list_of_agents_shuffled[agents_iteration].get_visual() self.environment_duplicate[(x - vision_of_agent) % size_of_environment, y] = list_of_agents_shuffled[agents_iteration] # SET PREVIOUS POSITION CELL TO 0 sugar self.environment[x, y] = 0 self.environment_duplicate[x, y] = 0 # ADD ACTIONS TO ENV ACT action_performed = 2 new_row = (x - vision_of_agent) % size_of_environment new_col = y else: self._random_move(agents_iteration, move_south, move_east, move_north, move_west, x, y, vision_of_agent) # MOVE LEFT (W) if(action == ACTIONS[3]): if((move_west >= move_south) and (move_west >= move_east) and (move_west >= move_north)): # AGENT COLLECTS SUGAR. list_of_agents_shuffled[agents_iteration].collect_sugar(move_west) # CALCULATE AGENT SUGAR HEALTH list_of_agents_shuffled[agents_iteration].calculate_s_wealth() # SUGAR AT LOCATION NOW SET TO 0 self.environment[x, (y - vision_of_agent) % size_of_environment] = 0 self.environment_duplicate[x, (y - vision_of_agent) % size_of_environment] = 0 #MOVE AGENT TO NEW LOCATION. self.environment[x, (y - vision_of_agent) % size_of_environment] = list_of_agents_shuffled[agents_iteration].get_visual() self.environment_duplicate[x, (y - vision_of_agent) % size_of_environment] = list_of_agents_shuffled[agents_iteration] # SET PREVIOUS POSITION CELL TO 0 sugar self.environment[x, y] = 0 self.environment_duplicate[x, y] = 0 # ADD ACTIONS TO ENV ACT action_performed = 3 new_row = x new_col = (y - vision_of_agent) % size_of_environment else: self._random_move(agents_iteration, move_south, move_east, move_north, move_west, x, y, vision_of_agent) # MOVE RIGHT (E) if(action == ACTIONS[1]): if((move_east >= move_south) or (move_east >= move_west) or (move_east >= move_north)): # AGENT COLLECTS SUGAR. 
list_of_agents_shuffled[agents_iteration].collect_sugar(move_east) # CALCULATE AGENT SUGAR HEALTH list_of_agents_shuffled[agents_iteration].calculate_s_wealth() # SUGAR AT LOCATION NOW SET TO 0 self.environment[x, (y + vision_of_agent) % size_of_environment] = 0 self.environment_duplicate[x, (y + vision_of_agent) % size_of_environment] = 0 #MOVE AGENT TO NEW LOCATION. self.environment[x, (y + vision_of_agent) % size_of_environment] = list_of_agents_shuffled[agents_iteration].get_visual() self.environment_duplicate[x, (y + vision_of_agent) % size_of_environment] = list_of_agents_shuffled[agents_iteration] # SET PREVIOUS POSITION CELL TO 0 sugar self.environment[x, y] = 0 self.environment_duplicate[x, y] = 0 # ADD ACTIONS TO ENV ACT action_performed = 1 new_row = x new_col = (y + vision_of_agent) % size_of_environment else: self._random_move(agents_iteration, move_south, move_east, move_north, move_west, x, y, vision_of_agent) new_state = self.encode(new_row, new_col) P[state][action_performed].append( (1.0, new_state, reward, done)) agents_iteration = agents_iteration + 1 # state = env.get_state() def get_state(self): global state return state def _random_move(self, agents_iteration, move_south, move_east, move_north, move_west, x, y, vision_of_agent): global list_of_agents, ACTIONS, list_of_agents_shuffled, size_of_environment, P, state, new_row, new_col, reward, done, action_performed, new_state random_move = random.randrange(0, 3) if random_move == 0: list_of_agents_shuffled[agents_iteration].collect_sugar(move_north) list_of_agents_shuffled[agents_iteration].calculate_s_wealth() self.environment[(x + vision_of_agent) % size_of_environment, y] = 0 self.environment_duplicate[(x + vision_of_agent) % size_of_environment, y] = 0 self.environment[(x + vision_of_agent) % size_of_environment, y] = list_of_agents_shuffled[agents_iteration].get_visual() self.environment_duplicate[(x + vision_of_agent) % size_of_environment, y] = list_of_agents_shuffled[agents_iteration] 
self.environment[x, y] = 0 self.environment_duplicate[x, y] = 0 action_performed = 0 new_row = (x + vision_of_agent) % size_of_environment new_col = y elif random_move == 1: list_of_agents_shuffled[agents_iteration].collect_sugar(move_east) list_of_agents_shuffled[agents_iteration].calculate_s_wealth() self.environment[x, (y + vision_of_agent) % size_of_environment] = 0 self.environment_duplicate[x, (y + vision_of_agent) % size_of_environment] = 0 self.environment[x, (y + vision_of_agent) % size_of_environment] = list_of_agents_shuffled[agents_iteration].get_visual() self.environment_duplicate[x, (y + vision_of_agent) % size_of_environment] = list_of_agents_shuffled[agents_iteration] self.environment[x, y] = 0 self.environment_duplicate[x, y] = 0 action_performed = 1 new_row = x new_col = (y + vision_of_agent) % size_of_environment elif random_move == 2: list_of_agents_shuffled[agents_iteration].collect_sugar(move_south) list_of_agents_shuffled[agents_iteration].calculate_s_wealth() self.environment[(x - vision_of_agent) % size_of_environment, y] = 0 self.environment_duplicate[(x - vision_of_agent) % size_of_environment, y] = 0 self.environment[(x - vision_of_agent) % size_of_environment, y] = list_of_agents_shuffled[agents_iteration].get_visual() self.environment_duplicate[(x - vision_of_agent) % size_of_environment, y] = list_of_agents_shuffled[agents_iteration] self.environment[x, y] = 0 self.environment_duplicate[x, y] = 0 action_performed = 2 new_row = (x - vision_of_agent) % size_of_environment new_col = y else: list_of_agents_shuffled[agents_iteration].collect_sugar(move_west) list_of_agents_shuffled[agents_iteration].calculate_s_wealth() self.environment[x, (y - vision_of_agent) % size_of_environment] = 0 self.environment_duplicate[x, (y - vision_of_agent) % size_of_environment] = 0 self.environment[x, (y - vision_of_agent) % size_of_environment] = list_of_agents_shuffled[agents_iteration].get_visual() self.environment_duplicate[x, (y - vision_of_agent) % 
size_of_environment] = list_of_agents_shuffled[agents_iteration] self.environment[x, y] = 0 self.environment_duplicate[x, y] = 0 action_performed = 3 new_row = x new_col = (y - vision_of_agent) % size_of_environment new_state = self.encode(new_row, new_col) P[state][action_performed].append( (1.0, new_state, reward, done)) # 50 * 50 def encode(self, agent_row, agent_column): i = agent_row i *= 50 i = agent_column i *= 50 return i def decode(self, i): out = [] out.append(i % 50) i = i // 50 out.append(i % 50) i = i // 50 out.append(i) assert 0 <= i < 50 return reversed(out) def _get_reward(self): """ If all agents have positive s_wealth then reward 1 else 0 therefore, the Q-learning algorithm will try learn how each agent can move to have positive s_wealth each iteration. """ global agents_dead, number_of_agents_in_list if (agents_dead == 0): return 10 elif(agents_dead < (number_of_agents_in_list / 2)): return 5 else: return -1 def reset(self, number_of_agents_in_list_local, size_of_environment_local): global number_of_agents_in_list, list_of_agents, list_of_agents_shuffled, size_of_environment, observation_space_calculated, initial_number_of_agents number_of_agents_in_list = number_of_agents_in_list_local size_of_environment = size_of_environment_local initial_number_of_agents = number_of_agents_in_list_local observation_space_calculated = size_of_environment_local number_of_agents = 0 # Reset the state of the environment to an initial state self.growth_rate = 1 self.environment = numpy.empty((size_of_environment,size_of_environment), dtype=numpy.object) self.environment_duplicate = numpy.empty((size_of_environment, size_of_environment), dtype=numpy.object) # Creating 250 agent objects and putting them into the list_of_agents array. for i in range(number_of_agents_in_list): #CHANGE TO 250 list_of_agents.append(Agent(i)) # Looping though the environment and adding random values between 0 and 4 # This will be sugar levels. 
for i in range(size_of_environment): for j in range(size_of_environment): self.environment[i, j] = random.randrange(0, 4) # Looping 250 times over the environment and randomly placing agents on 0 sugar cells. while(number_of_agents != number_of_agents_in_list): #CHANGE TO 250 x = random.randrange(size_of_environment) y = random.randrange(size_of_environment) if(self.environment[x, y] == 0): self.environment[x, y] = list_of_agents[number_of_agents].get_visual() self.environment_duplicate[x, y] = list_of_agents[number_of_agents] # Added the agent objects have been placed down randomly onto the environment from first to last. list_of_agents_shuffled[number_of_agents] = list_of_agents[number_of_agents] number_of_agents = number_of_agents + 1 def _get_status(self): global size_of_environment """ count the environment cells. If there are no X's on the environment then count these cells, if the total number of cells in the environment is the max size of the cells then all agents have died, else some agents are still alive. """ counter = 0 for i in range(size_of_environment): for j in range(size_of_environment): if(self.environment[i, j] != "\033[1mX\033[0m"): counter = counter + 1 if(counter == (size_of_environment * size_of_environment)): return 'ALL AGENTS DEAD' else: return 'SOME AGENTS STILL ALIVE' def render(self, mode='human', close=False): """ Prints the state of the environment 2D grid """ return('\n'.join([''.join(['{:1}'.format(item) for item in row]) for row in self.environment])) def _agent_s_wealth(self): """ Returns the agents information each iteration of the simulation. 
ID, SUGAR WEALTH and AGE """ for i in range(number_of_agents_in_list): print("Agent %s is of age %s and has sugar wealth %s" % (list_of_agents_shuffled[i].get_ID(),list_of_agents_shuffled[i].get_age(), list_of_agents_shuffled[i].get_s_wealth())) def _agents_die(self): """ total_simulation_runs increments by 1 each iteration of the simulation when the total_simulation_runs == agents.age then agent dies and a new agent appears in a random location on the environment. number_of_agents_in_list: the number of agents created in the environment originally. agent_to_die = the agent whose age is == the frame number agent_dead = boolean if agent has died. """ agent_to_die = None agent_dead = False global number_of_agents_in_list, size_of_environment, agents_dead # Remove the agents from the dictionary for i in range(number_of_agents_in_list): if (list_of_agents_shuffled[i].get_age() == self.current_step): """Remove the agent from the list of agents""" agent_to_die = list_of_agents_shuffled[i].get_ID() del list_of_agents_shuffled[i] key_value_of_agent_dead_in_dictionary = i # An agent is being deleted from the environment. agent_dead = True number_of_agents_in_list = number_of_agents_in_list - 1 if(agent_dead == True): agents_dead += 1 # Remove the agent from the list. for i in range(number_of_agents_in_list): if agent_to_die == list_of_agents[i].get_ID(): del list_of_agents[i] # Create a new agent and add it to the list_of_agents list. list_of_agents.append(Agent(key_value_of_agent_dead_in_dictionary)) # Add new agent to dictionary. list_of_agents_shuffled[key_value_of_agent_dead_in_dictionary] = list_of_agents[len(list_of_agents) - 1] #print(f"AGENT AGE ADDED TO DICTIONARY: {list_of_agents_shuffled[key_value_of_agent_dead_in_dictionary].get_age()}") # Replace the agent in the Environment with the new agent. 
for x in range(size_of_environment): for y in range(size_of_environment): if(self.environment[x, y] == "\033[1mX\033[0m" and self.environment_duplicate[x, y].get_ID() == agent_to_die): # Add new agent to environment where old agent died. self.environment[x, y] = list_of_agents[number_of_agents_in_list].get_visual() self.environment_duplicate[x, y] = list_of_agents[number_of_agents_in_list] number_of_agents_in_list += 1agents.pyimport randomfrom IPython.display import Markdown, displayrandom.seed(9001)class Agent: def __init__(self, ID): self.vision = random.randrange(1, 6) self.metabolic_rate = random.randrange(1, 4) self.age = random.randrange(500) self.s_wealth = random.randrange(5, 25) self.sugar_collected = 0 self.ID = ID self.visual = "\033[1mX\033[0m" def get_vision(self): return self.vision def get_visual(self): return self.visual def get_metabolic_rate(self): return self.metabolic_rate def get_age(self): return self.age def get_s_wealth(self): return self.s_wealth def calculate_s_wealth(self): self.s_wealth = self.sugar_collected - self.metabolic_rate def collect_sugar(self, environment_cell_sugar): self.sugar_collected = self.sugar_collected + environment_cell_sugar def get_ID(self): return self.IDand finally main.pyfrom sugarscape_env import SugarscapeEnvimport randomfrom IPython.display import clear_outputimport numpy as npalpha = 0.1gamma = 0.6epsilon = 0.1all_epochs = []all_penalties = []ACTIONS = ['N', 'E', 'S', 'W']"""Example scenario run of model"""x = SugarscapeEnv()#x = SugarscapeEnv()#x.reset(10, 50) # 50 by 50 grid and 10 agents.q_table = np.zeros([x.observation_space.n, x.action_space.n])# For plotting metricsall_epochs = []all_penalties = []for j in range(1, 100001): x.reset(10, 50) # 50 by 50 grid and 10 agents. 
state = x.get_state() print("OLD STATE: ", state) epochs, penalties, reward, = 0, 0, 0 done = False for i in range(100): if random.uniform(0, 1) < epsilon: action = random.randrange(4) else: action = np.argmax(q_table[state]) next_state, reward, done, info = x.step(ACTIONS[action]) old_value = q_table[state, action] next_max = np.max(q_table[next_state]) new_value = (1 - alpha) * old_value + alpha * (reward + gamma * next_max) q_table[state, action] = new_value if reward == -1: penalties += 1 state = next_state print("NEW STATE: ", state) epochs += 1print("Training finished.\n")If you implement the above and run main.py the model will run, however, the following output happens overtime:NEW STATE: 850NEW STATE: 1800NEW STATE: 300NEW STATE: 2300OLD STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300NEW STATE: 2300The states change but at some point they stop changing, and I can't seem to figure out why.
More questions than answers. I don't know where the problem is exactly, but here is a list of unclear things in the code which, if corrected, may get you to the desired result:

- Is it intentional that _regeneration seeds the entire field with the same amount of sugar? You randomly pick a value once and assign it to every empty tile, which looks strange, as all tiles now have the same attractiveness to agents.
- All agents receive the same action, e.g. move North, rather than each agent making a decision for itself.
- It's strange that an action can be overwritten by other logic: "don't execute the action as it's not the best one, so act randomly instead".
- state is just the agent's column multiplied by 50; what is that supposed to mean? It does not look connected to the sugar distribution at all.
- I do not see q_table being updated for the newly generated field; the logic seems to be: with 10% chance move randomly, with 90% chance check whether new_state was encountered before and choose what was the best action then. Multiple issues here: the state does not represent the field and sugar, and the agents' new positions are not taken into account. Actually, I'm struggling to understand how this thing would work at all.

PS: is it inspired by Primer (https://www.youtube.com/channel/UCKzJFdi57J53Vr_BkTfN3uQ) videos?
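To make the state point concrete: the encode method in the question does i = agent_row; i *= 50; i = agent_column; i *= 50, so the row is overwritten and every state collapses to column * 50. A minimal sketch of a fixed encoder for the 50x50 grid (my own illustration, not tested against the rest of the simulation):

```python
GRID = 50  # grid size from the question

def encode(agent_row, agent_col, width=GRID):
    # one distinct state per (row, col) cell: 0 .. width*width - 1
    return agent_row * width + agent_col

def decode(state, width=GRID):
    # inverse of encode
    return state // width, state % width

print(encode(3, 7))   # 157
print(decode(157))    # (3, 7)
```

With this, q_table's 2500 rows actually correspond to distinct cells, rather than only 50 of them ever being reachable.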
Implementing hierarchy for Enum members

I would like to establish a hierarchy for the members of my Enum. My (simplified) enum aims at representing different types of food. Of course, everyone knows a burger is "superior" to a pizza and my enum needs to convey this idea:

from functools import total_ordering
from enum import IntEnum, unique

@unique
@total_ordering
class FoodType(IntEnum):
    PIZZA = 100
    COOKIE = 200
    STEAK = 300
    BURGER = 400

    def __lt__(self, other):
        if self.__class__ is other.__class__:
            return self.FOOD_HIERARCHY.index(self) < self.FOOD_HIERARCHY.index(other)
        return NotImplemented

    def __gt__(self, other):
        if self.__class__ is other.__class__:
            return self.FOOD_HIERARCHY.index(self) > self.FOOD_HIERARCHY.index(other)
        return NotImplemented

    def __eq__(self, other):
        if self.__class__ is other.__class__:
            return self.FOOD_HIERARCHY.index(self) == self.FOOD_HIERARCHY.index(other)
        return NotImplemented

# Order is important here; smallest entity first
FoodType.FOOD_HIERARCHY = [
    FoodType.COOKIE,
    FoodType.STEAK,
    FoodType.PIZZA,
    FoodType.BURGER,
]

Here my food types are arbitrary integers. They need to be integers for reasons outside of the scope of this question. I also can't use the integer values for comparison, nor the order of definition of the food types.
That is why I create the hierarchy of FoodType outside the enums, and make it an attribute of the Enum after the definition.I would like to use the positions of the food types (aka indexes) to implement the comparison methods.However when I run a simple comparison on two of the FoodType mentioned above, I get a recursion error:In [2]: from test import FoodTypeIn [3]: FoodType.PIZZA < FoodType.BURGER---------------------------------------------------------------------------RecursionError Traceback (most recent call last)<ipython-input-3-1880a19bb0cd> in <module>----> 1 FoodType.PIZZA < FoodType.BURGER~/projects/test.py in __lt__(self, other) 13 def __lt__(self, other): 14 if self.__class__ is other.__class__:---> 15 return self.FOOD_HIERARCHY.index(self) < self.FOOD_HIERARCHY.index(other) 16 return NotImplemented 17~/projects//test.py in __eq__(self, other) 23 def __eq__(self, other): 24 if self.__class__ is other.__class__:---> 25 return self.FOOD_HIERARCHY.index(self) == self.FOOD_HIERARCHY.index(other) 26 return NotImplemented 27... last 1 frames repeated, from the frame below ...~/projects/test.py in __eq__(self, other) 23 def __eq__(self, other): 24 if self.__class__ is other.__class__:---> 25 return self.FOOD_HIERARCHY.index(self) == self.FOOD_HIERARCHY.index(other) 26 return NotImplemented 27RecursionError: maximum recursion depth exceeded while calling a Python objectI can't figure out why I get a recursion error. If I use the enum values to build the hierarchy and to look up the indexes, I can make this code work, but I would like to avoid that if possible.Any idea why I get the recursion error and how I could make this code more elegant?EDIT: as people mentioned in the comments, I do override __eq__, __lt__ and __gt__. I wouldn't have done it normally, but in my real life example I have two different hierarchies and some enum members can be in the two hierarchies. So I need to first check the 2 enum members I'm comparing are in the same hierarchy. 
That said, I can probably use __super()__. Thanks for the observation.EDIT 2:Base on @Ethan Furman's answer, here is what the final code looks like:from enum import IntEnum, uniquedef hierarchy(hierarchy_name, member_names): def decorate(enum_cls): for name in enum_cls.__members__: if not hasattr(enum_cls[name], "ordering"): enum_cls[name].ordering = {} for i, name in enumerate(member_names.split()): # FIXME, check if name in __members__ # FIXME, shouldn't exist yet, check! enum_cls[name].ordering[hierarchy_name] = i return enum_cls return decorate@hierarchy("food_hierarchy", "COOKIE STEAK PIZZA BURGER")@uniqueclass FoodType(IntEnum): PIZZA = 100 COOKIE = 200 STEAK = 300 BURGER = 400 def __lt__(self, other) -> bool: if self.__class__ is other.__class__: try: hierarchy = (self.ordering.keys() & other.ordering.keys()).pop() except KeyError: raise ValueError("uncomparable, hierachies don't overlap") return self.ordering[hierarchy] < other.ordering[hierarchy] return NotImplemented def __eq__(self, other) -> bool: if self.__class__ is other.__class__: return int(self) == int(other) return NotImplemented
You get a recursion error because in order to determine the index, the list elements need to be compared for equality, which in turn will invoke __eq__. Alternatively you could use a mapping from the enum members to some ordering, e.g.:

FoodType.FOOD_HIERARCHIES = [
    {FoodType.COOKIE: 1, FoodType.PIZZA: 2, FoodType.BURGER: 3},
    {FoodType.STEAK: 1, FoodType.BURGER: 2},
]

This requires making the enum hashable:

def __hash__(self):
    return hash(self._name_)

This works because the dictionary lookup checks for object identity before considering __eq__.

Since total_ordering won't replace the methods inherited from the base class, you'd need to override all comparison methods (or inherit from Enum instead of IntEnum):

from enum import IntEnum, unique
import operator

@unique
class FoodType(IntEnum):
    PIZZA = 100
    COOKIE = 200
    STEAK = 300
    BURGER = 400

    def __hash__(self):
        return hash(self._name_)

    def __lt__(self, other):
        return self._compare(other, operator.lt)

    def __le__(self, other):
        return self._compare(other, operator.le)

    def __gt__(self, other):
        return self._compare(other, operator.gt)

    def __ge__(self, other):
        return self._compare(other, operator.ge)

    def __eq__(self, other):
        return self._compare(other, operator.eq)

    def __ne__(self, other):
        return self._compare(other, operator.ne)

    def _compare(self, other, op):
        if self.__class__ is other.__class__:
            hierarchy = next(h for h in self.FOOD_HIERARCHIES if self in h)
            try:
                return op(hierarchy[self], hierarchy[other])
            except KeyError:
                return False  # or: return NotImplemented
        return NotImplemented

FoodType.FOOD_HIERARCHIES = [
    {FoodType.COOKIE: 1, FoodType.PIZZA: 2, FoodType.BURGER: 3},
    {FoodType.STEAK: 1, FoodType.BURGER: 2},
]

print(FoodType.COOKIE < FoodType.BURGER)  # True
print(FoodType.STEAK > FoodType.BURGER)   # False
print(FoodType.STEAK < FoodType.PIZZA)    # False
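The recursion mechanism is easy to see in isolation: list.index compares each element with ==, short-circuiting on object identity, so an __eq__ that itself calls index loops on the elements that come before the match. A small stand-alone illustration (nothing Enum-specific, my own example):

```python
calls = []

class Item:
    def __eq__(self, other):
        # every non-identical comparison lands here; an __eq__ that
        # called list.index itself would recurse just like FoodType
        calls.append(True)
        return self is other
    __hash__ = object.__hash__  # defining __eq__ otherwise sets __hash__ to None

items = [Item(), Item(), Item()]
idx = items.index(items[2])

print(idx)         # 2
print(len(calls))  # 2: the identity check short-circuits __eq__ for the match itself
```

That identity short-circuit is also exactly why the dict-based lookup above avoids the problem.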
Reshaping a messy dataset using Pandas

I got this messy dataset from a csv-file that contains multiple entries in the same cell. This is how it looks:

file = ('messy.csv')
df = pd.read_csv(file)
df.head()

Folders        Files
aa; bb;        aa.src aa.xml ; bb.src bb.war ;
cc;            cc.pom cc.py cc.js ;
dd; ee; ff;    dd.ts dd.js ; ee.py ; ff.xml ff.js ;

In the Folders column the values are separated with a semicolon ";". In the Files column the values are separated with a space and a semicolon " ;". Files that belong to the same Folder are only separated with a space. I need help to reshape this into a more manageable dataframe, or into a JSON dict/list. I haven't found many examples with multiple values in the same cell that I can learn from.

Sure, a "manageable" format is kinda ambiguous, but anything is better than this... Something like this maybe:

Folders    Files 1    Files 2    Files 3
aa         aa.src     aa.xml     NaN
bb         bb.src     bb.war     NaN
cc         cc.pom     cc.py      cc.js
dd         dd.ts      dd.js      NaN
ee         ee.py      NaN        NaN
ff         ff.xml     ff.js      NaN

Or if there are better ideas I am open for suggestions. After I've reshaped it, it's going to be converted into a JSON format.
Turning it into a json/dict

Ok so probably not the most efficient solution, but it works:

import pandas as pd

# Recreating the dataframe
df = pd.DataFrame({'Folders': ["aa; bb;", "cc", "dd; ee; ff;"],
                   'Files': ['aa.src aa.xml ; bb.src bb.war ;',
                             'cc.pom cc.py cc.js ;',
                             'dd.ts dd.js ; ee.py ; ff.xml ff.js ;']})

# Split df according to ; and remove the trailing ;
df = df.apply(lambda x: x.str.rstrip(';').str.split(';'))
print(df)

So now your dataframe looks like this:

          Folders                                    Files
0       [aa,  bb]        [aa.src aa.xml ,  bb.src bb.war ]
1            [cc]                    [cc.pom cc.py cc.js ]
2  [dd,  ee,  ff]  [dd.ts dd.js ,  ee.py ,  ff.xml ff.js ]

Then I loop through the dataframe to build the dict:

# Creating the dict by looping through the dataframe and the folders of each row
df_dict = dict()
for index, row in df.iterrows():
    for i, key in enumerate(row['Folders']):
        df_dict[key.strip()] = row['Files'][i].strip().split(' ')

print(df_dict)

And this is the output:

{'aa': ['aa.src', 'aa.xml'], 'bb': ['bb.src', 'bb.war'], 'cc': ['cc.pom', 'cc.py', 'cc.js'], 'dd': ['dd.ts', 'dd.js'], 'ee': ['ee.py'], 'ff': ['ff.xml', 'ff.js']}

If the same key can occur more than once, I suggest this version of the code, which checks whether the key already exists:

import pandas as pd

# Recreating the dataframe, this time with a duplicate 'aa' folder
df = pd.DataFrame({'Folders': ["aa; bb;", "cc", "dd; ee; ff;", 'aa'],
                   'Files': ['aa.src aa.xml ; bb.src bb.war ;',
                             'cc.pom cc.py cc.js ;',
                             'dd.ts dd.js ; ee.py ; ff.xml ff.js ;',
                             'aa.tst']})

# Split df according to ; and remove the trailing ;
df = df.apply(lambda x: x.str.rstrip(';').str.split(';'))
print(df)

df_dict = dict()
for index, row in df.iterrows():
    for i, key in enumerate(row['Folders']):
        if key.strip() in df_dict:
            df_dict[key.strip()] += row['Files'][i].strip().split(' ')
        else:
            df_dict[key.strip()] = row['Files'][i].strip().split(' ')

print(df_dict)
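Since the stated end goal is JSON, the resulting dict serializes directly with the standard library. A quick sketch, using a subset of the dict above written out as a literal:

```python
import json

# subset of the dict built above, as a literal
df_dict = {'aa': ['aa.src', 'aa.xml'],
           'bb': ['bb.src', 'bb.war'],
           'cc': ['cc.pom', 'cc.py', 'cc.js']}

as_json = json.dumps(df_dict, indent=2, sort_keys=True)
print(as_json)

# the structure round-trips losslessly
assert json.loads(as_json) == df_dict
```

json.dump(df_dict, fh) works the same way if you want to write straight to a file handle.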
How can I split a string into a list by sentences, but keep the \n?

I want to split text into sentences but keep the \n, such as:

Civility vicinity graceful is it at. Improve up at to on mention
perhaps raising. Way building not get formerly her peculiar.

Arrived totally in as between private. Favour of so as on pretty
though elinor direct.

into sentences like:

['Civility vicinity graceful is it at.', 'Improve up at to on mention perhaps raising.', 'Way building not get formerly her peculiar.', '\nArrived totally in as between private.', 'Favour of so as on pretty though elinor direct.']

Right now I'm using this code with re to split the sentences:

import re

alphabets = "([A-Za-z])"
prefixes = "(Mr|St|Mrs|Ms|Dr)[.]"
suffixes = "(Inc|Ltd|Jr|Sr|Co)"
starters = "(Mr|Mrs|Ms|Dr|He\s|She\s|It\s|They\s|Their\s|Our\s|We\s|But\s|However\s|That\s|This\s|Wherever)"
acronyms = "([A-Z][.][A-Z][.](?:[A-Z][.])?)"
websites = "[.](com|net|org|io|gov)"
digits = "([0-9])"

def remove_urls(text):
    text = re.sub(r'http\S+', '', text)
    return text

def split_into_sentences(text):
    print("in")
    print(text)
    text = " " + text + " "
    text = re.sub(prefixes, "\\1<prd>", text)
    text = re.sub(websites, "<prd>\\1", text)
    text = re.sub(digits + "[.]" + digits, "\\1<prd>\\2", text)
    if "Ph.D" in text:
        text = text.replace("Ph.D.", "Ph<prd>D<prd>")
    text = re.sub("\s" + alphabets + "[.] ", " \\1<prd> ", text)
    text = re.sub(acronyms + " " + starters, "\\1<stop> \\2", text)
    if "..." in text:
        text = text.replace("...", ".<prd>")
    text = re.sub(alphabets + "[.]" + alphabets + "[.]", "\\1<prd>\\2<prd>", text)
    text = re.sub(" " + suffixes + "[.] " + starters, " \\1<stop> \\2", text)
    text = re.sub(" " + suffixes + "[.]", " \\1<prd>", text)
    text = re.sub(" " + alphabets + "[.]", " \\1<prd>", text)
    if "”" in text:
        text = text.replace(".”", "”.")
    if "\"" in text:
        text = text.replace(".\"", "\".")
    if "!" in text:
        text = text.replace("!\"", "\"!")
    if "?" in text:
        text = text.replace("?\"", "\"?")
    text = text.replace(".", ".<stop>")
    text = text.replace("?", "?<stop>")
    text = text.replace("!", "!<stop>")
    text = text.replace("<prd>", ".")
    sentences = text.split("<stop>")
    sentences = sentences[:-1]
    sentences = [s.strip() for s in sentences]
    print(sentences)
    return sentences

However the code gets rid of the \n, which I need. I need the \n because I'm using the text in moviepy, and moviepy has no built-in functions to space out text with \n, so I must create my own. The only way I can do that is through having \n as a signifier in the text, but when I split my sentences it also gets rid of the \n. What should I do?
You can use a lookbehind (?<=...) to retain the separator: it matches only what you want the split to remove, while the period before it is kept.

import re

s = 'Civility vicinity graceful is it at. Improve up at to on mention perhaps raising. Way building not get formerly her peculiar.\n\nArrived totally in as between private. Favour of so as on pretty though elinor direct.'

re.split(r'(?<=\.)[ \n]', s)

output:

['Civility vicinity graceful is it at.', 'Improve up at to on mention perhaps raising.', 'Way building not get formerly her peculiar.', '\nArrived totally in as between private.', 'Favour of so as on pretty though elinor direct.']
RuntimeError: populate() isn't reentrant

I got a legacy project based on Django as backend and AngularJS on the front. It's deployed and working, but I got no docs at all, so I had to guess everything: how to deploy it locally, how the system works, and so on.

Now I've been asked to set it up in a pre-production environment, so I copied all the configs from the production server and changed what was necessary to fit the new environment:

[Thu Mar 15 07:08:53.612256 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429] mod_wsgi (pid=13884): Target WSGI script '/opt/mysite/mysite/wsgi.py' cannot be loaded as Python module.
[Thu Mar 15 07:08:53.612336 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429] mod_wsgi (pid=13884): Exception occurred processing WSGI script '/opt/mysite/mysite/wsgi.py'.
[Thu Mar 15 07:08:53.612539 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429] Traceback (most recent call last):
[Thu Mar 15 07:08:53.612602 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429]   File "/opt/mysite/mysite/wsgi.py", line 19, in <module>
[Thu Mar 15 07:08:53.612611 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429]     application = get_wsgi_application()
[Thu Mar 15 07:08:53.612624 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429]   File "/usr/local/lib/python3.6/dist-packages/django/core/wsgi.py", line 12, in get_wsgi_application
[Thu Mar 15 07:08:53.612632 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429]     django.setup(set_prefix=False)
[Thu Mar 15 07:08:53.612643 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429]   File "/usr/local/lib/python3.6/dist-packages/django/__init__.py", line 24, in setup
[Thu Mar 15 07:08:53.612649 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429]
    apps.populate(settings.INSTALLED_APPS)
[Thu Mar 15 07:08:53.612659 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429]   File "/usr/local/lib/python3.6/dist-packages/django/apps/registry.py", line 81, in populate
[Thu Mar 15 07:08:53.612666 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429]     raise RuntimeError("populate() isn't reentrant")
[Thu Mar 15 07:08:53.612693 2018] [wsgi:error] [pid 13884:tid 140222719059712] [remote 212.170.177.164:49429] RuntimeError: populate() isn't reentrant

I tried to access that file (registry.py) and edit it so I could see what's going on inside, but it doesn't print anything on the Django console, nor in a file.

Here are my config files:

/etc/apache2/sites-enabled/dev.my_domain.com.conf

WSGIPythonPath /opt/mysite:/home/my_user/.local/lib/python3.6/site-packages

<VirtualHost *:80>
    WSGIScriptAlias /backend /opt/mysite/mysite/wsgi.py
    <Directory /opt/mysite/mysite/>
        Require all granted
    </Directory>
    WSGIDaemonProcess mysite python-path=/opt/mysite:/home/my_user/.local/lib/python3.6/site-packages
    WSGIProcessGroup mysite
    Alias /media/ /opt/mysite/media/
    Alias /static/ /opt/mysite/static/
    <Directory /opt/mysite/media>
        Require all granted
    </Directory>
    <Directory /opt/mysite/static>
        Require all granted
    </Directory>
    ServerName dev.my_domain.com
    ServerAdmin admin@my_domain.com
    DocumentRoot /var/www/html/dev.my_domain.com
    ErrorLog /var/log/apache2/virtual.host.error.log
    CustomLog /var/log/apache2/virtual.host.access.log combined
    LogLevel warn
</VirtualHost>

/var/www/html/dev.mydomain.com/.htaccess

RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.(.*)
RewriteRule ^.*$ https://%1/$1 [R=301,L]
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !index
RewriteCond %{REQUEST_URI} !.*\.(css|js|html|png)
RewriteRule ^(.*) /index.html

The Django server can be run properly from the console without any error, but the requests never reach it. I'm running Python 3.6 and Django 1.9.5 (same as the prod server) on Ubuntu 17.04. The .htaccess has been copied from the prod server as-is. The firewall is not blocking port 8000 for Django. I'm not running in a virtualenv.

Any suggestions on how to approach this?
I had a similar error out of a simple $ python manage.py shell in my case. I couldn't find any help out of searching for RuntimeError("populate() isn't reentrant"), so I finally set about poking through it in the debugger.In my case, it turned out to be that Oracle wasn't happy over my environment variables.If that's not your problem, 'cause you checked that, then here are some steps that might help.SummaryI had to get to the right point in the code where the following happens. This is before the point where it blows up completely, so unfortunately you can't just run the debugger to the final point and hit [w]here to see the trace: (Pdb) n ImproperlyConfigured: Improper...ectory',) <<- that's it, folksTo display what that cryptic abbreviated message is, the [w]here command reveals (at the end of the stack): (Pdb) w [...] -> raise ImproperlyConfigured("Error loading cx_Oracle module: %s" % e) (Pdb)So it turns out that I forgot to load a few environment variables and Oracle failed to load. Sigh.DetailIf you decide to use this method, I recommend setting your breakpoint like this (your line number may vary - versions change):Run the app in the debugger, and let it fail.(env) [prompt] $ python -m pdb manage.py shell > <fullpath>/manage.py(2)<module>()-> import os(Pdb) cTraceback (most recent call last): [snip] File "<path>/django/__init__.py", line 27, in setup apps.populate(settings.INSTALLED_APPS) File "<path>/django/apps/registry.py", line 78, in populate raise RuntimeError("populate() isn't reentrant")RuntimeError: populate() isn't reentrantUncaught exception. Entering post mortem debuggingRunning 'cont' or 'step' will restart the program > <path>/django/apps/registry.py(78)populate()-> raise RuntimeError("populate() isn't reentrant")(Pdb) Take note of the fact that it failed (in my case) in registry.py. Look around in registry.py while you're here. You're looking for the line where it calls the import_models() method. 
In my case, line 104 gets me in the neighborhood:(Pdb) l 104 99 if duplicates:100 raise ImproperlyConfigured(101 "Application names aren't unique, "102 "duplicates: %s" % ", ".join(duplicates))103 104 self.apps_ready = True105 106 # Phase 2: import models modules.107 for app_config in self.app_configs.values():108 app_config.import_models()109 (Pdb) l110 self.clear_cache()111 112 self.models_ready = True113 114 # Phase 3: run ready() methods of app configs.115 for app_config in self.get_app_configs():116 app_config.ready()117 118 -> self.ready = True119 120 def check_apps_ready(self):(Pdb) Note that the code you're looking for is at line 108 in my example. That's where you're going to set your breakpoint.Restart (pdb won't reinitialize everything with a simple restart, so quit out and restart):(Pdb) q[snip]$ python -m pdb manage.py shell # just like you did beforeNow set that breakpoint:b django/apps/registry.py:108Breakpoint 1 at <path>/django/apps/registry.py:108(Pdb) Remember: your line number may be different. [c]ontinue to the breakpoint.Now comes the part that requires patience: hit [n]ext until you get something that looks, well, suspicious is the only word:(Pdb) nImproperlyConfigured: Improper...ectory',) <<-- yeah, that bit, there> <path>/django/apps/registry.py(108)populate()-> app_config.import_models()Now is the time to hit [w]here in the debugger:(Pdb) w[snip the long traceback][You'll see the line you're on now in the middle:]> <path>/django/apps/registry.py(108)populate()[snip]-> raise ImproperlyConfigured("Error loading cx_Oracle module: %s" % e)(Pdb) The raise line is what caused it to blow up.
How to print list items as if they're contents of print in python?

words_list = ['who', 'got', '\n', 'inside', 'your', '\n', 'mind', 'baby']

I have this list of words stored as a list. I wanted to use the elements as the contents of the print function, e.g.

print(words_list[0] + words_list[1] + words_list[2] ... words_list[n])

My desired output would be:

who got inside your
mind baby
In Python 3 you can do:

print(*words_list)

because print is just a function, and the * operator in this context unpacks the elements of your list and passes them as positional arguments to the call.

In older versions you'd need to concatenate (join) the elements of the list first, converting them to strings if they are not strings already. This can be done like this:

print ' '.join([str(w) for w in words_list])

or, more succinctly, using a generator expression instead of a list comprehension:

print ' '.join(str(w) for w in words_list)

Yet another alternative is to use the map function, which results in even shorter code:

print ' '.join(map(str, words_list))

However, if you are on Python 2.6+ but not on Python 3, you can get print as a function by importing it from the future:

from __future__ import print_function
print(*words_list)
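One caveat worth noting, as a small sketch rather than part of the answer above: both print(*words_list) and ' '.join(...) put a space on each side of the embedded '\n' items, so the newline lands surrounded by stray spaces. A follow-up replace cleans that up:

```python
words_list = ['who', 'got', '\n', 'inside', 'your', '\n', 'mind', 'baby']

# join with spaces, then collapse the stray spaces around each newline
text = ' '.join(words_list)          # 'who got \n inside your \n mind baby'
text = text.replace(' \n ', '\n')
print(text)
# who got
# inside your
# mind baby
```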
Write a list of dictionaries row by row on a file

I need to append dictionaries to a file row by row. At the end I will have a list of dictionaries in the file. My naive attempt is:

with open('outputfile', 'a') as fout:
    json.dump(resu, fout)
    file.write(',')

But it does not work. Any suggestion?
If you need to save a number of dictionaries in a particular order, why not first put them in a list object and use json to serialize the whole thing for you?

import json

def example():
    # create a list of dictionaries
    list_of_dictionaries = [
        {'a': 0, 'b': 1, 'c': 2},
        {'d': 3, 'e': 4, 'f': 5},
        {'g': 6, 'h': 7, 'i': 8}
    ]

    # Save json info to file (text mode; "wb" would fail on Python 3)
    path = '.\\json_data.txt'
    save_file = open(path, "w")
    json.dump(obj=list_of_dictionaries, fp=save_file)
    save_file.close()

    # Load json from file
    load_file = open(path, "r")
    result = json.load(fp=load_file)
    load_file.close()

    # show that it worked
    print(result)
    return

if __name__ == '__main__':
    example()

If your application must have you append new dictionaries from time to time, then you may need to do something closer to this:

import json

def example_2():
    # create a list of dictionaries
    list_of_dictionaries = [
        {'a': 0, 'b': 1, 'c': 2},
        {'d': 3, 'e': 4, 'f': 5},
        {'g': 6, 'h': 7, 'i': 8}
    ]

    # Save json info to file, one dictionary at a time
    path = '.\\json_data.txt'
    save_file = open(path, "w")
    save_file.write(u'[')
    save_file.close()

    first = True
    for entry in list_of_dictionaries:
        save_file = open(path, "a")
        json_data = json.dumps(obj=entry)
        prefix = u'' if first else u', '
        save_file.write(prefix + json_data)
        save_file.close()
        first = False

    save_file = open(path, "a")
    save_file.write(u']')
    save_file.close()

    # Load json from file
    load_file = open(path, "r")
    result = json.load(fp=load_file)
    load_file.close()

    # show that it worked
    print(result)
    return

if __name__ == '__main__':
    example_2()
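Another common option, offered as a suggestion rather than part of the answer above, is the JSON Lines layout: write one json.dumps per line. Each append is then independent, so there is no bracket/comma bookkeeping at all (the trade-off is that the file as a whole is not a single JSON document):

```python
import json

path = "data.jsonl"
dicts = [{"a": 0, "b": 1}, {"d": 3}, {"g": 6}]

# append one JSON object per line
with open(path, "a") as fout:
    for d in dicts:
        fout.write(json.dumps(d) + "\n")

# read them all back
with open(path) as fin:
    loaded = [json.loads(line) for line in fin]
print(loaded)
```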
uWSGI will not run in mixed Python environment in order to operate correctly with nginx and run Django app

Environment: Ubuntu 16.04 (with system Python at 2.7.12) running in Vagrant/VirtualBox on a Windows 10 host.

Python setup: System python verified by doing python -V with no virtualenvs activated. Python 3.5 is also installed, and I've done pipenv --three to create the virtualenv for this project. Doing python -V within the activated virtualenv (pipenv shell to activate) shows Python 3.5.2.

Additional background: I'm developing a Wagtail 2 app. Wagtail 2 requires Django 2 which, of course, requires Python 3. I have other Django apps on this machine that were developed in Django 1.11/Python 2.7 and are served by Apache. We are moving to Django 2/Python 3 for development going forward and are moving to nginx/uWSGI for serving the apps.

I have gone through many tutorials/many iterations. All Vagrant port mapping is set up fine, with nginx serving media/static files and passing requests upstream to the Django app on a Unix socket, but this is giving a 502 Bad Gateway error because uWSGI will not run correctly. Therefore, right now I'm simply running the following from the command line to try to get uWSGI to run:

uwsgi --ini /etc/uwsgi/sites/my_site.com.ini
This file contains:

[uwsgi]
uid = www-data
gid = www-data
plugin = python35

# Django-related settings
# the base directory (full path)
chdir = /var/sites/my_site
# Django's wsgi file
wsgi-file = my_site.wsgi
# the virtualenv (full path)
virtualenv = /root/.local/share/virtualenvs/my_site-gmmiTMID

# process-related settings
# master
master = True
# maximum number of worker processes
processes = 10
# the socket (use the full path to be safe)
socket = /tmp/my_site.sock
# clear environment on exit
vacuum = True

I've tried installing uWSGI in the following ways:

- system-wide with pip install uwsgi as well as pip3 install uwsgi
- using apt-get install uwsgi uwsgi-plugin-python3

I've ensured that only one install is in place at a time by uninstalling any previous uwsgi installs. The latter install method places uwsgi-core in /usr/bin and also places in /usr/bin shortcuts to uwsgi, uwsgi_python3, and uwsgi_python35.

In the .ini file I've also tried plugin = python3. I've also tried from the command line:

uwsgi_python3 --ini /etc/uwsgi/sites/my_site.com.ini
uwsgi_python35 --ini /etc/uwsgi/sites/my_site.com.ini

I've tried executing the uwsgi ... .ini commands both from within the activated virtual environment and with the virtualenv deactivated. Each of the three command-line uwsgi ... .ini executions (uwsgi ..., uwsgi_python3 ..., and uwsgi_python35 ...)
DO cause the .ini file to be executed, but each time I'm getting the following error (the last two lines of the output):

[uwsgi] implicit plugin requested python35
[uWSGI] getting INI configuration from /etc/uwsgi/sites/my_site.com.ini
*** Starting uWSGI 2.0.12-debian (64bit) on [Wed Mar  7 03:54:44 2018] ***
compiled with version: 5.4.0 20160609 on 31 August 2017 21:02:04
os: Linux-4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018
nodename: vagrant-ubuntu-trusty-64
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /vagrant/my_site
detected binary path: /usr/bin/uwsgi-core
setgid() to 33
setuid() to 33
chdir() to /var/sites/my_site
your processes number limit is 7743
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/my_site.sock fd 3
Python version: 3.5.2 (default, Nov 23 2017, 16:37:01)  [GCC 5.4.0 20160609]
Set PythonHome to /root/.local/share/virtualenvs/my_site-gmmiTMID
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ImportError: No module named 'encodings'

If I go into the Python command line within the activated virtualenv and do import encodings, it imports fine (no message, it just comes back to the command line). A search for this particular error has turned up nothing of use. Any idea why the ImportError: No module named 'encodings' is coming up?

UPDATE - PROBLEM STILL OCCURRING

I'm using pipenv, and it stores individual virtualenvs in the /home/username/.local/share/virtualenvs folder. Though I was able to start uWSGI from the command line by executing the uWSGI config file as the vagrant user (see comment below), I have still not been able to start the service with /home/vagrant/.local/share/virtualenvs/my_venv in the uWSGI config file. I tried adding the vagrant user to the www-data group and adding the www-data user to the vagrant group. I also put both read and execute permission for the world on the whole path (including the individual venv), but the uWSGI service will still not start.

A HACKISH WORKAROUND

I did finally get the uWSGI service to start by copying the venv to /opt/virtualenvs/my_venv. I was then able to start the service with sudo service uwsgi start. Ownership of that whole path is root:root.

STILL LOOKING FOR A SOLUTION...

This solution is not optimal, since I am now executing from a virtualenv that will have to be updated whenever the default virtualenv is updated (this location is not the default for pipenv), so I'm still looking for answers. Perhaps it is a Ubuntu permissions error, but I just can't find the problem.
It might be a problem with your virtual environment. Try the following:

rm -rf /root/.local/share/virtualenvs/my_site-gmmiTMID
virtualenv -p python3 /root/.local/share/virtualenvs/my_site-gmmiTMID
source /root/.local/share/virtualenvs/my_site-gmmiTMID/bin/activate
pip install -r requirements.txt

and in the uWSGI conf try to change

virtualenv = /root/.local/share/virtualenvs/my_site-gmmiTMID

to

home = /root/.local/share/virtualenvs/my_site-gmmiTMID

Reference: http://uwsgi-docs.readthedocs.io/en/latest/Options.html#virtualenv
Replace a word in a sentence with words from a list and copying the new sentences in a column

I have a dataframe that contains sentences in one column, specific words I have extracted from that column, and a third column containing a list of synonyms for the words in the second column:

data = {"sentences": ["I am a student", "she is my friend", "that is the new window"],
        "words": ["student", "friend", "window"],
        "synonyms": [["pupil"], ["comrade", "companion"], ["brand new", "up-to-date", "latest"]]}
df = pd.DataFrame(data, columns=['sentences', "words", "synonyms"])

What I would like to do is to create another column with the words in the sentences replaced by the words from the synonyms column:

print(df["new_col"])

Output:

I am a pupil
she is my comrade. she is my companion.
this is the brand new window. this is the up-to-date window. this is the latest window.

I have tried

np.where(df["words"].isin([df["sentences"]), df["sentence"].replace(df["words"].isin([df["sentences"]), df["synonyms"], "")

but it did not give the desired output.
That is not a direct dataframe manipulation, but you can still try it:

import pandas as pd

data = {"sentences": ["I am a student", "she is my friend", "that is the new window"],
        "words": ["student", "friend", "window"],
        "synonyms": [["pupil"], ["comrade", "companion"], ["brand new", "up-to-date", "latest"]]}
df = pd.DataFrame(data, columns=['sentences', "words", "synonyms"])

new_col = []
for s in data["sentences"]:
    for index, w in enumerate(data["words"]):
        if w in s:
            a = ". ".join([s.replace(w, syno) for syno in data["synonyms"][index]])
            new_col.append(a)
df["new_col"] = new_col

Output:

                sentences    words                         synonyms  \
0          I am a student  student                          [pupil]
1        she is my friend   friend             [comrade, companion]
2  that is the new window   window  [brand new, up-to-date, latest]

                                             new_col
0                                       I am a pupil
1             she is my comrade. she is my companion
2  that is the new brand new. that is the new up-...
What's the pythonic way to set class variables?

Perhaps I'm asking the wrong question. I have code like this:

class ExpressionGrammar(Grammar):
    def __init__(self, nonterminals, terminals, macros, rules, precedence,
                 nonterminal_name='_expr'):
        self.nonterminals = nonterminals
        self.terminals = terminals
        self.rules = rules
        self.macros = macros
        self.precedence = precedence
        self.nonterminal = nonterminal

and I find it redundant to always have to do self.x = x. I know python tries to avoid repetition, so what would be the correct way to do something like this?
You can avoid doing that with something like:

class C(object):
    def __init__(self, x, y, z, etc):
        self.__dict__.update(locals())

Then all these arguments become members (including the self argument), which you can remove with:

self.__dict__.pop('self')

I don't know how pythonic this approach is, but it works.

PS: If you're wondering what __dict__ is: it's a dict that holds every member of an instance in the form {'member1': value, 'member2': value}. locals() is a function that returns a dict with the local variables.
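On Python 3.7+ a more idiomatic route is a dataclass, which generates the boilerplate __init__ for you. Below is a sketch of the grammar class from the question rewritten this way; the field types are guesses, only the attribute names come from the question:

```python
from dataclasses import dataclass

@dataclass
class ExpressionGrammar:
    nonterminals: list
    terminals: list
    macros: list
    rules: list
    precedence: dict
    nonterminal_name: str = "_expr"

# each positional argument is assigned to the matching attribute automatically
g = ExpressionGrammar(["E"], ["+", "n"], [], [("E", "E + E")], {})
print(g.nonterminal_name)  # -> _expr
```

Unlike the __dict__.update(locals()) trick, this keeps the attributes visible to IDEs and type checkers, and there is no stray self key to pop.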
Python-linkedin Return URL?

I am trying to use https://github.com/ozgur/python-linkedin. The steps I am taking are as follows:

1. Register my application on LinkedIn to get the secret and API key.
2. Put the return URI as localhost:8080/code.
3. Change authentication.py with all of the information.
4. Change http_api.py with the information, keeping the return URL as localhost:8080.

I get the error: invalid redirect_uri. This value must match a URL registered with the API Key.

Can anyone walk me through the steps of what I may be doing wrong? When I run authentication.py I also sometimes get this error in the URL: Your+application+has+not+been+authorized+for+the+scope+"r_contactinfo"
Since the last changes in the LinkedIn API you no longer have access to the following member permissions:

r_fullprofile, r_network, r_contactinfo, rw_nus, rw_groups, w_messages

https://developer.linkedin.com/support/developer-program-transition
How to count the number of faces detected in live video by openCV using python?

I need to count the number of faces in a video taken from webcam. For example, if I am standing in front of the camera then count=1; now if any other person is detected then count=2; if another person is detected then the count should be 3.

I am using frontal_face_haarcascade.xml from OpenCV in Python. I can detect faces in a frame and then increase the count, but what's happening is that the count increases with the number of frames. So even if one person is detected standing for 10 seconds, it shows a count of some '67'. How can I overcome this problem?

This is the code:

import cv2
import sys

cascPath = sys.argv[1]
faceCascade = cv2.CascadeClassifier(cascPath)
video_capture = cv2.VideoCapture(0)

ret, frame = video_capture.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.1,
    minNeighbors=5,
    minSize=(30, 30)
)

# Draw a rectangle around the faces
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

if cv2.waitKey(1) & 0xFF == ord('q'):
    break

# Display the resulting frame
cv2.imshow('Video', frame)

video_capture.release()
cv2.destroyAllWindows()
import numpy as np
import cv2

faceCascade = cv2.CascadeClassifier(cascPath)
video_capture = cv2.VideoCapture(0)

while True:
    idx = 0                       # reset the counter for each frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30)
    )
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        idx += 1
        print(idx)
        cv2.putText(frame, str(idx), (x, y+h), cv2.FONT_HERSHEY_SIMPLEX, .7, (150, 150, 0), 2)
    cv2.imshow('img', frame)
    if cv2.waitKey(1) == ord('q'):
        break

video_capture.release()           # note: the original had frame.release(), which is a bug
cv2.destroyAllWindows()
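If the goal is a per-frame total rather than a running index printed per face, the length of the detections array already gives it. A small sketch, with made-up rectangles standing in for detectMultiScale output:

```python
import numpy as np

# detectMultiScale returns an array of (x, y, w, h) rectangles, one per face;
# these two rectangles are fabricated for illustration
faces = np.array([[10, 10, 30, 30], [50, 60, 28, 28]])

count = len(faces)  # number of faces detected in this frame
print(count)        # -> 2
```

The same len(faces) inside the while loop gives the live count for each frame, independent of how many frames the person has been standing there.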
Error while working on chatbot-universal-sentence-encoder

I am trying "chatbot.py" and I am getting the error below.

Traceback (most recent call last):
  File "chatbot.py", line 11, in <module>
    embeddings = model(df["MESSAGE"].values)["outputs"]
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices

########################### chatbot.py starts here
from urllib import response
from utils import *
# pip install pandas
import pandas as pd
import numpy as np
# pip install openpyxl

model = embed_useT(r"C:/Users/Hp/Downloads/universal-sentence-encoder_4")
df = pd.read_excel("chats.xlsx")
print('printing df', df)
embeddings = model(df["MESSAGE"].values)["outputs"]
norm = np.linalg.norm(embeddings, axis=-1)

def reply(message):
    message_vector = model([message])["outputs"]
    similarities = cos_similarity(message_vector, embeddings, norm)
    index = np.argmax(similarities)
    response = df["RESPONSE"].values[index]
    return response

while True:
    message = input("Type your message: ")
    response = reply(message)
    print(response)
# chatbot.py ends here

###################################################### utils.py starts here
import numpy as np
# pip install tensorflow-hub
import tensorflow_hub as hub
# pip install tensorflow
# import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def cos_similarity(vector, matrix, matrix_norm):
    dot = np.matmul(matrix, vector.T)
    vector_norm = np.linalg.norm(vector)
    norms = (vector_norm * matrix_norm)
    norms = norms.reshape(norms.shape[0], 1)
    return dot / norms

def embed_useT(module):
    with tf.Graph().as_default():
        sentences = tf.placeholder(tf.string)
        embed = hub.KerasLayer(module)
        embeddings = embed(sentences)
        session = tf.train.MonitoredSession()
        return lambda x: session.run(embeddings, {sentences: x})
# utils.py ends here
##########################

chats.xlsx is as shown below
model(df["MESSAGE"].values) already returns the embeddings as a NumPy array (session.run returns the fetched tensor directly, not a dict), so indexing it with ["outputs"] raises the IndexError. Omitting ["outputs"] should make it work:

embeddings = model(df["MESSAGE"].values)
Netmiko TextFSM Windows 10 issues

I am running Python 3 on a Windows 10 computer.

When I run this it works fine. When I add use_textfsm=True to the command, I get a permission error where the templates are located. Permissions seem to be fine. I would like to try copying the template folder to a new location, but I am not sure how to tell Netmiko the new location for that folder.

from nornir import InitNornir
from nornir.core.task import Result, Task
from nornir_netmiko.tasks.netmiko_send_command import netmiko_send_command
from nornir_utils.plugins.functions import print_result

nr = InitNornir("h:/Scripts/IPvZero-master/nornir_textFSM_video/config.yaml")
results = nr.run(netmiko_send_command, command_string="show interface switchport")
print_result(results)

TextFSM Integration

Netmiko has been configured to automatically look in ~/ntc-template/templates/index for the ntc-templates index file. Alternatively, you can explicitly tell Netmiko where to look for the TextFSM template directory by setting the NET_TEXTFSM environment variable (note: there must be an index file in this directory):

export NET_TEXTFSM=/path/to/ntc-templates/templates/
I already experienced the same issue in this environment:

Python 3.9.4
textfsm==1.1.0
netmiko==3.4.0

The error message you got is the solution, but you may have misinterpreted it. You should do the following:

1. Go to Control Panel > View advanced system settings and click on Environment Variables.
2. Under System variables (not User variables for username) click New and add the following variable:
   Variable name: NET_TEXTFSM
   Variable value: %APPDATA%\Python\Python39\site-packages\ntc_templates\templates (validate that the path exists in Windows Explorer)
3. Restart your text editor.

If it didn't work, restart your PC and try again.

I already added this solution on #1664.
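If editing the system environment variables is inconvenient, the same NET_TEXTFSM variable (documented in the question's own excerpt) can be set from Python before Netmiko runs; the template path below is an example, not a guaranteed install location:

```python
import os

# Point Netmiko at a custom ntc-templates directory; this must run
# before any send_command(..., use_textfsm=True) call. The path is
# illustrative - use wherever you copied the templates (including index).
os.environ["NET_TEXTFSM"] = r"C:\ntc-templates\templates"
print(os.environ["NET_TEXTFSM"])
```

This only affects the current process, which also makes it handy for testing a candidate template directory before committing it system-wide.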
Pandas read_html misses table with several rows

I'm scraping a website and, in order to get the table, I'm using pd.read_html. I get the node by doing this:

table = WebDriverWait(browser, 10).until(EC.presence_of_element_located(
    (By.XPATH, '//tbody[ancestor::div[contains(@id,"cornerOddsDiv")]]')))
newt = pd.read_html(table.get_attribute('outerHTML'))

This returns:

ValueError: No tables found

The table node gives this output:

table.get_attribute('outerHTML')
>> '<tbody><tr><th colspan="10" align="center" class="bg1">365 Corner Odds</th></tr><tr bgcolor="#FCEAAB"><td colspan="10" align="center"><strong>Over/Under</strong></td></tr><tr onclick="goCorner(1510721)" style="cursor:pointer;" align="center" class="bg1" id="trCornerTotal" odds="1.19,0.25,0.72"><td width="14%" bgcolor="#EBF2F8">early</td><td width="10%" class="bg2">1 </td><td width="10%" class="bg2">10.5</td><td width="10%" class="bg2">0.8</td><td width="6%" class="bg2"><a href="http://data.nowgoal.com/history/corner.aspx?id=1510721&companyid=8" target="_blank">detail</a></td><td width="14%" bgcolor="#EBF2F8">0.25</td><td width="10%" class="bg2">1</td><td width="10%" class="bg2">0.72</td><td width="10%" class="bg2">0.8</td><td width="6%" class="bg2"><a href="http://data.nowgoal.com/history/corner.aspx?id=1510721&companyid=8" target="_blank">detail</a></td></tr></tbody>'

Why is it not working? I have followed the same procedure for other tables and they did work.
I finally found the answer. The node has a structure like the following:

<div>
  <div>
    <table>
      <tbody>
        <tr>..</tr>
        <tr>..</tr>
        ...
      </tbody>
      etc.

The key is that, instead of passing the tbody node, I have to pass the table node (pd.read_html only extracts <table> elements, so a bare <tbody> is not recognised); then it works just as well as the others did when using tbody. So it would be:

table = WebDriverWait(browser, 10).until(EC.presence_of_element_located(
    (By.XPATH, '//table[contains(@class,"bhTable") and ancestor::div[contains(@id,"cornerOddsDiv")]]')))

and that returns the desired output.
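The behaviour can be illustrated without Selenium: pd.read_html only extracts <table> elements, so a bare <tbody> string has to be wrapped in <table> tags before parsing. The markup below is minimal and made up, and an HTML parser such as lxml must be installed for read_html to work:

```python
import pandas as pd
from io import StringIO

tbody = "<tbody><tr><td>00:00-01:00</td><td>403</td></tr></tbody>"

# pd.read_html(StringIO(tbody)) would raise "ValueError: No tables found";
# wrapping the fragment in a <table> element makes it parseable
df = pd.read_html(StringIO("<table>" + tbody + "</table>"))[0]
print(df)
```

Wrapping the outerHTML string this way is an alternative to changing the XPath, though targeting the <table> node directly (as in the answer above) is cleaner.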
PyTorch 0.4 LSTM: Why does each epoch get slower?

I have a toy model of a PyTorch 0.4 LSTM on a GPU. The overall idea of the toy problem is that I define a single 3-vector as an input, and define a rotation matrix R. The ground-truth targets are then a sequence of vectors: at T0, the input vector; at T1, the input vector rotated by R; at T2, the input rotated by R twice, etc. (The input is padded to the output length with zero-inputs after T1.)

The loss is the average L2 difference between ground truth and outputs. The rotation matrix, the construction of the input/output data, and the loss function are probably not of interest and are not shown here.

Never mind that the results are pretty terrible: why does this become successively slower with each passing epoch?!

I've shown on-GPU information below, but this happens on the CPU as well (only with larger times). The time to execute ten epochs of this silly little thing grows rapidly. It's quite noticeable just watching the numbers scroll by.

epoch:   0, loss: 0.1753, time previous: 33:28.616360  time now: 33:28.622033  time delta: 0:00:00.005673
epoch:  10, loss: 0.2568, time previous: 33:28.622033  time now: 33:28.830665  time delta: 0:00:00.208632
epoch:  20, loss: 0.2092, time previous: 33:28.830665  time now: 33:29.324966  time delta: 0:00:00.494301
epoch:  30, loss: 0.2663, time previous: 33:29.324966  time now: 33:30.109241  time delta: 0:00:00.784275
epoch:  40, loss: 0.1965, time previous: 33:30.109241  time now: 33:31.184024  time delta: 0:00:01.074783
epoch:  50, loss: 0.2232, time previous: 33:31.184024  time now: 33:32.556106  time delta: 0:00:01.372082
epoch:  60, loss: 0.1258, time previous: 33:32.556106  time now: 33:34.215477  time delta: 0:00:01.659371
epoch:  70, loss: 0.2237, time previous: 33:34.215477  time now: 33:36.173928  time delta: 0:00:01.958451
epoch:  80, loss: 0.1076, time previous: 33:36.173928  time now: 33:38.436041  time delta: 0:00:02.262113
epoch:  90, loss: 0.1194, time previous: 33:38.436041  time now: 33:40.978748  time delta: 0:00:02.542707
epoch: 100, loss: 0.2099, time previous: 33:40.978748  time now: 33:43.844310  time delta: 0:00:02.865562

The model:

class Sequence(torch.nn.Module):
    def __init__(self):
        super(Sequence, self).__init__()
        self.lstm1 = nn.LSTM(3, 30)
        self.lstm2 = nn.LSTM(30, 300)
        self.lstm3 = nn.LSTM(300, 30)
        self.lstm4 = nn.LSTM(30, 3)
        self.hidden1 = self.init_hidden(dim=30)
        self.hidden2 = self.init_hidden(dim=300)
        self.hidden3 = self.init_hidden(dim=30)
        self.hidden4 = self.init_hidden(dim=3)
        self.dense = torch.nn.Linear(30, 3)
        self.relu = nn.LeakyReLU()

    def init_hidden(self, dim):
        return (torch.zeros(1, 1, dim).to(device),
                torch.zeros(1, 1, dim).to(device))

    def forward(self, inputs):
        out1, self.hidden1 = self.lstm1(inputs, self.hidden1)
        out2, self.hidden2 = self.lstm2(out1, self.hidden2)
        out3, self.hidden3 = self.lstm3(out2, self.hidden3)
        #out4, self.hidden4 = self.lstm4(out3, self.hidden4)
        # This is intended to act as a dense layer on the output of the LSTM
        out4 = self.relu(self.dense(out3))
        return out4

The training loop:

sequence = Sequence().to(device)
criterion = L2_Loss()
optimizer = torch.optim.Adam(sequence.parameters())
_, _, _, R = getRotation(np.pi/27, np.pi/26, np.pi/25)
losses = []
date1 = datetime.datetime.now()
for epoch in range(1001):
    # Define input as a Variable -- each row of 3 is a vector, a distinct input
    # Define target directly from input by application of rotation vector
    # Define predictions by running input through model
    inputs = getInput(25)
    targets = getOutput(inputs, R)
    inputs = torch.cat(inputs).view(len(inputs), 1, -1).to(device)
    targets = torch.cat(targets).view(len(targets), 1, -1).to(device)
    target_preds = sequence(inputs)
    target_preds = target_preds.view(len(target_preds), 1, -1)
    loss = criterion(targets, target_preds).to(device)
    losses.append(loss.data[0])
    if (epoch % 10 == 0):
        date2 = datetime.datetime.now()
        print("epoch: %3d, \tloss: %6.4f, \ttime previous: %s\ttime now: %s\ttime delta: %s" %
              (epoch, loss.data[0], date1.strftime("%M:%S.%f"),
               date2.strftime("%M:%S.%f"), date2 - date1))
        date1 = date2
    # Zero out the grads, run the loss backward, and optimize on the grads
    optimizer.zero_grad()
    loss.backward(retain_graph=True)
    optimizer.step()
Short answer: Because we did not detach the hidden layers, and therefore the system kept backpropagating farther and farther back through time, taking up more memory and requiring more time.

Long answer: This answer is meant to work without teacher forcing. "Teacher forcing" is when all inputs at all time-steps are "ground truth" input values. In contrast, without teacher forcing, the input of each time step is the output from the previous time step, no matter how early in the training regime (and therefore, how wildly erratic) that data might be. This is a bit of a manual operation in PyTorch, requiring us to keep track of not only the output, but the hidden state of the network at each step, so we can provide it to the next.

Detachment has to happen, not at every time step, but at the beginning of each sequence. A method that seems to work is to define a "detach" method as part of the Sequence model (which will manually detach all the hidden layers), and call it explicitly after optimizer.step(). This prevents the gradual accumulation of the hidden states, prevents the gradual slowdown, and still seems to train the network. I cannot truly vouch for it because I have only employed it on a toy model, not a real problem.

Note 1: There are probably better ways to factor the initialization of the network and use that instead of a manual detach as well.

Note 2: The loss.backward(retain_graph=True) statement retains the graph because error messages suggested it. Once the detach is enacted, that warning disappears.

I leave this answer un-accepted in the hopes that someone knowledgeable will add their expertise.
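A torch-free sketch of the detach pattern described above. FakeTensor is a made-up stand-in for torch.Tensor (so this runs without PyTorch installed); with the real library you would call .detach() on each tensor in the hidden (h, c) tuples after optimizer.step():

```python
# Illustrative sketch only: FakeTensor stands in for torch.Tensor.
# detach() returns a value with no autograd history, so calling it on
# every hidden (h, c) tuple after optimizer.step() stops the graph
# from growing longer (and slower) each epoch.
class FakeTensor:
    def __init__(self, history=0):
        self.history = history               # stands in for graph depth

    def step(self):
        return FakeTensor(self.history + 1)  # each forward pass deepens it

    def detach(self):
        return FakeTensor(0)                 # detached: nothing to backprop

def detach_hidden(hidden):
    """Detach an (h, c) hidden tuple, as init_hidden() produces."""
    return tuple(t.detach() for t in hidden)

# Without detaching, history keeps growing epoch after epoch;
# with detach_hidden() after each step it stays bounded.
hidden = (FakeTensor(), FakeTensor())
for epoch in range(100):
    hidden = tuple(t.step() for t in hidden)  # forward pass deepens graph
    hidden = detach_hidden(hidden)            # cut the graph here
```

With real PyTorch the same idea is `self.hidden1 = tuple(t.detach() for t in self.hidden1)` for each hidden attribute.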
Pyenv does not show all versions when used with sudo. I installed the 3.5.2 and 3.5.3 versions using pyenv.

# pyenv versions
* system (set by /usr/local/pyenv/version)
  3.5.2
  3.5.3

But when I run this command with sudo (not logged in as root), it does not give me all versions.

$ sudo /usr/local/bin/pyenv versions
* system (set by /root/.pyenv/version)

I tried setting the PYENV_ROOT path, but that does not work either.

$ export PYENV_ROOT=/usr/local/pyenv/
$ sudo /usr/local/pyenv/bin/pyenv versions
* system (set by /root/.pyenv/version)

I already have the path set in .bash_profile for my user:

$ cat ~/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export PYENV_ROOT=/usr/local/pyenv/
export PATH="/usr/local/pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

It is also set for the root user:

$ sudo cat /root/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export PYENV_ROOT=/usr/local/pyenv/
export PATH="/usr/local/pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

I am using CentOS:

$ cat /etc/issue
CentOS release 6.9 (Final)
$ export PYENV_ROOT=/usr/local/pyenv/$ sudo /usr/local/pyenv/bin/pyenv versionsThis doesn't work because PYENV_ROOT won't be passed to the environment in sudo.Try this:$ sudo PYENV_ROOT=/usr/local/pyenv/ /usr/local/pyenv/bin/pyenv versionsor this:$ export PYENV_ROOT=/usr/local/pyenv/$ sudo -E /usr/local/pyenv/bin/pyenv versions-E will make the environment variables pass to pyenv. In the man page of sudo: -E, --preserve-env Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment. --preserve-env=list Indicates to the security policy that the user wishes to add the comma-separated list of environment variables to those preserved from the user's environment. The security policy may return an error if the user does not have permission to preserve the environment.The .bash_profile in root doesn't work because sudo won't load it in this case. You can refer to this if you prefer to write the config in .bash_profile.
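The underlying behaviour can be reproduced without sudo at all. A small sketch using `env -i`, which (like a default sudo) scrubs the child process's environment, so only variables passed explicitly survive:

```shell
# env -i mimics sudo's default environment scrubbing: the child process
# only sees variables handed to it explicitly on its command line.
export PYENV_ROOT=/usr/local/pyenv/

scrubbed=$(env -i sh -c 'echo "[$PYENV_ROOT]"')                        # variable is gone
passed=$(env -i PYENV_ROOT="$PYENV_ROOT" sh -c 'echo "[$PYENV_ROOT]"') # variable survives

echo "scrubbed: $scrubbed"
echo "passed:   $passed"
```

This is exactly why `sudo PYENV_ROOT=... pyenv versions` or `sudo -E` works while a plain exported variable does not.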
Paramiko Expect - Tailing. I am trying to tail a log file, and it works. But I also need to be able to analyze the output and check the log for errors and such. I am using the base example on the Paramiko-expect GitHub page and I cannot figure out how to do this.

import traceback
import paramiko
from paramikoe import SSHClientInteraction

def main():
    # Set login credentials and the server prompt
    hostname = 'server'
    username = 'username'
    password = 'xxxxxxxx'
    port = 22

    # Use SSH client to login
    try:
        # Create a new SSH client object
        client = paramiko.SSHClient()

        # Set SSH key parameters to auto accept unknown hosts
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

        # Connect to the host
        client.connect(hostname, port, username, password)

        # Create a client interaction class which will interact with the host
        interact = SSHClientInteraction(client, timeout=10, display=False)

        # Send the tail command
        interact.send('tail -f /var/log/log')

        # Now let the class tail the file for us
        interact.tail(line_prefix=hostname+': ')

    except KeyboardInterrupt:
        print 'Ctrl+C interruption detected, stopping tail'
    except Exception:
        traceback.print_exc()
    finally:
        try:
            client.close()
        except:
            pass

if __name__ == '__main__':
    main()

This works to the point of printing the logs in real time in the console it is run in, but from the way Paramiko-expect works I can't figure out how to iterate over the output and look for values.

Help?

https://github.com/fgimian/paramiko-expect

It would also help to be able to push the output of the logs to a local file for historical backups of events.

Update: So I see that the way this info is being displayed is using sys.stdout. I am not very seasoned on how to pull the output out of this in an iterable way, or how to modify this to a different type of output where it would still work.
I have tried emailing the creater of this module without much success.This module is so close to being a VERY powerful tool, if it I could figure out the ability of actually monitoring the output, it would make the module a very very good tool. Please help!Output section of Paramiko-Expect: Tail Function: # Read the output one byte at a time so we can detect \n correctly buffer = self.channel.recv(1) # If we have an empty buffer, then the SSH session has been closed if len(buffer) == 0: break # Strip all ugly \r (Ctrl-M making) characters from the current # read buffer = buffer.replace('\r', '') # Add the currently read buffer to the current line output current_line += buffer # Display the last read line in realtime when we reach a \n # character if current_line.endswith('\n'): if line_counter and line_prefix: sys.stdout.write(line_prefix) if line_counter: sys.stdout.write(current_line) sys.stdout.flush() line_counter += 1 current_line = ''Paramiko Expect Module:## Paramiko Expect## Written by Fotis Gimian# http://github.com/fgimian## This library works with a Paramiko SSH channel to provide native SSH# expect-like handling for servers. The library may be used to interact# with commands like 'configure' or Cisco IOS devices or with interactive# Unix scripts or commands.## You must have Paramiko installed in order to use this library.#import sysimport reimport socket# Windows does not have termiostry: import termios import tty has_termios = Trueexcept ImportError: import threading has_termios = Falseimport selectclass SSHClientInteraction: """This class allows an expect-like interface to Paramiko which allows coders to interact with applications and the shell of the connected device. """ def __init__(self, client, timeout=60, newline='\r', buffer_size=1024, display=False): """The constructor for our SSHClientInteraction class. 
Arguments: client -- A Paramiko SSHClient object Keyword arguments: timeout -- THe connection timeout in seconds newline -- The newline character to send after each command buffer_size -- The amount of data (in bytes) that will be read at a time after a command is run display -- Whether or not the output should be displayed in real-time as it is being performed (especially useful when debugging) """ self.channel = client.invoke_shell() self.newline = newline self.buffer_size = buffer_size self.display = display self.timeout = timeout self.current_output = '' self.current_output_clean = '' self.current_send_string = '' self.last_match = '' def __del__(self): """The destructor for our SSHClientInteraction class.""" self.close() def close(self): """Attempts to close the channel for clean completion.""" try: self.channel.close() except: pass def expect(self, re_strings=''): """This function takes in a regular expression (or regular expressions) that represent the last line of output from the server. The function waits for one or more of the terms to be matched. The regexes are matched using expression \n<regex>$ so you'll need to provide an easygoing regex such as '.*server.*' if you wish to have a fuzzy match. Keyword arguments: re_strings -- Either a regex string or list of regex strings that we should expect. If this is not specified, then EOF is expected (i.e. the shell is completely closed after the exit command is issued) Returns: - EOF: Returns -1 - Regex String: When matched, returns 0 - List of Regex Strings: Returns the index of the matched string as an integer """ # Set the channel timeout self.channel.settimeout(self.timeout) # Create an empty output buffer self.current_output = '' # This function needs all regular expressions to be in the form of a # list, so if the user provided a string, let's convert it to a 1 # item list. 
if len(re_strings) != 0 and isinstance(re_strings, str): re_strings = [re_strings] # Loop until one of the expressions is matched or loop forever if # nothing is expected (usually used for exit) while ( len(re_strings) == 0 or not [re_string for re_string in re_strings if re.match('.*\n' + re_string + '$', self.current_output, re.DOTALL)] ): # Read some of the output buffer = self.channel.recv(self.buffer_size) # If we have an empty buffer, then the SSH session has been closed if len(buffer) == 0: break # Strip all ugly \r (Ctrl-M making) characters from the current # read buffer = buffer.replace('\r', '') # Display the current buffer in realtime if requested to do so # (good for debugging purposes) if self.display: sys.stdout.write(buffer) sys.stdout.flush() # Add the currently read buffer to the output self.current_output += buffer # Grab the first pattern that was matched if len(re_strings) != 0: found_pattern = [(re_index, re_string) for re_index, re_string in enumerate(re_strings) if re.match('.*\n' + re_string + '$', self.current_output, re.DOTALL)] self.current_output_clean = self.current_output # Clean the output up by removing the sent command if len(self.current_send_string) != 0: self.current_output_clean = ( self.current_output_clean.replace( self.current_send_string + '\n', '')) # Reset the current send string to ensure that multiple expect calls # don't result in bad output cleaning self.current_send_string = '' # Clean the output up by removing the expect output from the end if # requested and save the details of the matched pattern if len(re_strings) != 0: self.current_output_clean = ( re.sub(found_pattern[0][1] + '$', '', self.current_output_clean)) self.last_match = found_pattern[0][1] return found_pattern[0][0] else: # We would socket timeout before getting here, but for good # measure, let's send back a -1 return -1 def send(self, send_string): """Saves and sends the send string provided""" self.current_send_string = send_string 
self.channel.send(send_string + self.newline) def tail(self, line_prefix=None): """This function takes control of an SSH channel and displays line by line of output as \n is recieved. This function is specifically made for tail-like commands. Keyword arguments: line_prefix -- Text to append to the left of each line of output. This is especially useful if you are using my MultiSSH class to run tail commands over multiple servers. """ # Set the channel timeout to the maximum integer the server allows, # setting this to None breaks the KeyboardInterrupt exception and # won't allow us to Ctrl+C out of teh script self.channel.settimeout(sys.maxint) # Create an empty line buffer and a line counter current_line = '' line_counter = 0 # Loop forever, Ctrl+C (KeyboardInterrupt) is used to break the tail while True: # Read the output one byte at a time so we can detect \n correctly buffer = self.channel.recv(1) # If we have an empty buffer, then the SSH session has been closed if len(buffer) == 0: break # Strip all ugly \r (Ctrl-M making) characters from the current # read buffer = buffer.replace('\r', '') # Add the currently read buffer to the current line output current_line += buffer # Display the last read line in realtime when we reach a \n # character if current_line.endswith('\n'): if line_counter and line_prefix: sys.stdout.write(line_prefix) if line_counter: sys.stdout.write(current_line) sys.stdout.flush() line_counter += 1 current_line = '' def take_control(self): """This function is a better documented and touched up version of the posix_shell function found in the interactive.py demo script that ships with Paramiko""" if has_termios: # Get attributes of the shell you were in before going to the # new one original_tty = termios.tcgetattr(sys.stdin) try: tty.setraw(sys.stdin.fileno()) tty.setcbreak(sys.stdin.fileno()) # We must set the timeout to 0 so that we can bypass times when # there is no available text to receive self.channel.settimeout(0) # Loop forever 
until the user exits (i.e. read buffer is empty) while True: select_read, select_write, select_exception = ( select.select([self.channel, sys.stdin], [], [])) # Read any output from the terminal and print it to the # screen. With timeout set to 0, we just can ignore times # when there's nothing to receive. if self.channel in select_read: try: buffer = self.channel.recv(self.buffer_size) if len(buffer) == 0: break sys.stdout.write(buffer) sys.stdout.flush() except socket.timeout: pass # Send any keyboard input to the terminal one byte at a # time if sys.stdin in select_read: buffer = sys.stdin.read(1) if len(buffer) == 0: break self.channel.send(buffer) finally: # Restore the attributes of the shell you were in termios.tcsetattr(sys.stdin, termios.TCSADRAIN, original_tty) else: def writeall(sock): while True: buffer = sock.recv(self.buffer_size) if len(buffer) == 0: break sys.stdout.write(buffer) sys.stdout.flush() writer = threading.Thread(target=writeall, args=(self.channel,)) writer.start() try: while True: buffer = sys.stdin.read(1) if len(buffer) == 0: break self.channel.send(buffer) # User has hit Ctrl+Z or F6 except EOFError: pass
I'm the author of paramiko-expect.I've implemented a new feature in 0.2 of my module just now which will allow you to specify a callback to the tail method so that you can process the current line as you like. You may use this to grep the output or process it further before displaying it. It is expected that the callback function will return the string that is to be sent to sys.stdout.write after it has finished manipulating the current line.Here's an example:import tracebackimport paramikofrom paramikoe import SSHClientInteractiondef process_tail(line_prefix, current_line): if current_line.startswith('hello'): return current_line else: return ''def main(): # Set login credentials and the server prompt hostname = 'localhost' username = 'fots' password = 'password123' port = 22 # Use SSH client to login try: # Create a new SSH client object client = paramiko.SSHClient() # Set SSH key parameters to auto accept unknown hosts client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # Connect to the host client.connect(hostname, port, username, password) # Create a client interaction class which will interact with the host interact = SSHClientInteraction(client, timeout=10, display=False) # Send the tail command interact.send('tail -f /home/fots/something.log') # Now let the class tail the file for us interact.tail(line_prefix=hostname+': ', callback=process_tail) except KeyboardInterrupt: print 'Ctrl+C interruption detected, stopping tail' except Exception: traceback.print_exc() finally: try: client.close() except: passif __name__ == '__main__': main()
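For the historical-backup wish in the question, the callback can also append every line to a local file before deciding what to display. A sketch (the ERROR filter and the file name are my own illustrative choices, not part of paramiko-expect; the `(line_prefix, current_line)` signature follows the 0.2 callback shown above):

```python
# Sketch of a tail callback that both filters and archives output.
# LOG_PATH and the 'ERROR' filter are illustrative choices of my own.
LOG_PATH = 'remote_tail_backup.log'

def process_tail(line_prefix, current_line):
    # Keep a local historical copy of every line, filtered or not
    with open(LOG_PATH, 'a') as f:
        f.write(current_line)
    # Only echo lines containing ERROR back to the console
    if 'ERROR' in current_line:
        return (line_prefix or '') + current_line
    return ''
```

Passed as `interact.tail(line_prefix=hostname+': ', callback=process_tail)`, this displays only error lines while the full stream accumulates in the local file.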
Django Postgres django.db.utils.ProgrammingError I know similar questions have been asked many times, but none of those solutions have worked. Recently I have been trying to connect to an AWS RDS database. However, now whenever I try to run a server through manage.py migrate my database I always get the following:Traceback (most recent call last): File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute return self.cursor.execute(sql, params)psycopg2.ProgrammingError: relation "employee_employeeprofile" does not existLINE 1: ..."employee_employeeprofile"."additional_info" FROM "employee_... ^The above exception was the direct cause of the following exception:Traceback (most recent call last): File "manage.py", line 14, in <module> execute_from_command_line(sys.argv) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line utility.execute() File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/core/management/__init__.py", line 327, in execute django.setup() File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/apps/registry.py", line 115, in populate app_config.ready() File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/debug_toolbar/apps.py", line 15, in ready dt_settings.patch_all() File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/debug_toolbar/settings.py", line 228, in patch_all patch_root_urlconf() File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/debug_toolbar/settings.py", line 216, in patch_root_urlconf reverse('djdt:render_panel') File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/core/urlresolvers.py", line 568, in reverse app_list = resolver.app_dict[ns] 
File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/core/urlresolvers.py", line 360, in app_dict self._populate() File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/core/urlresolvers.py", line 293, in _populate for pattern in reversed(self.url_patterns): File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/utils/functional.py", line 33, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/core/urlresolvers.py", line 417, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/utils/functional.py", line 33, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/core/urlresolvers.py", line 410, in urlconf_module return import_module(self.urlconf_name) File "/home/somil/Documents/Twine/venv/lib/python3.4/importlib/__init__.py", line 109, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 2254, in _gcd_import File "<frozen importlib._bootstrap>", line 2237, in _find_and_load File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked File "<frozen importlib._bootstrap>", line 1129, in _exec File "<frozen importlib._bootstrap>", line 1471, in exec_module File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed File "/home/somil/Documents/Twine/mobility/mobility/urls.py", line 5, in <module> from mobility.apps.iam import urls as iam_urls File "/home/somil/Documents/Twine/mobility/mobility/apps/iam/urls.py", line 3, in <module> from . 
import views File "/home/somil/Documents/Twine/mobility/mobility/apps/iam/views.py", line 7, in <module> from ..employee.views import employee_profile File "/home/somil/Documents/Twine/mobility/mobility/apps/employee/views.py", line 261, in <module> class EmployeeFilter(django_filters.FilterSet): File "/home/somil/Documents/Twine/mobility/mobility/apps/employee/views.py", line 267, in EmployeeFilter functional_area_name = django_filters.MultipleChoiceFilter(name ="functional_area_name", choices = get_func_names()) File "/home/somil/Documents/Twine/mobility/mobility/apps/employee/views.py", line 226, in get_func_names for e in EmployeeProfile.objects.all(): File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/db/models/query.py", line 258, in __iter__ self._fetch_all() File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/db/models/query.py", line 1074, in _fetch_all self._result_cache = list(self.iterator()) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/db/models/query.py", line 52, in __iter__ results = compiler.execute_sql() File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql cursor.execute(sql, params) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/db/backends/utils.py", line 79, in execute return super(CursorDebugWrapper, self).execute(sql, params) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute return self.cursor.execute(sql, params) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/db/utils.py", line 95, in __exit__ six.reraise(dj_exc_type, dj_exc_value, traceback) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise raise value.with_traceback(tb) File "/home/somil/Documents/Twine/venv/lib/python3.4/site-packages/django/db/backends/utils.py", 
line 64, in execute return self.cursor.execute(sql, params)django.db.utils.ProgrammingError: relation "employee_employeeprofile" does not existLINE 1: ..."employee_employeeprofile"."additional_info" FROM "employee_...I have tried deleting all migrations and trying to make them again but this error persists. I also get this error on any new local databases I create. Any suggestions? I feel like I get this error often and the only way I can easily fix it is to delete the DB and create it again, but this time that doesn't even work?
You're calling your get_func_names() function, which queries the EmployeeProfile model, at class level in your EmployeeFilter class. This means it is called at import time. Since the views are imported on startup, your migrations have not yet had a chance to run.You should not call any code that accesses the database at import time.
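The general fix, sketched with Django stripped out: keep the expensive lookup behind a callable and only invoke it when it is actually needed, never in a class body that executes at import time. (Django's form fields also accept a callable for `choices`, which defers evaluation the same way, but check your django-filter version's docs before relying on that.)

```python
# Minimal illustration of the bug and the fix, without Django.
# `query_log` stands in for hitting the database.
query_log = []

def get_func_names():
    query_log.append('SELECT ...')      # pretend this queries EmployeeProfile
    return [('eng', 'Engineering')]

# Buggy pattern (commented out): the call in the class body runs at
# import time, before migrations have had a chance to create the table.
#     class EmployeeFilter(django_filters.FilterSet):
#         functional_area_name = django_filters.MultipleChoiceFilter(
#             choices=get_func_names())      # <- evaluated on import!

# Fixed pattern: store the callable itself; evaluate it lazily on use.
class LazyChoices:
    get_choices = staticmethod(get_func_names)
```

Merely importing this module (or defining LazyChoices) runs no query; the query happens only when `LazyChoices.get_choices()` is finally called, e.g. inside a view or a form `__init__`.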
marisa trie suffix compression? I'm using a custom Cython wrapper of this marisa trie library as a key-value multimap.

My trie entries look like key 0xff data1 0xff data2, to map key to the tuple (data1, data2). data1 is a string of variable length, but data2 is always a 4-byte unsigned int. The 0xff is a delimiter byte.

I know a trie is not the most optimal data structure for this from a theoretical point of view, but various practical considerations make it the best available choice.

In this use case, I have about 10-20 million keys, each with on average 10 data points. data2 is redundant for many entries (in some cases, data2 is always the same for all data points for a given key), so I had the idea of taking the most frequent data2 entry and adding a ("", base_data2) data point to each key.

Since a MARISA trie, to my knowledge, does not have suffix compression, and for a given key each data1 is unique, I assumed that this would save 4 bytes per data tuple that uses a redundant key (plus adding a single 4-byte "value" for each key). Having rebuilt the trie, I checked that the redundant data was no longer being stored. I expected a sizable decrease in both serialized and in-memory size, but in fact the on-disk trie only went from 566MB to 557MB (with a similar reduction in RAM usage for a loaded trie). From this I concluded that I must be wrong about there being no suffix compression.

I was now storing the entries with a redundant data2 number as key 0xff data1 0xff, so to test this theory I removed the trailing 0xff and adjusted the code that uses the trie to cope.
The new trie went down from 557MB to 535MB.So removing a single redundant trailing byte made a 2x larger improvement than removing the same number of 4-byte sequences, so either the suffix compression theory is dead wrong, or it's implemented in some very convoluted way.My remaining theory is that adding in the ("", base_data2) entry at a higher point in the trie somehow throws off the compression in some terrible way, but it should just be adding in 4 more bytes when I've removed many more than that from lower down in the trie.I'm not optimistic for a fix, but I'd dearly like to know why I'm seeing this behavior! Thank you for your attention.
As I suspected, it's caused by padding.

In lib/marisa/grimoire/vector/vector.h, there is the following function:

void write_(Writer &writer) const {
  writer.write((UInt64)total_size());
  writer.write(const_objs_, size_);
  writer.seek((8 - (total_size() % 8)) % 8);
}

The key point is writer.seek((8 - (total_size() % 8)) % 8);. After writing each chunk, the writer pads to the next 8-byte boundary. This explains the behavior you are seeing: part of the data removed by the initial shortening of the key was replaced with padding. When you removed the extra byte, it brought the key size below the next boundary limit, resulting in a major size change.

Practically, this means that since the padding code is in the serialization part of the library, you are probably getting the in-memory savings you were expecting, but they did not translate into on-disk savings. Monitoring program RAM usage should confirm that.

If disk size is your concern, then you might as well simply deflate the serialized data, as it seems MARISA does not apply any compression whatsoever.
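The arithmetic makes the asymmetry concrete. A tiny sketch of the on-disk size formula implied by write_():

```python
def padded_size(total_size):
    # mirrors writer.seek((8 - (total_size % 8)) % 8) in vector.h:
    # every chunk is rounded up to the next 8-byte boundary
    return total_size + (8 - (total_size % 8)) % 8

# Removing bytes only shrinks the file when a chunk crosses a boundary:
# a 9-byte chunk and a 16-byte chunk both serialize to 16 bytes, so
# dropping one byte from 9 saves 8 on disk, while dropping one from 16
# saves nothing.
```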
folium.GeoJson (style function) not working as I want.

import folium, pandas, json

df = pandas.read_csv('Volcanoes_2.txt')

def colors(elev):
    minimum = int(min(df['ELEV']))
    step = int(max((df['ELEV']) - min(df['ELEV'])) / 3)
    if elev in range(minimum, minimum + step):
        col = "green"
    elif elev in range(minimum + step, minimum + step * 2):
        col = "orange"
    else:
        col = "red"
    return col

map_1 = folium.Map(location=[df['LAT'].mean(), df['LON'].mean()],
                   zoom_start=6, tiles='mapbox bright')

for name, lon, lat, elev in zip(df['NAME'], df['LON'], df['LAT'], df['ELEV']):
    folium.Marker([lat, lon], popup=name,
                  icon=folium.Icon(color=colors(elev))).add_to(map_1)

folium.GeoJson(open('world_geojson.json'),
               name='geojson',
               style_function=lambda x: {'fillcolor': 'green' if
                   x['properties']['POP2005'] < 10000000
                   else 'orange' if 10000000 < x['properties']['POP2005'] > 20000000
                   else 'red'},
               ).add_to(map_1)

folium.LayerControl().add_to(map_1)
map_1.save("map.html")

This is the map file: https://github.com/xxspider4/new_repo/blob/master/map.html
This is the json file: https://github.com/xxspider4/new_repo/blob/master/world_geojson.json
You were very close. I was able to get it to work by changing fillcolor to fillColor in your style functionlambda x :{'fillColor':'green' if \ x['properties']['POP2005']<10000000 \ else 'orange' if 10000000 <x['properties']['POP2005']>20000000 else 'red'}
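One thing the key-name fix does not touch: the chained comparison `10000000 < pop > 20000000` is only true when pop exceeds both bounds, i.e. exceeds 20 million, so the 'orange' band never covers 10-20M as presumably intended. A sketch of a plain-function version, assuming the same POP2005 property as the question's GeoJSON:

```python
# Sketch of a clearer style function. POP2005 is the property name from
# the question's GeoJSON; the band boundaries are the question's own.
def style_function(feature):
    pop = feature['properties']['POP2005']
    if pop < 10000000:
        color = 'green'
    elif pop <= 20000000:
        color = 'orange'
    else:
        color = 'red'
    return {'fillColor': color}

# folium.GeoJson(..., style_function=style_function) would use it directly.
```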
Python string format with incorrect values. I have a string that I'd like to format, but the values I'm using to format may or may not be proper values (None, or ''). In the event that one of these improper values is passed, I still want the string to format, ignoring any values that will not work. For example:

mystring = "{:02d}:{:02d}"
mystring.format('', 1)

In this case I'd like my output to be :01, thus negating the fact that '' won't work for the first value in the string. I looked at something like

class Default(dict):
    def __missing__(self, key):
        return key.join("{}")

d = Default({"foo": "name"})
print("My {foo} is {bar}".format_map(d))  # "My name is {bar}"

But as I'm not using a dictionary for values, I don't think this solution will work for me.
You could write your own formatter and override format_field() to catch these errors and just return empty strings. Here are the basics (you might want to edit it to only catch certain errors):

import string

class myFormat(string.Formatter):
    def format_field(self, value, format_spec):
        try:
            return super().format_field(value, format_spec)
        except ValueError:
            return ''

fmt = myFormat()
mystring = "{:02d}:{:02d}"

print(fmt.format(mystring, *(2, 1)))
# 02:01
print(fmt.format(mystring, *('', 1)))
# :01
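Since the question also mentions None, note that format(None, '02d') raises TypeError rather than ValueError, so catching both exceptions covers both improper values. A sketch extending the same idea:

```python
import string

class SafeFormat(string.Formatter):
    # '' raises ValueError for the 'd' format code; None raises TypeError.
    # Catch both so either improper value is rendered as an empty string.
    def format_field(self, value, format_spec):
        try:
            return super().format_field(value, format_spec)
        except (ValueError, TypeError):
            return ''

fmt = SafeFormat()
```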
Python iteration won't change. I want to "reset" my for loop, so I thought I had to set my iterator i to 0, but in the next step the loop just continues with 1 and so on. How do I "reset" my for loop? g is just a list with 2 items and S is a string; ignore all that. It's just this iteration that won't work.

for i in range(0, len(g)):
    print("gi = " + g[i])
    print(i)
    if g[i] in S:
        S = S.replace(g[i], "", 1)
        print(S)
        i = 0
        output += 1
You can use a while loop:

S = '1112'
g = ['1', '2']

i = 0
output = 0
while i < len(g):
    if g[i] in S:
        S = S.replace(g[i], "", 1)
        print(S)
        i = 0
        output += 1
    else:
        i += 1

print('Is S empty?', S == '')

Output:

112
12
2

Is S empty? True
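The same restart-from-the-beginning behaviour can also be written without manual index bookkeeping; a sketch:

```python
# Equivalent sketch: instead of resetting i to 0, break out of the inner
# scan after each removal and let the outer while loop restart it.
def strip_all(S, g):
    output = 0
    changed = True
    while changed:
        changed = False
        for ch in g:
            if ch in S:
                S = S.replace(ch, "", 1)
                output += 1
                changed = True
                break          # restart scanning g from the beginning
    return S, output
```

For the question's data, `strip_all('1112', ['1', '2'])` removes all four characters, just like the index-resetting version.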
How do I extract two things using Regex in Python? I have a string that contains two things: the names of the petitioners and the advocate. I want to separate the petitioner names from the advocate names. All petitioner names start with a number (1-) and the advocate name starts with Advocate-.

1) RAM PRASAD\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0Advocate- ADITYA PRASAD MISHRA, A.P. MISHRA

This is another string:

1) KALAICHELVI Advocate - NOTICE ORDER R1 ONLY, -------------------------------, R1 - TAPAL RETURNED, NOT KNOWN 2) KALIMUTHU 3) RAMACHANDRA GOAUNER 4) SETHU AMMAL 5) SOMU GOUNDER 6) SOMASUNDAR A GOUNDER 7) KARUNANITHI 8) LALAITHAMMAL 9) JEGANNATHA GOUNDER

I tried re.split(r'[ ]\xa0\xa0(?=[0-9]+\b)', s), but it only works when the advocate name isn't present. How do I do this?
If you want to find two distinct things and plan to use regular expressions, it is almost always a good idea to use two distinct expressions instead of one. For examplepetitioner_re = re.compile(r"\d+\) ([A-Z ]+)") # matches petitionersadvocate_re = re.compile(r"Advocate - ([^\n]+)") # matches advocatesGiven your example input, you can apply re.finditer for petitioners and re.search for advocatescontent = """1) KALAICHELVIAdvocate - NOTICE ORDER R1 ONLY, -------------------------------, R1 - TAPAL RETURNED, NOT KNOWN2) KALIMUTHU 3) RAMACHANDRA GOAUNER 4) SETHU AMMAL 5) SOMU GOUNDER 6) SOMASUNDAR A GOUNDER 7) KARUNANITHI 8) LALAITHAMMAL 9) JEGANNATHA GOUNDER"""petitioners = [p.group(1).strip() for p in petitioner_re.finditer(content)]advocate = advocate_re.search(content)Which gives the following resultprint(petitioners)['KALAICHELVI', 'KALIMUTHU', 'RAMACHANDRA GOAUNER', 'SETHU AMMAL', 'SOMU GOUNDER', 'SOMASUNDAR A GOUNDER', 'KARUNANITHI', 'LALAITHAMMAL', 'JEGANNATHA GOUNDER']print(advocate)'NOTICE ORDER R1 ONLY, -------------------------------, R1 - TAPAL RETURNED, NOT KNOWN'If you have multiple advocates per entry and want to find all of them, they'll need to be fetched with re.finditer as well.
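One caveat: the first sample in the question has no space before the hyphen (Advocate- ADITYA ...), which the pattern above, with its fixed spaces, would miss. A slightly looser version tolerating either spacing:

```python
import re

# \s* on both sides of the hyphen accepts "Advocate- X" and "Advocate - X"
advocate_re = re.compile(r"Advocate\s*-\s*([^\n]+)")
```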
Adding labels in scatter plot legend. I am trying to add legend labels to my scatter plot for my physics lab report. It seems to only display the first label (in this case "Actual") and nothing else. The plot also saves as an empty file.

import matplotlib.pyplot as plt
import numpy as np

IndexofR = [1.33, 1.443, 1.34]  # Actual, Pfund's Method, Snell's Law
Colors = ['red', 'blue', 'green']
Labels = ['Actual', 'Pfund\'s Method', 'Snell\'s Law']

plt.scatter(IndexofR, np.zeros_like(IndexofR), c=['red', 'blue', 'green'], vmin=-2)
plt.yticks([])
plt.xlabel('Index of Refraction')
plt.legend(Labels, loc=1)
plt.title('Actual and Calculated Indexes of Refraction in Tap Water')
plt.show()
plt.savefig('LineGraphLab2.pdf')

I would also like to make the whole plot shorter (it is tall for the small amount of data).
Try doing something like this:

import matplotlib.pyplot as plt
import numpy as np

IndexofR = [1.33, 1.443, 1.34]  # Actual, Pfund's Method, Snell's Law
Colors = ['red', 'blue', 'green']
Labels = ['Actual', 'Pfund\'s Method', 'Snell\'s Law']

for i, c, l in zip(IndexofR, Colors, Labels):
    plt.scatter(i, np.zeros_like(i), c=c, vmin=-2, label=l)

plt.yticks([])
plt.xlabel('Index of Refraction')
plt.legend(loc=1)
plt.title('Actual and Calculated Indexes of Refraction in Tap Water')
plt.savefig('LineGraphLab2.pdf')  # save before show(), otherwise the file is empty
plt.show()

Note that savefig must be called before show: once the plot window is closed, the figure is cleared and a later savefig writes a blank file, which is why you were getting an empty PDF.
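On the follow-up about making the plot shorter: the figure height is controlled by figsize (width, height in inches). A minimal sketch with the same data, using the headless Agg backend so it also runs without a display:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this line for interactive use
import matplotlib.pyplot as plt

IndexofR = [1.33, 1.443, 1.34]
Colors = ['red', 'blue', 'green']
Labels = ['Actual', "Pfund's Method", "Snell's Law"]

fig, ax = plt.subplots(figsize=(6, 2))  # 6in wide, 2in tall -> short and wide
for x, c, l in zip(IndexofR, Colors, Labels):
    ax.scatter(x, 0, color=c, label=l)
ax.set_yticks([])
ax.set_xlabel('Index of Refraction')
ax.legend(loc=1)
fig.savefig('LineGraphLab2.pdf')
```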
Is there a way to zoom into the next node in the 'setSelected' list? In The Foundry NukeX I'm trying to find a list of nodes of the same kind and zoom into each of the .setSelected nodes one after the other. To be clear, I'm trying to create Python code that does what the Edit -> Search... menu or the / hotkey does in NUKE. With the script below it only zooms into the first node of the .setSelected list. Is there a way to increment the zoom to the next node every time I execute this code?

for w in nuke.allNodes('Transform'):
    w.setSelected(True)
    xC = w.xpos + w.screenWidth()/2
    yC = w.ypos + w.screenHeight()/2
    nuke.zoom(3, [xC, yC])
You need a nested for-in loop to make iterations inside a desired class. Here's how your code could look:

import nuke

for node in nuke.allNodes('Grade'):
    node.setSelected(True)
    for id in nuke.selectedNodes():
        xCoord = id.xpos() + id.screenWidth()/2
        yCoord = id.ypos() + id.screenHeight()/2
        nuke.zoom(5, [xCoord, yCoord])
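If the goal is to jump to the *next* node on each invocation (rather than looping over all of them at once), one pattern is to keep a cycling iterator around between calls. A minimal stand-in sketch: the centres list and zoom_next helper are hypothetical; inside Nuke you would build the list from nuke.allNodes('Transform') and make the real nuke.zoom call inside zoom_next:

```python
import itertools

# Hypothetical node centres; in Nuke, compute these from
# n.xpos() + n.screenWidth()/2 and n.ypos() + n.screenHeight()/2.
centres = [(10, 20), (30, 40), (50, 60)]
cycler = itertools.cycle(centres)

def zoom_next():
    x, y = next(cycler)    # advance to the next node on every call
    # nuke.zoom(3, [x, y]) # the real call inside Nuke
    return (x, y)

print(zoom_next())  # (10, 20)
print(zoom_next())  # (30, 40)
```

Because itertools.cycle wraps around, repeated executions keep stepping through the nodes indefinitely.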
Regex: string between a mix of constant and variable characters I have a string (fulltext). It's made of a part that is the name of a built-in function and a second part that is the description. I want to extract the description, i.e. the part of the text that is between \rPython *function_name*()\r and the next \r, so the outcome would be "returns class method for given function". I've tried r'(?<=\\rPython .()\\r)(.*?)(?=\\r)' but it doesn't show any result found and I don't know why.

#find description
fulltext = r'\rPython classmethod()\rreturns class method for given function\r'
description_regex = re.compile(r'(?<=\\rPython .()\\r)(.*?)(?=\\r)')
description = description_regex.search(fulltext)
print(description.group())
Your pattern fails for a few reasons: `.` matches exactly one character, so it cannot cover a whole name like classmethod; the `()` is unescaped, so it is parsed as an empty group rather than literal parentheses (you would need \(\)); and a lookbehind has to be fixed-width anyway. A plain capture group is simpler. We can try using re.findall here:

import re

input = "\rPython classmethod()\rreturns class method for given function\r"
matches = re.findall(r'\rPython\s+[^()]+\(\)\r(.*)\r', input)
print(matches)

This prints:

['returns class method for given function']

Using re.findall makes sense if you have a text which you might expect to have more than one possible match.
How to use webservices with https in python I have used the Python code below for web services, but my site url uses https. If I use http then I get a "Method Not Found" error when the web service runs, and if I use https then this code does not work. Can anyone let me know how to resolve this issue with https?

import urllib

uid = username
pwd = psw
params = urllib.urlencode({
    'uid': uid,
    'pwd': pwd,
})
data = urllib.urlopen(url, params).read()
print data

Or tell me any other method to use web services. Thanks in advance!
filmor is right. Try to use requests. An example function:

import logging
import httplib

import requests
from requests.auth import HTTPBasicAuth
from requests.exceptions import HTTPError

LOGGER = logging.getLogger(__name__)

def http_get(url, user=None, password=None, proxies=None,
             valid_response_status=(httplib.OK,), **kwargs):
    """Performs a http get over an url

    @param url: the url to perform get against
    @type url: str
    @param user: user if authentication required
    @type user: str
    @param password: password if authentication required
    @type password: str
    @param proxies: proxies if required
    @type proxies: dict
    @param valid_response_status: http response status codes considered as valid
    @type valid_response_status: list of int
    @raise exception: if the response status code is not in valid_response_status
    """
    auth = HTTPBasicAuth(username=user, password=password)
    kwargs['proxies'] = proxies
    kwargs['auth'] = auth
    try:
        LOGGER.debug("Performing http get over : %s" % url)
        _request = requests.get(url, **kwargs)
        if _request.status_code not in valid_response_status:
            http_error_msg = '%s Error: %s' % (_request.status_code, _request.reason)
            http_error = HTTPError(http_error_msg)
            http_error.response = _request
            raise http_error
        return _request.content
    except Exception:
        LOGGER.error("Error while performing http get over : %s" % url)
        raise
can't pip install mysql-python I'm trying to get django/pip/mysql working and i can't seem to figure out how to install mysql-python. this is the error i receive when trying to install mysql-pythonpip install mysql-pythonDownloading/unpacking mysql-python Downloading MySQL-python-1.2.4.zip (113kB): 113kB downloaded Running setup.py egg_info for package mysql-python Downloading http://pypi.python.org/packages/source/d/distribute/distribute-0.6.28.tar.gz Extracting in /tmp/tmp5jjdpf Now working in /tmp/tmp5jjdpf/distribute-0.6.28 Building a Distribute egg in /home/brian/flaskapp/build/mysql-python /home/brian/flaskapp/build/mysql-python/distribute-0.6.28-py2.7.eggInstalling collected packages: mysql-python Running setup.py install for mysql-python building '_mysql' extension x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Dversion_info=(1,2,4,'final',1) -D__version__=1.2.4 -I/usr/include/mysql -I/usr/include/python2.7 -c _mysql.c -o build/temp.linux-x86_64-2.7/_mysql.o -DBIG_JOINS=1 -fno-strict-aliasing -g -DNDEBUG _mysql.c:29:20: fatal error: Python.h: No such file or directory compilation terminated. 
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 Complete output from command /home/brian/flaskapp/bin/python -c "import setuptools;__file__='/home/brian/flaskapp/build/mysql-python/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-Ur7r16-record/install-record.txt --single-version-externally-managed --install-headers /home/brian/flaskapp/include/site/python2.7: running installrunning buildrunning build_pycreating buildcreating build/lib.linux-x86_64-2.7copying _mysql_exceptions.py -> build/lib.linux-x86_64-2.7creating build/lib.linux-x86_64-2.7/MySQLdbcopying MySQLdb/__init__.py -> build/lib.linux-x86_64-2.7/MySQLdbcopying MySQLdb/converters.py -> build/lib.linux-x86_64-2.7/MySQLdbcopying MySQLdb/connections.py -> build/lib.linux-x86_64-2.7/MySQLdbcopying MySQLdb/cursors.py -> build/lib.linux-x86_64-2.7/MySQLdbcopying MySQLdb/release.py -> build/lib.linux-x86_64-2.7/MySQLdbcopying MySQLdb/times.py -> build/lib.linux-x86_64-2.7/MySQLdbcreating build/lib.linux-x86_64-2.7/MySQLdb/constantscopying MySQLdb/constants/__init__.py -> build/lib.linux-x86_64-2.7/MySQLdb/constantscopying MySQLdb/constants/CR.py -> build/lib.linux-x86_64-2.7/MySQLdb/constantscopying MySQLdb/constants/FIELD_TYPE.py -> build/lib.linux-x86_64-2.7/MySQLdb/constantscopying MySQLdb/constants/ER.py -> build/lib.linux-x86_64-2.7/MySQLdb/constantscopying MySQLdb/constants/FLAG.py -> build/lib.linux-x86_64-2.7/MySQLdb/constantscopying MySQLdb/constants/REFRESH.py -> build/lib.linux-x86_64-2.7/MySQLdb/constantscopying MySQLdb/constants/CLIENT.py -> build/lib.linux-x86_64-2.7/MySQLdb/constantsrunning build_extbuilding '_mysql' extensioncreating build/temp.linux-x86_64-2.7x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Dversion_info=(1,2,4,'final',1) -D__version__=1.2.4 -I/usr/include/mysql -I/usr/include/python2.7 -c _mysql.c -o build/temp.linux-x86_64-2.7/_mysql.o 
-DBIG_JOINS=1 -fno-strict-aliasing -g -DNDEBUG_mysql.c:29:20: fatal error: Python.h: No such file or directorycompilation terminated.error: command 'x86_64-linux-gnu-gcc' failed with exit status 1----------------------------------------Cleaning up...Command /home/brian/flaskapp/bin/python -c "import setuptools;__file__='/home/brian/flaskapp/build/mysql-python/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-Ur7r16-record/install-record.txt --single-version-externally-managed --install-headers /home/brian/flaskapp/include/site/python2.7 failed with error code 1 in /home/brian/flaskapp/build/mysql-pythonStoring complete log in /home/brian/.pip/pip.logGoogling reveals i need to install python-dev but whenever i try to install withsudo apt-get install python-devi get this error:E: Package 'python-dev' has no installation candidateI'm currently using linux mint 15 RC and i think that might be the issue...but i'm not sure. I'm out of ideas:(
The "no installation candidate" error usually means your package lists are stale or the python-dev metapackage is missing from your distribution, so refresh the lists first and then try the versioned dev package:

sudo apt-get update
sudo apt-get install python2.7-dev

To build mysql-python you will also need the MySQL client headers:

sudo apt-get install libmysqlclient-dev
local variable assigned in enclosing line referenced before assignment I have this bit of code:

health = 12
manager_health = 10

def manager_boss_fight_used_for_backup():
    sleep(1)
    print("manager health is ", manager_health)
    print("your health is ", health)
    sleep(1)
    print("the boss gets first attack")
    manager_boss_damage = random.randint(1,8)
    print(manager_boss_damage)
    print(health)
    health = health - manager_boss_damage
    sleep(1)
    print("the boss did", manager_boss_damage, "damage")
    sleep(1)
    print("your health is now", health)
    if health <= 0:
        sleep(1)
        print("you have died")
        sleep(1)
        print("better luck next time")
        exit()
    sleep(1)
    print("your turn to attack")
    sleep(1)
    heal_or_attack = input("Do you wish to heal or attack?(1/2)")
    if heal_or_attack == "1":
        healing = random.randint(1,7)
        print("you have healed by", healing)
        health = health + healing
        manager_boss_fight_used_for_backup()
    if heal_or_attack == "2":
        print("you attack the boss")
        sleep(1)
        attack_damage = random.randint(1,6)
        print("You did", attack_damage, "damage")
        manager_health = manager_health - attack_damage
        if manager_health <= 0:
            sleep(1)
            print("You have killed the boss")
            sleep(1)
        if manager_health > 0:
            manager_boss_fight_used_for_backup()

In the parts of the code like health = health - manager_boss_damage it will error out. I have fiddled with global variables and all that but I can't get it to work, so I came here. Any answers appreciated!
Whenever you make an assignment to health inside the manager_boss_fight_used_for_backup() function, Python treats health as a new local variable by default.

The fastest fix is to use the global keyword at the beginning of the function body, to make it clear that you're referring to the health variable outside of the function:

global health

The same applies to manager_health, which is also assigned inside the function. However, the best solution is to refactor this code so that functions don't modify a shared global state.
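A minimal sketch of the global fix, using a hypothetical take_hit helper in place of the full fight function:

```python
import random

health = 12

def take_hit():
    global health  # rebind the module-level name instead of creating a local
    health -= random.randint(1, 8)

take_hit()
print(health)  # somewhere between 4 and 11
```

Without the global statement, the `health -= ...` line raises UnboundLocalError, because the assignment makes health local to the function before it has any value.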
Test Intel Extension for Pytorch (IPEX) in multiple-choice from huggingface / transformers I am trying out one huggingface sample with the SWAG dataset: https://github.com/huggingface/transformers/tree/master/examples/pytorch/multiple-choice

I would like to use Intel Extension for Pytorch in my code to increase the performance. Here I am using the one without training (run_swag_no_trainer). In run_swag_no_trainer.py, I made some changes to use ipex.

#Code before changing is given below:
device = accelerator.device
model.to(device)

#After adding ipex:
import intel_pytorch_extension as ipex
device = ipex.DEVICE
model.to(device)

While running the below command, it's taking too much time.

export DATASET_NAME=swag
accelerate launch run_swag_no_trainer.py \
  --model_name_or_path bert-base-cased \
  --dataset_name $DATASET_NAME \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$DATASET_NAME/

Is there any other method to test the same on intel ipex?
First you have to understand which factors actually increase the running time. These factors are:

A large input size.
Unpreprocessed data (shifted mean, unnormalized).
A very deep and/or wide network.
A large number of epochs.
A batch size not compatible with the physically available memory.
A very small or very high learning rate.

To run faster, work on the above factors:

Reduce the input size to the smallest dimensions that lose no important features.
Always preprocess the input to make it zero mean, and normalize it by dividing by the std. deviation or the difference between max and min values.
Keep the network depth and width neither too high nor too low, or use a standard architecture that is theoretically proven.
Watch the epochs: if you cannot improve your error or accuracy beyond a defined threshold, there is no need to run more epochs.
Choose the batch size based on the available memory and the number of CPUs/GPUs. If a batch cannot be loaded fully into memory, processing slows down due to heavy paging between memory and the filesystem.
Determine an appropriate learning rate by trying several and keeping the one that reduces the error best w.r.t. the number of epochs.
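The zero-mean / unit-variance preprocessing mentioned above can be sketched in a few lines of plain Python (no framework assumed):

```python
def standardize(xs):
    """Shift to zero mean and scale to unit variance."""
    mean = sum(xs) / len(xs)
    std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / std for x in xs]

print(standardize([2.0, 4.0, 6.0]))  # ~[-1.2247, 0.0, 1.2247]
```

In practice the same transform is one line with numpy ((x - x.mean()) / x.std()), but the idea is identical.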
Many to many LSTM with attention at each time step I am working on time series image classification, where I need to output a classification at each time step (many to many). My Tensorflow graph takes [Batch size, Time step, Image] and implements a deep CNN-LSTM, which currently goes to a time distributed dense layer before classification. In my previous work, I've had a lot of success adding attention to better model time dependencies by weighting the hidden states of time steps. However, I cannot find any implementations that attempt to use attention in a many-to-many RNN. I have tried the following code, which compiles and runs but performs worse than the model without it. The idea here is to learn attention weights at each step to weight every other time step based on the current time step. I have 7.8 million training samples, so I am not worried that this is overfitting - in fact it is increasing the training error over the model without!

def create_multi_attention(inputs, attention_size, time_major=False):
    hidden_size = inputs.shape[2].value
    print("Attention In: {}".format(inputs.shape))

    # Trainable parameters
    w_omegas, b_omegas, u_omegas = [], [], []
    for i in range(0, MAX_TIME_STEPS):
        w_omegas.append(create_weights([hidden_size, attention_size]))
        b_omegas.append(tf.Variable(tf.constant(0.05, shape=[attention_size])))
        u_omegas.append(create_weights([attention_size]))

    layers_all = []
    for i in range(0, MAX_TIME_STEPS):
        v = tf.tanh(tf.tensordot(inputs, w_omegas[i], axes=1) + b_omegas[i])
        vu = tf.tensordot(v, u_omegas[i], axes=1, name='vu')
        alphas = tf.nn.softmax(vu, name='alphas')
        output = tf.reshape(tf.reduce_sum(inputs * tf.expand_dims(alphas, -1), 1), (-1, 1, hidden_size))
        layers_all.append(output)

    output = tf.concat(layers_all, axis=1)  # [Batch, Time steps, LSTM output size]
    print("Attention Out: {}".format(output.shape))
    return output

Would love any input, ideas, or pointers to papers!
I have thought about trying a seq2seq attention model, but this seems to be a bit of a stretch.
Seems that this code worked fine. The error was downstream. If anyone uses this code to implement many-to-many attention, note that it will take a very long time to train, as you are learning two additional weight matrices for each time step.
Detect text only inside detected objects I'm very new to Computer Vision. I'm trying to build a CV model which will detect and recognize price tags and extract info from them. I've already trained a model that can detect price tags using YOLO. But I also want to teach my system to detect and recognize text written only inside these price tags, then parse this info into different parts, for example: price, product name, product description. Or maybe I first need to parse the detected blocks (price block on the left side of the price tag, product name on the right side, etc.) and then read them. Any ideas would be appreciated.
Well, the first one that pops into my mind would be to crop the objects detected with YOLO and then run the OCR on that image. After running OCR, you'll have to do some postprocessing to classify each line of text to a specific category (price, name etc.)
How to create windowed multivariate dataset from SequenceExample TFRecord I am trying to set up a Tensorflow pipeline using tf.data.datasets in order to load some TFRecords into a Keras model. These data are multivariate timeseries. I am currently using Tensorflow 2.0.

First I get my dataset from the TFRecord and parse it:

dataset = tf.data.TFRecordDataset('...')
context_features = {...}
sequence_features = {...}

def _parse_function(example_proto):
    _, sequence = tf.io.parse_single_sequence_example(example_proto, context_features, sequence_features)
    return sequence

dataset = dataset.map(_parse_function)

The problem right now is that it gives me a MapDataset with a dict of EagerTensor inside:

for data in dataset.take(3):
    print(type(data))

<class 'dict'>
<class 'dict'>
<class 'dict'>
# which look like: {feature1: EagerTensor, feature2: EagerTensor, ...}

Because of these dictionaries, I cannot seem to manage to get these data batched, shuffled... in order to use them in an LSTM layer afterwards. For instance this:

def make_window_dataset(ds, window_size=5, shift=1, stride=1):
    windows = ds.window(window_size, shift=shift, stride=stride)

    def sub_to_batch(sub):
        return sub.values().batch(window_size, drop_remainder=True)

    windows = windows.flat_map(sub_to_batch)
    return windows

ds = make_window_dataset(dataset, 10)

gives me:

AttributeError: 'dict_values' object has no attribute 'batch'

Thank you for your help. I am basing my research on this and other Tensorflow helpers: https://www.tensorflow.org/guide/data#time_series_windowing

EDIT: I found the solution to my problem. I ended up converting the dictionary given by the parsing to a (None,11) shaped Tensor using stack in my parse function:

def _parse_function(example_proto):
    # Parse the input `tf.Example` proto using the dictionary above.
    _, sequence = tf.io.parse_single_sequence_example(example_proto, context_features, sequence_features)
    return tf.stack(list(sequence.values()), axis=1)
Providing the solution here (Answer Section), even though it is present in the question section, for the benefit of the community.Converting dictionary to a tensor with shape (None,11) using tf.stack in the parse_function has resolved the problem.Change code fromdef _parse_function(example_proto): _, sequence = tf.io.parse_single_sequence_example(example_proto,context_features, sequence_features) return sequencetodef _parse_function(example_proto): _, sequence = tf.io.parse_single_sequence_example(example_proto,context_features, sequence_features) return tf.stack(list(sequence.values()), axis=1)
Subtract products through two tables I am building a stock system. I managed to make the stock increase every time a product enters my warehouse, but I can't make it subtract every time one leaves. The tables I work with are product and document_detail. In the document_detail table I have an attribute called quantity_to_have, whose function is to hold the quantity of product that will leave the inventory so it can later be discounted, and also cantity_deb, which holds the quantity of product entering the inventory. The final stock is in the product table.

My code:

class Product(models.Model):
    quantity_available = fields.Float(string="Quantity available", compute="_stock")
    detail_document_ids = fields.One2many(...)

    @api.one
    @api.depends("detail_document_ids")
    def _stock(self):
        sum = 0
        for detail_document in self.detail_document_ids:
            sum += detail_document.cantity_deb
        self.quantity_available = sum

    @api.multi
    @api.depends("detail_document_ids", "product_ids")
    def _stock(self):
        for detail_document in self.detail_document_ids:
            self.quantity_available = self.quantity_available - self.quantity_to_have

class Document_Detail(models.Model):
    quantity_to_have = fields.Float(string="Amount to have")
I don't know if I have understood you correctly. I think that each one of your products has a list of document details. Each one of these details has a cantity_deb and a quantity_to_have. And the quantity available of each product is the sum of the cantity_deb minus the sum of the quantity_to_have of all of its details. If that is the case:

class Product (models.Model):
    ...
    quantity_available = fields.Float(
        string='Quantity available',
        compute='_stock',
    )
    detail_document_ids = fields.One2many(
        ...
    )

    @api.multi
    @api.depends('detail_document_ids', 'detail_document_ids.cantity_deb', 'detail_document_ids.quantity_to_have')
    def _stock(self):
        for product in self:
            sum = 0
            for detail_document in product.detail_document_ids:
                sum += (detail_document.cantity_deb - detail_document.quantity_to_have)
            product.quantity_available = sum

class Document_Detail (models.Model):
    ...
    quantity_to_have = fields.Float(
        string = 'Amount to have',
    )
How to make a scatter plot with a 3rd variable separating data by color? My problem is very similar to the one in: python - scatter plot with dates and 3rd variable as color. But I want the colors to vary according to 3 sets of values inside my 3rd variable. For example:

#my 3rd variable consists of a column with these planet radii values:
   radii
1  70
2  6
3  54
4  3
5  0.3
...

And I expect the colors to vary according to radii>8, 4<radii<8 and radii<4. I've tried the simple code presented in the other question:

db = table_with_3_columns()
x = db['column a']
y = db['column b']
z = db['radii']
plt.scatter(x, y, c=z, s=30)

But I don't know how to specify the 'c' parameter for different sets inside z. I've also tried:

a=[]
for i in db['radii']:
    if i>8:
        a['bigradii']=i
    elif i<4:
        a['smallradii']=i
    elif i<8 and i>4:
        a['mediumradii']=i
return a

but I don't know how to proceed with that. The result would be a scatter with the dots separated by colors guided by the values in the 3rd column 'radii', but all I get using the first code is all the dots black, or, using the second code, it tells me that i is a string and I cannot put that on a list :(

How can I achieve that?
I think what you should do is:

create an empty list which later will be passed to 'c' in the scatter function.
iterate over your data and do a 'switch like' sequence of if statements to add 1, 2 or 3 to the list, according to the discretization you mention. These numbers will represent the different indexes in the cmap palette (which means different colors). Here is an example of what I mean:

import numpy as np
import matplotlib.pyplot as plt

# x and y will have 100 random points in the range of [0,1]
x = np.random.rand(100)
y = np.random.rand(100)
# z will have 100 numbers, in order from 1 to 100
# z represents your third variable
z = np.arange(100)

colors = []
# add 1 or 2 to colors according to the z value
for i in z:
    if i > 50:
        colors.append(2)
    else:
        colors.append(1)

# half the points will be painted with one color and the other half with another one
plt.scatter(x, y, c=colors)
plt.show()
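Applied to the thresholds in the question (>8, between 4 and 8, <4), the bucketing can also return named colors directly; a minimal sketch with the radii values from the question (the color choices here are arbitrary):

```python
radii = [70, 6, 54, 3, 0.3]

def radius_color(r):
    # bucket each radius: >8 -> red, 4..8 -> blue, <4 -> green
    if r > 8:
        return 'red'
    elif r >= 4:
        return 'blue'
    return 'green'

colors = [radius_color(r) for r in radii]
print(colors)  # ['red', 'blue', 'red', 'green', 'green']
# then pass the list straight to matplotlib: plt.scatter(x, y, c=colors, s=30)
```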
reading data from a file and storing them in a list of lists Python I have a file data.txt containing the following lines: [image] I would like to extract the lines of this file into a list of lists; each line is a list that will be contained within ListOfLines, which is a list of lists. When there is no data in some cell I just want it to be -1. I have tried this so far:

from random import randint

ListOfLines=[]
with open("C:\data.txt",'r') as file:
    data = file.readlines()
    for line in data:
        y = line.split()
        ListOfLines.append(y)

with open("C:\output.txt",'a') as output:
    for x in range(0, 120):
        # 'item' represents a line
        for item in ListOfLines:
            item[2] = randint(1, 1000)
            for elem in item:
                output.write(str(elem))
                output.write(' ')
            output.write('\n')
        output.write('------------------------------------- \n')

How can I improve my program to contain less code and be faster? Thank you in advance :)
Well, sharing your sample data as an image doesn't make it easy to work with, so it is hard to address the specifics. However, data = file.readlines() forces the whole content of the file into a list first, and then you iterate through that list. You could iterate the file directly with for line in file:, which improves it a little. You also haven't mentioned what you want from the output part, which seems quite messy.
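For the missing-cell requirement, one simple approach (a sketch, assuming whitespace-separated rows and a known column count, since the sample data was only shown as an image) is to pad each split line with -1:

```python
def parse_line(line, width=4):
    # split a whitespace-separated row and pad missing cells with -1
    cells = line.split()
    return cells + [-1] * (width - len(cells))

rows = ["a b c d", "e f", "g h i"]
list_of_lines = [parse_line(r) for r in rows]
print(list_of_lines)
# [['a', 'b', 'c', 'd'], ['e', 'f', -1, -1], ['g', 'h', 'i', -1]]
```

Reading the real file then becomes a one-liner: list_of_lines = [parse_line(line) for line in open("data.txt")].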
Custom command to upload photo to Photologue from within Django shell? I have successfully employed Photologue to present galleries of regularly-created data plot images. Of course, now that the capability has been established, an obscene number of data plots are being created and they need to be shared!Scripting the process of uploading images and adding them to galleries using manage.py from the Django shell is the next step; however, as an amateur with Django, I am having some difficulties.Here is the custom command addphoto.py that I have currently developed:from django.core.management.base import BaseCommand, CommandErrorfrom django.utils import timezonefrom photologue.models import Photo, Galleryimport osfrom datetime import datetimeimport pytzclass Command(BaseCommand): help = 'Adds a photo to Photologue.' def add_arguments(self, parser): parser.add_argument('imagefile', type=str) parser.add_argument('--title', type=str) parser.add_argument('--date_added', type=str, help="datetime string in 'YYYY-mm-dd HH:MM:SS' format [UTC]") parser.add_argument('--gallery', type=str) def handle(self, *args, **options): imagefile = options['imagefile'] if options['title']: title = options['title'] else: base = os.path.basename(imagefile) title = os.path.splitext(base)[0] if options['date_added']: date_added = datetime.strptime(options['date_added'],'%Y-%m-%d %H:%M:%S').replace(tzinfo=pytz.UTC) else: date_added = timezone.now() p = Photo(image=imagefile, title=title, date_added=date_added) p.save()Unfortunately, when executed with --traceback, it results in the following:./manage.py addphoto '../dataplots/monitoring/test.png' --tracebackFailed to read EXIF DateTimeOriginalTraceback (most recent call last): File "/home/user/mysite/photologue/models.py", line 494, in save exif_date = self.EXIF(self.image.file).get('EXIF DateTimeOriginal', None) File "/home/user/mysite/venv/lib/python3.5/site-packages/django/db/models/fields/files.py", line 51, in _get_file self._file = 
self.storage.open(self.name, 'rb') File "/home/user/mysite/venv/lib/python3.5/site-packages/django/core/files/storage.py", line 38, in open return self._open(name, mode) File "/home/user/mysite/venv/lib/python3.5/site-packages/django/core/files/storage.py", line 300, in _open return File(open(self.path(name), mode)) File "/home/user/mysite/venv/lib/python3.5/site-packages/django/core/files/storage.py", line 405, in path return safe_join(self.location, name) File "/home/user/mysite/venv/lib/python3.5/site-packages/django/utils/_os.py", line 78, in safe_join 'component ({})'.format(final_path, base_path))django.core.exceptions.SuspiciousFileOperation: The joined path (/home/user/mysite/dataplots/monitoring/test.png) is located outside of the base path component (/home/user/mysite/media)Traceback (most recent call last): File "./manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/user/mysite/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/user/mysite/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/user/mysite/venv/lib/python3.5/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/user/mysite/venv/lib/python3.5/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/user/mysite/photologue/management/commands/addphoto.py", line 36, in handle p.save() File "/home/user/mysite/photologue/models.py", line 553, in save super(Photo, self).save(*args, **kwargs) File "/home/user/mysite/photologue/models.py", line 504, in save self.pre_cache() File "/home/user/mysite/photologue/models.py", line 472, in pre_cache self.create_size(photosize) File "/home/user/mysite/photologue/models.py", line 411, in create_size if 
self.size_exists(photosize): File "/home/user/mysite/photologue/models.py", line 364, in size_exists if self.image.storage.exists(func()): File "/home/user/mysite/venv/lib/python3.5/site-packages/django/core/files/storage.py", line 392, in exists return os.path.exists(self.path(name)) File "/home/user/mysite/venv/lib/python3.5/site-packages/django/core/files/storage.py", line 405, in path return safe_join(self.location, name) File "/home/user/mysite/venv/lib/python3.5/site-packages/django/utils/_os.py", line 78, in safe_join 'component ({})'.format(final_path, base_path))django.core.exceptions.SuspiciousFileOperation: The joined path (/home/user/mysite/dataplots/monitoring/cache/test_thumbnail.png) is located outside of the base path component (/home/user/mysite/media)Obviously, a copy of the image file was not put in the the media/ directory. Also, while the image, title, and date_added columns are populated in the photologue_photos table of the website database, the slug column is not.How can the file be uploaded to the MEDIA_ROOT directory?Here are the relevant snippets from the Photo and ImageModel models from the Photologue models.py file, for reference:class Photo(ImageModel): title = models.CharField(_('title'), max_length=250, unique=True) slug = models.SlugField(_('slug'), unique=True, max_length=250, help_text=_('A "slug" is a unique URL-friendly title for an object.')) caption = models.TextField(_('caption'), blank=True) date_added = models.DateTimeField(_('date added'), default=now) is_public = models.BooleanField(_('is public'), default=True, help_text=_('Public photographs will be displayed in the default views.')) sites = models.ManyToManyField(Site, verbose_name=_(u'sites'), blank=True) objects = PhotoQuerySet.as_manager() def save(self, *args, **kwargs): if self.slug is None: self.slug = slugify(self.title) super(Photo, self).save(*args, **kwargs)class ImageModel(models.Model): image = models.ImageField(_('image'), 
max_length=IMAGE_FIELD_MAX_LENGTH, upload_to=get_storage_path) date_taken = models.DateTimeField(_('date taken'), null=True, blank=True, help_text=_('Date image was taken; is obtained from the image EXIF data.')) view_count = models.PositiveIntegerField(_('view count'), default=0, editable=False) crop_from = models.CharField(_('crop from'), blank=True, max_length=10, default='center', choices=CROP_ANCHOR_CHOICES) effect = models.ForeignKey('photologue.PhotoEffect', null=True, blank=True, related_name="%(class)s_related", verbose_name=_('effect')) class Meta: abstract = True def __init__(self, *args, **kwargs): super(ImageModel, self).__init__(*args, **kwargs) self._old_image = self.image def save(self, *args, **kwargs): image_has_changed = False if self._get_pk_val() and (self._old_image != self.image): image_has_changed = True # If we have changed the image, we need to clear from the cache all instances of the old # image; clear_cache() works on the current (new) image, and in turn calls several other methods. # Changing them all to act on the old image was a lot of changes, so instead we temporarily swap old # and new images. new_image = self.image self.image = self._old_image self.clear_cache() self.image = new_image # Back to the new image. self._old_image.storage.delete(self._old_image.name) # Delete (old) base image. if self.date_taken is None or image_has_changed: # Attempt to get the date the photo was taken from the EXIF data. 
try: exif_date = self.EXIF(self.image.file).get('EXIF DateTimeOriginal', None) if exif_date is not None: d, t = exif_date.values.split() year, month, day = d.split(':') hour, minute, second = t.split(':') self.date_taken = datetime(int(year), int(month), int(day), int(hour), int(minute), int(second)) except: logger.error('Failed to read EXIF DateTimeOriginal', exc_info=True) super(ImageModel, self).save(*args, **kwargs) self.pre_cache()Here is the get_storage_path function, as requested:# Look for user function to define file pathsPHOTOLOGUE_PATH = getattr(settings, 'PHOTOLOGUE_PATH', None)if PHOTOLOGUE_PATH is not None: if callable(PHOTOLOGUE_PATH): get_storage_path = PHOTOLOGUE_PATH else: parts = PHOTOLOGUE_PATH.split('.') module_name = '.'.join(parts[:-1]) module = import_module(module_name) get_storage_path = getattr(module, parts[-1])else: def get_storage_path(instance, filename): fn = unicodedata.normalize('NFKD', force_text(filename)).encode('ascii', 'ignore').decode('ascii') return os.path.join(PHOTOLOGUE_DIR, 'photos', fn)
On just one part of your question: the slug column is empty when the Photo is saved.

It should automatically be populated when the Photo is saved, as the line if self.slug is None: self.slug = slugify(self.title) in your copy-and-paste of the Photologue source code above makes clear.

This suggests that the Photologue source code is not actually being called from your management command. You can check this by adding some quick debugging code to your local copy of the Photologue code, e.g. a print() statement in the save() method, and checking whether it is run.
Python parsing date and find the correct locale_setting

I have the following date string: '3 févr. 2015 14:26:00 CET'

datetime.datetime.strptime('03 févr. 2015 14:26:00', '%d %b %Y %H:%M:%S')

Parsing this fails with the error:

ValueError: time data '03 f\xc3\xa9vr. 2015 14:26:00' does not match format '%d %b %Y %H:%M:%S'

I tried to loop over all locales with locale.locale_alias:

for l in locale.locale_alias:
    try:
        locale.setlocale(locale.LC_TIME, l)
        print l, datetime.datetime.strptime('03 févr. 2015 14:26:00', '%d %b %Y %H:%M:%S')
        break
    except Exception as e:
        print e

but I was not able to find the correct one.
To parse a localized date/time string using an ICU date/time format:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from datetime import datetime
import icu  # PyICU
import pytz # $ pip install pytz

tz = icu.ICUtzinfo.getDefault()  # any ICU timezone will do here
df = icu.DateFormat.createDateTimeInstance(icu.DateFormat.MEDIUM,
                                           icu.DateFormat.MEDIUM,
                                           icu.Locale.getFrench())
df.setTimeZone(tz.timezone)
ts = df.parse(u'3 févr. 2015 14:26:00 CET')  # NOTE: CET is ignored
naive_dt = datetime.fromtimestamp(ts, tz).replace(tzinfo=None)
dt = pytz.timezone('Europe/Paris').localize(naive_dt, is_dst=None)
print(dt)  # -> 2015-02-03 14:26:00+01:00

df.applyPattern() could be used to set a different date/time pattern (df.toPattern()), or you could use icu.SimpleDateFormat to get df from the format and the locale directly.

It is necessary to use an explicit ICU timezone (so that df.parse() and .fromtimestamp() use the same UTC offset) because icu and datetime may use different timezone definitions.

pytz is used here to get a proper UTC offset for past/future dates (some timezones may have had different UTC offsets in the past/future, including for reasons unrelated to DST transitions).
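If installing PyICU isn't an option, a locale-independent fallback is to translate the month abbreviation by hand before handing the rest to strptime. This is my own sketch, not part of the answer above; the abbreviation table assumes CLDR-style French short month names, so adjust it if your data differs:

```python
from datetime import datetime

# Assumed French short month names (CLDR style); adjust if your data differs.
FR_MONTHS = {'janv.': 1, 'févr.': 2, 'mars': 3, 'avr.': 4, 'mai': 5, 'juin': 6,
             'juil.': 7, 'août': 8, 'sept.': 9, 'oct.': 10, 'nov.': 11, 'déc.': 12}

def parse_fr(s):
    # '3 févr. 2015 14:26:00' -> datetime; any 'CET' suffix must be stripped first
    day, month, rest = s.split(' ', 2)
    dt = datetime.strptime(rest, '%Y %H:%M:%S')
    return dt.replace(day=int(day), month=FR_MONTHS[month])

print(parse_fr('3 févr. 2015 14:26:00'))  # 2015-02-03 14:26:00
```

Like the ICU version, this ignores the trailing timezone name; attaching a proper offset would still need pytz or zoneinfo.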
Triangular indices for multidimensional arrays in numpy

We know that np.triu_indices returns the indices of the triangular upper part of a matrix, an array with two dimensions. What if one wants to create indices as in the following code?

indices = []
for i in range(0, n):
    for j in range(i+1, n):
        for k in range(j+1, n):
            indices.append([i, j, k])

in a num-pythonic way?
In general you can get a list of indexes that follow the logic of the code you put with

from itertools import combinations

ndim = 3  # number of dimensions
n = 5     # dimension's length (assuming equal length in each dimension)
indices = list(combinations(range(n), r=ndim))

or, if you want to iterate over each position:

for i, j, k in combinations(range(n), r=ndim):
    # Do your cool stuff here
    pass

However, you referred to it as the triangular upper part of a multidimensional matrix. I'm not sure what the definition of that is, and trying to visualize the indexes you selected with your nested loops I can't figure it out... (I'm just curious now whether there's a definition for a multidimensional triangular matrix :-P)

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

a = list(zip(*indices))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(a[0], a[1], a[2])
plt.xlabel('x')
plt.ylabel('y')
plt.show()

(I moved the view angle to try to show what positions are selected)
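As a quick sanity check (not in the original answer), the combinations call reproduces exactly the index triples generated by the question's nested loops:

```python
from itertools import combinations

n = 5
# the question's nested loops
nested = [[i, j, k] for i in range(n)
                    for j in range(i + 1, n)
                    for k in range(j + 1, n)]
# the itertools equivalent
combos = [list(c) for c in combinations(range(n), 3)]

assert nested == combos
print(len(combos))  # 10, i.e. C(5, 3)
```

Both produce strictly increasing triples, which is why they match element for element.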
Pyramid app: How can I pass values into my request.route_url?

I have this in my views.py file as the view config for my home page:

@view_config(route_name='home_page', renderer='templates/edit.pt')
def home_page(request):
    if 'form.submitted' in request.params:
        name = request.params['name']
        body = request.params['body']
        page = Page(name, body)
        DBSession.add(page)
        return HTTPFound(location=request.route_url('view_page', pagename=name))
    return {}

Also, here is the form in the edit.pt template:

<form action="/view_page" method="post">
  <div>
    <input type="text" name="name"/>
  </div>
  <div>
    <input type="text" name="body"/>
  </div>
  <label for="stl">Stl</label>
  <input name="stl" type="file" value="" />
  <input type="submit" name='form.submitted' value="Save"/>
</form>

Also in my __init__.py file I have

config.add_route('home_page', '/')
config.add_route('view_page', '/{pagename}')

Right now when I submit the form it just tries to go to localhost:6543/view_page. This returns a 404, as there is no view_page resource or route leading to it. Instead I want it to go to localhost:6543/(the name of the page I just created, aka the first input box in the form). How can I do this?

Edit: I am worried that something else may be telling it to route to view_page, because I even tried changing it to

return HTTPFound(location=request.route_url('front_page', pagename=name))

and it still goes to /view_page. There is no route named front_page, so I would at least expect it to throw an error. Also, I would really appreciate it if you could tell me where you found the info. I have been looking at http://docs.pylonsproject.org/projects/pyramid/en/1.4-branch/api/request.html?highlight=request.route_url#pyramid.request.Request.route_url but can't seem to find use from it.

Edit: should I be using an asset specification instead of a path name?
so

return HTTPFound(Location=request.route_url('tutorial:templates/view.pt','/{pagename}'))

Also, I am working through this article, which seems very helpful with the syntax: http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/urldispatch.html#urldispatch-chapter
I think your form should submit to "/", i.e.

<!-- where your home_page route is waiting for the POST -->
<form action="/" method="post">

With the prior answers this now looks correct:

return HTTPFound(location=request.route_url('view_page', pagename=name))
Slicing first and last two inputs not working when inputs are taken

If I run the code from the commented #c_list, it works. But if I run it from the a_list that takes input, I get an empty list. How do I solve it?

a_list = [input('Enter input for list: ')]
#c_list = [10,20,24,25,26]
length = len(a_list)
if length > 3:
    b_list = a_list[2:length-2]
    print(b_list)
else:
    print("Not Possible")
As Green Cloak Guy mentioned, input() always returns a single string.

In Python 3.x, if you want multiple inputs you can use the .split() method and a list comprehension:

input_list = [int(x) for x in input("Enter values for list: ").split()]
#Enter values for list: 1 2 3 4 5
input_list
Out[4]: [1, 2, 3, 4, 5]

Keep in mind this will only work for integer values; if you want the user to input float values as well, you can change int(x) to float(x). The only caveat is that the integer values will also be cast to float values:

input_list = [float(x) for x in input("Enter values for list: ").split()]
#Enter values for list: 1 2 3.3 4 5
input_list
Out[8]: [1.0, 2.0, 3.3, 4.0, 5.0]

More on the .split() method here
More on list comprehensions here
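If you want mixed input to keep integers as ints and only use floats where needed, one workaround (my own sketch, not part of the answer above) is to try int first and fall back to float per token:

```python
def parse_number(token):
    """Return an int when the token looks like one, otherwise a float."""
    try:
        return int(token)
    except ValueError:
        return float(token)

# Simulating the user typing: 1 2 3.3 4 5
values = [parse_number(t) for t in '1 2 3.3 4 5'.split()]
print(values)  # [1, 2, 3.3, 4, 5]
```

In real use you would feed it input(...).split() instead of the hard-coded string.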
Stop trial from advancing based on user input in PsychoPy

I'm coding an experiment for an EyeTracker machine using PsychoPy. As suggested in PsychoPy's google group, I've used the Builder View to create the basic structure of the experiment, compiled the script and then added modifications in the Coder View, so the way user input is handled is the standard one generated by PsychoPy. Since I need the user to write his/her input and then advance to the next question by pressing the Enter key, I have two key response objects: the first collects the user's input, while the second triggers when the Enter key is pressed. For example:

# *B1_P2_key_01* updates
if t >= 0.0 and B1_P2_key_01.status == NOT_STARTED:
    # keep track of start time/frame for later
    B1_P2_key_01.tStart = t  # underestimates by a little under one frame
    B1_P2_key_01.frameNStart = frameN  # exact frame index
    B1_P2_key_01.status = STARTED
    # keyboard checking is just starting
    win.callOnFlip(B1_P2_key_01.clock.reset)  # t=0 on next screen flip
if B1_P2_key_01.status == STARTED:
    #theseKeys = event.getKeys(keyList=['1', '2', '3', '4', '5', '6', '7', '8', '9', '0'])
    theseKeys = event.getKeys(keyList=['num_1', 'num_2', 'num_3', 'num_4', 'num_5',
                                       'num_6', 'num_7', 'num_8', 'num_9', 'num_0'])
    # check for quit:
    if "escape" in theseKeys:
        endExpNow = True
    if len(theseKeys) > 0 and b1p2_counter < 3:
        # at least one key was pressed
        B1_P2_key_01.keys.extend(theseKeys)  # storing all keys
        B1_P2_key_01.rt.append(B1_P2_key_01.clock.getTime())
        print "accepted key"
        b1p2_counter += 1

# *B1_P2_key_02* updates
if t >= 0.0 and B1_P2_key_02.status == NOT_STARTED:
    # keep track of start time/frame for later
    B1_P2_key_02.tStart = t  # underestimates by a little under one frame
    B1_P2_key_02.frameNStart = frameN  # exact frame index
    B1_P2_key_02.status = STARTED
    # keyboard checking is just starting
    win.callOnFlip(B1_P2_key_02.clock.reset)  # t=0 on next screen flip
    event.clearEvents(eventType='keyboard')
if B1_P2_key_02.status == STARTED:
    theseKeys = event.getKeys(keyList=['return'])
    # check for quit:
    if "escape" in theseKeys:
        endExpNow = True
    if len(theseKeys) > 0:
        # at least one key was pressed
        B1_P2_key_02.keys = theseKeys[-1]  # just the last key pressed
        B1_P2_key_02.rt = B1_P2_key_02.clock.getTime()
        # a response ends the routine
        msg2 = 'finish_b1p2_' + paths_list[imagecounter]
        print msg2
        #tracker.sendMessage(msg2)
        continueRoutine = False

Please note that I need the user's input to be displayed on the screen as he/she types it, so I've implemented a solution based on the use of another TextStim object, as suggested on the group too:

# *B1_P2_resp_01* updates
if t >= 0.0 and B1_P2_resp_01.status == NOT_STARTED:
    # keep track of start time/frame for later
    B1_P2_resp_01.tStart = t  # underestimates by a little under one frame
    B1_P2_resp_01.frameNStart = frameN  # exact frame index
    B1_P2_resp_01.setAutoDraw(True)
displaystring = " ".join(B1_P2_key_01.keys)  # convert list of pressed keys to string
displaystring = displaystring.replace(' ', '')  # remove intermediate spaces
# Do some text cleanup...replacing key names with punctuation and effectively
# disabling keys like 'back', 'shift', etc.
displaystring = displaystring.replace('space', ' ')
displaystring = displaystring.replace('comma', ',')
displaystring = displaystring.replace('lshift', '')
displaystring = displaystring.replace('rshift', '')
displaystring = displaystring.replace('period', '.')
displaystring = displaystring.replace('back', '')
displaystring = displaystring.replace('num_1', '1')
displaystring = displaystring.replace('num_2', '2')
displaystring = displaystring.replace('num_3', '3')
displaystring = displaystring.replace('num_4', '4')
displaystring = displaystring.replace('num_5', '5')
displaystring = displaystring.replace('num_6', '6')
displaystring = displaystring.replace('num_7', '7')
displaystring = displaystring.replace('num_8', '8')
displaystring = displaystring.replace('num_9', '9')
displaystring = displaystring.replace('num_0', '0')
#displaystring = displaystring.replace('backspace', '')
# set text of AnswerDisplay to modified string
B1_P2_resp_01.setText(displaystring)
#print displaystring

Now my issue is how to prevent the user from advancing if he/she hasn't typed anything, or if the values provided are outside of a given range (example: values can only go from 0 to 100). I'm aware that there are some alternatives out there, but they use a structure different from PsychoPy's default one, and I need this input control implemented on the currently provided one.

I've tried to handle the Enter event, but the text object still records the Enter key being pressed, so it changes its text. I've also tried to control it from the first key_response object, but then the other one in charge of handling the Enter key stops working.

Long story short, I need to set up a condition for the second key_response object (the one handling the Enter key) to decide whether the user can advance to the next trial, while sticking to this format and not affecting the on-screen display of input. Help on this subject would be much appreciated.
You haven't included the critical part, which is the loop around your first code snippet. This loop is probably something like

while continueRoutine:
    # Your code here...

which runs the code until continueRoutine is set to False, which happens when the user presses return. You want to add one more condition, so that the loop keeps running as long as nothing has been typed. So something like this:

displaystring = ''  # to prevent carry-over from the last trial in the first iteration of the loop.
continueRoutine = True  # this is probably already set higher up in the script and could be ignored
while continueRoutine or not displaystring:
    # your code here

You can be very flexible with these conditions, e.g. if you want a minimum of 5 seconds to pass, you would do

while continueRoutine or not displaystring or t < 5:
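For the 0-100 range check mentioned in the question, the loop condition can call a small validation helper. This is my own sketch with assumed names, not PsychoPy code; only the condition logic matters:

```python
def response_ok(displaystring, lo=0, hi=100):
    """True only when something was typed and it parses to a number in [lo, hi]."""
    if not displaystring:
        return False
    try:
        return lo <= float(displaystring) <= hi
    except ValueError:
        return False

print(response_ok('42'), response_ok(''), response_ok('150'))  # True False False
```

The frame loop condition then becomes something like `while continueRoutine or not response_ok(displaystring):`, so pressing return with an empty or out-of-range answer keeps the routine running.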
list returns None, omits value

I am attempting to return values and positions of letters. Running this as a plain for loop works just fine. It's when I turned it into a function that it started to look wonky. Here it is along with its output:

dict = {'a': 1, 'b': 2 ... 'z': 26}
list1 = []
list2 = []

def plot(word):
    counter = 0
    for i in word:
        y = dict.get(i)
        list1.append(y)  #keeps printing None for the first letters
        counter += 1
        x = counter
        list2.append(x)
    print list1
    print list2
    r = zip(list1, list2)
    print r

t = raw_input('Enter word: ')

Enter word: Hello

plot(t)

Output:
[None, None, 5, 12, 12, 15]
[1, 2, 3, 4, 5]
[(None, 1), (None, 2), (5, 3), (12, 4), (12, 5)]
I think the issue is that you are trying to map a capital letter. I would simply change the for loop to iterate over the lowercase version:

for i in word.lower():
    y = dict.get(i)
    list1.append(y)  #keeps printing None for the first letters
    counter += 1
    x = counter
    list2.append(x)
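As a side note (my own suggestion, not part of the answer), the name dict shadows the built-in type; the mapping itself can be generated from string.ascii_lowercase, which makes the whole function self-contained and avoids the module-level lists:

```python
import string

# letter -> 1..26, built instead of typing the dict out by hand
letter_values = {c: i for i, c in enumerate(string.ascii_lowercase, start=1)}

def plot(word):
    values = [letter_values.get(c) for c in word.lower()]
    positions = list(range(1, len(word) + 1))
    return list(zip(values, positions))

print(plot('Hello'))  # [(8, 1), (5, 2), (12, 3), (12, 4), (15, 5)]
```

Returning the pairs instead of printing them also means repeated calls can't accumulate stale entries in shared lists.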
Conditional addition in Python

I have been struggling with a text file, trying to perform conditional appending/extending of certain text using Python. My apologies in advance if this is too basic and has already been discussed here in some way or the other :(

Concerning the code (attached), I need to add the statement "mtu 1546" under the statement containing "description", only if it doesn't exist. Also, I would like to be able to add a "description TEST" statement under the interface statement (and/or above the mtu statement, if available), only if it doesn't pre-exist. I am using python 2.7.

Here is my code:

import re

f = open('/TESTFOLDER/TEST.txt','r')
interfaces = re.findall(r'(^interface Vlan[\d+].*\n.+?\n!)', f.read(), re.DOTALL|re.MULTILINE)
for i in interfaces:
    interfaceslist = i.split("!")
    for i in interfaceslist:
        if "mtu" not in i:
            print i
f.close()

The print statement works fine with the condition, as it is able to print the interesting lines correctly. However, my requirement is to add (append/extend) the required statements to the list so I can further use it for parsing and stuff. Upon trying the append/extend functions, the interpreter complains that it is a string object instead. Here's the sample source file (text).
The text files I will be parsing are huge in size, so I'm only adding the interesting text.

!
interface Vlan2268
 description SNMA_Lovetch_mgmt
 mtu 1546
 no ip address
 xconnect 22.93.94.56 2268 encapsulation mpls
!
interface Vlan2269
 description SNMA_Targoviste_mgmt
 mtu 1546
 no ip address
 xconnect 22.93.94.90 2269 encapsulation mpls
!
interface Vlan2272
 mtu 1546
 no ip address
 xconnect 22.93.94.72 2272 encapsulation mpls
!
interface Vlan2282
 description SNMA_Ruse_mgmt
 no ip address
 xconnect 22.93.94.38 2282 encapsulation mpls
!
interface Vlan2284
 mtu 1546
 no ip address
 xconnect vfi SNMA_Razgrad_mgmt
!
interface Vlan2286
 description mgmt_SNMA_Rs
 no ip address
 xconnect 22.93.94.86 2286 encapsulation mpls
!
interface Vlan2292
 description SNMA_Vraca_mgmt
 mtu 1546
 no ip address
 xconnect 22.93.94.60 2292 encapsulation mpls
!
The basic answer to your question is very simple. Strings are immutable, so you can't append to or extend them. You have to create a new string using concatenation.

>>> print i
interface Vlan2286
 description mgmt_SNMA_Rs
 no ip address
 xconnect 22.93.94.86 2286 encapsulation mpls

>>> print i + ' mtu 1546\n'
interface Vlan2286
 description mgmt_SNMA_Rs
 no ip address
 xconnect 22.93.94.86 2286 encapsulation mpls
 mtu 1546

Then you have to save the result, either to a variable name or to some kind of container. You could just save it to i like so:

i = i + ' mtu 1546\n'

or like so:

i += ' mtu 1546\n'

But in this case, a list comprehension might be useful...

def add_mtu(i):
    return i if "mtu" in i else i + " mtu 1546\n"

for iface in interfaces:
    interfaceslist = iface.split("!")
    updated_ifaces = [add_mtu(i) for i in interfaceslist]

Note that I replaced the first i with iface for clarity. Also, it appears to me that there's only one iface in interfaces right now. Perhaps you need that for loop, but if not, it would simplify things to remove it.
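The question also asked about inserting a missing description line. Extending the same idea, here is one possible sketch (my own; the 'description TEST' text comes from the question) that handles both statements per stanza:

```python
def fix_block(block):
    """Add ' description TEST' and/or ' mtu 1546' to one interface stanza
    (a chunk as produced by split("!")) when they are missing."""
    lines = [l for l in block.splitlines() if l.strip()]
    has_desc = any(l.lstrip().startswith('description') for l in lines)
    has_mtu = any(l.lstrip().startswith('mtu') for l in lines)
    out = []
    for line in lines:
        out.append(line)
        if line.startswith('interface') and not has_desc:
            out.append(' description TEST')
            if not has_mtu:
                out.append(' mtu 1546')
        elif line.lstrip().startswith('description') and not has_mtu:
            out.append(' mtu 1546')
    return '\n'.join(out)

print(fix_block('interface Vlan2282\n description SNMA_Ruse_mgmt\n no ip address'))
```

Applied to each element of interfaceslist, this keeps stanzas that already contain both statements unchanged.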
How to integrate Lambda, Alexa, and my code (Python - Tweepy)? I am trying to tweet something by talking to Alexa. I want to put my code on AWS Lambda, and trigger the function by Alexa.I already have a Python code that can tweet certain string successfully. And I also managed to create a zip file and deploy it on Lambda (code depends on the "tweepy" package). However, I could not get to trigger the function by Alexa, I understand that I need to use handlers and ASK-SDK (Alexa Service Kit), but I am kind of lost at this stage. Could anyone please give me an idea about how the handlers work and help me see the big picture?
Alexa ASK_SDK Pseudo Code:

This is pseudo code for the new ASK_SDK, which is the successor to the older ALEXA_SDK. Also note that I work in NodeJS, but the structure is likely the same.

Outer Function with Call Back - Lambda Function Handler

CanHandle Function
Contains the logic to determine whether this handler is the right handler. The handlerInput variable contains the request data, so you can check whether the intent == "A specific intent" and return true, else return false. Or you can go way more specific. (Firing handlers by intent is pretty basic; you can take it a step further and fire handlers based on intent and state.)

Handle Function
Whichever handler's "canHandle" function returns true, this is the code that will be run. The handler has a few functions it can perform. It can read the session attributes, change the session attributes based on the intent that was called, formulate a string response, read and write to a more persistent attribute store like DynamoDB, and create and fire an Alexa response. The handlerInput contains everything you'll need. I'd highly recommend running your test code in PyCharm with the debugger and examining the handlerInput variable. The response builder is also very important and is what allows you to add speech, follow-up prompts, cards, elicit slot values etc.

handler_input.response_builder

Example to Examine
https://github.com/alexa/skill-sample-python-helloworld-classes/blob/master/lambda/py/hello_world.py

class HelloWorldIntentHandler(AbstractRequestHandler):
    """Handler for Hello World Intent."""
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_intent_name("HelloWorldIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = "Hello Python World from Classes!"
        return (
            handler_input.response_builder
                .speak(speak_output)
                # .ask("add a reprompt if you want to keep the session open for the user to respond")
                .response
        )
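To make the big picture concrete without pulling in the SDK, here is a stripped-down, pure-Python model of the dispatch pattern described above. All names such as TweetIntent are made up for illustration; the real skill subclasses ask_sdk classes as in the linked sample:

```python
class TweetIntentHandler:
    def can_handle(self, handler_input):
        # in the real SDK this inspects the parsed request inside handler_input
        return handler_input.get('intent') == 'TweetIntent'

    def handle(self, handler_input):
        # this is where your tweepy call would go, before building a response
        return 'Your tweet was sent.'

class FallbackHandler:
    def can_handle(self, handler_input):
        return True  # catch-all, registered last

    def handle(self, handler_input):
        return "Sorry, I didn't get that."

def dispatch(handler_input, handlers):
    # the SDK does essentially this: the first handler whose can_handle is True wins
    for h in handlers:
        if h.can_handle(handler_input):
            return h.handle(handler_input)

handlers = [TweetIntentHandler(), FallbackHandler()]
print(dispatch({'intent': 'TweetIntent'}, handlers))  # Your tweet was sent.
```

Registration order matters: the catch-all goes last, otherwise it would shadow every other handler.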
Run script with subprocess.run() in Python, blocking all file read/write attempts I have a web server running Apache 2 on Raspbian Stretch. It is going to be a programming contest website, where users can send code via a HTML form, that sends their source code to PHP via a POST request. PHP then runs (using exec()) a Python script with arguments such as the submitted code path. The script then executes the code (using subprocess.run()) with a custom input and compares it to an expected output. All of that works just fine.However, I want to make sure no one is going to send malicious code to overwrite files such as index.php, or read the expected outputs, for example. I'd like to know if there is any way to prevent an application that is being executed by subprocess.run() from reading, creating and writing to files other than stdin, stdout and stderr.I have tried using Docker but didn't have success, as when I build and run the Dockerfile using PHP's exec() it reaches step 2/4 and just stops. My Dockerfile should copy the script, the code and the expected outputs to an image, cd to the new location and execute the code, but that is not very relevant since I want to avoid Docker as it isn't working properly.I am considering using a chroot jail, but I am still looking for other less-complicated ways of doing that.This is the PHP code I'm using. 
It calls the Python 3 code verifier (variables are retrieved from an HTML form and from a SQL query; those are not relevant):

$cmd = "python3 verify.py $uploadedFile $questionID $uploadedFileExtension $questionTimeLimit 2>&1";

And this is the Python 3 code that executes the submitted code:

def runCmd(args, vStdin, timelimit=10):
    p = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                       input=vStdin, encoding='utf-8', timeout=timelimit)
    vStdout = p.stdout
    vStderr = p.stderr
    if vStdout.endswith('\n'):
        vStdout = vStdout[:-1]
    if vStderr.endswith('\n'):
        vStderr = vStderr[:-1]
    return vStdout, vStderr

...

# Assuming it is a .py file
# Its path is given by PHP's exec.
runCmd(['python3', sys.argv[1]], 'simulatedinput.in', int(sys.argv[4]))

The combination of both programs works just fine. It runs the code with a simulated input, compares the stdout with the expected output and returns a status string to the PHP code. However, if the code sent contains something malicious, such as

open('/var/www/html/contest/index.php', 'w').write('oops!')

the index.php file will be overwritten.

All I need is a way of executing the user-sent code such that its attempts to read or write files (other than stdin, stdout and stderr) are denied. Any thoughts?
Doing this securely, to put it simply, is difficult. It's relatively easy to escape even a chroot jail if you're not really careful about how you set it up. Basically, the Unix security model isn't built to make this sort of thing easy, and it's assumed that things are mostly cooperative.

Docker would probably be my suggestion, but there are other lighter-weight solutions like chroot (though they'd probably still have the ability to do naughty things with the web server's network connection) or maybe something like firejail.

With docker you'd probably want to create a single minimal docker image/container containing Python and whatever libraries are appropriate. You'd then use volumes to make the user-supplied code appear inside the container at runtime. You don't want to be creating containers all the time; that would entail lots of cleanup work.

See https://security.stackexchange.com/q/107850/36536 for some more info on using docker as a sandbox; basically there are still lots of ways out of it unless you're careful.
Understanding time.perf_counter() and time.process_time() I have some questions about the new functions time.perf_counter() and time.process_time().For the former, from the documentation: Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration. It does include time elapsed during sleep and is system-wide. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.Is this 'highest resolution' the same on all systems? Or does it always slightly depend if, for example, we use linux or windows?The question comes from the fact the reading the documentation of time.time() it says that 'not all systems provide time with a better precision than 1 second' so how can they provide a better and higher resolution now?About the latter, time.process_time(): Return the value (in fractional seconds) of the sum of the system and user CPU time of the current process. It does not include time elapsed during sleep. It is process-wide by definition. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.I don't understand, what are those 'system time' and 'user CPU time'? What's the difference?
There are two distinct types of 'time' in this context: absolute time and relative time.

Absolute time is the 'real-world time', which is returned by time.time() and which we are all used to dealing with. It is usually measured from a fixed point in time in the past (e.g. the UNIX epoch of 00:00:00 UTC on 01/01/1970) at a resolution of at least 1 second. Modern systems usually provide milli- or micro-second resolution. It is maintained by dedicated hardware on most computers; the RTC (real-time clock) circuit is normally battery powered so the system keeps track of real time between power ups. This 'real-world time' is also subject to modifications based on your location (time-zones) and season (daylight savings), or may be expressed as an offset from UTC (also known as GMT or Zulu time).

Secondly, there is relative time, which is returned by time.perf_counter and time.process_time. This type of time has no defined relationship to real-world time, in the sense that the relationship is system and implementation specific. It can be used only to measure time intervals, i.e. a unit-less value which is proportional to the time elapsed between two instants. This is mainly used to evaluate relative performance (e.g. whether this version of code runs faster than that version of code).

On modern systems, it is measured using a CPU counter which is monotonically incremented at a frequency related to the CPU's hardware clock. The counter resolution is highly dependent on the system's hardware; the value cannot be reliably related to real-world time or even compared between systems in most cases. Furthermore, the counter value is reset every time the CPU is powered up or reset.

time.perf_counter returns the absolute value of the counter.
time.process_time is a value derived from the CPU counter but updated only when a given process is running on the CPU. It can be broken down into 'user time', which is the time when the process itself is running on the CPU, and 'system time', which is the time when the operating system kernel is running on the CPU on behalf of the process.
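The difference is easy to see in a few lines: sleeping advances the perf_counter clock but barely moves process_time, while busy work advances both.

```python
import time

t_wall = time.perf_counter()
t_cpu = time.process_time()

time.sleep(0.2)       # wall-clock time passes, but this process uses no CPU
sum(range(10**6))     # actual CPU work, charged to 'user time'

wall = time.perf_counter() - t_wall
cpu = time.process_time() - t_cpu

print(wall > cpu)  # True: the sleep counts toward wall time only
```

Here wall will be at least 0.2 s, while cpu only reflects the summation (a few milliseconds).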
Editable console listview in python I'm trying to find a way to make an editable listview in python to be used in a terminal. Basically each line of that listview will have a single word and I would like to be able to check or uncheck some words in the listview. Then, once I'm done editing I want to be able to close the listview and continue the rest of the program. How can I do this?
The curses library is a good way to do this. It allows you to write strings to the screen in a specific position instead of just constantly scrolling down like a typical Python program. And it has the ability to grab key inputs so you could use arrow keys and space bar to select individual lines. Because you can write strings to a specific position on the screen you can use a string like '[ ]' to represent an unselected option and '[*]' to represent a selected option. So when a user hits space you can toggle the current lines' selection status.https://docs.python.org/3/howto/curses.html#the-python-curses-module
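A minimal sketch of that idea (my own illustration; the key bindings and layout are arbitrary). The rendering is split into a pure helper so the '[ ]'/'[*]' logic is visible on its own; arrow keys move the cursor, space toggles, and Enter (or q) closes the list and returns the selection:

```python
import curses

def render(items, selected, cursor):
    """Build display lines: '[*]' marks checked items, '>' marks the cursor."""
    lines = []
    for i, item in enumerate(items):
        mark = '[*]' if i in selected else '[ ]'
        prefix = '>' if i == cursor else ' '
        lines.append('%s %s %s' % (prefix, mark, item))
    return lines

def pick(stdscr, items):
    selected, cursor = set(), 0
    while True:
        for row, line in enumerate(render(items, selected, cursor)):
            stdscr.addstr(row, 0, line)
        key = stdscr.getch()
        if key == curses.KEY_UP:
            cursor = max(cursor - 1, 0)
        elif key == curses.KEY_DOWN:
            cursor = min(cursor + 1, len(items) - 1)
        elif key == ord(' '):
            selected ^= {cursor}  # toggle the current line
        elif key in (ord('q'), ord('\n')):  # Enter/q closes the list
            return [items[i] for i in sorted(selected)]

# To run interactively: curses.wrapper(pick, ['alpha', 'beta', 'gamma'])
```

curses.wrapper handles terminal setup/teardown, so when pick returns, the program continues normally with the selected words.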
Features in sklearn logistic regression

I have some problems with adding my own features to sklearn.linear_model.LogisticRegression. But anyway, let's see some example code:

from sklearn.linear_model import LogisticRegression, LinearRegression
import numpy as np

#Numbers are class of tag
resultsNER = np.array([1,2,3,4,5])

#According to resultsNER every row is another class, so has other features,
#but in this way every row has the same features
xNER = np.array([[1.,0.,0.,0.,-1.,1.],
                 [1.,0.,1.,0.,0.,1.],
                 [1.,1.,1.,1.,1.,1.],
                 [0.,0.,0.,0.,0.,0.],
                 [1.,1.,1.,0.,0.,0.]])

#Assign resultsNER to y
y = resultsNER

#Create LogReg
logit = LogisticRegression(C=1.0)

#Learn LogReg
logit.fit(xNER, y)

#Some test vector to check which class will be predicted
xPP = np.array([1.,1.,1.,0.,0.,1.])

#linear = LinearRegression()
#linear.fit(x, y)

print "expected: ", y
print "predicted:", logit.predict(xPP)
print "decision: ", logit.decision_function(xNER)
print logit.coef_
#print linear.predict(x)
print "params: ", logit.get_params(deep=True)

The code above is clear and easy. So I have some classes which I called 1,2,3,4,5 (resultsNER); they are related to classes like "date", "person", "organization" etc. For each class I make custom features which return true or false, in this case one and zero numbers. Example: if the token matches "(S|s)unday", it belongs to the date class. Mathematically it is clear. For each class I have token features that I test. Then I look at which class has the maximum value of the sum of features (that's why the features return numbers, not booleans) and pick it. In other words, I use the argmax function. Of course, in the summation each feature has an alpha coefficient. In this case it is multiclass classification, so I need to know how to add multiclass features to sklearn.LogisticRegression.

I need two things: the alpha coefficients, and to add my own features to the logistic regression. The most important for me is how to add my own feature functions for each class to sklearn.LogisticRegression.

I know I can compute coefficients by gradient descent.
But I think when I use fit(x, y), LogisticRegression uses some algorithm to compute the coefficients, which I can get via the coef_ attribute.

So in the end my main question is how to add custom features for the different classes in my example, classes 1,2,3,4,5 (resultsNER).
Not quite sure about your question, but a few things might help you:

You can use the predict_proba function to estimate probabilities for each class:

>>> logit.predict_proba(xPP)
array([[ 0.1756304 ,  0.22633999,  0.25149571,  0.10134168,  0.24519222]])

If you want features to have some weights (is this the thing you're calling alpha?), you do it not in the learning algorithm but in the preprocessing phase. In your case you can use an array of coefficients:

>>> logit = LogisticRegression(C=1.0).fit(xNER,y)
>>> logit.predict(xPP)
array([3])
>>> alpha = np.array([[0.2, 0.2, 1, 1, 0.3, 1]])
>>> logit = LogisticRegression(C=1.0).fit(alpha*xNER,y)
>>> logit.predict(alpha*xPP)
array([2])
PySimpleGuiWeb change ip address I have a python script that uses PySimpleGuiWeb. I want to host it on my server and connect to it from another computer. But the script is running on 127.0.0.1. Can I somehow change this, or is there another way?
From the host device running the PySimpleGUIWeb python script, you can likely load the GUI via 127.0.0.1:##### through a browser (where ##### is your port). This is because it's hosted and accessed from the same device.

Accessing from another device on the same network: try using the IP of the device hosting the GUI followed by the same port as is used locally by the host device to access the GUI, e.g. 172.20.10.5:#####

The IP of the host device can be found via:

Linux (terminal): ifconfig
Mac (terminal): ifconfig
Windows (command prompt): ipconfig

You can make the IP of your GUI host device static, and according to PySimpleGUI you can make the port static too; therefore the GUI should always be found at the same IP:port externally (given that the static IP holds and doesn't get reallocated, and that the port isn't blocked for any reason by the network etc.)

Update: In addition, with some further reading, optional arguments to a 'Window' include:

web_ip='0.0.0.0', web_port=0

https://github.com/PySimpleGUI/PySimpleGUI/blob/master/PySimpleGUIWeb/PySimpleGUIWeb.py
How to add if condition in a TensorFlow graph?

Let's say I have the following code:

x = tf.placeholder("float32", shape=[None, ins_size**2*3], name = "x_input")
condition = tf.placeholder("int32", shape=[1, 1], name = "condition")
W = tf.Variable(tf.zeros([ins_size**2*3, label_option]), name = "weights")
b = tf.Variable(tf.zeros([label_option]), name = "bias")

if condition > 0:
    y = tf.nn.softmax(tf.matmul(x, W) + b)
else:
    y = tf.nn.softmax(tf.matmul(x, W) - b)

Would the if statement work in the calculation (I do not think so)? If not, how can I add an if statement into the TensorFlow calculation graph?
You're correct that the if statement doesn't work here, because the condition is evaluated at graph construction time, whereas presumably you want the condition to depend on the value fed to the placeholder at runtime. (In fact, it will always take the first branch, because condition > 0 evaluates to a Tensor, which is "truthy" in Python.)

To support conditional control flow, TensorFlow provides the tf.cond() operator, which evaluates one of two branches, depending on a boolean condition. To show you how to use it, I'll rewrite your program so that condition is a scalar tf.int32 value for simplicity:

x = tf.placeholder(tf.float32, shape=[None, ins_size**2*3], name="x_input")
condition = tf.placeholder(tf.int32, shape=[], name="condition")
W = tf.Variable(tf.zeros([ins_size**2 * 3, label_option]), name="weights")
b = tf.Variable(tf.zeros([label_option]), name="bias")

y = tf.cond(condition > 0,
            lambda: tf.matmul(x, W) + b,
            lambda: tf.matmul(x, W) - b)
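The key point is that tf.cond receives the two branches as zero-argument callables, so the choice is deferred until run time and only the selected branch is evaluated. A plain-Python sketch of that calling convention (not TensorFlow itself, just an illustration of why the branches are lambdas):

```python
def cond(pred, true_fn, false_fn):
    """Mimic tf.cond's interface: both branches are passed as
    zero-argument callables and only the chosen one is invoked."""
    return true_fn() if pred else false_fn()

x, w, b = 2.0, 3.0, 1.0
# The branches are built lazily, like the lambdas passed to tf.cond.
y_pos = cond(1 > 0, lambda: x * w + b, lambda: x * w - b)
y_neg = cond(-1 > 0, lambda: x * w + b, lambda: x * w - b)
print(y_pos, y_neg)  # 7.0 5.0
```

If the branches were passed as already-computed values instead of callables, both would have to be evaluated up front, which is exactly what tf.cond avoids.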
Mass invoices duplicate Odoo8

Is there any way to duplicate, or create, a bunch of invoices with XML-RPC? I tried the copy method of the Odoo ORM API:

invoices = call('account.invoice', 'search_read', [('type', 'ilike', "out_invoice")])
for invoice in invoices:
    inv = invoice.copy()

How can I insert the new invoice into the db?
Try erppeek, it's a Python library that makes this much easier:

client = erppeek.Client(SERVER, DATABASE, USERNAME, PASSWORD)
invoices = client.search('account.invoice', [('type', 'ilike', "out_invoice")])
for invoice_id in invoices:
    client.copy('account.invoice', invoice_id)
Use Python re to get rid of links

Say I have a string that looks like:

<a href="/wiki/Greater_Boston" title="Greater Boston">Boston–Cambridge–Quincy, MA–NH MSA</a>

How can I use re to get rid of the links and get only the Boston–Cambridge–Quincy, MA–NH MSA part? I tried something like

match = re.search(r'<.+>(\w+)<.+>', name_tmp)

but it is not working.
re.sub('<a[^>]+>(.*?)</a>', '\\1', text)

Note that parsing HTML in general is rather dangerous. However, it seems that you are parsing MediaWiki-generated links, where it is safe to assume that the links are always formatted similarly, so you should be fine with that regular expression.
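Applied to the string from the question, the substitution keeps only the captured link text:

```python
import re

html = ('<a href="/wiki/Greater_Boston" title="Greater Boston">'
        'Boston–Cambridge–Quincy, MA–NH MSA</a>')

# \1 refers to the first capture group: everything between <a ...> and </a>.
# The non-greedy (.*?) stops at the first closing tag.
text = re.sub(r'<a[^>]+>(.*?)</a>', r'\1', html)
print(text)  # Boston–Cambridge–Quincy, MA–NH MSA
```

Using a raw string r'\1' for the replacement avoids the '\\1' escaping in the original one-liner.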
Python declaring a numpy matrix of lists of lists

I would like to have a numpy matrix that looks like this:

[int, [[int, int]]]

I receive an error that looks like this: "ValueError: setting an array element with a sequence."

Below is the declaration:

def __init__(self):
    self.path = np.zeros((1, 2))

I attempt to assign a value to this in the line below:

routes_traveled.path[0, 1] = [loc]

loc is a list and routes_traveled is the object.
Do you want a higher-dimensional array, say 3d, or do you really want a 2d array whose elements are Python lists? Real lists, not numpy arrays?

One way to put lists into an array is to use dtype=object:

In [71]: routes = np.zeros((1, 2), dtype=object)
In [72]: routes[0, 1] = [1, 2, 3]
In [73]: routes[0, 0] = [4, 5]
In [74]: routes
Out[74]: array([[[4, 5], [1, 2, 3]]], dtype=object)

One term of this array is a 2-element list, the other a 3-element list. I could have created the same thing directly:

In [76]: np.array([[[4, 5], [1, 2, 3]]])
Out[76]: array([[[4, 5], [1, 2, 3]]], dtype=object)

But if I'd given it 2 lists of the same length, I'd get a 3d array:

In [77]: routes1 = np.array([[[4, 5, 6], [1, 2, 3]]])
Out[77]: array([[[4, 5, 6], [1, 2, 3]]])

I could index the last, routes1[0, 1], and get an array: array([1, 2, 3]), whereas routes[0, 1] gives [1, 2, 3]. In this case you need to be clear whether you are talking about arrays, subarrays, or Python lists.

With dtype=object, the elements can be anything: lists, dictionaries, numbers, strings.

In [84]: routes[0, 0] = 3
In [85]: routes
Out[85]: array([[3, [1, 2, 3]]], dtype=object)

Just beware that such an array loses a lot of the functionality that a purely numeric array has. What the array actually contains is pointers to Python objects, just a slight generalization of Python lists.
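Putting the object-array idea together as a small runnable script (assuming numpy is installed):

```python
import numpy as np

# A 1x2 object array: each cell holds an arbitrary Python object.
routes = np.zeros((1, 2), dtype=object)
routes[0, 0] = [4, 5]      # a plain Python list, not a subarray
routes[0, 1] = [1, 2, 3]   # lists of different lengths are fine

print(routes[0, 1])        # the stored list itself
print(type(routes[0, 1]))  # a real Python list, not np.ndarray
```

Note that assigning the same lists into a plain float array (the default dtype of np.zeros) raises the very "setting an array element with a sequence" error from the question.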
How to find recursively all links from a webpage with beautifulsoup?

I have been trying to use some code I found in this answer to recursively find all links from a given URL:

import urllib2
from bs4 import BeautifulSoup

url = "http://francaisauthentique.libsyn.com/"

def recursiveUrl(url, depth):
    if depth == 5:
        return url
    else:
        page = urllib2.urlopen(url)
        soup = BeautifulSoup(page.read())
        newlink = soup.find('a')  # find just the first one
        if len(newlink) == 0:
            return url
        else:
            return url, recursiveUrl(newlink, depth+1)

def getLinks(url):
    page = urllib2.urlopen(url)
    soup = BeautifulSoup(page.read())
    links = soup.find_all('a')
    for link in links:
        links.append(recursiveUrl(link, 0))
    return links

links = getLinks(url)
print(links)

and besides a warning:

/usr/local/lib/python2.7/dist-packages/bs4/__init__.py:181: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("lxml"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.

The code that caused this warning is on line 28 of the file downloader.py. To get rid of this warning, change code that looks like this:

    BeautifulSoup(YOUR_MARKUP)

to this:

    BeautifulSoup(YOUR_MARKUP, "lxml")

I get the following error:

Traceback (most recent call last):
  File "downloader.py", line 28, in <module>
    links = getLinks(url)
  File "downloader.py", line 25, in getLinks
    links.append(recursiveUrl(link, 0))
  File "downloader.py", line 11, in recursiveUrl
    page = urllib2.urlopen(url)
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 396, in open
    protocol = req.get_type()
TypeError: 'NoneType' object is not callable

What is the problem?
Your recursiveUrl tries to access a url link that is invalid, like /webpage/category/general, which is the value you extracted from one of the href links. You should be appending the extracted href value to the website's url and then try to open the webpage. You will need to work on your algorithm for recursion, as I don't know what you want to achieve.

Code:

import requests
from bs4 import BeautifulSoup

def recursiveUrl(url, link, depth):
    if depth == 5:
        return url
    else:
        print(link['href'])
        page = requests.get(url + link['href'])
        soup = BeautifulSoup(page.text, 'html.parser')
        newlink = soup.find('a')
        if len(newlink) == 0:
            return link
        else:
            return link, recursiveUrl(url, newlink, depth + 1)

def getLinks(url):
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'html.parser')
    links = soup.find_all('a')
    for link in links:
        links.append(recursiveUrl(url, link, 0))
    return links

links = getLinks("http://francaisauthentique.libsyn.com/")
print(links)

Output:

http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/2017
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/2017/10
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/2017/09
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/2017/08
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/2017/07
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
http://francaisauthentique.libsyn.com//webpage/category/general
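Note the double slash in the output above: url + link['href'] concatenates the strings naively. The standard library's urljoin resolves relative hrefs against the base URL correctly and avoids that:

```python
from urllib.parse import urljoin  # urlparse.urljoin in Python 2

base = "http://francaisauthentique.libsyn.com/"

# Naive concatenation duplicates the slash.
print(base + "/webpage/category/general")
# http://francaisauthentique.libsyn.com//webpage/category/general

# urljoin resolves the root-relative path properly.
print(urljoin(base, "/webpage/category/general"))
# http://francaisauthentique.libsyn.com/webpage/category/general

# Absolute hrefs are passed through unchanged.
print(urljoin(base, "http://example.com/page"))
# http://example.com/page
```

Using urljoin also makes the crawler robust to href values that are already absolute, which the url + href version would mangle.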