**Q: Why does my code take so long to write a CSV file in Dask?**

Below is my Python code:

```python
import dask.dataframe as dd

VALUE2015 = dd.read_csv(
    'A/SKD - M2M by Salesman (value by uom) (NEWSALES)2015-2016.csv',
    usecols=VALUEFY, dtype=traintypes1)

REPORT = VALUE2015.groupby(index).agg(
    {'JAN': 'sum', 'FEB': 'sum', 'MAR': 'sum', 'APR': 'sum',
     'MAY': 'sum', 'JUN': 'sum', 'JUL': 'sum', 'AUG': 'sum',
     'SEP': 'sum', 'OCT': 'sum', 'NOV': 'sum', 'DEC': 'sum'}).compute()

REPORT.to_csv('VALUE*.csv', header=True)
```

It takes 6 minutes to create a 100MB CSV file.

**A:** Looking through the Dask documentation, it says that "generally speaking, Dask.dataframe groupby-aggregations are roughly same performance as Pandas groupby-aggregations." So unless you're using a Dask distributed client to manage workers, threads, etc., the benefit of using it over vanilla Pandas isn't always there. Also, try to time each step in your code, because if the bulk of the 6 minutes is taken up by writing the CSV to disk, then again Dask will be of no help (for a single file). Dask's documentation also has a nice tutorial on adding distributed schedulers for your tasks.
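Timing each step, as the answer suggests, pinpoints whether the read, the aggregation, or the CSV write dominates. A minimal sketch of a timing helper; the commented usage lines reuse names from the question and are hypothetical:

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, print how long it took, and return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return result

# Hypothetical usage with the code from the question:
# VALUE2015 = timed("read_csv", dd.read_csv, path, usecols=VALUEFY, dtype=traintypes1)
# REPORT = timed("groupby", lambda: VALUE2015.groupby(index).agg(sums).compute())
# timed("to_csv", REPORT.to_csv, 'VALUE.csv', header=True)
```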
**Q: Program format getting changed in Wing**

If you see the picture at this link https://drive.google.com/file/d/0B_CP5fn_tuEDTDZoclM5M0V0cmc/edit?usp=sharing, this is what my program looks like when I write it in Sublime. But when I copy and paste the program into Wing, it looks like the picture at the following link: https://drive.google.com/file/d/0B_CP5fn_tuEDZEd0SVktVHRMcEE/edit?usp=sharing. When I write and save the file in Sublime and then try to run it in Python, it gives me an error. But when I paste it into Wing, fix the indentation, and save, it runs fine. I don't know how to indent the program properly in Sublime.

**A:** Judging by the images, it seems likely that the editor settings related to indentation differ between Sublime and Wing. Check whether either editor is using tabs instead of spaces when indenting the code, and if so, change that editor to use 4 spaces instead of a tab.
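In Sublime Text this comes down to two settings. A sketch of what to put in Preferences → Settings; the values below are the usual choices for Python (PEP 8), not taken from the question:

```json
{
    // insert 4 spaces when Tab is pressed instead of a tab character
    "translate_tabs_to_spaces": true,
    "tab_size": 4,
    // optional: make existing mixed whitespace visible while cleaning up
    "draw_white_space": "all"
}
```

Sublime's settings files tolerate comments even though plain JSON does not.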
**Q: What is more efficient: .objects.filter().exists() or get() wrapped in a try?**

I'm writing tests for a Django application and I want to check if an object has been saved to the database. Which is the more efficient/correct way to do it?

```python
User.objects.filter(username=testusername).exists()
```

or

```python
try:
    User.objects.get(username=testusername)
except User.DoesNotExist:
    ...
```

**A:** Speed test: exists() vs. get() + try/except. Test functions in test.py:

```python
from testapp.models import User

def exists(x):
    return User.objects.filter(pk=x).exists()

def get(x):
    try:
        User.objects.get(pk=x)
        return True
    except User.DoesNotExist:
        return False
```

Using timeit in the shell:

```
In [1]: from testapp import test
In [2]: %timeit for x in range(100): test.exists(x)
10 loops, best of 3: 88.4 ms per loop
In [3]: %timeit for x in range(100): test.get(x)
10 loops, best of 3: 105 ms per loop
In [4]: %timeit for x in range(1000): test.exists(x)
1 loops, best of 3: 880 ms per loop
In [5]: %timeit for x in range(1000): test.get(x)
1 loops, best of 3: 1.02 s per loop
```

Conclusion: exists() is over 10% faster for checking whether an object has been saved in the database.
**Q: I want to deploy using the entries that I have in my database**

I used Postgres in development for my Django project and have important entries in there, and I want to deploy them with my app on Heroku. Is there a simple way to do this?

**A:** Sure. You just need to export your local database and import it into the Heroku Postgres database. Heroku has a guide to do just that.

1. Create a dump from your local database:

   ```
   PGPASSWORD=mypassword pg_dump -Fc --no-acl --no-owner -h localhost -U myuser mydb > mydb.dump
   ```

2. Upload mydb.dump somewhere Heroku can access it.

3. Import it on Heroku:

   ```
   heroku pg:backups restore 'https://s3.amazonaws.com/me/items/3H0q/mydb.dump' DATABASE_URL
   ```

(Source: Heroku's import/export guide.)
**Q: name " " is not defined**

```python
import math

EMPTY = '-'

def is_between(value, min_value, max_value):
    """ (number, number, number) -> bool

    Precondition: min_value <= max_value

    Return True if and only if value is between min_value and max_value,
    or equal to one or both of them.

    >>> is_between(1.0, 0.0, 2)
    True
    >>> is_between(0, 1, 2)
    False
    """
    return value >= min_value and value <= max_value

# Students are to complete the body of this function, and then put their
# solutions for the other required functions below this function.

def game_board_full(cells):
    """ (str) -> bool

    Return True if no EMPTY in cells and else False

    >>> game_board_full("xxox")
    True
    >>> game_board_full("xx-o")
    False
    """
    return "-" not in cells

def get_board_size(cells):
    """ (str) -> int

    Return the square root of the length of the cells

    >>> get_board_size("xxox")
    2
    >>> get_board_size("xoxoxoxox")
    3
    """
    sqrt_cell = len(cells) ** 0.5
    return int(sqrt_cell)

def make_empty_board(size):
    """ (int) -> str

    Precondition: size >= 1 and size <= 9

    Return a string for storing information with the size

    >>> make_empty_board(2)
    "----"
    >>> make_empty_board(3)
    "---------"
    """
    return "-" * size ** 2

def get_position(row_index, col_index, size):
    """ (int, int, int) -> int

    Precondition: size >= col_index and size >= row_index

    Return the str_index of the cell with row_index, col_index and size

    >>> get_position(2, 2, 4)
    5
    >>> get_position(3, 4, 5)
    13
    """
    str_index = (row_index - 1) * size + col_index - 1
    return str_index

def make_move(symbol, row_index, col_index, game_board):
    """ (str, int, int, str) -> str

    Return the resultant game board with symbol, row_index, col_index
    and game_board

    >>> make_move("o", 1, 1, "----")
    "o---"
    >>> make_move("x", 2, 3, "---------")
    "-----x---"
    """
    length = len(game_board)
    size = len(cells) ** 0.5
    str_index = (row_index - 1) * size + col_index - 1
    return "-" * (str_index - 1) + symbol + "-" * (length - str_index)

def extract_line(cells, direction, cells_num):
    """ (str, str, int) -> str

    Return the characters of a specified row with cells, direction and
    cells_num

    >>> extract_line("xoxoxoxox", "across", 2)
    "oxo"
    >>> extract_line("xoxo", "up_diagonal", "-")
    "xo"
    """
    num = cells_num
    s = cells
    size = get_board_size(cells)
    if direction == "across":
        return s[(num - 1) * size : num * size]
    elif direction == "down":
        return s[num - 1 : size ** 2 : size]
    elif direction == "up_diagonal":
        return s[(size - 1) * size : size - 2 : 1 - size]
    elif direction == "down_diagonal":
        return s[0 : size * 2 : size + 1]
```

The error:

```
NameError: name 'cells' is not defined
```

I don't know how to define cells because it is a parameter.

**A:** You have NO cells parameter in:

```python
def make_move(symbol, row_index, col_index, game_board):
```

Next time read the error message carefully so you know in which line of code you have a problem.
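For what it's worth, here is a sketch of make_move with the undefined cells replaced by the game_board parameter; the slice arithmetic is also adjusted by one so the docstring examples hold (this correction is mine, not from the original post):

```python
def make_move(symbol, row_index, col_index, game_board):
    """Return game_board with symbol placed at (row_index, col_index)."""
    length = len(game_board)
    size = int(len(game_board) ** 0.5)  # game_board, not the undefined 'cells'
    str_index = (row_index - 1) * size + col_index - 1
    # str_index is 0-based, so pad str_index cells before the symbol
    return "-" * str_index + symbol + "-" * (length - str_index - 1)

print(make_move("o", 1, 1, "----"))       # o---
print(make_move("x", 2, 3, "---------"))  # -----x---
```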
**Q: How to define policies for a Python application in the Bluemix Auto-Scaling service**

I noticed that the policy types depend on the target runtime. For example, for Java it is possible to define policies based on memory, throughput, response time, etc. The only possibility for Python is a memory-based policy. Is there any workaround for that?

**A:** The Bluemix Auto-Scaling service supports scaling rules for JVM heap, memory, and throughput only for Liberty for Java™ applications, because the service works with the IBM JVM. For other runtime types, including the Python runtime, only memory-based scaling rules are available.
**Q: Save game progress for multiple sprites**

I'm working on a game in Pygame that includes a player class and an enemy class. Each class has multiple variables within it. I'm trying to figure out how I can save the data of these sprites using Python's built-in pickle module. I thought of doing something similar to this:

```python
data_file = open("save.dat", "wb")
for i in enemyList:
    pickle.dump(i.health)
    pickle.dump(i.rect.x)
    pickle.dump(i.rect.y)
    pickle.dump(i.image)
```

and so on for each variable. How can I save the data and retrieve it in the same state it was in previously?

**A:** Since pickle is object serialization, you should just be able to dump your whole object. The b in wb is for binary. You don't have to know how an object is represented in binary; you can just dump it like so:

```python
data_file = open("save.dat", "wb")
for i in enemyList:
    pickle.dump(i, data_file)
```

Then when you load it back in, you will have the whole object. To open it:

```python
with open('save.dat', 'rb') as fp:
    i = pickle.load(fp)
```

I haven't used pickle much before, but since it is all binary you should also be able to dump your whole enemyList if it is an object:

```python
data_file = open("save.dat", "wb")
pickle.dump(enemyList, data_file)

with open('save.dat', 'rb') as fp:
    enemyList = pickle.load(fp)
```

**Excluding/including additional state:** Pickle uses the `__getstate__` and `__setstate__` methods to alter state before reading and writing serialized data. If you wish to omit unserializable data, you must override these methods; see the pickle documentation on object state.

**Consideration:** Serialization (and therefore pickle) is seen as an alternative to creating your own file format, which I often find easier depending on the data types. If you are not in control of your object hierarchy, sometimes you don't want to create your own inherited object to try to gain control of all the data; sometimes it is just easier to write your own file format.
**Q: How to omit tns from the response and change the tag name in spyne?**

How do I omit tns from my response and also change the tag name? My response is like this:

```xml
<soap11env:Envelope xmlns:soap11env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:tns="spyne.example">
  <soap11env:Body>
    <tns:FnSchedule_CityResponse>
      <tns:FnSchedule_CityResult>
        <tns:ErrorString></tns:ErrorString>
        <tns:CityName>HYDERABAD</tns:CityName>
        <tns:CityId>1</tns:CityId>
        <tns:ErrId>0</tns:ErrId>
      </tns:FnSchedule_CityResult>
    </tns:FnSchedule_CityResponse>
  </soap11env:Body>
</soap11env:Envelope>
```

I want to remove tns and change "soap11env" to "soap". Having these values is causing validation issues. I referred to this question on Stack Overflow and implemented it, but it was not helpful: Remove the namespace from Spyne response variables.

**A:** In order to change soap11env to soap, simply override the prefix mapping:

```python
application.interface.nsmap['soap'] = application.interface.nsmap['soap11env']
```

The tns (target namespace) must not be changed, but a few cases may arise where you need to change a name in order to completely test something. To change the namespaces:

```python
def on_method_return_string(ctx):
    ctx.out_string[0] = ctx.out_string[0].replace(b'ns4', b'diffgr')
    ctx.out_string[0] = ctx.out_string[0].replace(b'ns5', b'msdata')

YourModelClassName.event_manager.add_listener('method_return_string', on_method_return_string)
```

What I did here was replace the namespace ns4 with diffgr and ns5 with msdata. ns4 and ns5 are sub-namespaces that I had in the responses of some third-party application. I found this solution in the mailing list maintained for spyne.
**Q: ERROR: Command errored out with exit status 1 while installing requirements.txt**

I have been trying to install packages from a requirements.txt file, but I'm getting an error. I made a virtual environment to install the packages into, but got this huge output. My machine is running Python 3.8. Below is the error I got in my terminal while trying to install the requirements.txt file in my virtual environment:

```
  Getting requirements to build wheel ... done
  Preparing wheel metadata ... error
  ERROR: Command errored out with exit status 1:
   command: 'C:\Users\user\Documents\demoProjectwebapp-v3\venv\Scripts\python.exe' 'C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\user\AppData\Local\Temp\tmpzo50rwzk'
       cwd: C:\Users\user\AppData\Local\Temp\pip-install-dpmca2i7\scipy
  Complete output (170 lines):
  lapack_opt_info:
  lapack_mkl_info:
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries mkl_rt not found in ['C:\\Users\\user\\Documents\\demoProjectwebapp-v3\\venv\\lib', 'C:\\']
    NOT AVAILABLE
  openblas_lapack_info:
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries openblas not found in ['C:\\Users\\user\\Documents\\demoProjectwebapp-v3\\venv\\lib', 'C:\\']
  get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']'
  customize GnuFCompiler
  Could not locate executable g77
  Could not locate executable f77
  customize IntelVisualFCompiler
  Could not locate executable ifort
  Could not locate executable ifl
  customize AbsoftFCompiler
  Could not locate executable f90
  customize CompaqVisualFCompiler
  Could not locate executable DF
  customize IntelItaniumVisualFCompiler
  Could not locate executable efl
  customize Gnu95FCompiler
  Could not locate executable gfortran
  Could not locate executable f95
  customize G95FCompiler
  Could not locate executable g95
  customize IntelEM64VisualFCompiler
  customize IntelEM64TFCompiler
  Could not locate executable efort
  Could not locate executable efc
  customize PGroupFlangCompiler
  Could not locate executable flang
  don't know how to compile Fortran code on platform 'nt'
    NOT AVAILABLE
  openblas_clapack_info:
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries openblas,lapack not found in ['C:\\Users\\user\\Documents\\demoProjectwebapp-v3\\venv\\lib', 'C:\\']
    NOT AVAILABLE
  atlas_3_10_threads_info:
  Setting PTATLAS=ATLAS
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries tatlas,tatlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries lapack_atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries tatlas,tatlas not found in C:\
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries lapack_atlas not found in C:\
  <class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
    NOT AVAILABLE
  atlas_3_10_info:
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries satlas,satlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries lapack_atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries satlas,satlas not found in C:\
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries lapack_atlas not found in C:\
  <class 'numpy.distutils.system_info.atlas_3_10_info'>
    NOT AVAILABLE
  atlas_threads_info:
  Setting PTATLAS=ATLAS
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries ptf77blas,ptcblas,atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries lapack_atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries ptf77blas,ptcblas,atlas not found in C:\
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries lapack_atlas not found in C:\
  <class 'numpy.distutils.system_info.atlas_threads_info'>
    NOT AVAILABLE
  atlas_info:
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries f77blas,cblas,atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries lapack_atlas not found in C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries f77blas,cblas,atlas not found in C:\
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries lapack_atlas not found in C:\
  <class 'numpy.distutils.system_info.atlas_info'>
    NOT AVAILABLE
  lapack_info:
  No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
  customize MSVCCompiler
    libraries lapack not found in ['C:\\Users\\user\\Documents\\demoProjectwebapp-v3\\venv\\lib', 'C:\\']
    NOT AVAILABLE
  lapack_src_info:
    NOT AVAILABLE
    NOT AVAILABLE
  setup.py:111: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
    import imp
  setup.py:386: UserWarning: Unrecognized setuptools command ('dist_info --egg-base C:\Users\user\AppData\Local\Temp\pip-modern-metadata-fv_m7gz4'), proceeding with generating Cython sources and expanding templates
    warnings.warn("Unrecognized setuptools command ('{}'), proceeding with "
  Running from scipy source directory.
  C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\numpy\distutils\system_info.py:624: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. self.calc_info()
  C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\numpy\distutils\system_info.py:624: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. self.calc_info()
  C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\numpy\distutils\system_info.py:624: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. self.calc_info()
  Traceback (most recent call last):
    File "C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 280, in <module>
      main()
    File "C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 263, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 133, in prepare_metadata_for_build_wheel
      return hook(metadata_directory, config_settings)
    File "C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\setuptools\build_meta.py", line 157, in prepare_metadata_for_build_wheel
      self.run_setup()
    File "C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\setuptools\build_meta.py", line 248, in run_setup
      super(_BuildMetaLegacyBackend,
    File "C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\setuptools\build_meta.py", line 142, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 505, in <module>
    File "setup.py", line 501, in setup_package
      setup(**metadata)
    File "C:\Users\user\AppData\Local\Temp\pip-build-env-glp1ljn7\overlay\Lib\site-packages\numpy\distutils\core.py", line 135, in setup
      config = configuration()
      raise NotFoundError(msg)
  numpy.distutils.system_info.NotFoundError: No lapack/blas resources found.
  ----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\user\Documents\demoProjectwebapp-v3\venv\Scripts\python.exe' 'C:\Users\user\Documents\demoProjectwebapp-v3\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\user\AppData\Local\Temp\tmpzo50rwzk' Check the logs for full command output.
```
**A:** Try to isolate which line of requirements.txt gives you the error; you can comment out torch and see how the installation goes without it. To replicate your error message, try pip install torch; I think it would give you the error message you experienced. Two things after that:

- go to the torch documentation and find out about manual installation;
- try installing with conda, another package manager.

Also, you may want to install the Anaconda package bundle to see whether most of these libraries are already there. As a side note, a bad experience installing some heavy dependency happens often, so treat it as exercise. It is not something you did wrong; it is simply the nature of the dependencies you are trying to use. Good luck!
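In the log above, the package that actually fails to build is scipy; no LAPACK/BLAS is found because pip fell back to compiling from source instead of using a prebuilt wheel. A hedged sequence worth trying first; these are standard pip flags, not something from the question:

```shell
# An old pip often cannot see newer binary wheels; upgrade the build tooling first
python -m pip install --upgrade pip setuptools wheel

# Refuse source builds so a missing wheel fails loudly instead of compiling
python -m pip install --only-binary :all: scipy

# Then retry the full requirements file
python -m pip install -r requirements.txt
```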
**Q: How to check whether a graph is an undirected graph?**

Currently, I am creating a function to check whether a graph is undirected. My graphs are stored in this way; this is an undirected graph of 3 nodes, 1, 2, 3:

```python
graph = {1: {2: {...}, 3: {...}},
         2: {1: {...}, 3: {...}},
         3: {1: {...}, 2: {...}}}
```

The {...} represents alternating layers of the dictionaries for the connections of each node. It recurs infinitely, since the dicts are nested in each other. More details about the graph: the keys refer to the nodes, and their values refer to a dict with the nodes that are connected to the key.

Example: two nodes (1, 2) with an undirected edge: `graph = {1: {2: {1: {...}}}, 2: {1: {2: {...}}}}`

Example 2: two nodes (1, 2) with a directed edge from 1 to 2: `graph = {1: {2: {}}, 2: {}}`

My current way of figuring out whether a graph is undirected is checking whether the number of edges in the graph is equal to (n*(n-1))/2, where n is the number of nodes. But this cannot differentiate between 15 directed edges and 15 undirected edges, so what other way can I use to confirm that my graph is undirected?

**A:** First off, I think you're abusing terminology by calling a graph with edges in both directions "undirected". In a real undirected graph, there is no notion of direction to an edge, which often means you don't need redundant direction information in the graph's representation in a computer program. What you have is a directed graph, and you want to see if it could be represented by an undirected graph, even though you're not doing so yet.

I'm not sure there's any easier way to do this than by checking every edge in the graph to see if the reversed edge also exists. This is pretty easy with your graph structure; just loop over the vertices and check whether there is a returning edge for every outgoing edge:

```python
def undirected_compatible(graph):
    for src, edges in graph.items():  # edges is the dict of outgoing edges from src
        for dst, dst_edges in edges.items():  # dst_edges is the dict of outgoing edges from dst
            if src not in dst_edges:
                return False
    return True
```

I'd note that a more typical way of describing a graph like yours would be to omit the nested dictionaries and just give a list of destinations for the edges. A fully connected 3-node graph would be:

```python
{1: [2, 3], 2: [1, 3], 3: [1, 2]}
```

You can get the same information from this graph as from your current one; you'd just need an extra indirection to look up the destination node in the top-level graph dict, rather than having it be the value of the corresponding key in the edge container already. A version of the function above for this more conventional structure would be:

```python
def undirected_compatible(graph):
    for src, edges in graph.items():
        for dst in edges:
            if src not in graph[dst]:
                return False
    return True
```

The not in test may make this slower for large graphs, since searching a list for an item is less asymptotically efficient than checking whether a key is in a dictionary. If you need higher performance, you could use sets instead of lists to speed up the membership tests.
**Q: How to return a list of values from within a dictionary?**

I need to return a list of values for a given id number using two previously created dictionaries, where the values I need are stored within the dictionaries. The two dictionaries I've created are as follows:

```python
{100: ('Mulan', [300, 500], [200, 400]),
 200: ('Ariel', [100, 500], [500]),
 300: ('Jasmine', [500], [500, 100]),
 400: ('Elsa', [100, 500], []),
 500: ('Belle', [200, 300], [100, 200, 300, 400])}
```

```python
{100000: (400, 'Does not want to build a %SnowMan %StopAsking', ['SnowMan', 'StopAsking'], [100, 200, 300], [400, 500]),
 100001: (200, 'Make the ocean great again.', [''], [], [400]),
 100002: (500, "Help I'm being held captive by a beast! %OhNoes", ['OhNoes'], [400], [100, 200, 300]),
 100003: (500, "Actually nm. This isn't so bad lolz :P %StockholmeSyndrome", ['StockholmeSyndrome'], [400, 100], []),
 100004: (300, 'If some random dude offers to %ShowYouTheWorld do yourself a favour and %JustSayNo.', ['ShowYouTheWorld', 'JustSayNo'], [500, 200], [400]),
 100005: (400, 'LOLZ BELLE. %StockholmeSyndrome %SnowMan', ['StockholmeSyndrome', 'SnowMan'], [], [200, 300, 100, 500])}
```

The first dictionary is of the form {id: (name, followers, following)}. The second dictionary is of the form {key: (id, chirp, tags, likes, dislikes)}. For the given id numbers 100, 200, 300, 400, 500, I need to return the chirp with the most likes for each user they follow. An example of the output for id number 500 would be:

```python
['Make the ocean great again.',
 'If some random dude offers to %ShowYouTheWorld do yourself a favour and %JustSayNo.',
 'Does not want to build a %SnowMan %StopAsking']
```

I understand the process that needs to happen here, but I need some help with how to get the function to find the necessary value in one dictionary and then search for the required values in the second dictionary. Thanks so much for any guidance you can offer!

**A:** You'll need to use nested loops to go through both dictionaries, starting with the first:

```python
user_input = 500
for key, value in dictionary1.items():
    if user_input == key:
        for key2, value2 in dictionary2.items():
            for items in value[1]:
                if items == value2[0]:
                    print(value2[1])
```

For reference:

```
key       = id
value[0]  = name
value[1]  = followers
value[2]  = following

key2       = key
value2[0]  = id
value2[1]  = chirp
value2[2]  = tags
value2[3]  = likes
value2[4]  = dislikes
```
**Q: Output of Python code is one character per line**

I'm new to Python and having some trouble with an API scrape I'm attempting. What I want to do is pull a list of book titles using this code:

```python
r = requests.get('https://api.dp.la/v2/items?q=magic+AND+wizard&api_key=09a0efa145eaa3c80f6acf7c3b14b588')
data = json.loads(r.text)
for doc in data["docs"]:
    for title in doc["sourceResource"]["title"]:
        print(title)
```

This works to pull the titles, but most (not all) titles are output one character per line. I've tried adding .splitlines(), but this doesn't fix the problem. Any advice would be appreciated!

**A:** The problem is that you have two types of title in the response: some are plain strings, such as "Germain the wizard", and some are arrays of strings, such as ['Joe Strong, the boy wizard : or, The mysteries of magic exposed /']. Iterating over a plain string yields it one character at a time. It seems that in this particular case all the lists have length one, but I guess that will not always be the case. To illustrate what you might need to do, I added a join here instead of just taking title[0]:

```python
import requests
import json

r = requests.get('https://api.dp.la/v2/items?q=magic+AND+wizard&api_key=09a0efa145eaa3c80f6acf7c3b14b588')
data = json.loads(r.text)
for doc in data["docs"]:
    title = doc["sourceResource"]["title"]
    if isinstance(title, list):
        print(" ".join(title))
    else:
        print(title)
```

In my opinion that should never happen; an API should return predictable types, otherwise it looks messy on the users' side.
**Q: Convert AngularJS website to Flask**

I created an Angular website with ui-router:

```
angular app structure
|--index.html
|--js
|   |--app.js
|   |--angular.js
|   |-- ...
|--stylesheets
|   |--main.css
|   |-- ...
|--template
|   |--navbar.html
|   |--about.html
|   |-- ...
```

Each js and css file is linked like this:

```html
<script src="js/main.js"></script>
```

I want to serve this with Flask. I threw everything into the "templates" folder and wrote a simple Flask app.

server.py:

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route('/')
def view():
    return make_response(open('templates/index.html').read())

app.debug = True

if __name__ == '__main__':
    app.run()
```

```
flask app
|--server.py
|--templates
|   |--index.html
|   |--js
|   |   |--app.js
|   |   |--angular.js
|   |   |-- ...
|   |--stylesheets
|   |   |--main.css
|   |   |-- ...
|   |--template
|   |   |--navbar.html
|   |   |--about.html
|   |   |-- ...
```

None of my files load when I go to the root URL. How do I serve the Angular files from the Flask app?

**A:** You need to render your template. The best way to do that is:

```python
@app.route('/')
def view():
    return render_template('index.html')
```
**Q: MySQL query throwing 1064 error**

I have a huge amount of data stored in a MySQL DB. One of the columns in the database is a long string. One of the strings is "iEdge detected the 'warning' condition 'iedge it", which is stored in string_type. I have to query the database and find how many such strings there are. I am querying from my Python program using something like:

```python
cur.execute("select count(*) from table1 as tmp where tmp.err_string='" + row[r] + "'")
```

row[r] contains "iEdge detected the 'warning' condition 'iedge it". I am getting error 1064 (You have an error in your SQL syntax...). I think it is happening because of the quotes in the string. How can I fix this?

**A:** Can you try this:

```python
sql = "select count(*) from table1 as tmp where tmp.err_string=%s"
cursor.execute(sql, [row[r]])
```

Let the MySQL Python library worry about escaping special characters and quoting your string. See this SO post for more information.
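The same principle can be demonstrated end to end with the standard library's sqlite3 module (which uses ? placeholders where MySQL drivers use %s); the table here is a stand-in for the one in the question:

```python
import sqlite3

# In-memory stand-in for the MySQL table from the question
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (err_string TEXT)")

tricky = "iEdge detected the 'warning' condition 'iedge it"
conn.execute("INSERT INTO table1 (err_string) VALUES (?)", (tricky,))
conn.commit()

# The placeholder quotes the embedded apostrophes safely;
# naive string concatenation would produce a syntax error here.
cur = conn.execute("SELECT count(*) FROM table1 WHERE err_string = ?", (tricky,))
count = cur.fetchone()[0]
print(count)  # 1
```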
**Q: Python regular expressions: find every word except those in tags**

How do I find all words except the ones in tags using the re module? I know how to find something, but how do I do it the opposite way, so that I search for every word except everything inside tags, including the tags themselves? So far I managed this:

```python
f = open(filename, 'r')
data = re.findall(r"<.+?>", f.read())
```

It prints everything inside <> tags, but how do I make it find every word except what's inside those tags? I tried ^ at the start of a pattern inside [], but then symbols such as . are treated literally, without their special meaning. I also managed to solve this by splitting the string using '''\= <>"''', then checking the whole string for words that are inside <> tags (like align, right, td, etc.) and appending the words that are not inside <> tags to another list, but that is a bit of an ugly solution. Is there a simple way to search for every word except anything inside <> and the tags themselves? So, for the string 'hello 123 <b>Bold</b> <p>end</p>', re.findall would return:

```python
['hello', '123', 'Bold', 'end']
```

**A:** If you want to avoid using a regular expression, BeautifulSoup makes it very easy to get just the text from an HTML document:

```python
from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(html_string)
text = "".join(soup.findAll(text=True))
```

From there, you can get the list of words with split:

```python
words = text.split()
```
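Since the question asks specifically about the re module, a small sketch that strips the tags first and then splits what is left also works for the example given:

```python
import re

def words_outside_tags(s):
    # Replace every <...> tag with a space, then split on whitespace
    return re.sub(r"<[^>]*>", " ", s).split()

print(words_outside_tags("hello 123 <b>Bold</b> <p>end</p>"))
# ['hello', '123', 'Bold', 'end']
```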
**Q: Add build information in Jenkins using REST**

Does anyone know how to add build information to an existing Jenkins build? What I'm trying to do is replace the #1 build number with the actual full version number that the build represents. I can do this manually by going to http://MyJenkinsServer/job/[jobname]/[buildnumber]/configure. I have tried to reverse-engineer the headers using Chrome by seeing what it sends to the server, and I found the following:

```
Request URL: http://<server>/job/test_job/1/configSubmit
Request Method: POST
Status Code: 200 OK

Request Headers
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Content-Length:192
Content-Type:application/x-www-form-urlencoded
Cookie:hudson_auto_refresh=false; JSESSIONID=qbn3q22phkbc12f1ikk0ssijb; screenResolution=1920x1200
Referer:http://<server>/job/test_job/1/configure
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4

Form Data
displayName:#1
description:test4
core:apply:true
json:{"displayName": "#1", "description": "test4", "": "test4", "core:apply": "true"}

Response Headers
Content-Length:155
Content-Type:text/html;charset=UTF-8
Server:Jetty(8.y.z-SNAPSHOT)
```

This at least gives me the form parameters that I need to POST. From this I came up with the following Python 3 code:

```python
import requests

params = {"displayName": "Hello World",
          "description": "This is my description",
          "": "This is my description",
          "core:apply": "true"}

a = requests.post("http://myjenkinsserver/job/test_jira_job_update/1/configSubmit",
                  data=params,
                  auth=(username, pwd),
                  headers={"content-type": "text/html;charset=UTF-8"})

if a.raw.status != 200:
    print("***ERROR***")
    print(a.raw.status)
    print(a.raw.reason)
```

but sadly this failed with the following error:

```
***ERROR***
400
Nothing is submitted
```

Any ideas what I am doing wrong? Is my approach to this problem completely wrong?

**A:** It's a bit confusing to reverse-engineer this. You just need to submit the json parameter in your POST:

```python
p = {'json': '{"displayName":"New Name", "description":"New Description"}'}
requests.post('http://jenkins:8080/job/jobname/5/configSubmit',
              data=p, auth=(user, token))
```

In my tests, the above works to set the build name and description with Jenkins 1.517. (Also, I don't think you should set the content-type header, since you should be submitting form-encoded data.)
Suppress warnings for python-xarray I'm running the following code positive_values = values.where(values > 0) In this example values may contain nan elements. I believe that for this reason, I'm getting the following runtime warning: RuntimeWarning: invalid value encountered in greater_equal if not reflexive Does xarray have methods of suppressing these warnings? | The warnings module provides the functionality you are looking for.To suppress all warnings do (see John Coleman's answer for why this is not good practice):import warningswarnings.simplefilter("ignore") # warnings.simplefilter("ignore", category=RuntimeWarning) # for RuntimeWarning onlyTo make the suppression temporary do it inside the warnings.catch_warnings() context manager:import warningswith warnings.catch_warnings(): warnings.simplefilter("ignore") positive_values = values.where(values > 0) The context manager saves the original warning settings prior to entering the context and then sets them back when exiting the context.
How to customize pybusyinfo window in (windows OS) to make it appear at top corner of window and the other formatting options? I am writing a python script to get the climate conditions in particular area every 30 minutes and give a popup notification.This code gives popup at the center of the screen which is annoying.I wish to have the popup similar to notify-send in linux[which appears at right corner] and the message is aligned at the center of pybusyinfo window ,and how to align it to right?Any change of code in pybusyinfo would be helpful.import requestsfrom bs4 import BeautifulSoupimport datetime,timeimport wximport wx.lib.agw.pybusyinfo as PBInow = datetime.datetime.now()hour=now.hour# gets current timedef main(): chrome_path = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s' g_link = 'http://www.accuweather.com/en/in/tambaram/190794/hourly-weather-forecast/190794?hour='+str(hour) g_res= requests.get(g_link) g_links= BeautifulSoup(g_res.text,"lxml") if hour > 18 : temp = g_links.find('td', {'class' :'first-col bg-s'}).text climate = g_links.find('td', {'class' :'night bg-s icon first-col'}).text else : temp = g_links.find('td', {'class' :'first-col bg-c'}).text climate = g_links.find('td', {'class' :'day bg-c icon first-col'}).text for loc in g_links.find_all('h1'): location=loc.text info = location +' ' + str(now.hour)+':'+str(now.minute) #print 'Temp : '+temp #print climate def showmsg(): app = wx.App(redirect=False) title = 'Weather' msg= info+'\n'+temp + '\n'+ climate d = PBI.PyBusyInfo(msg,title=title) return d if __name__ == '__main__': d = showmsg() time.sleep(6)while True: main() time.sleep(1800) | screen_size = wx.DisplaySize()d_size = d._infoFrame.GetSize()pos_x = screen_size[0] - d_size[0] # Right - popup.width (aligned to right side)pos_y = screen_size[1] - d_size[1] # Bottom - popup.height (aligned to bottom)d.SetPosition((pos_x, pos_y))d.Update() # force redraw ...
(otherwise your "work " will block redraw)to align the text you will need to subclass PyBusyFrameclass MyPyBusyFrame(PBI.PyBusyFrame): def OnPaint(self, event): """ Handles the ``wx.EVT_PAINT`` event for L{PyInfoFrame}. :param `event`: a `wx.PaintEvent` to be processed. """ panel = event.GetEventObject() dc = wx.BufferedPaintDC(panel) dc.Clear() # Fill the background with a gradient shading startColour = wx.SystemSettings_GetColour(wx.SYS_COLOUR_ACTIVECAPTION) endColour = wx.WHITE rect = panel.GetRect() dc.GradientFillLinear(rect, startColour, endColour, wx.SOUTH) # Draw the label font = wx.SystemSettings_GetFont(wx.SYS_DEFAULT_GUI_FONT) dc.SetFont(font) # Draw the message rect2 = wx.Rect(*rect) rect2.height += 20 ############################################# # CHANGE ALIGNMENT HERE ############################################# dc.DrawLabel(self._message, rect2, alignment=wx.ALIGN_CENTER|wx.ALIGN_CENTER) # Draw the top title font.SetWeight(wx.BOLD) dc.SetFont(font) dc.SetPen(wx.Pen(wx.SystemSettings_GetColour(wx.SYS_COLOUR_CAPTIONTEXT))) dc.SetTextForeground(wx.SystemSettings_GetColour(wx.SYS_COLOUR_CAPTIONTEXT)) if self._icon.IsOk(): iconWidth, iconHeight = self._icon.GetWidth(), self._icon.GetHeight() dummy, textHeight = dc.GetTextExtent(self._title) textXPos, textYPos = iconWidth + 10, (iconHeight-textHeight)/2 dc.DrawBitmap(self._icon, 5, 5, True) else: textXPos, textYPos = 5, 0 dc.DrawText(self._title, textXPos, textYPos+5) dc.DrawLine(5, 25, rect.width-5, 25) size = self.GetSize() dc.SetPen(wx.Pen(startColour, 1)) dc.SetBrush(wx.TRANSPARENT_BRUSH) dc.DrawRoundedRectangle(0, 0, size.x, size.y-1, 12)then you would have to create your own BusyInfo function that instanciated your frame and returns it (see https://github.com/wxWidgets/wxPython/blob/master/wx/lib/agw/pybusyinfo.py#L251 ) |
Scrapy Logging Level Change I'm trying to start a scrapy spider from my script as shown here logging.basicConfig( filename='log.txt', format='%(levelname)s: %(message)s', level=logging.CRITICAL)configure_logging(install_root_handler=False)process = CrawlerProcess(get_project_settings())process.crawl('1740')process.start() # the script will block here until the crawling is finishedI want to configure the logging level of my spider, but even though I do not install the root logger handler and configure my basic config with the logging.basicConfig method, it does not obey the determined level. INFO: Enabled spider middlewares:['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']INFO: Enabled item pipelines:['collector.pipelines.CollectorPipeline']INFO: Spider openedINFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)It follows the format and file name determined in basicConfig, but it does not use the logging level. I do not set the logging level anywhere else. NOTE: There is no other place where I import logging or change the logging level. | For scrapy itself you should define logging settings in settings.py as described in the docs, so in settings.py you can set:LOG_LEVEL = 'ERROR' # to only display errorsLOG_FORMAT = '%(levelname)s: %(message)s'LOG_FILE = 'log.txt'
Trouble outputting file size to a label from a listbox in Python 3 I'm using os.path.getsize() to output the size of a file to a label. The file path is stored in a listbox. The function works, but it outputs the file size in bits, so I wrote the following to convert to more appropriate units, and it is now displaying only in TB. It's executing all of the if statements, regardless of whether the condition is true.activeFile = FilesList.get(ACTIVE)fileSize = os.path.getsize(activeFile)fileSizeStr = str(fileSize) + ' Bits'if fileSize > 8: fileSize = fileSize / 8 fileSizeStr = str(fileSize) + ' Bytes'if fileSize < 1024: fileSize = fileSize / 1024 fileSizeStr = str(fileSize) + ' KB'if fileSize < 1024: fileSize = fileSize / 1024 fileSizeStr = str(fileSize) + ' MB'if fileSize < 1024: fileSize = fileSize / 1024 fileSizeStr = str(fileSize) + ' GB'if fileSize < 1024: fileSize = fileSize / 1024 fileSizeStr = str(fileSize) + ' TB' | There are a couple of problems in your code: You always re-assign fileSizeStr; you need to concatenate new values. You need to check whether fileSize is greater than or equal to 1024, not smaller. The new fileSize should be the remainder of the first calculation, not the result of it.Also, checking from the larger unit first would be better IMHO. #constantsTB = 2**43GB = 2**33MB = 2**23KB = 2**13BYTES = 2**3#some test value herefileSize = 8#empty string to be filled and shown laterfileSizeStr = ""#calculationsif fileSize >= TB: fileTB = fileSize / TB fileSize = fileSize % TB fileSizeStr += str(fileTB) + 'TB 'if fileSize >= GB: fileGB = fileSize / GB fileSize = fileSize % GB fileSizeStr += str(fileGB) + 'GB 'if fileSize >= MB: fileMB = fileSize / MB fileSize = fileSize % MB fileSizeStr += str(fileMB) + 'MB 'if fileSize >= KB: fileKB = fileSize / KB fileSize = fileSize % KB fileSizeStr += str(fileKB) + 'KB 'if fileSize >= BYTES: fileB = fileSize / BYTES fileSize = fileSize % BYTES fileSizeStr += str(fileB) + 'Byte(s) 'fileSizeStr += str(fileSize) + 'Bit(s)'print(fileSizeStr)
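As an aside, os.path.getsize() actually reports the size in bytes, not bits. With that in mind, a compact alternative is to loop over the units, dividing by 1024 until the value fits (a sketch; the unit list and the one-decimal formatting are my own choices):

```python
def human_size(n_bytes):
    # Walk up the units, dividing by 1024 until the value fits.
    for unit in ('Bytes', 'KB', 'MB', 'GB'):
        if n_bytes < 1024:
            return '%.1f %s' % (n_bytes, unit)
        n_bytes /= 1024.0
    return '%.1f TB' % n_bytes

print(human_size(2048))         # 2.0 KB
print(human_size(5 * 1024**3))  # 5.0 GB
```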
Is there danger in installing 2 versions of Anaconda for Python on one machine? Some background: I have an intel Mac osx (running Yosemite) and use PyCharm community edition as my main IDE. I usually code in Python 3.4 however, I'm taking some MIT OCW courses which all use Python 2. To make it easier on myself when using MIT's skeleton files I have downloaded Python 2.7 and switch the PyCharm interpreter depending on my project.Here's my question:I'm wondering if I would run into any trouble downloading the 2.7 and 3.4 versions of Anaconda. If this is ok, would I need to do anything special with my import commands depending on which version of Python I'm coding in?Thanks! Happy to add clarity / more info if this isn't enough to answer my questions. | There's no danger, but it's also not the recommended way of achieving this. Rather, you should use conda, the package manager that comes with Anaconda, to create an environment for the other version of Python. For instance, if you started with Anaconda3,conda create -n python27 python=2.7 anacondawould create an environment called python27 in ~/anaconda/envs/python27 with Python 2.7 and all the packages from Anaconda. You would then point to ~/anaconda/bin/python or ~/anaconda/envs/python27/bin/python depending on what version of Python you want. In the terminal, use source activate python27 and source deactivate to switch between the two. See http://conda.pydata.org/docs/ for more information on conda. |
Regex python : find different forms of currency with amount I try to find the amounts in euros in receipts. I extract the values, but the currency can appear in different ways: "EUR", "E" or"€". I do not succeed in specifying these different forms within the regex. In addition, the "E" must not raise words that also begin with "E" such as "Eggs".Currently my regex is \d+[\.+\,+]\d*\s*[(e|eur|euros|€)]+\W but the brackets don't work correctly because it retrieves all the words that contains E...My goal: find the amounts if we find the form amount + EUR or amount + € or amount + ESee here an example : https://regex101.com/r/F3Zm9M/2Thank you | There are a couple things going on here. First, you're not capturing what I think you want to capture (you said the values). You should have something like (\d+(?:.|,)\d\d) (the ?: inside the inner parentheses groups the . and , without making it another capturing group). Second, your [(e|eur|euros|€)] is not doing at all what you want it to - look at the explanation on the side panel of regex101 that you linked. What you want instead is just e|eur|euros|€. Again, in order to group these and have the | work like you want, you group them, and I'm assuming you don't want to capture these symbols, so use (?:e|eur|euros|€). You might want to think about adding spaces to make sure the 'e' or 'eur' isn't inside a word, though then you might not match something like 'EUR3000'.Overall, I'm not sure entirely what you're trying to match, but I hope this helps you get started. |
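Putting those pieces together, a possible complete pattern could look like the following (a sketch: adjust the decimal separator and spacing rules to match your actual receipts):

```python
import re

text = "Total: 12,50 EUR, eggs 3.20€ and 5,00 e extra, 3 Eggs"

# Capture the amount; require the textual currency markers to end at a
# word boundary so a bare 'e' cannot match the start of words like 'Eggs'.
pattern = r'(\d+(?:[.,]\d+)?)\s*(?:€|eur(?:os)?\b|e\b)'
amounts = re.findall(pattern, text, flags=re.IGNORECASE)
print(amounts)  # ['12,50', '3.20', '5,00']
```

Note that the € sign does not need a trailing \b (it is not a word character, so a boundary would never match there), which is why it sits in the alternation without one.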
Python program to convert words to numbers in a text file containing English words also I would like to use word2number from https://pypi.org/project/word2number/ to convert words to numbers in a text file to another file as output. A similar program is available to convert numbers to words. So how do I workaround this program to suit my case.import reimport num2wordswith open('input.txt') as f_input: text = f_input.read()text = re.sub(r"(\d+)", lambda x: num2words.num2words(int(x.group(0))), text)with open('output.txt', 'w') as f_output: f_output.write(text) | There's definitely a more pythonic way to do this but here you go, you will need to replace word2number with the function call from the library you want to use where the parameter is a string. Also this will skip newline characters and make one big line.lines = f_input.readlines()nums = list()for line in lines: words = line.split(' ') for word in words: nums.append(word2number(word))with open('output.txt', 'w') as f_output: f_output.write(" ".join(nums)) |
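If you want something self-contained to experiment with before installing word2number, a minimal stand-in lookup works for simple cases (the NUMBER_WORDS table and words_to_digits helper are my own simplified substitutes, not part of the library; they only handle single, exact number-word tokens):

```python
NUMBER_WORDS = {
    'zero': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5,
    'six': 6, 'seven': 7, 'eight': 8, 'nine': 9, 'ten': 10,
}

def words_to_digits(text):
    # Swap any token that is a known number word for its digit form,
    # and keep every other token unchanged.
    out = []
    for word in text.split():
        key = word.lower()
        out.append(str(NUMBER_WORDS[key]) if key in NUMBER_WORDS else word)
    return ' '.join(out)

print(words_to_digits('I have two apples and ten pears'))
# I have 2 apples and 10 pears
```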
Find the number of clusters in a list of integers Let's consider the distance d(a, b) = number of digits which are pairwise different in a and b, e.g.:d(1003000000, 1000090000) = 2 # the 4th and 6th digits don't match(we only work with 10-digit numbers) and this list:L = [2678888873, 2678878873, # distance 1 from L[0] 1000000000, 1000040000, # distance 1 from L[2] 1000300000, # distance 1 from L[2], distance 2 from L[3] 1000300009, # distance 1 from L[4], distance 2 from L[2] ]I would like to find the minimal number of points P such that each integer in the list is at a distance <= 1 of a point in P.Here I think this number is 3: every number in the list is at distance <= 1 of 2678888873, 1000000000, or 1000300009.I imagine an O(n^2) algorithm is possible by first computing a distance matrix i.e. M[i, j] = d(L[i], L[j]).Is there a better way to do this, especially using Numpy? (maybe there's a built-in algorithm in Numpy/Scipy?)PS: If we see these 10-digit integers as strings, we're close to finding a minimal number of clusters in a list of many words with a Levenshtein distance.PS2: I now realize this distance has a name on strings: Hamming distance. | Let's see what we know from the distance metric. Given a number P (not necessarily in L), if two members of L are within distance 1 of P, they each share 9 digits with P, but not necessarily the same ones, so they are only guaranteed to share 8 digits with each other. So any two numbers that have distance 2 are guaranteed to have two unique Ps that are distance 1 from each of them (and distance 2 from each other as well). You can use this information to reduce the amount of brute-force effort required to optimize the selection of P.Let's say you have a distance matrix. You can immediately discard rows (or columns) that don't have entries less than 3: they are their own cluster automatically. For the remaining entries that are equal to 2, construct a list of possible P values.
Find the number of elements of L that are within 1 of each element of P (another distance matrix). Sort P by the number of neighbors, and select. You will need to update the matrix at each iteration as you remove members with maximal neighbors to avoid inefficient grouping due to overlap (members of L that are near multiple members of P).You can compute a distance matrix for L in numpy by first converting it to a 2D array of digits:L = np.array([2678888873, 2678878873, 1000000000, 1000040000, 1000300000, 1000300009])z = 10 # Number of digitsn = len(L) # Number of numbersdec = 10**np.arange(z).reshape(-1, 1).astype(np.int64)digits = (L // dec) % 10digits is now a 10xN array:array([[3, 3, 0, 0, 0, 9], [7, 7, 0, 0, 0, 0], [8, 8, 0, 0, 0, 0], [8, 8, 0, 0, 0, 0], [8, 7, 0, 4, 0, 0], [8, 8, 0, 0, 3, 3], [8, 8, 0, 0, 0, 0], [7, 7, 0, 0, 0, 0], [6, 6, 0, 0, 0, 0], [2, 2, 1, 1, 1, 1]], dtype=int64)You can compute the distance between digits and itself, or digits and any other 10xM array using != and sum along the right axis:distance = (digits[:, None, :] != digits[..., None]).sum(axis=0)The result:array([[ 0, 1, 10, 10, 10, 10], [ 1, 0, 10, 10, 10, 10], [10, 10, 0, 1, 1, 2], [10, 10, 1, 0, 2, 3], [10, 10, 1, 2, 0, 1], [10, 10, 2, 3, 1, 0]])We are only concerned with the upper (or lower) triangle of that matrix, so we can immediately mask out the other triangle:distance[np.tril_indices(n)] = z + 1Find all candidate values of P: all elements of L, but also all pairs between elements that have distance 2:# Find indices of pairs that differ by 2indices = np.nonzero(distance == 2)# Extract those numbers as 10xKx2 arrayd = digits[:, np.stack(indices, axis=1)]# Compute where the difference is nonzero (Kx2)locs = np.diff(d, axis=2).astype(bool).squeeze()# Find the index of the first digit to replace (K)s = np.argmax(locs, axis=0)The extra values of P are constructed from each half of d, with the digits represented by k replaced from the other half:P0 = digits[:, indices[0]]P1 = 
digits[:, indices[1]]k = np.arange(s.size)tmp = P0[s, k]P0[s, k] = P1[s, k]P1[s, k] = tmpPextra = np.unique(np.concatenate((P0, P1), axis=1), axis=1)So now you can compute the total set of possibilities for P:P = np.concatenate((digits, Pextra), axis=1)distance2 = (P[:, None, :] != digits[..., None]).sum(axis=0)You can discard any elements of Pextra that match with elements of digits based on the distance:mask = np.concatenate((np.ones(n, bool), distance2[:, n:].all(axis=0)))P = P[:, mask]distance2 = distance2[:, mask]Now you can iteratively compute the distance between P and L, and select the best values of P, removing any values that have been selected from the distance matrix. A greedy selection from P will not necessarily be optimal, since an alternative combination may require fewer elements due to overlaps, but that is a matter for a simple (but somewhat expensive) graph traversal algorithm. The following snippet just shows a simple greedy selection, which will work fine for your toy example:distMask = distance2 <= 1quality = distMask.sum(axis=0)clusters = []accounted = 0while accounted < n: # Get the cluster location best = np.argmax(quality) # Get the cluster number clusters.append(P[:, best].dot(dec).item()) # Remove numbers in cluster from consideration accounted += quality[best] quality -= distMask[distMask[:, best], :].sum(axis=0)The last couple of steps can be optimized using sets and graphs, but this shows a starting point for a valid approach. This is going to be slow for large data, but probably not prohibitively so. Do some benchmarks to decide how much time you want to spend optimizing vs just running the algorithm.
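Before reaching for numpy, the distance itself is easy to sanity-check in pure Python: it is just the Hamming distance on zero-padded decimal strings (the helper name d mirrors the question's notation):

```python
def d(a, b, width=10):
    # Hamming distance between the zero-padded decimal representations.
    sa, sb = str(a).zfill(width), str(b).zfill(width)
    return sum(ca != cb for ca, cb in zip(sa, sb))

L = [2678888873, 2678878873, 1000000000,
     1000040000, 1000300000, 1000300009]

print(d(1003000000, 1000090000))  # 2
print([d(L[0], x) for x in L])    # [0, 1, 10, 10, 10, 10]
```

The second print reproduces the first row of the distance matrix computed with numpy above.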
Multivariate Optimization - scipy.optimize input parsing error I have the above rgb image saved as tux.jpg. Now I want to get the closest approximation to this image that is an outer product of two vectors I.e of the form A·BT.Here is my code - #load image to memoryimport Imageim = Image.open('tux.jpg','r')#save image to numpy arrayimport numpy as npmat = np.asfarray(im.convert(mode='L')) # mat is a numpy array of dimension 354*300msizex,msizey = mat.shapex0 = np.sum(mat,axis=1)/msizexy0 = np.sum(mat,axis=0)/msizeyX0 = np.concatenate((x0,y0)) # X0.shape is (654,)# define error of outer product with respect to original imagedef sumsquares(X): """ sum of squares - calculates the difference between original and outer product input X is a 1D numpy array with the first 354 elements representing vector A and the rest 300 representing vector B. The error is obtained by subtracting the trial $A\cdot B^T$ from the original and then adding the square of all entries in the matrix. """ assert X.shape[0] == msizex+msizey x = X0[:msizex] y = X0[msizex:] return np.sum( ( np.outer(x,y) - mat )**2 )#import minimizefrom scipy.optimize import minimizeres = minimize(sumsquares, X0, method='nelder-mead', options={'disp':True})xout = res.x[:msizex]yout = res.x[msizex:]mout = np.outer(xout,yout)imout= Image.fromarray(mout,mode='L')imout.show()The result is . Optimization terminated successfully. Current function value: 158667093349733.531250 Iterations: 19 Function evaluations: 12463This doesn't look good enough to me. Is there any way to improve this? The noise in the output is not even of the same length as the structures in the original picture. My guess is that the algorithm isn't going through. 
How can I debug or improve this?EDIT1: I created the image below with the code size = 256mat0 = np.zeros((size,size))mat0[size/4:3*size/4,size/4:3*size/4] = 1000#mat0[size/4:3*size/4,] = 1000#mat0[:3*size/4,size/4:] = 1000im0 = Image.fromarray(mat0)im0.show() The two commented out lines result in two other images. Here are the result of my experiments - Square in the middle. Input - Output - Same Band in the middle. Input - Output - White chunk to the North East Input - Output- While this is much better than what I expected, cases 2 and 3 still end up being wrong. I hope that the arguments to the minimize function mean what I think they mean. | 1) The rendering problem of the first image seems to be an issue in the conversion from numpy array to image. I get the right rendering by running: imout = Image.fromarray(mout/np.max(mout)*255)(i.e. normalize the image to a maximum value of 255 and let it determine the mode automatically). In general, to check that Image.fromarray is working, it is useful to compare the output of imout.show() with import matplotlib.pyplot as pltplt.matshow(mout/np.max(mout)*255, cmap=plt.cm.gray)and you should get the same results. BTW, by doing that, I get all the 3 other cases correct. 2) Secondly, the main problem with tux.png is that it is not possible to reconstruct an image with such a detailed structure with only an outer product of two 1-D vectors. (This tends to work for simple images such as the blocky ones shown above, but not for an image with few symmetries and many details). To prove the point: There exist matrix factorization techniques that allow reconstructing a matrix as the product of two low-rank matrices M=AB, such as sklearn.decomposition.NMF. In this case, setting the rank of A and B to 1 would be equivalent to your problem (with a different optimization technique). 
Playing with the code below you can easily see that with n_components=1 (which is equivalent to an outer product of two 1-D vectors), the resulting reconstructed matrix looks very similar to the one outputted by your method, and that the bigger n_components, the better the reconstruction. For reproducibility: import matplotlib.pyplot as pltfrom sklearn.decomposition import NMFnmf = NMF(n_components=20)prj = nmf.fit_transform(mat)out = prj.dot(nmf.components_)out = np.asarray(out, dtype=float)imout = Image.fromarray(out)imout.show()For illustration, this is the NMF reconstruction with 1 component (this is exactly an outer product between two 1-D vectors):With 2 components: And this is the NMF reconstruction with 20 components. Which clearly indicates that a single 1-D outer product is not enough for this image. However it works for the blocky images. If you are not restricted to an outer product of vectors, then matrix factorization can be an alternative. BTW, there exists a vast number of matrix factorization techniques. Another alternative in sklearn is SVD. 3) Finally, there may be a scaling issue in x0 and y0. Note that the elements of np.outer(x0, y0) are orders of magnitude higher than the elements of mat. Although I still get good results for the 3 blocky examples with the code provided, in general it is a good practice to have comparable scales when taking square differences. For instance, you might want to scale x0 and y0 so that the norm of np.outer is comparable to that of mat.
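To see the rank-1 idea without sklearn, here is a tiny pure-Python alternating-least-squares sketch (my own illustration, not the NMF call above) that fits an outer product of two vectors to a small matrix; when the matrix is exactly rank 1, it recovers it exactly:

```python
def rank1_fit(M, iters=100):
    # Fit M ~= outer(a, b) by alternating closed-form least-squares
    # updates: fix b and solve for a, then fix a and solve for b.
    rows, cols = len(M), len(M[0])
    a, b = [1.0] * rows, [1.0] * cols
    for _ in range(iters):
        bb = sum(x * x for x in b)
        a = [sum(M[i][j] * b[j] for j in range(cols)) / bb for i in range(rows)]
        aa = sum(x * x for x in a)
        b = [sum(M[i][j] * a[i] for i in range(rows)) / aa for j in range(cols)]
    return a, b

M = [[3, 4, 5], [6, 8, 10]]  # exactly the outer product of (1, 2) and (3, 4, 5)
a, b = rank1_fit(M)
# outer(a, b) reproduces M (the scaling is split between a and b)
print([[round(x * y, 6) for y in b] for x in a])
```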
Erratic (seemingly random) behavior of seek() and split() in python Consider the following code:import syswith open(sys.argv[1]) as data_file: data_file.readline() #skipping lines of texts data_file.readline() data_file.readline() #skipping lines of texts data_file.readline() data_file.readline() #skipping lines of texts data_file.readline() #skipping lines of texts data_file.readline() #skipping lines of texts data_file.readline() #skipping lines of texts data_file.readline() #skipping lines of texts while True: print "#" pos=data_file.tell() next_mol=data_file.readline().split() print next_mol data_file.seek(pos) print data_file.readline().split()here sys.argv[1] is the name of text file, which contains the following data:ITEM: TIMESTEP31500000ITEM: NUMBER OF ATOMS28244ITEM: BOX BOUNDS pp pp pp0.706774 63.60721.77317 62.6918-4.27518 67.4572ITEM: ATOMS id type x y z 1 1 8.07271 20.6394 38.953 2 1 7.45444 20.2706 37.5682 3 1 7.94593 21.3438 36.5822 4 2 8.88701 22.2414 37.422 5 6 8.97587 21.7898 38.6976 6 7 9.51512 23.1098 36.8675 7 1 9.83459 22.2787 39.7728 8 3 8.54346 19.7726 39.3733 9 3 7.3188 20.9572 39.6053 10 3 6.33686 20.2798 37.6457 11 3 7.62824 19.2464 37.1935 12 3 7.14438 21.9616 36.2781 13 3 8.4454 20.9589 35.6742 14 3 9.51704 23.2023 40.2712 15 3 10.839 22.4705 39.342 16 3 9.84061 21.5031 40.5668 gives me following output:#['1', '1', '8.07271', '20.6394', '38.953']['71', '20.6394', '38.953']#['2', '1', '7.45444', '20.2706', '37.5682']['1', '7.45444', '20.2706', '37.5682']#['3', '1', '7.94593', '21.3438', '36.5822']['1', '7.94593', '21.3438', '36.5822']#['4', '2', '8.88701', '22.2414', '37.422']['2', '8.88701', '22.2414', '37.422']#['5', '6', '8.97587', '21.7898', '38.6976']['6', '8.97587', '21.7898', '38.6976']#['6', '7', '9.51512', '23.1098', '36.8675']['7', '9.51512', '23.1098', '36.8675']#['7', '1', '9.83459', '22.2787', '39.7728']['1', '9.83459', '22.2787', '39.7728']#['8', '3', '8.54346', '19.7726', '39.3733']['3', '8.54346', '19.7726', 
'39.3733']#['9', '3', '7.3188', '20.9572', '39.6053']['3', '7.3188', '20.9572', '39.6053']#['10', '3', '6.33686', '20.2798', '37.6457']['0', '3', '6.33686', '20.2798', '37.6457']I was expecting both strings between '#' to be same. Am i missing something here? | file.readline() uses a read-ahead buffer to find newlines, so it can return you a neat line that ends in \n. The alternative is to read byte by byte until a newline is found, which would be extremely inefficient.As such, your first file.readline() reads in a chunk of information from the file, parses out the first line and returns that. Then a next call to file.readline() may well be able to give you the next line from the buffer alone, etc.By the time you get to your while loop, the read-ahead buffer has been filled with every thing up to 1 1 8.072 (the first bytes after the ITEM: ATOMS id type x y z line). The next file.readline() call then reads in more buffer to find another newline, moving the file position to after the initial 2 on the next line, etc.You can't reliably get the right file position from a file and use file.readline() calls; you'd have to take into account the number of lines read, the actual buffer size, and the style of line separators used in the file. Your problem can almost certainly be solved in different ways, like storing the already read lines in a queue or stack of some sort, for use in later iterations of your loop. |
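The practical fix is simply to keep each line you already read instead of trying to rewind. A hedged sketch of the rewritten loop (the demo file here is a stand-in for the dump file in the question):

```python
import os
import tempfile

# Build a small demo file in place of the real data file.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write('1 1 8.07271 20.6394 38.953\n2 1 7.45444 20.2706 37.5682\n')
    path = tmp.name

rows = []
with open(path) as data_file:
    while True:
        line = data_file.readline()
        if not line:
            break
        next_mol = line.split()
        # Reuse next_mol as many times as needed: no seek() is required,
        # so the read-ahead buffer can never get out of sync.
        rows.append(next_mol)

print(rows[0])  # ['1', '1', '8.07271', '20.6394', '38.953']
os.remove(path)
```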
How to access and edit variables inside functions in python Im new(-ish) to python and I made a game today which after I finished I realised I'd made a big mistake : inside the functions I had to access and edit variables which where also accessed and changed in other functions and maybe in the future outside the functions. And I don't know how to do that.I've researched for a long time and found very few things that might solve the problem, I've tried a few, but they haven't worked and I don't understand how to use others.Could you please try to help me with the problem and if you spot others please tell me, as Im not too good at debugging :(Here is the code below, its quite big (I've put the variables I need to access and change in bold): from random import randint print ("Ghost Game v2.0") print ("select difficulty")score = 0alive = Truedifficulty = 0doors = 0ghost_door = 0action = 0ghost_power = 0 #define the function 'ask_difficulty'def ask_difficulty() : difficulty = input ("Hard, Normal, Easy") set_difficulty() # define the function 'set_difficulty' which sets the difficulty.def set_difficulty() :if difficulty == 'Hard' or 'Normal' or 'Easy' : if difficulty == 'Hard' : doors = 2 elif difficulty == 'Normal' : doors = 3 elif difficulty == 'Easy' : doors = 5else: print ("Invalid input, please type Hard, Normal, or Easy") ask_difficulty() # define the function 'ghost_door_choose' which sets the ghost door and the chosen doordef ghost_door_choose(x): ghost_door = randint (1, x) print (doors + " doors ahead...") print ("A ghost behind one.") print ("Which do you open?") if doors == 2 : door = int("Door number 1, or door number 2...") if 1 or 2 in door : ghost_or_no() else : print ("Invalid input") ghost_door_choose(difficulty) elif doors == 3 : door = int("Door number 1, door number 2, or door number 3") if 1 or 2 or 3 in door : ghost_or_no() else: print ("Invalid input") ghost_door_choose(difficulty) elif doors == 5 : print("Door number 1, door number 2, door 
number 3, door number 4, or door number 5.") if 1 or 2 or 3 or 4 or 5 in door : ghost_or_no() else: print ("Invalid input") ghost_door_choose(difficulty) # define the function 'ghost_or_no'def ghost_or_no() : if door == ghost_door: print ("GHOST!!") print ("Initiating battle...") battle() else: print ("No ghost, you\'ve been lucky, but will luck remain with you...") score = score + 1 ghost_door_choose(difficulty) # define the function 'battle' which is the battle programdef battle() : ghost_power = randint (1, 4) # 1 = Speed, 2 = Strength, 3 = The ghost is not friendly, 4 = The ghost is friendly print ("You have 3 options") print ("You can flee, but beware, the ghost may be fast (flee),") print ("You can battle it, but beware, the ghost might be strong (fight),") print ("Or you can aproach the ghost and be friendly, but beware, the ghost may not be friendly (aproach)...") action = input ("What do you choose?") if flee in action : action = 1 elif fight in action : action = 2 elif aproach in action : action = 3 else : print ("Invalid input") battle() if ghost_power == action : if action == 1: print ("Oh no, the ghost\'s power was speed!") print ("DEFEAT") print ("You\'r score is " + score) alive = False elif action == 2: print ("Oh no, the ghost\'s power was strength!") print ("DEFEAT") print ("You\'r score is " + score) alive = False elif action == 3: print ("Oh no, the ghost wasn\'t friendly ") alive = False elif ghost_power == 4 and action == 3 : print ("Congratulations, The ghost was friendly!") score = score + 1 ghost_door_choose(difficulty) elif ghost_power != action and ghost_power != 4 : if action == 1: print ("Congratulations, the ghost wasn\'t fast!") score = score + 1 ghost_door_choose(difficulty) elif action == 2: print ("Congratulations, you defeated the ghost!") score = score +1 ghost_door_choose(difficulty) elif ghost_power != action and ghost_power == 4 : if action == 1: print ("You ran away from a friendly ghost!") print ("Because you ran away for no 
reason, your score is now 0") score = 0 ghost_door_choose(difficulty) elif action == 1: print ("You killed a friendly ghost!") print ("Your score is now 0 because you killed the friendly ghost") score = 0 ghost_door_choose(difficulty) #actual game loopask_difficulty()while alive : ghost_door_choose(doors) | Consider:x=0z=22def func(x,y): y=22 z+=1 print x,y,zfunc('x','y') When you call func you will get UnboundLocalError: local variable 'z' referenced before assignmentTo fix the error in our function, do:x=0z=22def func(x,y): global z y=22 z+=1 print x,y,zThe global keyword allows a local reference to a global defined variable to be changed.Notice too that the local version of x is printed, not the global version. This is what you would expect. The ambiguity is if there is no local version of a value. Python treats globally defined values as read only unless you use the global keyword.As stated in comments, a class to hold these variables would be better. |
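As the closing remark suggests, a class holding the shared state is usually the cleaner fix: every method reads and writes the same attributes through self, so no global statements are needed. A minimal sketch (the names here are my own, not taken from the game above):

```python
class Game(object):
    def __init__(self):
        self.score = 0
        self.alive = True

    def win_round(self):
        # Methods read and write shared state through self,
        # so no 'global' declarations are required.
        self.score += 1

    def lose(self):
        self.alive = False

game = Game()
game.win_round()
game.win_round()
game.lose()
print(game.score, game.alive)  # 2 False
```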
python scipy unit test
I have installed a number of python modules into a common Linux directory that a number of people will be using via an NFS mount (yes, I understand that there is a performance hit with this, especially with python). I have been able to run scipy.test('full') as the user that owns the NFS mount as well as root. Is there a way that I can pass an argument to the scipy.test() function that will tell it what dir to build the sc_* and linux227compiled_catalog.d* files in? i.e. scipy.test('full', '/tmp'), so that any user who mounts this can run these tests and not have write access to the NFS mount? Thanks in advance.
 | nvm ... I put the following into the test script:

import scipy
import os
import shutil

directory = os.getcwd()
userHomeDirectory = ( "/home/" + os.getlogin())
userHomeScipyTests = ( userHomeDirectory + "/scipytests" )

# print ("your current directory location is: " + directory)
print ("Making the following temp folder: " + userHomeScipyTests )
if (os.path.isdir(userHomeScipyTests)):
    shutil.rmtree(userHomeScipyTests)
os.makedirs ( userHomeScipyTests )
os.chdir( userHomeScipyTests )
print os.getcwd()

output = scipy.test('full')
# print ("this is the output of the scipy full test: " + str(output.wasSuccessful()))
self.assertEqual(str(output.wasSuccessful()), 'True', 'FullSciPyTest failed')

if (output.wasSuccessful):
    print ("Removing the following temp folder: " + userHomeScipyTests )
    shutil.rmtree(userHomeScipyTests)
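The same idea can be expressed with the standard library's tempfile module, which also cleans up after itself; a sketch (run_in_tempdir is an illustrative name, not from the original post):

```python
import os
import tempfile

def run_in_tempdir(fn):
    """Call fn() with the working directory switched to a throwaway
    directory, so any scratch files never land on the NFS mount."""
    old_cwd = os.getcwd()
    with tempfile.TemporaryDirectory() as scratch:
        os.chdir(scratch)
        try:
            return fn()
        finally:
            os.chdir(old_cwd)  # restore the cwd even if fn() raises

# e.g. output = run_in_tempdir(lambda: scipy.test('full'))
```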
Memory leak by ctypes pointers used within python class
I try to wrap some C code via ctypes. Although my code (attached below) is functional, memory_profiler suggests it is suffering a memory leak somewhere. The basic C struct I'm trying to wrap is defined in 'image.h'. It defines an image object containing a pointer to the data, a pointer array (needed for various other functions not included here), along with some shape information.

image.h:

#include <stdio.h>
#include <stdlib.h>

typedef struct image {
    double * data;    /*< The main pointer to the image data*/
    i3_flt **row;     /*< An array of pointers to each row of the image*/
    unsigned long n;  /*< The total number of pixels in the image*/
    unsigned long nx; /*< The number of pixels per row (horizontal image dimensions)*/
    unsigned long ny; /*< The number of pixels per column (vertical image dimensions)*/
} image;

The python code that wraps this C struct via ctypes is contained in 'image_wrapper.py' below. The python class Image implements many more methods which I didn't include here. The idea is to have a python object that is as convenient to use as a numpy array. In fact, the class contains a numpy array as an attribute (self.array) which points to the exact same memory location as the data pointer within the C struct.
image_wrapper.py:

import numpy
import ctypes as c

class Image(object):

    def __init__(self, nx, ny):
        self.nx = nx
        self.ny = ny
        self.n = nx * ny
        self.shape = tuple((nx, ny))
        self.array = numpy.zeros((nx, ny), order='C', dtype=c.c_double)
        self._argtype = self._argtype_generator()
        self._update_cstruct_from_array()

    def _update_cstruct_from_array(self):
        data_pointer = self.array.ctypes.data_as(c.POINTER(c.c_double))
        ctypes_pointer = c.POINTER(c.c_double) * self.ny
        row_pointers = ctypes_pointer(
            *[self.array[i,:].ctypes.data_as(c.POINTER(c.c_double)) for i in range(self.ny)])
        ctypes_pointer = c.POINTER(ctypes_pointer)
        row_pointer = ctypes_pointer(row_pointers)
        self._cstruct = c.pointer(self._argtype(data=data_pointer,
                                                row=row_pointer,
                                                n=self.n,
                                                nx=self.nx,
                                                ny=self.ny))

    def _argtype_generator(self):
        class _Argtype(c.Structure):
            _fields_ = [("data", c.POINTER(c.c_double)),
                        ("row", c.POINTER(c.POINTER(c.c_double) * self.ny)),
                        ("n", c.c_ulong),
                        ("nx", c.c_ulong),
                        ("ny", c.c_ulong)]
        return _Argtype

Now, testing the memory consumption of the above code with memory_profiler suggests that Python's garbage collector is unable to clean up all references. Here is my test code, that creates a variable number of class instances within loops of different sizes.

test_image_wrapper.py

import sys
import image_wrapper as img
import numpy as np

@profile
def main(argv):
    image_size = 500
    print 'Create 10 images\n'
    for i in range(10):
        x = img.Image(image_size, image_size)
        del x
    print 'Create 100 images\n'
    for i in range(100):
        x = img.Image(image_size, image_size)
        del x
    print 'Create 1000 images\n'
    for i in range(1000):
        x = img.Image(image_size, image_size)
        del x
    print 'Create 10000 images\n'
    for i in range(10000):
        x = img.Image(image_size, image_size)
        del x

if __name__ == "__main__":
    main(sys.argv)

The @profile is telling memory_profiler to analyse the subsequent function, here main.
Running python with memory_profiler on test_image_wrapper.py via

python -m memory_profiler test_image_wrapper.py

yields the following output:

Filename: test_image_wrapper.py

Line #    Mem usage    Increment   Line Contents
================================================
    49                             @profile
    50                             def main(argv):
    51                                 """
    52                                 Script to test memory usage of image.py
    53   16.898 MB     0.000 MB        """
    54   16.898 MB     0.000 MB        image_size = 500
    55
    56   16.906 MB     0.008 MB        print 'Create 10 images\n'
    57   19.152 MB     2.246 MB        for i in range(10):
    58   19.152 MB     0.000 MB            x = img.Image(image_size, image_size)
    59   19.152 MB     0.000 MB            del x
    60
    61   19.152 MB     0.000 MB        print 'Create 100 images\n'
    62   19.512 MB     0.359 MB        for i in range(100):
    63   19.516 MB     0.004 MB            x = img.Image(image_size, image_size)
    64   19.516 MB     0.000 MB            del x
    65
    66   19.516 MB     0.000 MB        print 'Create 1000 images\n'
    67   25.324 MB     5.809 MB        for i in range(1000):
    68   25.328 MB     0.004 MB            x = img.Image(image_size, image_size)
    69   25.328 MB     0.000 MB            del x
    70
    71   25.328 MB     0.000 MB        print 'Create 10000 images\n'
    72   83.543 MB    58.215 MB        for i in range(10000):
    73   83.543 MB     0.000 MB            x = img.Image(image_size, image_size)
    74                                     del x

Each instance of the class Image within python seems to leave about 5-6 kB, summing up to ~58 MB when processing 10k images. For an individual object this seems not much, but as I have to run on ten million, I do care. The line that seems to cause the leak is the following, contained in image_wrapper.py:

self._cstruct = c.pointer(self._argtype(data=data_pointer,
                                        row=row_pointer,
                                        n=self.n,
                                        nx=self.nx,
                                        ny=self.ny))

As mentioned above, it seems Python's garbage collector is unable to clean up all references. I did try to implement my own del function, something like

def __del__(self):
    del self._cstruct
    del self

Unfortunately, this doesn't seem to fix the issue. After spending a day of researching and trying several memory debuggers, my last resort seems to be Stack Overflow. Many thanks for your valuable thoughts and suggestions.
 | It may not be the only issue, but for sure the caching of each _Argtype: LP__Argtype pair in the dict _ctypes._pointer_type_cache is not insignificant. Memory usage should go down if you clear the cache. The pointer and function type caches can be cleared with ctypes._reset_cache(). Bear in mind that clearing the cache can cause problems. For example:

from ctypes import *
import ctypes

c_double_p = POINTER(c_double)
c_double_pp = POINTER(c_double_p)

class Image(Structure):
    _fields_ = [('row', c_double_pp)]

ctypes._reset_cache()

nc_double_p = POINTER(c_double)
nc_double_pp = POINTER(nc_double_p)

The old pointers still work with Image:

>>> img = Image((c_double_p * 10)())
>>> img = Image(c_double_pp(c_double_p(c_double())))

New pointers created after resetting the cache won't work:

>>> img = Image((nc_double_p * 10)())
TypeError: incompatible types, LP_c_double_Array_10 instance instead of LP_LP_c_double instance
>>> img = Image(nc_double_pp(nc_double_p(c_double())))
TypeError: incompatible types, LP_LP_c_double instance instead of LP_LP_c_double instance

If resetting the cache solves your problem, maybe that's good enough. But generally the pointer cache is both necessary and beneficial, so personally I'd look for another way. For example, as far as I can see there's no reason to customize _Argtype for each image. You could just define row as a double ** initialized to the array of pointers.
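The caching the answer describes is easy to observe directly; a minimal sketch of why a per-instance _Argtype class keeps feeding the cache:

```python
import ctypes

# POINTER() memoizes the pointer type it builds for each target type:
# repeated calls return the very same class object.
p1 = ctypes.POINTER(ctypes.c_double)
p2 = ctypes.POINTER(ctypes.c_double)
print(p1 is p2)  # True: served from ctypes' pointer-type cache

# Every *distinct* Structure subclass gets its own cached pointer type,
# so generating a fresh _Argtype class per Image instance (as in the
# question) keeps adding entries that are never collected.
def make_argtype_pointer():
    class _Argtype(ctypes.Structure):
        _fields_ = [("n", ctypes.c_ulong)]
    return ctypes.POINTER(_Argtype)

print(make_argtype_pointer() is make_argtype_pointer())  # False: two classes, two entries
```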
Select related of selected related Say I have a relationship (by foreign key) like this: Model 1 → Model 2 → Model 3. Can I follow foreign key relationship with select_related() more than one level deep? I.e. not only from Model 1 to Model 2 but also from Model 2 to Model 3? | Yes, you can, by using the normal double-underscore syntax - as explicitly described in the documentation:Model1.objects.select_related('model2__model3') |
How to repeat one plot in multiple subplots (matplotlib)
Please, I need to repeat a climatological plot (fill_between(x, y1, y2)) in multiple subplots; are there any tips to resolve that? Here is part of my code.

from matplotlib import pyplot as plt

plt.figure()
fig, axs = plt.subplots(nrows=2, ncols=2, sharex=True)

ax = axs[0,0]
ax.fill_between(month, y2,y3 , alpha=0.25, color='grey')
ax.errorbar(a1[0], a1[1], a1[2], marker='o')

ax = axs[0,1]
ax.fill_between(month, y2,y3 , alpha=0.25, color='grey')
ax.errorbar(a2[0], a2[1], a2[2], marker='o')

ax = axs[1,0]
ax.fill_between(month, y2,y3 , alpha=0.25, color='grey')
ax.errorbar(a3[0], a3[1], a3[2], marker='o')

ax = axs[1,1]
ax.fill_between(month, y2,y3 , alpha=0.25, color='grey')
ax.errorbar(a4[0], a4[1], a4[2], marker='o')

The data a1, a2, a3, ..., a24 come from 24 years:

a1 = graf1.graf1('mon1992.dat')
a2 = graf1.graf1('mon1993.dat')
a3 = graf1.graf1('mon1994.dat')
a4 = graf1.graf1('mon1995.dat')

And graf1 is a module:

import pandas as pd

def graf1(archivo):
    archivo = '/Users/ccasiccia/Desktop/research/dataO3_180/' + archivo
    data2 = pd.read_csv(archivo, delim_whitespace = True, header=None)
    xd2 = data2.ix[:,0]
    yd2 = data2.ix[:,1]
    sd2 = data2.ix[:,2]
    return xd2, yd2, sd2

Considering the data obtained with the graf1 function, I need to build figures with 4 subplots (2x2), and the question here is: how can I add the clima plot (ax.fill_between(month, y2, y3, alpha=0.25, color='grey')) to each subplot, but not the way I did it, repeating the instruction on each subplot? Here follows the data for one year (e.g. mon1996.dat):

1 290.1931 23.21468
2 280.32778 17.70719
3 274.70455 19.08037
4 292.43913 27.8067
5 292.49667 24.57176
6 301.64667 26.96397
7 323.13889 20.30883
8 319.76 22.01486
9 306.432 20.07016
10 310.54444 45.90831
11 341.484 27.99424
12 300.71935 12.98657

This is the climatological data for fill_between:

1 322.418 20.25 20.287 342.668 302.168
2 315.1 21.534 21.578 336.634 293.566
3 293.268 23.694 23.738 316.962 269.574
4 292.928 26.499 26.55 319.427 266.429
5 301.565 31.153 31.21 332.718 270.412
6 304.135 35.883 35.953 340.018 268.252
7 317.792 36.85 36.916 354.642 280.942
8 321.36 35.798 35.863 357.158 285.562
9 324.558 33.472 33.535 358.03 291.086
10 336.043 45.679 45.762 381.722 290.364
11 338.736 33.518 33.58 372.254 305.218
12 327.578 27.093 27.144 354.671 300.485
 | Depending on how the data is organized it may be quite easy to loop over the plots to fill them.

import numpy as np
from matplotlib import pyplot as plt

month=np.linspace(1,8)
y2 = -0.15*(month-4)**2+2.3
y3 = 0.1*(month-3.7)**2

x = np.logspace(1,5,base=1.5, num=16).reshape(4,4).T
y = np.sinc(x-3)**2+1
yerr = np.sqrt(y)

fig, axs = plt.subplots(nrows=2, ncols=2, sharex=True)
for i, ax in enumerate(axs.flatten()):
    ax.fill_between(month, y2,y3 , alpha=0.25, color='gold')
    ax.errorbar(x[:,i], y[:,i],yerr[:,i], marker='o')

plt.show()
Python Django project - move div class footer into body
I'm creating a blog with python and django. Most of it has been fine up until I've just tried to create the footer. The footer displays fine on the home page, but when you click into the blog post the footer gets constrained by the content container and row div class.

When you look at this in the firefox dev inspector and DOM, it shows that my footer div is sat within the content container and row, and not in the body. For most things I've usually managed to find the answer, but this is driving me nuts. I think I'm either missing something or I'm not asking the right question.

I don't understand why it's just the footer div that has been put within the content container / row div and nothing else is affected in the same way.

Is there any way to amend this without using jquery / jscript and changing the parentElement node? If I have to amend this with jscript, where exactly do I have to amend it, and with what?

Thanks.

footer {
    height: 50px;
    background-color: #000000;
    color: #ffffff;
    padding: 10px;
    text-align: center;
    clear: both;
}

<div class="content container">
    <div class="row">
        <div class="col-md-8">
            {% block content %}
            {% endblock %}
        </div>
    </div>
</div>

<footer>
    <div class="footer">
        Copyright &copy; 2017
    </div>
</footer>

firefox dev inspector image - sat in row div
firefox dev inspector image - moved to body
 | This is the blog post detail page:

{% extends 'blog/base.html' %}

{% block content %}
    {% if post.published_date %}
        {{ post.published_date }} by Matt Cheetham
    {% else %}
        <a class="btn btn-default" href="{% url 'blog.views.post_publish' pk=post.pk %}">Publish</a>
    {% endif %}
    {% if user.is_authenticated %}
        <a class="btn btn-default" href="{% url 'post_edit' pk=post.pk %}"><span class="glyphicon glyphicon-pencil"></span></a>
        <a class="btn btn-default" href="{% url 'post_remove' pk=post.pk %}"><span class="glyphicon glyphicon-remove"></span></a>
    {% endif %}
    <div class="postview">
        {{ post.title }}
    </div>
    <p>{{ post.text|safe }}</p>
    <div class="comment">
        <div class="date">
            <h3>Comments</h3>
            <br>
            <a class="btn btn-default" href="{% url 'add_comment_to_post' pk=post.pk %}">Add comment</a>
            <br>
            <br>
            {% for comment in post.comments.all %}
                {% if user.is_authenticated or comment.approved_comment %}
                    {{ comment.created_date }}
                    {% if not comment.approved_comment %}
                        <a class="btn btn-default" href="{% url 'comment_remove' pk=comment.pk %}"><span class="glyphicon glyphicon-remove"></span></a>
                        <a class="btn btn-default" href="{% url 'comment_approve' pk=comment.pk %}"><span class="glyphicon glyphicon-ok"></span></a>
                    {% endif %}
                </div>
                <strong>{{ comment.author }}</strong>
                <p>{{ comment.text|safe }}</p>
            </div>
            {% endif %}
            {% empty %}
                <p>No comments here yet...</p>
            {% endfor %}
{% endblock %}
Seaborn ImportError: DLL load failed: The specified module could not be found
I am getting "ImportError: DLL load failed: The specified module could not be found." when importing the module seaborn. I tried uninstalling both seaborn and matplotlib, then reinstalling by using pip install seaborn, but no luck. I still get the same error.

ImportError                               Traceback (most recent call last)
<ipython-input-5-085c0287ecb5> in <module>()
----> 1 import seaborn

C:\Users\johnsam\venv\lib\site-packages\seaborn\__init__.py in <module>()
      4
      5 # Import seaborn objects
----> 6 from .rcmod import *
      7 from .utils import *
      8 from .palettes import *

C:\Users\johnsam\venv\lib\site-packages\seaborn\rcmod.py in <module>()
      6 import matplotlib as mpl
      7
----> 8 from . import palettes, _orig_rc_params
      9
     10

C:\Users\johnsam\venv\lib\site-packages\seaborn\palettes.py in <module>()
     10 from .external.six.moves import range
     11
---> 12 from .utils import desaturate, set_hls_values, get_color_cycle
     13 from .xkcd_rgb import xkcd_rgb
     14 from .crayons import crayons

C:\Users\johnsam\venv\lib\site-packages\seaborn\utils.py in <module>()
      6
      7 import numpy as np
----> 8 from scipy import stats
      9 import pandas as pd
     10 import matplotlib as mpl

C:\Program Files\Continuum\Anaconda3\lib\site-packages\scipy\stats\__init__.py in <module>()
    332 from __future__ import division, print_function, absolute_import
    333
--> 334 from .stats import *
    335 from .distributions import *
    336 from .rv import *

C:\Program Files\Continuum\Anaconda3\lib\site-packages\scipy\stats\stats.py in <module>()
    179 from scipy.lib.six import callable, string_types
    180 from numpy import array, asarray, ma, zeros, sum
--> 181 import scipy.special as special
    182 import scipy.linalg as linalg
    183 import numpy as np

C:\Program Files\Continuum\Anaconda3\lib\site-packages\scipy\special\__init__.py in <module>()
    544 from __future__ import division, print_function, absolute_import
    545
--> 546 from ._ufuncs import *
    547
    548 from .basic import *

ImportError: DLL load failed: The specified module could not be found.

Is there a way to get around this error?
 | I was having this issue until I uninstalled and reinstalled scipy with the pip command. Just go to your command line and type pip uninstall scipy and then pip install scipy. Hopefully that works for you as well. I also uninstalled/installed seaborn before this, although I'm not sure if that was necessary. Using conda rather than pip may also work.
Find and remove duplicate files using Python
I have several folders which contain duplicate files that have slightly different names (e.g. file_abc.jpg, file_abc(1).jpg), or a suffix "(1)" on the end. I am trying to develop a relatively simple method to search through a folder, identify duplicates, and then delete them. The criterion for a duplicate is "(1)" at the end of the file name, so long as the original also exists.

I can identify duplicates okay; however, I am having trouble creating the text string in the right format to delete them. It needs to be "C:\Data\temp\file_abc(1).jpg", however using the code below I end up with r"C:\Data\temp''file_abc(1).jpg".

I have looked at answers like Finding duplicate files and removing them; however, this seems to be far more sophisticated than what I need. If there are better (and simpler) ways to do this then let me know. I only have around 10,000 files in total in 50-odd folders, so not a great deal of data to crunch through.

My code so far is:

import os

file_path = r"C:\Data\temp"
file_list = os.listdir(file_path)
print (file_list)

for file in file_list:
    if ("(1)" in file):
        index_no = file_list.index(file)
        print("!! Duplicate file, number in list: "+str(file_list.index(file)))
        file_remove = ('r"%s' %file_path+"'\'"+file+'"')
        print ("The text string is: " + file_remove)
        os.remove(file_remove)
 | Your code is just a little more complex than necessary, and you didn't apply a proper way to create a file path out of a path and a file name. And I think you should not remove files which have no original (i.e. which aren't duplicates even though their name looks like it). Try this:

for file_name in file_list:
    if "(1)" not in file_name:
        continue
    original_file_name = file_name.replace('(1)', '')
    if not os.path.exists(os.path.join(file_path, original_file_name)):
        continue  # do not remove files which have no original
    os.remove(os.path.join(file_path, file_name))

Mind, though, that this doesn't work properly for files which have multiple occurrences of (1) in them, and files with (2) or higher numbers also aren't handled at all. So my real proposition would be this:

- Make a list of all files in the whole directory tree below a given start (use os.walk() to get this), then
- sort all files by size, then
- walk linearly through this list, identify the doubles (which are neighbours in this list) and
- yield each such double-group (i.e. a small list of files, typically just two, which are identical).

Of course you should check the contents of these few files then to be sure that not just two of them are accidentally the same size without being identical. If you are sure you have a group of identical ones, remove all but the one with the simplest name (e.g. without suffixes (1) etc.).

By the way, I would call the file_path something like dir_path or root_dir_path (because it is a directory and a complete path to it).
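The size-then-content outline in the answer can be sketched as follows (the function name is illustrative, and a SHA-256 digest stands in for the "check the contents" step):

```python
import hashlib
import os
from collections import defaultdict

def duplicate_groups(root):
    """Yield lists of paths under root whose contents are identical."""
    by_size = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            by_size[os.path.getsize(path)].append(path)
    for paths in by_size.values():
        if len(paths) < 2:
            continue  # a unique size can't have a duplicate
        by_hash = defaultdict(list)
        for path in paths:
            with open(path, 'rb') as f:
                by_hash[hashlib.sha256(f.read()).hexdigest()].append(path)
        for group in by_hash.values():
            if len(group) > 1:
                # shortest name first, e.g. file_abc.jpg before file_abc(1).jpg
                yield sorted(group, key=len)
```

Everything after the first entry of each yielded group would then be a candidate for os.remove().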
Is there a way to convert named function arguments to dict
I am trying to find out if there is a way to convert named arguments to a dict. I understand that using **kwargs in place of individual named arguments would be pretty straightforward.

def func(arg1=None, arg2=None, arg3=None):
    # How can I convert these arguments to
    # {'arg1': None, 'arg2': None, 'arg3': None}?
 | You can use locals() to get the local arguments:

def func(arg1=None, arg2=None, arg3=None):
    print(locals())

func()  # {'arg3': None, 'arg2': None, 'arg1': None}
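One caveat worth adding: locals() reflects every local name at the moment it is called, so take the snapshot before any other variables are created. A minimal sketch:

```python
def func(arg1=None, arg2=None, arg3=None):
    args = dict(locals())  # snapshot the named arguments immediately
    scratch = 42           # locals created later are not in the snapshot
    return args

print(func(arg1=1))  # {'arg1': 1, 'arg2': None, 'arg3': None}
```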
How to add values into an empty list from a for loop in python?
The given python code is supposed to accept a number and make a list containing all odd numbers between 0 and that number.

n = int(input('Enter number : '))
i = 0
series = []
while (i <= n):
    if (i % 2 != 0):
        series += [i]

print('The list of odd numbers :\n')
for num in series:
    print(num)
 | So, when dealing with lists or arrays, it's very important to understand the difference between referring to an element of the array and the array itself. In your current code, series refers to the list. When you attempt to perform series + [i], you are trying to add [i] to the reference to the list. Now, the [] notation is used to access elements in a list, but not place them. Additionally, the notation would be series[i] to access the ith element, but this still wouldn't add your new element. One of the most critical parts of learning to code is learning exactly what to google. In this case, the terminology you want is "append", which is actually a built-in method for lists and can be used as follows:

series.append(i)

Good luck with your learning!
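Putting the advice together, and adding the i increment the question's loop is missing (without it the while loop never terminates), a working version might be:

```python
n = 10  # stands in for int(input('Enter number : '))
i = 0
series = []
while i <= n:
    if i % 2 != 0:
        series.append(i)  # append the odd number to the list
    i += 1  # advance the counter, otherwise the loop runs forever

print('The list of odd numbers :')
for num in series:
    print(num)
```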
Elastic Beanstalk with Django: is there a way to run manage.py shell and have access to environment variables?
A similar question was asked here; however, the solution does not give the shell access to the same environment as the deployment. If I inspect os.environ from within the shell, none of the environment variables appear. Is there a way to run the manage.py shell with the environment?

PS: As a little side question, I know the mantra for EBS is to stop using eb ssh, but then how would you run one-off management scripts (that you don't want to run on every deploy)?
 | One of the cases where you have to run something once is db schema migrations. Usually you store information about that in the db, so you can use the db to sync / ensure that something was triggered only once.

Personally I have nothing against using eb ssh; I do see problems with it, however. If you want to have CI/CD, that manual operation is against the rules.

It looks like you are referring to the WWW/API part of Beanstalk. If you need something that runs quite frequently, maybe a worker is more suitable? The problem there is that if the API gets deployed first, you would have the wrong schema.

In general you are using EC2, so it's the user data that stores the information that spins up your service. There you can put your "stuff". Still, you need to sync / ensure. Here are the docs for Beanstalk, for more information on how to do that.

Edit:

Beanstalk is a kind of instrumentation on top of EC2, so there must be a way to work with it, since you have access to the user data of those EC2s. No worries, you don't need to dig that deep. There is a good way of instrumenting your server: it is called ebextensions. It can be used to put files on the server, trigger commands, instrument cron, whatever you want.

You can create an ebextension with container_commands (this time see the Python Configuration Namespaces section). Those commands are executed on each deployment. Still, the problem is that you need to sync, since more than one deployment can go on at the same time. The good part is that you can set the environment the way you want.

I have no problem accessing the environment variables. How did you get the problem? Try to prepare a page with the map.
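As a concrete illustration of the ebextensions approach the answer describes (the file name, command, and variable names here are assumptions, not from the original), a config file checked into .ebextensions/ might look like:

```yaml
# .ebextensions/01_django.config  (hypothetical file name)
container_commands:
  01_migrate:
    command: "python manage.py migrate --noinput"
    leader_only: true   # run on only one instance per deployment
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: "mysite.settings"   # example env var
```

leader_only addresses the multi-instance side of the sync problem the answer mentions for one-off commands like migrations.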
Apply multiple if/else statement to groupby object in pandas
I have a very large DataFrame according to below:

id amt date
1  0   2010-02-01
1  0   2012-05-12
1  0   2016-08-09
1  20  1970-01-01
2  0   2016-03-21
2  0   2017-11-10
2  0   2012-09-01
2  0   2016-04-15

What I want is to reduce it to one row per id according to following logic:

For a given ID-group: if amt > 0 and date == 1970-01-01 then output row.
For a given ID-group: if amt == 0 for all id rows, output max date for id.

I want appearance according to below:

id amt date
1  20  1970-01-01
2  0   2017-11-10

I have actually solved it through sort and grouping by ID and then taking last(). However, my issue came when I tried to write a function which operates on each separate groupby object and applies the logic I have in point 1 and point 2 above (if/else-style). Can someone help me with this? Code for the DataFrame is below, and please note, the data is large so quick execution is helpful.

Many thanks,
/Swepab

df = pd.DataFrame({'id' : [1, 1, 1, 1, 2, 2, 2, 2]
                  ,'amt' : [0, 0, 0, 20, 0 ,0, 0, 0]
                  ,'date' : ['2010-02-01', '2012-05-12','2016-08-09'
                            ,'1970-01-01','2016-03-21','2017-11-10'
                            ,'2012-09-01','2016-04-15']})
df['date'] = pd.to_datetime(df.date,format = "%Y-%m-%d")
df = df[['id', 'amt', 'date']]
 | I wrote a custom function which you can apply on individual groups:

def custom_fx(df):
    if df.amt.sum() == 0:
        max_date = df.date.max()
        return df.loc[df.date==max_date,:]
    elif df.amt.sum() != 0 :
        return df[df.date.isin(["1970-01-01"])]

for groups,data in df.groupby("id"):
    print(custom_fx(data))

OUTPUT:

   amt       date  id
3   20 1970-01-01   1
   amt       date  id
5    0 2017-11-10   2
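For large data it may help to let pandas drive the per-group call via groupby().apply() instead of a Python-level loop; a sketch with the question's frame (pick_row is an illustrative name, and it assumes rule 1 matches at most one row per id):

```python
import pandas as pd

df = pd.DataFrame({'id':   [1, 1, 1, 1, 2, 2, 2, 2],
                   'amt':  [0, 0, 0, 20, 0, 0, 0, 0],
                   'date': ['2010-02-01', '2012-05-12', '2016-08-09', '1970-01-01',
                            '2016-03-21', '2017-11-10', '2012-09-01', '2016-04-15']})
df['date'] = pd.to_datetime(df.date)

def pick_row(g):
    hit = g[(g.amt > 0) & (g.date == pd.Timestamp('1970-01-01'))]
    if not hit.empty:
        return hit.iloc[0]           # rule 1: positive amt dated 1970-01-01
    return g.loc[g.date.idxmax()]    # rule 2: all-zero group -> latest date

# select the non-grouping columns so the grouping column stays out of g
result = df.groupby('id')[['amt', 'date']].apply(pick_row)
print(result)
```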
Python csv.reader to separate items by comma but ignore those within pairs of double-quotes
I'm trying to use csv.reader to create a list of items from a string, but I'm having trouble. For instance, I have the following string:

bibinfo = "wooldridge1999asymptotic, author = \"Wooldridge, Jeffrey M.\", title = \"Asymptotic Properties of Weighted M-Estimators for Variable Probability Samples\", journal = \"Econometrica\", volume = \"\", year = 1999"

And I run the following code:

import csv
from io import StringIO

bibitems = [bibitem for bibitem in csv.reader(StringIO(bibinfo), skipinitialspace = True)][0]

But instead of having a list in which commas within a pair of double-quotes are not considered as separators, I obtain the following (unwanted) result:

['wooldridge1999asymptotic', 'author = "Wooldridge', 'Jeffrey M."', 'title = "Asymptotic Properties of Weighted M-Estimators for Variable Probability Samples"', 'journal = "Econometrica"', 'volume = ""', 'year = 1999']

In other words, it separates some items (like the author's surname from the first name) when it should not. I followed the tips in this other link, but it seems that I'm missing something else too.
 | It works if the " is at the beginning of the item:

"author = Wooldridge, Jeffrey M."

With the changed text:

>>> s = """wooldridge1999asymptotic, "author = Wooldridge, Jeffrey M.", title = "Asymptotic Properties of Weighted M-Estimators for Variable Probability Samples", journal = "Econometrica", volume = "", year = 1999"""
>>> list(csv.reader(s.splitlines(), skipinitialspace=True))
[['wooldridge1999asymptotic', 'author = Wooldridge, Jeffrey M.', 'title = "Asymptotic Properties of Weighted M-Estimators for Variable Probability Samples"', 'journal = "Econometrica"', 'volume = ""', 'year = 1999']]
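If the input can't be changed so that each field begins with its quote, a regular expression that splits only on commas outside double quotes is one workaround (a sketch, not part of the original answer):

```python
import re

bibinfo = ('wooldridge1999asymptotic, author = "Wooldridge, Jeffrey M.", '
           'title = "Asymptotic Properties", journal = "Econometrica", year = 1999')

# Split on a comma only if an even number of double quotes follows it,
# i.e. the comma is not inside a quoted span.
parts = re.split(r',\s*(?=(?:[^"]*"[^"]*")*[^"]*$)', bibinfo)
for part in parts:
    print(part)
```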
How to set some space between the colorbar and the image
I would like to set some space between the image and the colorbar; I have tried the pad option but it does nothing. This is the image I have, and this is the code:

from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib.colors import LogNorm
from matplotlib.ticker import LogLocator
from matplotlib import rcParams

rcParams['font.size']=35

x = np.arange(0,16,1)
yx= np.linspace(-50,0,38)
mx = np.random.rand(15,38)
m2 = np.linspace(0,6,38)

fig, ax = plt.subplots(figsize=(40,30))
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=2)

im = ax.pcolor(x,yx,mx.T,norm=LogNorm(0.1, 100),cmap= 'jet')
cbar = fig.colorbar(im,pad = 2,cax=cax, orientation='vertical')
cbar.ax.yaxis.set_major_locator(LogLocator()) # <- Why? See above.
cbar.ax.set_ylabel('Resistividade \u03C1 [ohm.m]', rotation=270)

ax2=ax.twiny()
ax2.plot(m2,yx,'k--',linewidth=10)
#ax2.set_xlim([0,60])
ax2.set_xlabel('Resistividade \u03C1 [ohm.m]')
ax.set_xlabel('Aquisição')
ax.set_ylabel('Profundidade [m]')

#fig.tight_layout()
plt.savefig('mrec_1'+'.png',bbox_inches = "tight", format='png', dpi=300)
plt.show()
 | The secondary axes occupies all of the space in the figure that is meant for axes. Therefore, no matter what padding you give to the colorbar of ax, it won't affect ax2. A hacky-ish solution would be to also split your secondary axes exactly the same as the primary axes, and then delete the axes where the second colorbar goes:

fig, ax = plt.subplots(figsize=(10, 8))
pad = 0.2  # change the padding. Will affect both axes

im = ax.pcolor(x, yx, mx.T, norm=LogNorm(0.1, 100), cmap='jet')
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=pad)

ax2 = ax.twiny()
ax2.plot(m2, yx, 'k--', linewidth=10)
ax2.set_xlim([0, 60])
ax2.set_xlabel('Resistividade \u03C1 [ohm.m]')
ax.set_xlabel('Aquisição')
ax.set_ylabel('Profundidade [m]')

cbar = fig.colorbar(im,pad = 2,cax=cax, orientation='vertical')
cbar.ax.yaxis.set_major_locator(LogLocator())
cbar.ax.set_ylabel('Resistividade \u03C1 [ohm.m]', rotation=270)

secondary_divider = make_axes_locatable(ax2)  # divide second axes
redundant_cax = secondary_divider.append_axes('right', size='5%', pad=pad)
redundant_cax.remove()  # delete the second (empty) colorbar

plt.show()
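When the make_axes_locatable divider isn't strictly needed, a simpler way to control the gap (this does not cover the twiny() layout from the question) is to let fig.colorbar steal space from the parent axes itself; there, pad is the fraction of the axes width left between the image and the bar:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
im = ax.pcolormesh(np.random.rand(15, 16))
cbar = fig.colorbar(im, ax=ax, pad=0.15)  # larger pad -> wider gap
cbar.ax.set_ylabel('Resistividade \u03C1 [ohm.m]', rotation=270)
```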
How to calculate progressively using python
I am creating a fitness wearable device python program that tracks the distance its users walk or run daily. To motivate the users to meet and exceed the target distance, it rewards users with fitness points on a leaderboard for the users who meet and exceed the target distance in a week.

The fitness points are calculated with a minimum distance of 32 km per week, and the points for each km in excess of the minimum distance are shown as follows:

Distance: 0 to 32 km, Fitness Points: 0
Distance: 33 to 40 km, Fitness Points: 325 points per km
Distance: 41 to 48 km, Fitness Points: 550 points per km
Distance: Greater than 48 km, Fitness Points: 600 points per km

How do I make the points calculate progressively?

def fitness_app(distance):
    while True:
        distance = int(input("Please Enter Distance in Km: "))
        if 0 > distance < 32:
            fitness_pt = 0
            print(fitness_pt)
        elif 33 > distance < 40:
            fitness_pt = 325 * distance
            print(fitness_pt)
        elif 41 > distance < 48:
            fitness_pt = 550 * distance
            print(fitness_pt)
        elif distance > 48:
            fitness_pt = 600 * distance
            print(fitness_pt)

print(fitness_app(distance=True))
 | I think you're almost there, just that the comparisons don't need to be so complicated (note the upper bounds are inclusive, so that 32 km still earns 0 points, matching the table above):

def fitness_app():
    while True:
        distance = int(input("Please Enter Distance in Km: "))
        if distance <= 32:
            fitness_pt = 0
        elif distance <= 40:
            fitness_pt = 325 * distance
        elif distance <= 48:
            fitness_pt = 550 * distance
        else:
            fitness_pt = 600 * distance
        print(fitness_pt)

fitness_app()

Note: other superfluous complications also removed.
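If "progressively" means points accrue per kilometre within each band (like marginal tax brackets) rather than one rate applied to the whole distance, the calculation might look like this; that reading is an assumption, with the thresholds taken from the question:

```python
# (upper bound of band, points per km inside it)
BANDS = [(32, 0), (40, 325), (48, 550), (float('inf'), 600)]

def fitness_points(distance):
    points = 0
    lower = 0
    for upper, rate in BANDS:
        if distance <= lower:
            break  # no kilometres fall in this or later bands
        km_in_band = min(distance, upper) - lower
        points += km_in_band * rate
        lower = upper
    return points

print(fitness_points(32))  # 0 (nothing above the 32 km minimum)
print(fitness_points(35))  # 3 km at 325 -> 975
print(fitness_points(50))  # 8*325 + 8*550 + 2*600 -> 8200
```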
How to fix python program that appears to be doing an extra loop?
A portion of a python program I am writing seems to be looping an extra time. The part of the program that isn't working is below. It is supposed to ask for a string from the user and create a two-dimensional list where each distinct character of the string is put in its own sub-list. (Hopefully that makes sense... if not I can try to explain better. Perhaps the code will help.)

def getInput(emptyList):
    inputString = input("Please enter a sentence:\n").strip().upper()
    functionList = [x for x in inputString]
    emptyList.extend(functionList)
    return 0

def sortList(listA,listB):
    listA.sort()
    currentElement = listA[0]
    compareTo = listA[0]
    elementsCounted = 0
    i = 0
    listB.append([])
    while elementsCounted < len(listA):
        while currentElement == compareTo:
            listB[i].append(currentElement)
            elementsCounted += 1
            print(listB)
            if elementsCounted < len(listA):
                currentElement = listA[elementsCounted]
            else:
                break
        if currentElement != compareTo:
            i += 1
            listB.append([])
            compareTo = listA[i]
    return 0

def main():
    myList = list()
    sortedList = list()
    getInput(myList)
    sortList(myList,sortedList)
    print(sortedList)

main()

If the user enters qwerty, the program returns [['E'], ['Q'], ['R'], ['T'], ['W'], ['Y']] which is correct, but if the user enters qwwerrty the program returns [['E'], ['Q'], ['R', 'R'], [], ['T'], ['W', 'W'], [], ['Y']]. Note the extra empty list after each "double" character. It appears that the loop is making one extra iteration or that the if statement before listB.append([]) isn't written properly. I can't seem to figure it out more than this. Thank you in advance for your help.

NOTE: elementsCounted should be a cumulative tally of each element that has been processed from listA. i is the index of the current element in listB. For example, if ['A','A','B'] was listA and the program is processing the second A, then it is the second element being counted but i is still 0 because it belongs in listB[0].
currentElement is the one currently being processed and it is being compared to the first element that was processed as that "i". For the ['A','A','B'] example, when processing the second A, it is being compared to the first A to see if `i` should be incremented. In the next loop, it is comparing 'B' to the first 'A' and thus will increase `i` by one since 'B' belongs in the next sub-list. | Your mistake lies in this part:if currentElement != compareTo: ... compareTo = listA[i]It should be:if currentElement != compareTo: ... compareTo = listA[elementsCounted]It's an overly complex function for such a simple task.
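For reference, the grouping itself can be done with the standard library's itertools.groupby, which removes all of the index bookkeeping (shown as an alternative sketch, not the assignment's required approach):

```python
from itertools import groupby

def sort_into_sublists(text):
    # Sort the characters first (groupby only groups consecutive equal
    # items), then collect each run of identical characters into a sub-list.
    return [list(group) for _, group in groupby(sorted(text.upper()))]
```

For 'qwwerrty' this yields [['E'], ['Q'], ['R', 'R'], ['T'], ['W', 'W'], ['Y']], with no extra empty lists.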
Retrieve Test Parameter Values from Quality Center I have been trying to get the actual value of my parameters from Quality Center that have been set in my test's test configuration. I am using the OTA API through python. I cannot seem to get anything but the default value. Where should I be retrieving the parameter's value from? The test, design steps, test configuration, test set? If someone could point me in the right direction that would help.Thanks!Jason | Can you post your code? I may be able to help you out. Have a look at the following code, assuming you know how to set the connection up (you need Test Lab --> test set, which usually starts with Root) - hope this helps: GetTest=test_lab_folder.TestSetFactory TestSetFilter=GetTest.Filter GetTSList=GetTest.NewList(TestSetFilter.Text) for j in range (1,GetTSList.Count + 1): TestSet=GetTSList.Item(j) print TestSet.Name LabTests=TestSet.TSTestFactory LabTestSet=LabTests.NewList("") for k in range(1,LabTestSet.Count +1 ): LabTest=LabTestSet.Item(k) TestsetParam=LabTest.ParameterValueFactory ParamFilter=TestsetParam.Filter NewParamList=TestsetParam.NewList(ParamFilter.Text) for n in range (1,NewParamList.Count + 1): param=NewParamList.Item(n) print param.ActualValue
Playing a sound in an IPython notebook I would like to be able to play a sound file in an IPython notebook.My aim is to be able to listen to the results of different treatments applied to a sound directly from within the notebook.Is this possible? If yes, what is the best solution to do so? | You can use IPython.display.Audio for this. Like this:import IPythonIPython.display.Audio("my_audio_file.mp3")Note that you can also process any type of audio content, and pass it to this function as a numpy array.If you want to display multiple audio files, use the following:IPython.display.display(IPython.display.Audio("my_audio_file.mp3"))IPython.display.display(IPython.display.Audio("my_audio_file.mp3"))
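If the sounds being compared are generated rather than read from disk, one option is to synthesise a WAV file with the standard library and hand it to IPython.display.Audio. A sketch (the helper name and the 440 Hz test tone are just for illustration):

```python
import math
import struct
import wave

def write_sine_wav(path, freq=440.0, seconds=1.0, rate=44100):
    """Write a mono 16-bit sine tone to `path`; inside a notebook the
    result can then be played with IPython.display.Audio(path)."""
    n_frames = int(seconds * rate)
    with wave.open(path, 'wb') as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(rate)
        for i in range(n_frames):
            sample = int(32767 * math.sin(2 * math.pi * freq * i / rate))
            wav.writeframes(struct.pack('<h', sample))
    return n_frames
```

The same data could be kept in memory as a NumPy array and passed to Audio(data, rate=rate) directly, per the answer above.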
Saving XML using ETree in Python. It's not retaining namespaces, and adding ns0, ns1 and removing xmlns tags I see there are similar questions here, but nothing that has totally helped me. I've also looked at the official documentation on namespaces but can't find anything that is really helping me, perhaps I'm just too new at XML formatting.I understand that perhaps I need to create my own namespace dictionary? Either way, here is my situation:I am getting a result from an API call, it gives me an XML that is stored as a string in my Python application. What I'm trying to accomplish is just grab this XML, swap out a tiny value (The b:string value user ConditionValue/Default but that's irrelevant to this question)and then save it as a string to send later on in a Rest POST call.The source XML looks like this:<Context xmlns="http://Test.the.Sdk/2010/07" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><xmlns i:nil="true" xmlns="http://schema.test.org/2004/07/Test.Soa.Vocab" xmlns:a="http://schema.test.org/2004/07/System.Xml.Serialize"/><Conditions xmlns:a="http://schema.test.org/2004/07/Test.Soa.Vocab"> <a:Condition> <a:xmlns i:nil="true" xmlns:b="http://schema.test.org/2004/07/System.Xml.Serialize"/> <Identifier>a23aacaf-9b6b-424f-92bb-5ab71505e3bc</Identifier> <Name>Code</Name> <ParameterSelections/> <ParameterSetCollections/> <Parameters/> <Summary i:nil="true"/> <Instance>25486d6c-36ba-4ab2-9fa6-0dbafbcf0389</Instance> <ConditionValue> <ComplexValue i:nil="true"/> <Text i:nil="true" xmlns:b="http://schemas.microsoft.com/2003/10/Serialization/Arrays"/> <Default> <ComplexValue i:nil="true"/> <Text xmlns:b="http://schemas.microsoft.com/2003/10/Serialization/Arrays"> <b:string>NULLCODE</b:string> </Text> </Default> </ConditionValue> <TypeCode>String</TypeCode> </a:Condition> <a:Condition> <a:xmlns i:nil="true" xmlns:b="http://schema.test.org/2004/07/System.Xml.Serialize"/> <Identifier>0af860f6-5611-4a23-96dc-eb3863975529</Identifier> <Name>Content Type</Name> 
<ParameterSelections/> <ParameterSetCollections/> <Parameters/> <Summary i:nil="true"/> <Instance>6364ec20-306a-4cab-aabc-8ec65c0903c9</Instance> <ConditionValue> <ComplexValue i:nil="true"/> <Text i:nil="true" xmlns:b="http://schemas.microsoft.com/2003/10/Serialization/Arrays"/> <Default> <ComplexValue i:nil="true"/> <Text xmlns:b="http://schemas.microsoft.com/2003/10/Serialization/Arrays"> <b:string>Standard</b:string> </Text> </Default> </ConditionValue> <TypeCode>String</TypeCode> </a:Condition></Conditions>My job is to swap out one of the values, retaining the entire structure of the source, and use this to submit a POST later on in the application. The problem that I am having is that when it saves to a string or to a file, it totally messes up the namespaces:<ns0:Context xmlns:ns0="http://Test.the.Sdk/2010/07" xmlns:ns1="http://schema.test.org/2004/07/Test.Soa.Vocab" xmlns:ns3="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><ns1:xmlns xsi:nil="true" /><ns0:Conditions><ns1:Condition><ns1:xmlns xsi:nil="true" /><ns0:Identifier>a23aacaf-9b6b-424f-92bb-5ab71505e3bc</ns0:Identifier><ns0:Name>Code</ns0:Name><ns0:ParameterSelections /><ns0:ParameterSetCollections /><ns0:Parameters /><ns0:Summary xsi:nil="true" /><ns0:Instance>25486d6c-36ba-4ab2-9fa6-0dbafbcf0389</ns0:Instance><ns0:ConditionValue><ns0:ComplexValue xsi:nil="true" /><ns0:Text xsi:nil="true" /><ns0:Default><ns0:ComplexValue xsi:nil="true" /><ns0:Text><ns3:string>NULLCODE</ns3:string></ns0:Text></ns0:Default></ns0:ConditionValue><ns0:TypeCode>String</ns0:TypeCode></ns1:Condition><ns1:Condition><ns1:xmlns xsi:nil="true" /><ns0:Identifier>0af860f6-5611-4a23-96dc-eb3863975529</ns0:Identifier><ns0:Name>Content Type</ns0:Name><ns0:ParameterSelections /><ns0:ParameterSetCollections /><ns0:Parameters /><ns0:Summary xsi:nil="true" /><ns0:Instance>6364ec20-306a-4cab-aabc-8ec65c0903c9</ns0:Instance><ns0:ConditionValue><ns0:ComplexValue 
xsi:nil="true" /><ns0:Text xsi:nil="true" /><ns0:Default><ns0:ComplexValue xsi:nil="true" /><ns0:Text><ns3:string>Standard</ns3:string></ns0:Text></ns0:Default></ns0:ConditionValue><ns0:TypeCode>String</ns0:TypeCode></ns1:Condition></ns0:Conditions>I've narrowed the code down to the most basic form and I'm still getting the same results so it's not anything to do with how I'm manipulating the file normally:import xml.etree.ElementTree as ETimport requestsget_context_xml = 'http://localhost/testapi/returnxml' #returns first XML example above.source_context_xml = requests.get(get_context_xml)Tree = ET.fromstring(source_context_xml.text)#Ensure the original namespaces are intact.for Conditions in Tree.iter('{http://schema.test.org/2004/07/Test.Soa.Vocab}Condition'): print "success"with open('/home/memyself/output.xml','w') as f: f.write(ET.tostring(Tree)) | You need to register the prefix and the namespace before you serialise the tree with tostring() to avoid the default namespace prefixes (like ns0, ns1, etc.). You can use the ET.register_namespace() function for that. Example -ET.register_namespace('<prefix>','http://Test.the.Sdk/2010/07')ET.register_namespace('a','http://schema.test.org/2004/07/Test.Soa.Vocab')You can leave the <prefix> empty if you do not want a prefix.Example/Demo ->>> r = ET.fromstring('<a xmlns="blah">a</a>')>>> ET.tostring(r)b'<ns0:a xmlns:ns0="blah">a</ns0:a>'>>> ET.register_namespace('','blah')>>> r = ET.fromstring('<a xmlns="blah">a</a>')>>> ET.tostring(r)b'<a xmlns="blah">a</a>'
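To see the effect in isolation: register_namespace() updates state that is global to the ElementTree module, and the registered prefixes are used when the tree is serialised. A self-contained sketch with one of the question's namespaces:

```python
import xml.etree.ElementTree as ET

xml_in = ('<Context xmlns="http://Test.the.Sdk/2010/07">'
          '<Name>Code</Name></Context>')

# Without registration, serialising invents ns0: prefixes.
ugly = ET.tostring(ET.fromstring(xml_in), encoding='unicode')

# Register the default (empty-prefix) namespace, then serialise again:
# the original xmlns form comes back.
ET.register_namespace('', 'http://Test.the.Sdk/2010/07')
clean = ET.tostring(ET.fromstring(xml_in), encoding='unicode')
```

The question's document would need one register_namespace() call per namespace URI (the a:, b:, and i: prefixes included) before writing the output file.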
How to write a customized LSTM in tensorflow? I am trying to reimplement this paper Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems, in which they add a gate to the LSTM cell and change how the state is computed.How can I do this in TensorFlow? Do I need to add a new op? | The tf.nn.rnn() and tf.nn.dynamic_rnn() functions accept an argument cell of type tf.nn.rnn_cell.RNNCell. For example you can take a look at the implementation of tf.nn.rnn_cell.BasicLSTMCell (in particular the BasicLSTMCell.__call__() method), which might be a good starting point for your customized LSTM.
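Before writing a custom RNNCell, it can help to prototype the modified state update directly. A NumPy sketch of one step of an SC-LSTM-style cell — the extra reading gate r and dialogue-act vector d follow the paper's idea, but this exact parameterisation is a simplified assumption, not the paper's or TensorFlow's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sc_lstm_step(x, h_prev, c_prev, d_prev, W, W_r, W_d):
    """One step of a simplified SC-LSTM-style cell.

    W   : weights for the 4 standard gates (i, f, o, g), shape (4H, X+H)
    W_r : weights for the extra reading gate,            shape (D,  X+H)
    W_d : projects the dialogue-act vector into the cell, shape (H, D)
    """
    H = h_prev.shape[0]
    xh = np.concatenate([x, h_prev])
    z = W @ xh
    i = sigmoid(z[0 * H:1 * H])   # input gate
    f = sigmoid(z[1 * H:2 * H])   # forget gate
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:4 * H])   # candidate values
    r = sigmoid(W_r @ xh)         # reading gate: how much of d to keep
    d = r * d_prev                # gated dialogue-act vector
    # The tanh(W_d @ d) term is the SC-LSTM addition to the cell state.
    c = f * c_prev + i * g + np.tanh(W_d @ d)
    h = o * np.tanh(c)
    return h, c, d
```

Once the equations behave as expected, porting them into a RNNCell subclass (implementing `__call__`, `state_size`, and `output_size`) is mostly mechanical; no new C++ op is needed since everything is built from existing tensor operations.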
Advice extracting //td text and numbers I have been working through the tutorial adapting it to a project I want to achieve. I seem to have something going wrong that i just can't find the error to.When using 'scrapy shell' I can get the response I expect. So for this site Nrl LadderIn [1]: hxs.select('//td').extract()Out[1]: [u'<td>\r\n<div id="ls-nav">\r\n<ul><li><a href="http://www.nrlstats.com/"><span>Home</span></a></li>\r\n<li class="ls-nav-on"><a href="/nrl"><span>NRL</span></a></li>\r\n<li><a href="/nyc"><span>NYC</span></a></li>\r\n<li><a href="/rep"><span>Rep Matches</span></a></li>\r\n\r\n</ul></div>\r\n</td>', u'<td style="text-align:left" colspan="5">Round 4</td>', u'<td colspan="5">Updated: 26/3/2012</td>', u'<td style="text-align:left">1. Melbourne</td>', u'<td>4</td>', u'<td>4</td>', u'<td>0</td>', u'<td>0</td>', u'<td>0</td>', u'<td>122</td>', u'<td>39</td>', u'<td>83</td>', u'<td>8</td>', u'<td style="text-align:left">2. Canterbury-Bankstown</td>',And on it goes.I am really struggling to understand how to alter the tutorial project to change it to a different data type. Is there anyway to bring up a help or documentation list to see what types I should use in items when using 'td' or any other item. Like i say it works easy in the shell but I cannot transform it to the files. 
Specifically both the team names and the points are 'td' but the team name is text. Here is what I have done.from scrapy.spider import BaseSpiderfrom scrapy.selector import HtmlXPathSelectorfrom nrl.items import NrlItemclass nrl(BaseSpider): name = "nrl" allowed_domains = ["http://live.nrlstats.com/"] start_urls = [ "http://live.nrlstats.com/nrl/ladder.html", ] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select('//td') items = [] for site in sites: item = nrlItem() item['team'] = site.select('/text()').extract() item['points'] = site.select('/').extract() items.append(item) return items | I didn't quite understand your question, but here is a starting point, imo (haven't tested; see some comments in the code):from scrapy.spider import BaseSpiderfrom scrapy.selector import HtmlXPathSelectorfrom nrl.items import NrlItemclass nrl(BaseSpider): name = "nrl" allowed_domains = ["live.nrlstats.com"] # domains should be like this start_urls = [ "http://live.nrlstats.com/nrl/ladder.html", ] def parse(self, response): hxs = HtmlXPathSelector(response) rows = hxs.select('//table[@class="tabler"]//tr[starts-with(@class, "r")]') # select team rows items = [] for row in rows: item = NrlItem() columns = row.select('./td/text()').extract() # select columns for the selected row item['team'] = columns[0] item['P'] = int(columns[1]) item['W'] = int(columns[2]) ... items.append(item) return itemsUPDATE://table[@class="tabler"]//tr[starts-with(@class, "r")] is an xpath query. See some xpath examples here. hxs.select(xpath_query) always returns a list of nodes (also of type HtmlXPathSelector) which fall under the given query.hxs.extract() returns string representation of the node(s).P.S. Beware that scrapy supports XPath 1.0, but not 2.0 (at least on Linux, not sure about Windows), so some of the newest xpath features might not work.See also: http://doc.scrapy.org/en/latest/topics/selectors.htmlhttp://doc.scrapy.org/en/latest/topics/firefox.html
How To Dynamically Use User Input for Jira Python So I am trying to make an interactive method of pulling out Jira information, based on a Jira Key.Full Code:import osfrom atlassian import Jiraimport jsonwith open('secrets.json','r') as f: config = json.load(f)jira_instance = Jira( url = "https://mirantis.jira.com", username = (config['user']['username']), password = (config['user']['password']))projects = jira_instance.get_all_projects(included_archived=None)value = input("Please enter your Jira Key and the Issue ID:\n")jira_key = (value)issue = jira_instance.issue('(jira_key)', fields='summary,history,created,updated')#issue = jira_instance.issue('DESDI-212', fields='summary,history,created,updated')print(issue)The main thing that is breaking is this:issue = jira_instance.issue('(jira_key)', fields='summary,history,created,updated')For some odd reason, it doesn't like the way I am using user input for jira_key even though it will print out what I want if I use print(jira_key)Am I invoking it wrong?I basically need this:issue = jira_instance.issue('DESDI-212', fields='summary,history,created,updated')Whereby which, DESDI-212 will be user input.When I try it using '(jira_key)' it responds back with this error: rbarrett@MacBook-Pro-2 ~/Projects/Mirantis/Dataeng/Python python test_single.py ✔ 10422 22:03:34Please enter your Jira Key and the Issue ID:DESDI-212Traceback (most recent call last): File "/Users/rbarrett/Projects/Mirantis/Dataeng/Python/test_single.py", line 19, in <module> issue = jira_instance.issue('(jira_key)', fields='summary,history,created,updated') File "/usr/local/lib/python3.9/site-packages/atlassian/jira.py", line 676, in issue return self.get("rest/api/2/issue/{0}?fields={1}".format(key, fields), params=params) File "/usr/local/lib/python3.9/site-packages/atlassian/rest_client.py", line 264, in get response = self.request( File "/usr/local/lib/python3.9/site-packages/atlassian/rest_client.py", line 236, in request self.raise_for_status(response) 
 File "/usr/local/lib/python3.9/site-packages/atlassian/jira.py", line 3715, in raise_for_status raise HTTPError(error_msg, response=response)requests.exceptions.HTTPError: Issue does not exist or you do not have permission to see it.I expect to see this, which actually works if I use 'DESDI-212' instead of '(jira_key)':{'expand': 'renderedFields,names,schema,operations,editmeta,changelog,versionedRepresentations', 'id': '372744', 'self': 'https://mirantis.jira.com/rest/api/2/issue/372744', 'key': 'DESDI-212', 'fields': {'summary': 'Add the MSR version to be parsed into Loadstone', 'updated': '2021-06-23T17:33:21.206-0700', 'created': '2021-06-01T12:54:06.136-0700'}} | So it turns out I was invoking it wrong.I needed to drop the quotes around '(jira_key)' and pass the variable itself, as (jira_key), instead:import osfrom atlassian import Jiraimport jsonwith open('secrets.json','r') as f: config = json.load(f)jira_instance = Jira( url = "https://mirantis.jira.com", username = (config['user']['username']), password = (config['user']['password']))projects = jira_instance.get_all_projects(included_archived=None)value = input("Please enter your Jira Key and the Issue ID:\n")jira_key = (value)issue = jira_instance.issue((jira_key), fields='summary,history,created,updated')#issue = jira_instance.issue('DESDI-212', fields='summary,history,created,updated')print(issue)As such I got the expected output I needed; now it's working as expected: rbarrett@MacBook-Pro-2 ~/Projects/Mirantis/Dataeng/Python python test_single.py ✔ 10428 22:22:17Please enter your Jira Key and the Issue ID:DESDI-212{'expand': 'renderedFields,names,schema,operations,editmeta,changelog,versionedRepresentations', 'id': '372744', 'self': 'https://mirantis.jira.com/rest/api/2/issue/372744', 'key': 'DESDI-212', 'fields': {'summary': 'Add the MSR version to be parsed into Loadstone', 'updated': '2021-06-23T17:33:21.206-0700', 'created': '2021-06-01T12:54:06.136-0700'}}
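The underlying Python point: quotes create a string literal, so '(jira_key)' is just ten characters of text, never the value of the variable. A minimal illustration (the jira_instance call is only referenced in a comment):

```python
jira_key = "DESDI-212"  # what the user typed

literal = '(jira_key)'            # the literal string "(jira_key)"
variable = jira_key               # the value the user typed
formatted = f"issue={jira_key}"   # f-strings interpolate the value

# Passing `literal` to jira_instance.issue() asks Jira for an issue
# literally keyed "(jira_key)", which is why the lookup fails with
# "Issue does not exist". Passing `variable` (or just jira_key) works.
```

The parentheses in `(jira_key)` are redundant but harmless; `jira_instance.issue(jira_key, ...)` is the idiomatic form.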
Design pattern to organize non-trivial ORM queries? I am developing a web API with 10 tables or so in the backend, with several one-to-many and many-to-many associations. The API essentially is a database wrapper that performs validated updates and conditional queries. It's written in Python, and I use SQLAlchemy for ORM and CherryPy for HTTP handling.So far I have separated the 30-some queries the API performs into functions of their own, which look like this:# in module "services.inventory"def find_inventories(session, user_id, *inventory_ids, **kwargs): query = session.query(Inventory, Product) query = query.filter_by(user_id=user_id, deleted=False) ... return query.all()def find_inventories_by(session, app_id, user_id, by_app_id, by_type, limit, page): ....# in another service moduledef remove_old_goodie(session, app_id, user_id): try: old = _current_goodie(session, app_id, user_id) services.inventory._remove(session, app_id, user_id, [old.id]) except ServiceException, e: # log it and do stuff....The CherryPy request handler calls the query methods, which are scattered across several service modules, as needed. The rationale behind this solution is, since they need to access multiple model classes, they don't belong to individual models, and also these database queries should be separated out from direct handling of API accesses.I realize that the above code might be called Foreign Methods in the realm of refactoring. 
I could well live with this way of organizing for a while, but as things are starting to look a little messy, I'm looking for a way to refactor this code.Since the queries are tied directly to the API and its business logic, they are hard to generalize like getters and setters.It smells to repeat the session argument like that, but as the current implementation of the API creates a new CherryPy handler instance for each API call and therefore the session object, there is no global way of getting at the current session.Is there a well-established pattern to organize such queries? Should I stick with the Foreign Methods and just try to unify the function signature (argument ordering, naming conventions etc.)? What would you suggest? | The standard way to have global access to the current session in a threaded environment is ScopedSession. There are some important aspects to get right when integrating with your framework, mainly transaction control and clearing out sessions between requests. A common pattern is to have an autocommit=False (the default) ScopedSession in a module and wrap any business logic execution in a try-catch clause that rolls back in case of exception and commits if the method succeeded, then finally calls Session.remove(). The business logic would then import the Session object into global scope and use it like a regular session.There seems to be an existing CherryPy-SQLAlchemy integration module, but as I'm not too familiar with CherryPy, I can't comment on its quality.Having queries encapsulated as functions is just fine. Not everything needs to be in a class. If they get too numerous just split into separate modules by topic.What I have found useful is to factor out common criteria fragments. They usually fit rather well as classmethods on model classes. Aside from increasing readability and reducing duplication, they work as implementation hiding abstractions up to some extent, making refactoring the database less painful.
(Example: instead of (Foo.valid_from <= func.current_timestamp()) & (Foo.valid_until > func.current_timestamp()) you'd have Foo.is_valid()) |
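The criteria-fragment idea can be seen without SQLAlchemy in the picture at all: the classmethod hands back something the query layer consumes, and callers never touch the raw column comparison. A framework-free toy version of the pattern (with SQLAlchemy you would return a filter expression rather than a predicate function):

```python
from datetime import datetime

class Foo:
    def __init__(self, valid_from, valid_until):
        self.valid_from = valid_from
        self.valid_until = valid_until

    @classmethod
    def is_valid(cls, now=None):
        """Return a reusable predicate; callers never repeat the raw
        valid_from/valid_until range check."""
        now = now or datetime.now()
        return lambda foo: foo.valid_from <= now < foo.valid_until

# Usage: filter(Foo.is_valid(), foos) instead of inlining the range
# comparison at every call site. If the validity rule changes (say, a
# soft-delete flag is added), only is_valid() needs to be touched.
```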
Scheduling a task in python I'm trying to schedule a task every 5 seconds; here's what I did:import scheduleimport timeimport tweepy from threading import Timerdef job(): iGen = (i for i in range(1, 6)) for i in iGen: i += 1 mymessage = "My message here " + str(i) print(mymessage)schedule.every(5).seconds.do(job)while 1: schedule.run_pending() time.sleep(1)but the result is:My message here 2 ..after 5 secsMy message here 3My message here 4My message here 5My message here 6My message here 2 ..after 5 secsMy message here 3My message here 4My message here 5My message here 6My message here 2 ..after 5 secsMy message here 3My message here 4My message here 5My message here 6what i need is:My message here 2 ..after 5 secsMy message here 3 ..after 5 secs My message here 4 ..after 5 secsMy message here 5 ..after 5 secsMy message here 6 ..after 5 secsSorry for the newbie question, thank you | Your job is to loop over 2-6, printing for each. It sounds like you want the job to just print once each time it runs. This would do that, but would not number the messages.import scheduleimport timedef job(): print("Message")schedule.every(5).seconds.do(job)while 1: schedule.run_pending() time.sleep(1)To get numbering is a bit more complicated, but you can do it with a static variable:import scheduleimport timedef job(): job.i += 1 print("Message: " + str(job.i))job.i = 1schedule.every(5).seconds.do(job)while 1: schedule.run_pending() time.sleep(1)
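The counter can also live in a closure instead of a function attribute, and for a simple fixed interval the third-party schedule package is optional — a plain sleep loop does the job. A sketch (interval and iteration count are parameters so the example finishes quickly; in the question's setting they would be 5 seconds and an unbounded loop):

```python
import time

def make_job(prefix="My message here"):
    count = [1]  # mutable cell so the closure can update it across calls
    def job():
        count[0] += 1          # first call prints "... 2", like the question
        message = f"{prefix} {count[0]}"
        print(message)
        return message
    return job

def run_every(job, interval, times):
    # Fixed-interval scheduler: run the job, then sleep, `times` times.
    results = []
    for _ in range(times):
        results.append(job())
        time.sleep(interval)
    return results
```

`run_every(make_job(), 5, 5)` would print messages 2 through 6, one every 5 seconds, which is the output the question asks for.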
Disable OpenGL for Python / Matplotlib I'm doing a Python course for which I have installed Arch Linux in a VM. When I use Matplotlib.pyplot to plot things (x vs y) I get a bunch of errors.libGL error: pci id for fd 12: 80ee:beef, driver (null)OpenGL Warning: glFlushVertexArrayRangeNV not found in mesa tableOpenGL Warning: glVertexArrayRangeNV not found in mesa tableOpenGL Warning: glCombinerInputNV not found in mesa tableOpenGL Warning: glCombinerOutputNV not found in mesa tableOpenGL Warning: glCombinerParameterfNV not found in mesa tableOpenGL Warning: glCombinerParameterfvNV not found in mesa tableOpenGL Warning: glCombinerParameteriNV not found in mesa tableOpenGL Warning: glCombinerParameterivNV not found in mesa tableOpenGL Warning: glFinalCombinerInputNV not found in mesa tableOpenGL Warning: glGetCombinerInputParameterfvNV not found in mesa tableOpenGL Warning: glGetCombinerInputParameterivNV not found in mesa tableOpenGL Warning: glGetCombinerOutputParameterfvNV not found in mesa tableOpenGL Warning: glGetCombinerOutputParameterivNV not found in mesa tableOpenGL Warning: glGetFinalCombinerInputParameterfvNV not found in mesa tableOpenGL Warning: glGetFinalCombinerInputParameterivNV not found in mesa tableOpenGL Warning: glDeleteFencesNV not found in mesa tableOpenGL Warning: glFinishFenceNV not found in mesa tableOpenGL Warning: glGenFencesNV not found in mesa tableOpenGL Warning: glGetFenceivNV not found in mesa tableOpenGL Warning: glIsFenceNV not found in mesa tableOpenGL Warning: glSetFenceNV not found in mesa tableOpenGL Warning: glTestFenceNV not found in mesa tablelibGL error: core dri or dri2 extension not foundlibGL error: failed to load driver: vboxvideoOpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240OpenGL Warning: Retry with 0x8002 returned 0 visualsOpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240OpenGL Warning: Retry with 0x8002 returned 0 visualsOpenGL Warning: glXGetFBConfigAttrib for 
00007f6ff33d0240, failed to get XVisualInfoOpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240OpenGL Warning: Retry with 0x8002 returned 0 visualsOpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfoOpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240OpenGL Warning: Retry with 0x8002 returned 0 visualsOpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfoOpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240OpenGL Warning: Retry with 0x8002 returned 0 visualsOpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfoOpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240OpenGL Warning: Retry with 0x8002 returned 0 visualsOpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfoOpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240OpenGL Warning: Retry with 0x8002 returned 0 visualsOpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfoOpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240OpenGL Warning: Retry with 0x8002 returned 0 visualsOpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfoOpenGL Warning: XGetVisualInfo returned 0 visuals for 00007f6ff33d0240OpenGL Warning: Retry with 0x8002 returned 0 visualsOpenGL Warning: glXGetFBConfigAttrib for 00007f6ff33d0240, failed to get XVisualInfoWhen I turn of 3D support for the VM it simply asks for openGL. My script does create a plot (empty canvas) but without a line.I think it should be possible to draw some lines without openGL, right? How to go about this...Edit: I think it was a VirtualBox bug combined with an error in my Python code. I could actually get good graphs with the error messages present in the end. In the latest versions of VirtualBox I'm not getting the error anymore. Thanx for the suggestions. 
| So, despite all the errors, nothing ever actually failed to work; the fact that I didn't see graphs was not due to the error in the original post. It was something else, I guess unrelated to mpl and more related to the lack of 3D acceleration in VirtualBox.
How to use findNumbers in Google PhoneNumberLib? I am using Googles Phone Number Library to find phone numbers in a text file. That phone number can be in any format or from any country. Regex is not solving the problem. I was coding in 3rd party python version of it, but it is not that good and I can't find a way to use FindNumbers function. How to use it in Java or even better in python?Here is an Example: 440-991-6659(F) | IN the python port that you link to, there is a PhoneNumberMatcher class that provides the FindNumbers functionality. The code is here.From the project's README: Sometimes, you've got a larger block of text that may or may not have some phone numbers inside it. For this, the PhoneNumberMatcher object provides the relevant functionality; you can iterate over it to retrieve a sequence of PhoneNumberMatch objects. Each of these match objects holds a PhoneNumber object together with information about where the match occurred in the original string.>>> text = "Call me at 510-748-8230 if it's before 9:30, or on 703-4800500 after 10am.">>> for match in phonenumbers.PhoneNumberMatcher(text, "US"):... print match...PhoneNumberMatch [11,23) 510-748-8230PhoneNumberMatch [51,62) 703-4800500>>> for match in phonenumbers.PhoneNumberMatcher(text, "US"):... print phonenumbers.format_number(match.number, phonenumbers.PhoneNumberFormat.E164)...+15107488230+17034800500 |
I can't understand this code in Python, can you help me? I had a code assignment but I couldn't find the answer, so I checked it on the net. The code is written in Python. The code is absolutely right but I cannot understand it. I am pretty much new to Python so please help me.Here is the questionAssume s is a string of lower case characters.Write a program that prints the longest substring of s in which the letters occur in alphabetical order. For example, if s = 'azcbobobegghakl', then your program should printLongest substring in alphabetical order is: begghIn the case of ties, print the first substring. For example, if s = 'abcbcd', then your program should printLongest substring in alphabetical order is: abcThe code is: # initialise tracker variables maxLen=0 current=s[0] longest=s[0] # step through s indices for i in range(len(s) - 1): if s[i + 1] >= s[i]: current += s[i + 1] # if current length is bigger update if len(current) > maxLen: maxLen = len(current) longest = current else: current=s[i + 1] i += 1print ('Longest substring in alphabetical order is: ' + longest) | s="abdhbdwba"maxLen=0 # sets the current highest length to 0current=s[0] # sets the current letter to the first letter (this is the output string)longest=s[0] # sets the longest letter to the first letter(just for programming sake)# step through s indicesfor i in range(len(s) - 1): # goes over every letter in the string s except the last letter if s[i + 1] >= s[i]: # checks if the next letter in the string is greater than (in ascii code) the current letter current += s[i + 1] # if it is, adds the next letter to the current value if len(current) > maxLen: # if we've got to a sequence that is larger, just set the max length to the length of the sequance maxLen = len(current) # just lets the max length to the current length longest = current # just sets the longest to the current value else: current=s[i + 1] # just sets the current as is i += 1 # not sure why this is here?print ('Longest substring in
alphabetical order is: ' + longest) # just prints it outLet's just go over some basics:for i in range(x): print(i)Will print 0, 1, 2, ..., x - 1x = y[i + 1]x will now equal the element at index i + 1 in the array ylen(x)Will output how long the string x is
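For comparison, the same scan can be written more compactly while keeping the tie-breaking behaviour (the first longest run wins, because the best run is only replaced on a strictly greater length):

```python
def longest_alpha_substring(s):
    # Walk the string pairwise: extend the current run while the next
    # character is >= the previous one, otherwise start a new run.
    best = current = s[0]
    for prev, ch in zip(s, s[1:]):
        current = current + ch if ch >= prev else ch
        if len(current) > len(best):
            best = current
    return best
```

This is the same algorithm as the assignment's code, with the maxLen and elementsCounted-style bookkeeping folded into len() calls.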
Cursor when returning dictionary and print where are the keys I am trying to understand the data structures returned by cursorI have the following code:con = psycopg2.connect("dbname='testdb2' user='kevin'") cursor = con.cursor(cursor_factory=psycopg2.extras.DictCursor)cursor.execute("SELECT * FROM Cars")rows = cursor.fetchall()for row in rows: print row["id"], row["name"], row["price"]Which outputs:1 Audi 526422 Mercedes 571273 Skoda 9000etc....if I sayfor row in rows: print rowsit outputs[[1, 'Audi', 52642], [2, 'Mercedes', 57127], [3, 'Skoda', 9000], [4, 'Volvo', 29000], [5, 'Bentley', 350000], [6, 'Citroen', 21000], [7, 'Hummer', 41400], [8, 'Volkswagen', 21600]]Where are the keys ? I was expecting an out put like this [['Id': '1' , 'name':'Audi', 'price:'52642'], ['Id': '2' , 'name':'Mercedes', 'price:'57127'] ....etcI am not sure if it from my lack of understanding python that I did expect that output. | Each rown is a DictRow which inherits from list:https://github.com/psycopg/psycopg2/blob/master/lib/extras.pyclass DictRow(list): """A row object that allow by-column-name access to data.""" __slots__ = ('_index',) def __init__(self, cursor): self._index = cursor.index self[:] = [None] * len(cursor.description) def __getitem__(self, x): if not isinstance(x, (int, slice)): x = self._index[x] return list.__getitem__(self, x) def __setitem__(self, x, v): if not isinstance(x, (int, slice)): x = self._index[x] list.__setitem__(self, x, v) def items(self): return list(self.iteritems()) def keys(self): return self._index.keys() def values(self): return tuple(self[:]) def has_key(self, x): return x in self._index def get(self, x, default=None): try: return self[x] except: return default def iteritems(self): for n, v in self._index.iteritems(): yield n, list.__getitem__(self, v) def iterkeys(self): return self._index.iterkeys() def itervalues(self): return list.__iter__(self) def copy(self): return dict(self.iteritems()) def __contains__(self, x): return x in 
self._index def __getstate__(self): return self[:], self._index.copy() def __setstate__(self, data): self[:] = data[0] self._index = data[1] if _sys.version_info[0] > 2: items = iteritems; del iteritems keys = iterkeys; del iterkeys values = itervalues; del itervalues del has_key |
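A stripped-down stand-in makes the behaviour obvious: because the row type inherits from list, printing a row shows the plain list repr, while the keys live in a separate name-to-position index used only for lookups. This toy mirrors the DictRow source above (the class and column names here are made up):

```python
class ToyDictRow(list):
    """Minimal stand-in for psycopg2's DictRow: a list that also allows
    lookup by column name via a name->position index."""
    def __init__(self, index, values):
        super().__init__(values)
        self._index = index            # e.g. {'id': 0, 'name': 1, ...}

    def __getitem__(self, key):
        if not isinstance(key, (int, slice)):
            key = self._index[key]     # translate name -> position
        return list.__getitem__(self, key)

    def as_dict(self):
        # Equivalent to DictRow.copy(): materialise a real dict.
        return {name: self[pos] for name, pos in self._index.items()}

index = {'id': 0, 'name': 1, 'price': 2}
row = ToyDictRow(index, [1, 'Audi', 52642])
```

So `row['name']` works, but `print(row)` shows `[1, 'Audi', 52642]` with no keys. To see dictionaries, build them explicitly, e.g. `[dict(r.items()) for r in rows]` (or `r.copy()`) with real DictRow objects.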
Check if the string contains the substring returns true when its actually false Is it a problem with my editor or what stupid mistake am I making ? Here is the screen-shotThis code returns true and it actually shoulda = "https://www.reddit.com/comments/ado0ym/use_reddit_coins_to_award_gold_to_your_favorite/"b = "use_reddit_coins_to_award_gold_to_your_favorite"if b in a: print("true")# Results return trueBut this must return False but returns True a = "https: // www.reddit.com/comments/ado0ym/"b = "use_reddit_coins_to_award_gold_to_your_favorite"if b in a: print("true")# Results return true | works fine: First one returns True, second one returns False:If you're running your code, it should correctly print true because the first set is True, and then prints nothing after that:trueif both were True, you would seetruetrueSee below:a = "https://www.reddit.com/comments/ado0ym/use_reddit_coins_to_award_gold_to_your_favorite/"b = "use_reddit_coins_to_award_gold_to_your_favorite"print (b in a)a = "https: // www.reddit.com/comments/ado0ym/"b = "use_reddit_coins_to_award_gold_to_your_favorite"print (b in a) Output:TrueFalse |
Error running python manage.py I'm using flask with Ubuntu, and when I run python manage.py I get this Traceback:Traceback (most recent call last): File "manage.py", line 8, in <module> app.run(debug=True,processes=True) File "/proj/local/lib/python2.7/site-packages/flask/app.py", line 772, in run run_simple(host, port, self, **options) File "/proj/local/lib/python2.7/site-packages/werkzeug/serving.py", line 671, in run_simple s.bind((hostname, port)) File "/usr/lib/python2.7/socket.py", line 224, in meth return getattr(self._sock,name)(*args)socket.error: [Errno 98] Address already in use | This means this port on the address you're trying to use (presumably localhost) is already being used by another process. What to do to fix this:kill Python and restart your scriptor find a process that's using your port and kill ituse another port for your appwait for a few minutes, perhaps this port hasn't been 'freed' yet |
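If you just need the app to start, one option (a generic sketch, not Flask-specific) is to ask the OS for an unused port before calling `app.run()`:

```python
import socket

def find_free_port():
    # Binding to port 0 makes the OS pick an unused ephemeral port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(('127.0.0.1', 0))
        return s.getsockname()[1]

port = find_free_port()
print(port)
# app.run(debug=True, port=port)  # hypothetical use inside manage.py
```

Note the port is only reserved while the socket is bound, so there is a small race window between finding it and handing it to Flask, which is fine for development use.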
Using page text to select `html` element using`Beautiful Soup` I have a page which contains several repetitions of: <div...><h4>...<p>... For example:html = '''<div class="proletariat"><h4>sickle</h4><p>Ignore this text</p></div><div class="proletariat"><h4>hammer</h4><p>This is the text we want</p></div>'''from bs4 import BeautifulSoupsoup = BeautifulSoup(html)If I write print soup.select('div[class^="proletariat"] > h4 ~ p'), I get:[<p>Ignore this text</p>, <p>This is the text we want</p>]How do I specify that I only want the text of p when it is preceded by <h4>hammer</h4>? Thanks | :contains() could help here, but it is not supported.Taking this into account, you can use select() in conjunction with the find_next_sibling():print next(h4.find_next_sibling('p').text for h4 in soup.select('div[class^="proletariat"] > h4') if h4.text == "hammer") |
Split pandas Series rows containing multiline strings into separate rows I have a pandas Series that is filled with strings like this:In: s = pd.Series(['This is a single line.', 'This is another one.', 'This is a string\nwith more than one line.'])Out:0 This is a single line.1 This is another one.2 This is a string\nwith more than one line.dtype: objectHow can I split all rows in this Series that contain the linebreak character \n into rows of their own? What I would expect is:0 This is a single line.1 This is another one.2 This is a string3 with more than one line.dtype: objectI know that I can split each row by the linebreak character withs = s.str.split('\n')which gives 0 [This is a single line.]1 [This is another one.]2 [This is a string, with more than one line.]but this only breaks the string within the row, not into rows of their own for each token. | You could loop over each string in each row to create a new series:pd.Series([j for i in s.str.split('\n') for j in i])It might make more sense to do this on the input rather than creating a temporary series, e.g.:strings = ['This is a single line.', 'This is another one.', 'This is a string\nwith more than one line.']pd.Series([j for i in strings for j in i.split('\n')]) |
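The flattening step is easy to check in plain Python before putting the result back into a Series:

```python
strings = ['This is a single line.',
           'This is another one.',
           'This is a string\nwith more than one line.']

# one output row per '\n'-separated part of each input row
flat = [part for s in strings for part in s.split('\n')]
print(flat)
```

Recent pandas versions can also do this without the comprehension, via `s.str.split('\n').explode().reset_index(drop=True)`, if `Series.explode` is available in your version.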
Writing camera matrix into xml/yaml file I am using opencv and pythonI have calibrated my camera having the following parameters:camera_matrix=[[ 532.80990646 ,0.0,342.49522219],[0.0,532.93344713,233.88792491],[0.0,0.0,1.0]] dist_coeff = [-2.81325798e-01,2.91150014e-02,1.21234399e-03,-1.40823665e-04,1.54861424e-01]I am working in python.I wrote the following code to save the above into a file but the file was like a normal text file.f = open("../calibration_camera.xml","w")f.write('Camera Matrix:\n'+str(camera_matrix))f.write('\n')f.write('Distortion Coefficients:\n'+str(dist_coefs))f.write('\n')f.close()How can i save this data into an xml/yaml file using python commands thus getting the desired output.Please help. Thanks in advance | Using JSONJSON seems to be the easiest format for serialization in your casecamera_matrix=[[ 532.80990646 ,0.0,342.49522219],[0.0,532.93344713,233.88792491],[0.0,0.0,1.0]]dist_coeff = [-2.81325798e-01,2.91150014e-02,1.21234399e-03,-1.40823665e-04,1.54861424e-01]data = {"camera_matrix": camera_matrix, "dist_coeff": dist_coeff}fname = "data.json"import jsonwith open(fname, "w") as f: json.dump(data, f)data.json:{"dist_coeff": [-0.281325798, 0.0291150014, 0.00121234399, -0.000140823665, 0.154861424], "camera_matrix": [[532.80990646, 0.0, 342.49522219], [0.0, 532.93344713, 233.88792491], [0.0, 0.0, 1.0]]}Using YAMLYAML is best option, if you expect human editing of the contentIn contrast to json module, yaml is not part of Python and must be installed first:$ pip install pyyamlHere goes the code to save the data:fname = "data.yaml"import yamlwith open(fname, "w") as f: yaml.dump(data, f)data.yaml:camera_matrix:- [532.80990646, 0.0, 342.49522219]- [0.0, 532.93344713, 233.88792491]- [0.0, 0.0, 1.0]dist_coeff: [-0.281325798, 0.0291150014, 0.00121234399, -0.000140823665, 0.154861424]Using XMLMy example is using my favourite lxml package, other XML packages are also available.from lxml import etreefrom lxml.builder import Ecamera_matrix=[[ 
532.80990646 ,0.0,342.49522219],[0.0,532.93344713,233.88792491],[0.0,0.0,1.0]]dist_coeff = [-2.81325798e-01,2.91150014e-02,1.21234399e-03,-1.40823665e-04,1.54861424e-01]def triada(itm): a, b, c = itm return E.Triada(a = str(a), b = str(b), c = str(c))camera_matrix_xml = E.CameraMatrix(*map(triada, camera_matrix))dist_coeff_xml = E.DistCoef(*map(E.Coef, map(str, dist_coeff)))xmldoc = E.CameraData(camera_matrix_xml, dist_coeff_xml)fname = "data.xml"with open(fname, "w") as f: f.write(etree.tostring(xmldoc, pretty_print=True))data.xml:<CameraData> <CameraMatrix> <Triada a="532.80990646" c="342.49522219" b="0.0"/> <Triada a="0.0" c="233.88792491" b="532.93344713"/> <Triada a="0.0" c="1.0" b="0.0"/> </CameraMatrix> <DistCoef> <Coef>-0.281325798</Coef> <Coef>0.0291150014</Coef> <Coef>0.00121234399</Coef> <Coef>-0.000140823665</Coef> <Coef>0.154861424</Coef> </DistCoef></CameraData>You shall play a bit with the code to format strings representing the numbers with proper precision. This I leave to you. |
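Whichever format you pick, a quick round-trip check confirms the calibration numbers survive serialization. With the JSON variant this needs only the standard library:

```python
import json

camera_matrix = [[532.80990646, 0.0, 342.49522219],
                 [0.0, 532.93344713, 233.88792491],
                 [0.0, 0.0, 1.0]]
dist_coeff = [-2.81325798e-01, 2.91150014e-02, 1.21234399e-03,
              -1.40823665e-04, 1.54861424e-01]
data = {"camera_matrix": camera_matrix, "dist_coeff": dist_coeff}

text = json.dumps(data, indent=2)   # what gets written to the file
restored = json.loads(text)         # what a later script reads back
assert restored == data             # the floats survive the round trip exactly
```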
pandas: comparing non-identical list of panda dataframes based on values from a certain column I have a two lists of panda dataframes as follows,import pandas as pdimport numpy as nplist_one = [pd.DataFrame({'sent_a.1': [0, 3, 2, 1], 'sent_a.2': [0, 1, 4, 0], 'sent_b.3': [0, 6, 0, 8],'sent_b.4': [1, 1, 8, 6],'ID':['id_1','id_1','id_1','id_1']}), pd.DataFrame({'sent_a.1': [0, 3], 'sent_a.2': [0, 2], 'sent_b.3': [0, 6],'sent_b.4': [1, 1],'ID':['id_2','id_2']})]list_two = [pd.DataFrame({'sent_a.1': [0, 5], 'sent_a.2': [0, 1], 'sent_b.3': [0, 6],'sent_b.4': [1, 1],'ID':['id_2','id_2']}), pd.DataFrame({'sent_a.1': [0, 5, 3, 1], 'sent_a.2': [0, 2, 3, 1], 'sent_b.3': [0, 6, 6, 8],'sent_b.4': [1, 5, 8, 5],'ID':['id_1','id_1','id_1','id_1']})]I would like to compare the dataframes in these two lists and if the values are the same, I would like to replace the value with 'True' and if the values are different, I would like to set them to 'False' and save the result in a different list of panda dataframes. I have done the following,for dfs in list_one: for dfs2 in list_two: g = np.where(dfs == dfs2, 'True', 'False') print (g)but I get the error,ValueError: Can only compare identically-labeled DataFrame objectshow can I sort values in these two lists, based on the values from column 'ID'?EditI would like the dataframes that have the same value for column 'ID' to be compared. 
meaning that dataframes that have 'ID' == 'id_1' are to be compared with one another and dataframes that have 'ID' == 'id_2' to be compared with each other (not a cross comparison)so the desired output is:output = [ sent_a.1 sent_a.2 sent_b.3 sent_b.4 ID 0 True True True True id_1 1 False False True False id_1 2 False False False True id_1 3 False False True True id_1, sent_a.1 sent_a.2 sent_b.3 sent_b.4 ID 0 True True True True id_2 1 True True False False id_2] | Based on your current exampleFor your first question:how can I sort values in these two lists, based on the values from column 'ID'?list_one = sorted(list_one,key=lambda x: x['ID'].unique()[0][3:], reverse=False)list_two =sorted(list_two,key=lambda x: x['ID'].unique()[0][3:], reverse=False)ValueError: Can only compare identically-labeled DataFrame objectserror due to different index values order in dataframes or dataframes are of different shapesFirst way of comparison:for dfs in list_one: for dfs2 in list_two: if dfs.shape == dfs2.shape: g = np.where(dfs == dfs2, 'True', 'False') print (g)Second way:I would like the dataframes that have the same value for column 'ID' to be comparedfor dfs in list_one: for dfs2 in list_two: if (dfs['ID'].unique() == dfs2['ID'].unique()) and (dfs.shape == dfs2.shape): g = np.where(dfs == dfs2, 'True', 'False') print (g) |
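A note on the sort key above: `x['ID'].unique()[0][3:]` slices off the text after `'id_'`, which then compares as a string. A tiny illustration (plain lists, no pandas) of why casting to `int` matters once the ids pass single digits:

```python
ids = ['id_10', 'id_2', 'id_1']

print(sorted(ids, key=lambda s: s[3:]))       # string keys: ['id_1', 'id_10', 'id_2']
print(sorted(ids, key=lambda s: int(s[3:])))  # numeric keys: ['id_1', 'id_2', 'id_10']
```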
Using Flask, trying to get AJAX to update a span after updating mongo record, but it's opening a new page Feel like I am stumbling over something fairly simple here.I am not understanding something about AJAX and Flask.I have a project wherein I display mongodb records in the browser, which has been working fine.I added functionality for users to increment votes on a record; to Vote it up if they like it. But originally I was then refreshing the entire page with the new vote, using a redirect, which is clumsy. So I am trying to get AJAX to send the data over to the mongodb record and then update the span where I want the votes to appear without having to reload the entire page.Problem is, the setup I have going, while still updating the record, is now loading a new page with the HTML i want returned only to the span where the vote tally should be (that is, it's loading a new page with only the word "test" in it (the test value I am currently returning)).The jQuery (the library I am using) is loading fine and there are no other problems (as far as I can tell).I have the relevant HTML and JS here:<!-- All Standard HTML up here, removed for simplicity --><script> $('#vote_link').bind('click', function(e){ e.preventDefault(); var url = $(this).attr('href'); $('#vote_tally').load(url); });</script><a href='/vote_up/{{ item._id }}' id='vote_link'>Vote for Me!</a><br>Likes: <span id='vote_tally'>{{ item.votes }}</span><!-- All Standard HTML down here, removed for simplicity -->and the python is here: from flask import Flask, render_template, request, redirect, flash, jsonify#from mongokit import Connection, Document#from flask.ext.pymongo import PyMongofrom pymongo import Connection#, json_util#from pymongo.objectid import ObjectId #this is deprecatedimport bson.objectid'''my pymongo connection - removed for simplicity''''''bunch of other routes - also removed for same reason'''#increment a vote@app.route('/vote_up/<this_record>')def vote_up(this_record): 
vandalisms.update({'_id':bson.objectid.ObjectId(this_record)}, {"$inc" : { "votes": 1 }}, upsert=True) ''' also trying to return value for votes field from mongo record, but one step at a time here ''' #result = vandalisms.find({'_id':bson.objectid.ObjectId(this_record)}, {'votes':1}) result = 'test' return resultI am also having trouble figuring out how to return the individual vote value for the specified mongodb record back to the browser, even with jsonify (which returns {"votes":'_id'}, but that's another issue. Hopefully someone can help me understand how to make AJAX work for me with Flask in this regard.Thanks in advance,Edit-24Jul2012-2:27PM CST:I suspect that the jQuery isn't even activating. It seems to be loading the new page based on the link's href attribute, hence it's no use to have e.prevenDefault(); when that's not being run. Furthermore, an alert('I have been clicked'); never runs when the click event takes place. Again, the jQuery is loaded, but the click event is not activating the jQuery, and I don't know why not. | My guess (based on your edit) is that you have more than one element on the page with the ID of vote_link - this is not allowed in HTML (the ID property must be unique across the document). If you want to have multiple links sharing the same behavior use a class instead ($(".vote_link") for example). |
Not using all python sys.argv New to python, but my question is about sys.argv.I have a program that I want to execute different sets of code depending on how many arguments are passed to it. python test.py hello awesome worldwould run a different set of code frompython test.py hello worldIf I define 3 sys.argv entries then it expects 3 arguments every time; otherwise I get: IndexError: list index out of range | Wrap it in if statements:if len(sys.argv) == 1: #do somethingelif len(sys.argv) == 2: #do something elseelif len(sys.argv) == 3: #do something differentelse: #do the last possibility
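Pulling the branching into a function that takes the argv list makes it testable without running the script; remember that `argv[0]` is the script name itself. A sketch (the mode names are made up):

```python
import sys

def dispatch(argv):
    nargs = len(argv) - 1  # argv[0] is the script name, not an argument
    if nargs == 2:
        return 'two-arg mode: {} {}'.format(argv[1], argv[2])
    elif nargs == 3:
        return 'three-arg mode: {} {} {}'.format(argv[1], argv[2], argv[3])
    else:
        return 'usage: test.py WORD [WORD] WORD'

if __name__ == '__main__':
    print(dispatch(sys.argv))
```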
Django -- Process Multiple Form Fields I am very new to Python / Django and would appreciate any and all help I can get here! I am trying to take in multiple form fields and haven't been able to find a great clean way to do so. My code is trying to take in a foreign Key radio selection (the team), and a number (the bet size), for each instance. I ended up creating the code to iterate over the request.POST.items to determine which game each team belongs to, which is working fine, however I am having trouble taking in the input for the Bet size as the "value" field is already being assigned to each game. I have debated using a model form instead of the methodology I have chosen, but cannot find a great way to take in the foreign key data. How would you suggest altering the code to take in the bet size field? Is there an alternative way to process using Model Forms that you would suggest?Please find my code below!Thanks in advanceViews.py:def pick_game(request):# Check what kind of request this is? GET/POST?if request.method == 'GET': game_list = Game.objects.order_by('-picks') # form = PickGameForm() page_variables = { "game_list": game_list, 'form':form } return render(request, 'social/pickGame.html', page_variables)else: for key, value in request.POST.items(): print(key, value) if "choice" in key: game_id = int(key.split("_")[1]) team_id = int(value) game = Game.objects.get(pk=game_id) team = Team.objects.get(pk=team_id) PlayerPick.objects.create( player_profile=request.user.playerprofile, game=game, team=team, bet_size=bet_size ) else: bet_size = request.POST.get('Bet')pickGame.html:<h1>Games</h1><form method="post"> {% csrf_token %} {% for game in game_list %} <h2>Game {{ game.number }}</h2> <p><input type="radio" name="{{ game.pk }}" value="{{ game.team1.pk }}"> {{ game.team1.name }}</p> <p><input type="radio" name="{{ game.pk }}" value="{{ game.team2.pk }}"> {{ game.team2.name }}</p> <p><input type="number" name="Bet"> How much Money? 
</p> {% endfor %} <hr> <button type="submit">Submit</button></form>PlayerPick model:class PlayerPick(models.Model):player_profile = models.ForeignKey('PlayerProfile')team = models.ForeignKey('Team')game = models.ForeignKey('Game')bet_size = models.IntegerField(default=0, blank=True)correct = models.BooleanField(default=False, blank=True)pick_time = models.DateTimeField(auto_now_add=True) | I think your manual approach is quite OK, and all you have to do is find a way to uniquely identify the Bet field for each game. You could to this in your html:<input type="number" name="{{game.pk}}-Bet">And then get the value in your view just before creating your PlayerPick object:bet_size = request.POST.get('%s-Bet' % game.pk)If you want to use Django Forms, you can also imitate this behaviour by using the prefix parameter when creating your form. You have to define this both for the unbound and bound forms, so that the prefixed field names can be recognized:# When creating the forms and passing them to the templatePlayerPickForm(prefix=str(game.pk)+'-')# When verifying posted dataPlayerPickForm(request.POST, prefix=str(game.pk)+'-') |
Last occurrence of comma in python dataframe please help me replace the last occurrence of a comma with & in each rowDF['MSG'] =0 20.00, 20.001 4.00, 3.00, 2.002 100.003 10.00, 70.00, 10.004 10.00, 10.00, 10.00, 10.00, 10.005 99.006 50.00, 50.007 70.008 10.00, 20.00, 65.00output is:0 20.00, 20.001 4.00, 3.00& 2.002 100.003 10.00, 70.00& 10.004 10.00, 10.00, 10.00, 10.00& 10.005 99.006 50.00, 50.007 70.008 10.00, 20.00& 65.00If a row contains two or more commas, the last comma should become &, as in the expected output above. Please help me | Assuming it's a clean list of numbers, you can change it to a string like this:list_of_numbers = [1, 2, 3, 4]print(', '.join([str(i) for i in list_of_numbers[:-1]]) + f" & {list_of_numbers[-1]}")gives1, 2, 3 & 4
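If the column already holds comma-separated strings (as in `DF['MSG']`), another way to get the expected output is `str.rsplit`, replacing the last comma only when a row has at least two of them:

```python
def amp_last_comma(s):
    # leave rows with zero or one comma untouched, per the expected output
    if s.count(',') < 2:
        return s
    head, tail = s.rsplit(',', 1)   # split once, from the right
    return head + '&' + tail

print(amp_last_comma('10.00, 20.00, 65.00'))  # 10.00, 20.00& 65.00
print(amp_last_comma('20.00, 20.00'))         # unchanged
```

Applied to the dataframe, this would presumably be something like `DF['MSG'] = DF['MSG'].apply(amp_last_comma)`.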
Remove lines containing numbers attached to letters with Python I have a txt file containing one sentence per line, and there are lines containing numbers attached to letters. For instance:The boy3 was strolling on the beach while four seagulls appeared flying.There were 3 women sunbathing as well.All children were playing happily.I would like to remove lines like the first one (i.e. having numbers stuck to words) but not lines like the second, which are properly written.Has anybody got a slight idea? | You can use a simple regex pattern. We start with [0-9]+. This pattern matches any digit 0-9 repeated one or more times, so 6, 56, or 56790 all work. If you want to detect sentences that have numbers attached to a word you could use something like this: ([a-zA-Z][0-9]+)|([0-9]+[a-zA-Z]) This pattern matches a letter immediately before a number or immediately after one. You can search strings using:import relines = [ 'The boy3 was strolling on the beach while four seagulls appeared flying.', 'There were 3 women sunbathing as well.',]for line in lines: res = re.search("([a-zA-Z][0-9]+)|([0-9]+[a-zA-Z])", line) if res is not None: # drop this line, it has a number attached to a wordHowever you can add more characters to the allowed letters if your sentences can include special characters and such.
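A complete, runnable version of that filter, keeping the clean sentences and dropping the ones with digits glued to letters:

```python
import re

# a letter directly followed by a digit, or a digit directly followed by a letter
pattern = re.compile(r'[A-Za-z][0-9]|[0-9][A-Za-z]')

lines = [
    'The boy3 was strolling on the beach while four seagulls appeared flying.',
    'There were 3 women sunbathing as well.',
    'All children were playing happily.',
]

kept = [line for line in lines if not pattern.search(line)]
print(kept)  # only the two properly written sentences remain
```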
Pandas - select top N In Pandas I have separated my data by type and I need to summarize the frequency of the categorical data. I need to get all levels up to 50 levels. Right now I have something like this (example data follows):# Librariesimport numpy as npimport pandas as pd# Categorical variablesdf = pd.DataFrame(np.random.randint(low = 0, high = 1000000, size = (1000, 2)), columns=['CASE_NUMBER', 'CLIENT_ID'])df['CASE_NUMBER'] = df['CASE_NUMBER'].apply(str)df['CLIENT_ID'] = df['CLIENT_ID'].apply(str)df['PRODUCTCATEGORY'] = np.random.randint(low=0, high=2, size=(1000, 1))df['PRODUCTTYPE'] = np.random.randint(low=0, high=2, size=(1000, 1))df['PRODUCTTYPE'] = np.random.randint(low=0, high=2, size=(1000, 1))df['PRODUCT_CATEGORY_DESC'] = np.random.randint(low=0, high=2, size=(1000, 1))df['PRODUCT_DESC'] = np.random.randint(low=0, high=2, size=(1000, 1))df.loc[df['PRODUCTCATEGORY'] == 0 , 'PRODUCTCATEGORY'] = "AC2"df.loc[df['PRODUCTCATEGORY'] == 1 , 'PRODUCTCATEGORY'] = "AC1"df.loc[df['PRODUCTTYPE'] == 0 , 'PRODUCTTYPE'] = "AT2"df.loc[df['PRODUCTTYPE'] == 1 , 'PRODUCTTYPE'] = "AT1"df.loc[df['PRODUCT_CATEGORY_DESC'] == 0 , 'PRODUCT_CATEGORY_DESC'] = "Revocable"df.loc[df['PRODUCT_CATEGORY_DESC'] == 1 , 'PRODUCT_CATEGORY_DESC'] = "Irrevocable"df.loc[df['PRODUCT_DESC'] == 0 , 'PRODUCT_DESC'] = "Immediate"df.loc[df['PRODUCT_DESC'] == 1 , 'PRODUCT_DESC'] = ""I made some very ugly way attempts that started something like what's below, but asides from being verbose it is slow and also adds unnecessary rows if the max number of levels in all columns is < 50:e = df.describe()table2 = pd.DataFrame({ 'Variable Name': e.columns, })for n in e.columns: for i in range(50): grouped = df.groupby([n]).size().reset_index() grouped = grouped.sort_values(0, ascending=False) table2 = pd.concat([table2, grouped], ignore_index=True, axis=1)Here is an example of what I'm ultimately going for (note: the counts are made up numbers that do not really correspond the the above data). 
You do not have to handle Variable Name and Percent (but bonus points for you if you do!): | The key to the solution was in a comment from @JonClements:table2 = df.melt().groupby(['variable', 'value']).size() From there I just added some logic to truncate and transform the results:table2 = table2.to_frame(name='Count')table2 = table2.reset_index(inplace=False)table2['Percent'] = table2['Count'] / len(df.index)for v in table2['variable'].unique(): tmp = table2[table2.variable.str.contains(v) == True] table2 = table2[table2.variable.str.contains(v) == False] if tmp.shape[0] > 50: tmp0 = tmp.iloc[:50,] tmp1 = pd.DataFrame([{'variable':v, 'value': 'Other', 'Count':tmp.shape[0]-50, 'Percent':sum(tmp0['Percent']) }]) tmp = tmp0.append(tmp1) table2 = table2.append(tmp)print(table2) |
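The melt-then-count step is essentially a frequency table over (column, value) pairs. A pandas-free sketch of the same idea with `collections.Counter` (toy rows, not the random data above):

```python
from collections import Counter

rows = [
    {'PRODUCTCATEGORY': 'AC1', 'PRODUCTTYPE': 'AT2'},
    {'PRODUCTCATEGORY': 'AC1', 'PRODUCTTYPE': 'AT2'},
    {'PRODUCTCATEGORY': 'AC2', 'PRODUCTTYPE': 'AT1'},
]

counts = Counter((col, val) for row in rows for col, val in row.items())
top = counts.most_common(50)  # cap at the 50 most frequent levels
percents = {key: n / len(rows) for key, n in counts.items()}
print(top)
```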
django output empty csv I'm using django and I'm trying to export the CSV_data list into csv file. Below is my csv.py:#coding=utf-8from django.http import HttpResponsefrom django.template import loader, Contextfrom demo.views import CSV_datadef output(request, filename): response = HttpResponse(mimetype='text/csv') response['Content-Disposition'] = 'attachment; filename=%s.csv' % filename t = loader.get_template('csv.txt') c = Context({ 'data': CSV_data, }) response.write(t.render(c)) return responseCSV_data is a variable in views.py, I tried to print it in template, the value is ok. [(u'2012-06-01', [0, 0, 0]), ('2012-06-08', [0, 0, 0]), ('2012-06-15', [0, 0, 0]), ('2012-06-22', [0, 0, 0]), ('2012-06-29', [0, 0, 0]), ('2012-07-06', [0, 0, 0]), ('2012-07-13', [0, 0, 0]), ('2012-07-20', [0, 0, 0]), ('2012-07-27', [0, 0, 0]), ('2012-08-03', [131, 164, 79.88]), ('2012-08-10', [110, 198, 55.56]), ('2012-08-17', [112, 197, 56.85]), ('2012-08-24', [147, 283, 51.94]), ('2012-08-31', [0, 306, 0.0]), ('2012-09-07', [418, 418, 100.0]), ('2012-09-14', [342, 342, 100.0]), ('2012-09-21', [732, 732, 100.0]), ('2012-09-28', [689, 689, 100.0]), ('2012-10-05', [775, 775, 100.0]), ('2012-10-12', [469, 469, 100.0]), ('2012-10-19', [477, 477, 100.0]), ('2012-10-26', [897, 897, 100.0]), ('2012-11-02', [216, 216, 100.0]), ('2012-11-09', [1046, 1046, 100.0]), ('2012-11-16', [840, 840, 100.0]), ('2012-11-23', [948, 948, 100.0])]However, the generated csv is always empty.I tried to add the CSV_data definition to the csv.py file, like this:#coding=utf-8from django.http import HttpResponsefrom django.template import loader, ContextCSV_data = [(u'2012-06-01', [0, 0, 0]), ('2012-06-08', [0, 0, 0]), ('2012-06-15', [0, 0, 0]), ('2012-06-22', [0, 0, 0]), ('2012-06-29', [0, 0, 0]), ('2012-07-06', [0, 0, 0]), ('2012-07-13', [0, 0, 0]), ('2012-07-20', [0, 0, 0]), ('2012-07-27', [0, 0, 0]), ('2012-08-03', [131, 164, 79.88]), ('2012-08-10', [110, 198, 55.56]), ('2012-08-17', [112, 197, 56.85]), 
('2012-08-24', [147, 283, 51.94]), ('2012-08-31', [0, 306, 0.0]), ('2012-09-07', [418, 418, 100.0]), ('2012-09-14', [342, 342, 100.0]), ('2012-09-21', [732, 732, 100.0]), ('2012-09-28', [689, 689, 100.0]), ('2012-10-05', [775, 775, 100.0]), ('2012-10-12', [469, 469, 100.0]), ('2012-10-19', [477, 477, 100.0]), ('2012-10-26', [897, 897, 100.0]), ('2012-11-02', [216, 216, 100.0]), ('2012-11-09', [1046, 1046, 100.0]), ('2012-11-16', [840, 840, 100.0]), ('2012-11-23', [948, 948, 100.0])]def output(request, filename): response = HttpResponse(mimetype='text/csv') response['Content-Disposition'] = 'attachment; filename=%s.csv' % filename t = loader.get_template('csv.txt') c = Context({ 'data': CSV_data, }) response.write(t.render(c)) return responseThen the output csv is not empty. So I guess there's something wrong when import CSV_data from views.py.The problem is I've tested that CSV_data value in views is correct. So what could go wrong?****************UPDATE****************: original code in views.py is like: CSV_data = [] def part_usage_result(request): ...(details omit) usageDictWeek = helper.getResultByWeek(modelName, spareCode, start, end) #returns a list CSV_data=usageDictWeekI change to: CSV_data = [] def part_usage_result(request): ...(details omit) usageDictWeek = helper.getResultByWeek(modelName, spareCode, start, end) #returns a list for each in usageDictWeek: CSV_data.append(each)Now the content of csv is correct.Still don't know why this happens | As you didn't provide helper.getResultByWeek details and how it is called, I guess it returns a global variable with a list value, and this variable is modified somewhere in between. CSV_data = usageDictWeekdo not copy a list, but creates another reference to existing one. When later original usageDictWeek is modified, CSV_data is modified as well.When you do instead CSV_data[:] = usageDictWeek a new copy of usageDictWeek is created and assigned to CSV_data. |
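The aliasing the answer describes is easy to demonstrate outside Django:

```python
original = [1, 2, 3]

alias = original        # a second name for the SAME list object
alias.append(4)
print(original)         # [1, 2, 3, 4] -- the 'copy' was never a copy

snapshot = []
snapshot[:] = original  # slice assignment copies the contents instead
snapshot.append(5)
print(original)         # [1, 2, 3, 4] -- unaffected this time
```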
Flask request empty after redirect Using Flask, I'm able to access request.form data in the function poll(), but after a redirect, request.form is empty. I'm sure this is intentional and I have to explicitly pass this, but how?from flask import render_template, redirect, requestfrom app import appfrom forms import PollForm@app.route('/poll', methods = ['GET', 'POST'])def poll(): form = PollForm() if form.validate_on_submit(): print request.form # returns ImmutableMultiDict with data return redirect('/details') return render_template('poll.html', form=form)@app.route('/details')def details(): print request.form # returns empty ImmutableMultiDict return render_template('details.html') | It's common to redirect from a POST, but you shouldn't need your form data anymore in the details function.You should process the form submission in the poll function and then redirect to details, which I assume would display some updated data - e.g. from a database.@app.route('/poll', methods = ['GET', 'POST'])def poll(): form = PollForm() if form.validate_on_submit(): # use request.form to update your database return redirect('/details') return render_template('poll.html', form=form)@app.route('/details')def details(): # query the database to show the updated poll return render_template('details.html') |
Python Scrapy not always downloading data from website Scrapy is used to parse an html page. My question is why sometimes scrapy returns the response I want, but sometimes does not return a response. Is it my fault? Here's my parsing function:class AmazonSpider(BaseSpider): name = "amazon" allowed_domains = ["amazon.org"] start_urls = [ "http://www.amazon.com/s?rh=n%3A283155%2Cp_n_feature_browse-bin%3A2656020011" ]def parse(self, response): sel = Selector(response) sites = sel.xpath('//div[contains(@class, "result")]') items = [] titles = {'titles': sites[0].xpath('//a[@class="title"]/text()').extract()} for title in titles['titles']: item = AmazonScrapyItem() item['title'] = title items.append(item) return items | I believe you are just not using the most adequate XPath expression. Amazon's HTML is kinda messy, not very uniform and therefore not very easy to parse. But after some experimenting I could extract all the 12 titles of a couple of search results with the following parse function:def parse(self, response): sel = Selector(response) p = sel.xpath('//div[@class="data"]/h3/a') titles = p.xpath('span/text()').extract() + p.xpath('text()').extract() items = [] for title in titles: item = AmazonScrapyItem() item['title'] = title items.append(item) return itemsIf you care about the actual order of the results the above code might not be appropriate but I believe that is not the case. |
Swapping values of two lists based on given index I have a list which consists of two numpy arrays, the first one telling the index of a value and the second containing the belonging value itself. It looks a little like this:x_glob = [[0, 2], [85, 30]]A function is now receiving the following input:x = [-10, 0, 77, 54]My goal is to swap the values of x with the values from x_glob based on the given index array from x_glob. This example should result in something like this:x_new = [85, 0, 30, 54]I do have a solution using a loop. But I am pretty sure there is a way in python to solve this issue more efficiently and elegantly. Thank you! | NumPy arrays may be indexed with other arrays, which makes this replacement trivial.All you need to do is index your second array with x_glob[0], and then assign x_glob[1]x[x_glob[0]] = x_glob[1]To see how this works, just look at the result of the indexing:>>> x[x_glob[0]]array([-10, 77])The result is an array containing the two values that we need to replace, which we then replace with another numpy array, x_glob[1], to achieve the desired result.>>> x_glob = np.array([[0, 2], [85, 30]])>>> x = np.array([-10, 0, 77, 54])>>> x[x_glob[0]] = x_glob[1]>>> xarray([85, 0, 30, 54])
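For plain Python lists (no NumPy), the same index-based replacement is a short zip loop:

```python
x_glob = [[0, 2], [85, 30]]   # [indices, replacement values]
x = [-10, 0, 77, 54]

indices, values = x_glob
for i, v in zip(indices, values):
    x[i] = v

print(x)  # [85, 0, 30, 54]
```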
How to make a customized grouped dataframe with multiple aggregations I have a standard dataframe like the one below : Id Type Speed Efficiency Durability0 Id001 A OK OK nonOK1 Id002 A nonOK OK nonOK2 Id003 B nonOK nonOK nonOK3 Id004 B nonOK nonOK OK4 Id005 A nonOK nonOK OK5 Id006 A OK OK OK6 Id007 A OK nonOK OK7 Id008 B nonOK nonOK OK8 Id009 C OK OK OK9 Id010 B OK OK nonOK10 Id011 C OK nonOK OK11 Id012 C OK nonOK OK12 Id013 C nonOK OK OK13 Id014 C nonOK nonOK OK14 Id015 C nonOK nonOK OKAnd I'm trying to get this kind of output : Type Test Speed Efficiency Durability0 A OK 3 3 31 A nonOK 2 2 22 B OK 1 1 23 B nonOK 3 3 24 C OK 3 2 65 C nonOK 3 4 0I tried with df.groupby('Type').agg('count') but it doesn't give the expected output.Is it possible to make this kind of transformation with pandas, please ? | You can also use the following solution using pandas method chaining:import pandas as pd(pd.melt(df, id_vars='Type', value_vars=['Speed', 'Efficiency', 'Durability'], value_name='Test') .groupby(['Type', 'Test', 'variable']) .size() .reset_index() .pivot(index=['Type', 'Test'], columns='variable', values=0) .reset_index())variable Type Test Durability Efficiency Speed0 A OK 3.0 3.0 3.01 A nonOK 2.0 2.0 2.02 B OK 2.0 1.0 1.03 B nonOK 2.0 3.0 3.04 C OK 6.0 2.0 3.05 C nonOK NaN 4.0 3.0 |
Machine learning with vectors in both features and target How can I train a model with vectors/arrays as features? I seem to consistently getting errors when doing this...My feature matrix would look something like this: A B C Profile0 1 4 4 [1,2,3,4]1 2 4 5 [2,2,4,1]while my target vector would look something like this:0 [0,4,5,0]1 [1,5,6,0]etc etc but I'm having trouble with fit(x, y) when using linear_regression from sklearn. Here is the output to print(x) and print(y):x:Beams/Beam[0]/Parameters/Energy Beams/Beam[0]/Parameters/BunchPopulation Beams/Beam[0]/BunchShape/Parameters/LongitudinalSigmaLabFrame Simulation/NumberOfParticles initialXHist0 25.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...1 25.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...2 25.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...3 25.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...4 25.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...... ... ... ... ... ...995 26.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...996 26.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...997 26.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...998 26.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...999 26.0 1.300000e+11 1.05 5000 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...1000 rows × 5 columnsy:0 [8, 4, 6, 13, 5, 5, 10, 11, 15, 9, 19, 18, 16,...1 [6, 5, 8, 8, 9, 12, 6, 20, 9, 20, 18, 12, 24, ...2 [6, 6, 7, 8, 13, 10, 12, 7, 14, 14, 18, 24, 16...3 [2, 5, 10, 3, 6, 8, 13, 12, 7, 18, 12, 20, 22,...4 [5, 3, 5, 9, 8, 8, 8, 9, 14, 13, 10, 15, 21, 1... ... 
995 [2, 9, 4, 5, 10, 5, 10, 15, 16, 13, 12, 13, 21...996 [2, 3, 5, 5, 11, 15, 18, 15, 14, 13, 16, 17, 1...997 [4, 5, 6, 8, 5, 7, 7, 26, 13, 16, 17, 16, 17, ...998 [1, 3, 5, 7, 5, 6, 16, 10, 17, 12, 12, 18, 24,...999 [3, 4, 8, 9, 8, 4, 14, 17, 11, 16, 7, 20, 14, ...Name: finalXHist, Length: 1000, dtype: objectCan anyone advise? The error I get is: ---------------------------------------------------------------------------TypeError Traceback (most recent call last)TypeError: only size-1 arrays can be converted to Python scalarsThe above exception was the direct cause of the following exception:ValueError Traceback (most recent call last)/tmp/ipykernel_826/1502489859.py in <module> 3 4 # Train the model using the training sets----> 5 regr.fit(X_train, y_train) 6 7 # Make predictions using the testing set/cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/linear_model/_base.py in fit(self, X, y, sample_weight) 516 accept_sparse = False if self.positive else ['csr', 'csc', 'coo'] 517 --> 518 X, y = self._validate_data(X, y, accept_sparse=accept_sparse, 519 y_numeric=True, multi_output=True) 520 /cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/base.py in _validate_data(self, X, y, reset, validate_separately, **check_params) 431 y = check_array(y, **check_y_params) 432 else:--> 433 X, y = check_X_y(X, y, **check_params) 434 out = X, y 435 /cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs) 61 extra_args = len(args) - len(all_args) 62 if extra_args <= 0:---> 63 return f(*args, **kwargs) 64 65 # extra_args > 0/cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/utils/validation.py in check_X_y(X, y, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, 
ensure_min_features, y_numeric, estimator) 869 raise ValueError("y cannot be None") 870 --> 871 X = check_array(X, accept_sparse=accept_sparse, 872 accept_large_sparse=accept_large_sparse, 873 dtype=dtype, order=order, copy=copy,/cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs) 61 extra_args = len(args) - len(all_args) 62 if extra_args <= 0:---> 63 return f(*args, **kwargs) 64 65 # extra_args > 0/cvmfs/sft.cern.ch/lcg/views/LCG_101swan/x86_64-centos7-gcc8-opt/lib/python3.9/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator) 671 array = array.astype(dtype, casting="unsafe", copy=False) 672 else:--> 673 array = np.asarray(array, order=order, dtype=dtype) 674 except ComplexWarning as complex_warning: 675 raise ValueError("Complex data not supported\n"ValueError: setting an array element with a sequence.I've tried googling it but no luck so far, I guess there is something wrong with the way these two objects are set up. | The error is being raised for X (third-to-last part of the traceback): you cannot have an array-valued feature. You need to do some feature engineering to generate a flat table of data to train on; whether that's flattening the arrays into individual features, or extracting some statistic based on those arrays, or something else depends on what those arrays mean (and would be a better question for datascience.SE or stats.SE).Having arrays for y may have a similar issue, but if treating them as individual outputs is what you're after, it becomes either a "multioutput" regression or a "multilabel" classification, which are handled by subsets of sklearn estimators. |
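Since the arrays here live inside DataFrame cells, one concrete fix for the X side is to explode the array column into one scalar column per element before fitting. A small sketch with invented miniature data (column names shortened from the question's):

```python
import pandas as pd

# miniature stand-in for the asker's frame: scalar features plus one array-valued column
df = pd.DataFrame({
    "Energy": [25.0, 26.0],
    "NumberOfParticles": [5000, 5000],
    "initialXHist": [[0, 1, 2, 3], [0, 2, 4, 6]],
})

# expand the array column into one numeric column per histogram bin
hist = pd.DataFrame(df["initialXHist"].tolist(),
                    columns=[f"initialXHist_{i}" for i in range(4)])
X = pd.concat([df.drop(columns="initialXHist"), hist], axis=1)

print(X.shape)  # (2, 6): every cell is now a scalar, so X is safe to pass to fit()
```

The same tolist() trick turns the array-valued y into a 2-D target matrix, at which point LinearRegression handles it natively as a multioutput regression.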
How to create the list inside the dictionary using python I am trying to Automate the dataset creation in quicksight using Boto3.but I am stuck some point . please any one help to solve this.Here my code :qs = boto3.client('quicksight')response = qs.describe_data_set( AwsAccountId='xxxxxxxx', DataSetId='testdatasetv4')columns =response['DataSet']['PhysicalTableMap']['string']['RelationalTable']['InputColumns']for dic in columns: for key in dic: print({dic[key]})I need a output like this:response1 = Client.create_data_set( AwsAccountId=data['AwsAccountId1'], DataSetId=data['DatasetId'], Name='testdataset', PhysicalTableMap={ 'string': { 'RelationalTable': { 'DataSourceArn':response['Arn'], 'Schema': 'public', 'Name': 'sales', 'InputColumns': [ { 'Name': 'salesid', 'Type': 'INTEGER' }, { 'Name': 'listid', 'Type': 'INTEGER' }, { 'Name': 'sellerid', 'Type': 'INTEGER' }, { 'Name': 'buyerid', 'Type': 'INTEGER' }, { 'Name': 'eventid', 'Type': 'INTEGER' }, { 'Name': 'dateid', 'Type': 'INTEGER' }, { 'Name': 'qtysold', 'Type': 'INTEGER' }, { 'Name': 'pricepaid', 'Type': 'DECIMAL' }, { 'Name': 'commission', 'Type': 'DECIMAL' }, { 'Name': 'saletime', 'Type': 'DATETIME' }, ] } } },How can I add the above Input columns through a code. I am able extract the input columns but I didn't any idea to add input columns . please help me to do this. | Here's an example of creating a dictionary and adding different nested elements. You'll need to adapt for solution.columns = ['key1', 'key2', 'key3']vals = ['1', '2', '3']mydict = {}mydict['firstkey'] = 1mydict['anotherkey'] = {}mydict['anotherkey']['secondkey'] = 2mydict['needalist'] = {}mydict['needalist']['mylist'] = [{k:vals[i]} for i, k in enumerate(columns)]mydict{'firstkey': 1, 'anotherkey': {'secondkey': 2}, 'needalist': {'mylist': [{'key1': '1'}, {'key2': '2'}, {'key3': '3'}]}} |
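To connect the two halves of this question: the InputColumns extracted from describe_data_set are already a list of {'Name', 'Type'} dicts, so they can be plugged into the create_data_set payload unchanged. A sketch with placeholder values (no boto3 call is made here, and the ARN is invented):

```python
# shape of response['DataSet']['PhysicalTableMap']['string']['RelationalTable']['InputColumns']
columns = [
    {"Name": "salesid", "Type": "INTEGER"},
    {"Name": "pricepaid", "Type": "DECIMAL"},
    {"Name": "saletime", "Type": "DATETIME"},
]

physical_table_map = {
    "string": {
        "RelationalTable": {
            "DataSourceArn": "arn:aws:quicksight:region:account:datasource/xyz",  # placeholder
            "Schema": "public",
            "Name": "sales",
            "InputColumns": columns,  # the extracted list drops in as-is
        }
    }
}

# this dict is what you would pass as PhysicalTableMap to create_data_set
print(physical_table_map["string"]["RelationalTable"]["InputColumns"])
```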
Regex for finding trigonometry function with variable I have the string:-15*sin(h)**2+121*sin(h)-216I'm currently usinginput_text = re.findall(r"sin|cos|tan|\d|\w|\(|\)|\+|-|\*+", input_text.strip().lower())to try to tokenize this string, but it returns the following:['-', '1', '5', '*', 'sin', '(', 'h', ')', '**', '2', '+', '1', '2', '1', '*', 'sin', '(', 'h', ')', '-', '2', '1', '6']Could someone help me modify my regex statement so I get['sin(h)']as a token instead of it being broken into['sin', '(', 'h', ')']On top of that could I use [a-zA-Z] so I can tokenize the trig functions for any letter? As in sin([a-zA-Z]) | Don't make (, \w, and ) alternatives to the trig functions, make them part of that same match.(?:sin|cos|tan)\(\w\)|\+|-|\*+ |
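As a quick check, here is the answer's pattern folded back into the full tokenizer, with the argument class widened to [a-zA-Z] as the asker proposed, and (my addition, an assumption about the desired output) \d+ so multi-digit numbers come out as single tokens:

```python
import re

expr = "-15*sin(h)**2+121*cos(x)-216"
# trig call first, then numbers, then operators; alternation tries left to right
pattern = r"(?:sin|cos|tan)\([a-zA-Z]\)|\d+|\*+|[+\-()]"

print(re.findall(pattern, expr))
# ['-', '15', '*', 'sin(h)', '**', '2', '+', '121', '*', 'cos(x)', '-', '216']
```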
How does GridSearchCV compute training scores? I'm having a hard time figuring out parameter return_train_score in GridSearchCV. From the docs:return_train_score : boolean, optional If False, the cv_results_ attribute will not include training scores.My question is: what are the training scores?In the following code I'm splitting data into ten stratified folds. As a consequence grid.cv_results_ contains ten test scores, namely 'split0_test_score', 'split1_test_score' , ..., 'split9_test_score'. I'm aware that each of those is the success rate obtained by a 5-nearest neighbors classifier that uses the corresponding fold for testing and the remaining nine folds for training.grid.cv_results_ also contains ten train scores: 'split0_train_score', 'split1_train_score' , ..., 'split9_train_score'. How are these values calculated?from sklearn import datasetsfrom sklearn.model_selection import GridSearchCVfrom sklearn.neighbors import KNeighborsClassifierfrom sklearn.model_selection import StratifiedKFold X, y = datasets.load_iris(True)skf = StratifiedKFold(n_splits=10, random_state=0)knn = KNeighborsClassifier()grid = GridSearchCV(estimator=knn, cv=skf, param_grid={'n_neighbors': [5]}, return_train_score=True)grid.fit(X, y)print('Mean test score: {}'.format(grid.cv_results_['mean_test_score']))print('Mean train score: {}'.format(grid.cv_results_['mean_train_score']))#Mean test score: [ 0.96666667]#Mean train score: [ 0.96888889] | It is the train score of the prediction model on all folds excluding the one you are testing on. In your case, it is the score over the 9 folds you trained the model on. |
html to pdf convertion css not working I try to convert the following page to pdflink using xhtml2pdf library for python.But the problem is the css styles are not working properly.How can i solve the problem ? | You need to write all css in header. Import will not work in pdf.<link href="//maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" rel="stylesheet" id="bootstrap-css">this need to be change like following:<style>/*! * Bootstrap v4.0.0 (https://getbootstrap.com) * Copyright 2011-2018 The Bootstrap Authors * Copyright 2011-2018 Twitter, Inc. * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) */:root{--blue:#007bff;--indigo:#6610f2;--purple:#6f42c1;--pink:#e83e8c;-- red:#dc3545;--orange:#fd7e14;--yellow:#ffc107;--green:#28a745;-- teal:#20c997;--cyan:#17a2b8;--white:#fff;--gray:#6c757d;--gray- dark:#343a40;--primary:#007bff;--secondary:#6c757d;--success:#28a745;-- ......</style> |
How to override attribute in Base class Python3 , so that subsequent operations remains same verywhere? I have a use case, where I have to override one attribute in base class init, but the operations after that ( by making use of that attribute ) remains the same.class Person: def __init__(self, name, phone, record_file = None): self.name = name self.phone = phone if self.record_file: self.contents = json.load(open(self.record_file)) else: self.contents = {'person_specific_details': details} #### Do some operations with self.contentsclass Teenager(Person): def __init__(self, **kwargs): super().__init__(**kwargs) # If self.record_file is None: # self.contents = new for Teenager self.contents = {'teenager_specific_details': teenager_details} # But further operations remains the same (#### Do some operations with self.contents)t = Teenager(phone='xxxxxx', name='XXXXXXX')I am not able to acheive it properly. Can anyone help? | Your main problem is that you want to change an intermediate value in the Person.__init__, which won't work. But you could create an optional argument for the contents and just use that instead of the default one.Like this:class Person: def __init__(self, name, phone, record_file=None, contents=None): self.name = name self.phone = phone if record_file: with open(record_file) as fp: self.contents = json.load(fp) else: if contents: # can be utilized by other subclasses self.contents = contents else: self.contents = {"person_specific_details": details} #### Do some operations with self.contentsclass Teenager(Person): def __init__(self, **kwargs): contents = {"teenager_specific_details": teenager_details} super().__init__(contents=contents, **kwargs)t = Teenager(phone="xxxxxx", name="XXXXXXX")This way you can pass the Teenager specific contents to the base initializaion, and it can proceed further with that one. |
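Here is the same idea as a self-contained, runnable sketch; the details and teenager_details values are invented placeholders since the originals aren't shown:

```python
import json

class Person:
    def __init__(self, name, phone, record_file=None, contents=None):
        self.name = name
        self.phone = phone
        if record_file:
            with open(record_file) as fp:
                self.contents = json.load(fp)
        elif contents is not None:
            self.contents = contents
        else:
            self.contents = {"person_specific_details": "placeholder"}
        # ... shared operations on self.contents go here ...

class Teenager(Person):
    def __init__(self, **kwargs):
        # inject the subclass-specific default before the base init runs
        super().__init__(contents={"teenager_specific_details": "placeholder"}, **kwargs)

t = Teenager(phone="xxxxxx", name="XXXXXXX")
print(t.contents)  # {'teenager_specific_details': 'placeholder'}
```

Testing contents is not None (rather than plain truthiness) also lets a caller pass an intentionally empty dict without falling back to the default.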
Tensorflow use : codec can't decode byte XX in position XX : invalid continuation byte i'm trying to train a model, I'm used the code that can be found here : https://medium.com/@martin.lees/image-recognition-with-machine-learning-in-python-and-tensorflow-b893cd9014d2The thing is, even when I just copy / paste the code, I got a problem that I really don't understand why I have it. I searched a lot on the tensorflow Github but found nothing to settle my problem.Here is the traceback :Traceback (most recent call last): File "D:\pokemon\PogoBot\PoGo-Adb\ml_test_data_test.py", line 108, in <module> tf.app.run(main=main) File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef) File "C:\Users\pierr\anaconda3\lib\site-packages\absl\app.py", line 303, in run _run_main(main, args) File "C:\Users\pierr\anaconda3\lib\site-packages\absl\app.py", line 251, in _run_main sys.exit(main(argv)) File "D:\pokemon\PogoBot\PoGo-Adb\ml_test_data_test.py", line 104, in main saver.save(sess, "./model") File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1183, in save model_checkpoint_path = sess.run( File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 957, in run result = self._run(None, fetches, feed_dict, options_ptr, File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1180, in _run results = self._do_run(handle, final_targets, final_fetches, File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1358, in _do_run return self._do_call(_run_fn, feeds, fetches, targets, options, File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1365, in _do_call return fn(*args) File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1349, in _run_fn return 
self._call_tf_sessionrun(options, feed_dict, fetch_list, File "C:\Users\pierr\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1441, in _call_tf_sessionrun return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 109: invalid continuation byteAnd here is the code :from __future__ import absolute_importfrom __future__ import divisionfrom __future__ import print_functionimport cv2from os import listdirfrom os.path import isfile, joinimport numpy as npimport tensorflow as tf2import tensorflow.compat.v1 as tftf.disable_v2_behavior()import mathclass Capchat: data_dir = "data_test//" nb_categories = 9 X_train = None # X is the data array Y_train = None # Y is the labels array, you'll see this notation pretty often train_nb = 0 # number of train images X_test = None Y_test = None test_nb = 0 # number of tests images index = 0 # the index of the array we will fill def readimg(self, file, label, train = True): im = cv2.imread(file); # read the image to PIL image im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY).flatten() # put it in black and white and as a vector # the train var definies if we fill the training dataset or the test dataset if train : self.X_train[self.index] = im self.Y_train[self.index][label - 1] = 1 else : self.X_test[self.index] = im self.Y_test[self.index][label - 1] = 1 self.index += 1 def __init__(self): total_size = [f for f in listdir(self.data_dir + "1/") if isfile(join(self.data_dir + "1/", f))].__len__() # ge the total size of the dataset self.train_nb = math.floor(total_size * 0.8) # we get 80% of the data to train self.test_nb = math.ceil(total_size *0.2) # 20% to test # We fill the arrays with zeroes 840 is the number of pixels in an image self.X_train = np.zeros((self.train_nb*self.nb_categories, 735), np.int32) self.Y_train = np.zeros((self.train_nb*self.nb_categories, 3), np.int32) self.X_test = np.zeros((self.test_nb*self.nb_categories, 
735), np.int32) self.Y_test = np.zeros((self.test_nb*self.nb_categories, 3), np.int32) # grab all the files files_1 = [f for f in listdir(self.data_dir+"1/") if isfile(join(self.data_dir+"1/", f))] files_2 = [f for f in listdir(self.data_dir+"2/") if isfile(join(self.data_dir+"2/", f))] files_3 = [f for f in listdir(self.data_dir+"3/") if isfile(join(self.data_dir+"3/", f))] for i in range(self.train_nb): # add all the files to training dataset self.readimg(self.data_dir+"1/"+files_1[i], 1) self.readimg(self.data_dir+"2/"+files_2[i], 2) self.readimg(self.data_dir+"3/"+files_3[i], 3) self.index = 0 for i in range (self.train_nb, self.train_nb + self.test_nb): self.readimg(self.data_dir+"1/" + files_1[i], 1, False) self.readimg(self.data_dir+"2/" + files_2[i], 2, False) self.readimg(self.data_dir+"3/" + files_3[i], 3, False) print("donnée triée")def main(_): # Import the data cap = Capchat() # Create the model x = tf.placeholder(tf.float32, [None, 735]) W = tf.Variable(tf.zeros([735, 3]), name="weights") b = tf.Variable(tf.zeros([3]), name="biases") mult = tf.matmul(x, W) # W * X... 
y = tf.add(mult, b, name="calc") # + b # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, 3]) # cost function cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)) # optimizer train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) # allows to save the model later saver = tf.train.Saver() # start a session to run the network on sess = tf.InteractiveSession() # initialize global variables tf.global_variables_initializer().run() # Train for 1000 steps, notice the cap.X_train and cap.Y_train for _ in range(1000): sess.run(train_step, feed_dict={x: cap.X_train, y_: cap.Y_train}) # Extract one hot encoded output via argmax correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) # Test for accuraccy on the testset, notice the cap.X_test and cap.Y_test accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print("\nATTENTION RESULTAT ",sess.run(accuracy, feed_dict={x: cap.X_test, y_: cap.Y_test})) # save the model learned weights and biases saver.save(sess, "./model") if __name__ == '__main__': tf.app.run(main=main) | The error was really stupid,because I'm on windows, this linesaver.save(sess, "./model")was the cause of the error, so I changed it with this :saver.save(sess, "model\\model")And now this is working. |
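A side note on the accepted fix: hard-coding "model\\model" works on Windows but not elsewhere. os.path.join picks the right separator per platform, and creating the directory first avoids a related class of save errors (sketch only; the TensorFlow session code is unchanged):

```python
import os
import tempfile

# a temp dir is used here just to keep the sketch self-contained
base = tempfile.mkdtemp()
ckpt_dir = os.path.join(base, "model")
os.makedirs(ckpt_dir, exist_ok=True)         # ensure the directory exists before saving
ckpt_path = os.path.join(ckpt_dir, "model")  # correct separator on Windows and POSIX alike

print(ckpt_path)
# then: saver.save(sess, ckpt_path)
```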
creating multiple columns with a loop based on other column in pandas Hello everyone I have a working code in python but it is written in a crude way because I am still learning the fundamentals and require some insight.I am creating 40 columns based on one column like i shared a small part of it below:df["Bonus Payout 80%"]=0df["Bonus Payout 81%"]=df["Monthly gross salary 100% (LC)"]*0.01df["Bonus Payout 82%"]=df["Monthly gross salary 100% (LC)"]*0.02df["Bonus Payout 83%"]=df["Monthly gross salary 100% (LC)"]*0.03df["Bonus Payout 84%"]=df["Monthly gross salary 100% (LC)"]*0.04df["Bonus Payout 85%"]=df["Monthly gross salary 100% (LC)"]*0.05df["Bonus Payout 80%"]=df['Bonus Payout 80%'].apply('{:,.2f}'.format)df["Bonus Payout 81%"]=df['Bonus Payout 81%'].apply('{:,.2f}'.format)df["Bonus Payout 82%"]=df["Bonus Payout 82%"].apply('{:,.2f}'.format)df["Bonus Payout 83%"]=df["Bonus Payout 83%"].apply('{:,.2f}'.format)df["Bonus Payout 84%"]=df["Bonus Payout 84%"].apply('{:,.2f}'.format)df["Bonus Payout 85%"]=df["Bonus Payout 85%"].apply('{:,.2f}'.format)the lines of code goes on until bonus payout 120%how can i tidy this up and convert it to a more coder way?any help is appreciatededit :my first lines of code is :df["Bonus Payout 80%"]=df["Monthly gross salary 100% (LC)"]*0.00df["Bonus Payout 80%"]=df['Bonus Payout 80%'].apply('{:,.2f}'.format)and the last onedf["Bonus Payout 120%"]=df["Monthly gross salary 100% (LC)"]*0.40df["Bonus Payout 120%"]=df['Bonus Payout 120%'].apply('{:,.2f}'.format) | You can use f-strings and for loops:j = 0for i in range(80,121): df[f"Bonus Payout {i}%"]=df["Monthly gross salary 100% (LC)"]*j df[f"Bonus Payout {i}%"]=df[f'Bonus Payout {i}%'].apply('{:,.2f}'.format) j += 0.01P.S.: I have edited my answer after question edit. |
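One refinement worth making to the loop above: repeatedly adding 0.01 to a float accumulates rounding error across the 40 iterations; deriving the multiplier from the loop index keeps it exact per column. A sketch with an invented two-row salary column:

```python
import pandas as pd

df = pd.DataFrame({"Monthly gross salary 100% (LC)": [1000.0, 2500.0]})

for i in range(80, 121):
    frac = (i - 80) / 100  # 0.00 for 80%, 0.40 for 120%, no cumulative drift
    df[f"Bonus Payout {i}%"] = (
        df["Monthly gross salary 100% (LC)"] * frac
    ).apply("{:,.2f}".format)

print(df["Bonus Payout 120%"].tolist())  # ['400.00', '1,000.00']
```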
Realtime JSON string transfer from android/ios app to a Windows software I want to create an android/ios app that would send a normal string (or a json) to a software in Windows which i will also make. Example, in my mobile app when i press a button, the text on my Windows software will change to whatever text that was sent by my the mobile app in realtime, not after 1 minute or so.I will use python for that Windows software.I will also have an azure/aws cloud virtual machine instance that will serve as my "bridge" server for this string communication transfer between mobile app and the windows software.My questions are:What is the best way to code this?What's the best practice of doing this kind of real time transfers? NOTE: I have minimal experience with socket programming and I'm curious if that's the "industry standard" of doing this kind of tasks or if there's an easier way of doing it. Thank you very much! | your question covers a lot of subjects, but I'll try to give you some basic information.basic best practicesDon't use raw sockets, the industry standard is mostly using an HTTP server (Django or Flask) with RESTful API using JSON as your serialization protocol. I also recommend that you'll make your server stateless.Investigate similar use cases and especially webhooks. the simplest way you execute your plan is giving each end of your connection a unique id, and then, for example, change something in phone app with -> create API request to your server to change something in a windows application with the id 123 -> the server will look in its database for the address of the device with id 123 -> the server will send an HTTP request to the windows software. 
This model has some disadvantages: the server has to register which phone app belongs to which Windows application, and you need to handle the software's address changing (home routers typically have dynamic IPs). Now, for the Windows application, you may want to compile the Python code to an executable file and create a standard installer for your program (guides are attached). I also recommend that you check out Kivy, a GUI framework that can be compiled for Windows easily. As mentioned, this topic is huge, and if you want to create a real industry-standard application you'll have to consider many more things: HTTPS and other security issues, the ability to scale out your application and handle huge amounts of requests, a CI/CD pipeline, testing and testing infrastructure, and more. Some related links and guides, good luck! Django web server framework; Flask best-practice stack; unit testing with pytest; testing automation; compile Python with PyInstaller; create an installer file for your executable; Kivy.
numpy timedelta64 not showing fraction I want to convert 847 hours into days. The actual result is 847/24 = 35.29..., but numpy shows only "35 days":import numpy as npx = np.timedelta64(847, 'h')x = np.timedelta64(x, 'D')print(x) # Returns 35 days, expected 35.29 | The magnitude of a timedelta64 is always stored as a 64-bit integer (cf. Datetime Units), so converting to day units truncates. To obtain fractional days, divide by a one-day timedelta instead:import numpy as npx = np.timedelta64(847, 'h')x = x / np.timedelta64(1, 'D')print(x)The result 35.291666666666664 is inevitably no longer a timedelta64 but a plain float.
How can I stop networkx from changing the source and the target node? I make a Graph (not DiGraph) from a data frame (a huge network) with networkx. I used this code to create my graph:nx.from_pandas_edgelist(R, source='A', target='B', create_using=nx.Graph())However, when I check the edge list in the output, the source and target nodes have been reordered, and I don't know how to keep them the way they were in the dataframe (I need the source and target nodes to stay as they were in the dataframe). | If you mean the node/edge order has changed, check out nx.OrderedGraph. Note, however, that an undirected nx.Graph has no notion of source vs. target, so the orientation of an edge from the dataframe is not guaranteed to survive; if the direction matters, build the graph with create_using=nx.DiGraph() instead, or store the original orientation as an edge attribute.
Get values from between two other values for each row in the dataframe I want to extract the integer values for each Hole_ID between the From and To values (inclusive). And save them to a new data frame with the Hole IDs as the column headers.import pandas as pdimport numpy as npdf=pd.DataFrame(np.array([['Hole_1',110,117],['Hole_2',220,225],['Hole_3',112,114],['Hole_4',248,252],['Hole_5',116,120],['Hole_6',39,45],['Hole_7',65,72],['Hole_8',79,83]]),columns=['HOLE_ID','FROM', 'TO'])Example starting data HOLE_ID FROM TO0 Hole_1 110 1171 Hole_2 220 2252 Hole_3 112 1143 Hole_4 248 2524 Hole_5 116 1205 Hole_6 39 456 Hole_7 65 727 Hole_8 79 83This is what I would like:Out[5]: Hole_1 Hole_2 Hole_3 Hole_4 Hole_5 Hole_6 Hole_7 Hole_80 110 220 112 248 116 39 65 791 111 221 113 249 117 40 66 802 112 222 114 250 118 41 67 813 113 223 Nan 251 119 42 68 824 114 224 Nan 252 120 43 69 835 115 225 Nan Nan Nan 44 70 Nan6 116 Nan Nan Nan Nan 45 71 Nan7 117 Nan Nan Nan Nan Nan 72 NanI have tried to use the range function, which works if I manually define the range:for i in df['HOLE_ID']: df2[i]=range(int(1),int(10))gives Hole_1 Hole_2 Hole_3 Hole_4 Hole_5 Hole_6 Hole_7 Hole_80 1 1 1 1 1 1 1 11 2 2 2 2 2 2 2 22 3 3 3 3 3 3 3 33 4 4 4 4 4 4 4 44 5 5 5 5 5 5 5 55 6 6 6 6 6 6 6 66 7 7 7 7 7 7 7 77 8 8 8 8 8 8 8 88 9 9 9 9 9 9 9 9but this won't take the df To and From values as inputs to the range.df2=pd.DataFrame()for i in df['HOLE_ID']: df2[i]=range(df['To'],df['From'])gives an error. 
| Apply a method that returns a series of a range between from and to and then transpose the result, eg:import numpy as npdf.set_index('HOLE_ID').apply(lambda v: pd.Series(np.arange(v['FROM'], v['TO'] + 1)), axis=1).TGives you:HOLE_ID Hole_1 Hole_2 Hole_3 Hole_4 Hole_5 Hole_6 Hole_7 Hole_80 110.0 220.0 112.0 248.0 116.0 39.0 65.0 79.01 111.0 221.0 113.0 249.0 117.0 40.0 66.0 80.02 112.0 222.0 114.0 250.0 118.0 41.0 67.0 81.03 113.0 223.0 NaN 251.0 119.0 42.0 68.0 82.04 114.0 224.0 NaN 252.0 120.0 43.0 69.0 83.05 115.0 225.0 NaN NaN NaN 44.0 70.0 NaN6 116.0 NaN NaN NaN NaN 45.0 71.0 NaN7 117.0 NaN NaN NaN NaN NaN 72.0 NaN |
Removing unwanted characters, and writing from a JSON response So, I am trying to extract specific data and write it to a file, this JSON response has odd brackets around the information I want and need to be stripped off and I'm not really sure how to get to the 'desired output'.Maybe its better to do it in an xls document? The end goal is to compare this list against another to find which hosts are missing.Its a very lengthy response, so I just grabbed a snippet.The JSON response [ { "adapter_list_length": 3, "adapters": [ "adapter1", "adapter2", "adapter3" ], "id": "", "labels": [ "", "" ], "specific_data.data.hostname": [ "HOSTNAME1" ], "specific_data.data.last_seen": "", "specific_data.data.network_interfaces.ips": [ "123.45.67.89" ], "specific_data.data.os.type": [ "" ] }, { "adapter_list_length": 3, "adapters": [ "adapter1", "adapter2", "adapter3" ], "id": "", "labels": [ "", "" ], "specific_data.data.hostname": [ "HOSTNAME2"My test writer:names = [item['specific_data.data.hostname'] for item in data]with open ('namelist.csv', mode='w') as csv_file: csv_writer = csv.writer(csv_file, delimiter='\n', quotechar='"', quoting=csv.QUOTE_MINIMAL) csv_writer.writerow(names)Current output:['HOSTNAME1'] ['HOSTNAME2']Desired Output:Hostnames: IPaddress:HOSTNAME1 123.45.67.89 HOSTNAME2 123.456.78.9 .... ....... .... 
| You can have it done this way:import csvdata = [{'adapter_list_length': 3, 'adapters': ['adapter1', 'adapter2', 'adapter3'], 'id': '', 'labels': ['', ''], 'specific_data.data.hostname': ['HOSTNAME1'], 'specific_data.data.last_seen': '', 'specific_data.data.network_interfaces.ips': ['123.45.67.89'], 'specific_data.data.os.type': ['']}, {'adapter_list_length': 3, 'adapters': ['adapter1', 'adapter2', 'adapter3'], 'id': '', 'labels': ['', ''], 'specific_data.data.hostname': ['HOSTNAME2'],'specific_data.data.last_seen': '', 'specific_data.data.network_interfaces.ips': ['123.45.67.80'], 'specific_data.data.os.type': ['']}] names = [item['specific_data.data.hostname'][0] for item in data]ips = [item['specific_data.data.network_interfaces.ips'][0] for item in data]dets = list(zip(names,ips))print('Hostnames:','\t','IPaddress:')for i,j in dets: print(i,'\t',j)fields = ['Hostnames:', 'IPaddress:'] rows = [list(x) for x in dets] filename = "dumb.csv"with open(filename, 'w') as csvfile: csvwriter = csv.writer(csvfile) csvwriter.writerow(fields) csvwriter.writerows(rows) |
Append min value of two columns in pandas data frame dfPurchase 13254 7 df2df2 = pd.DataFrame(columns=['Mean','Median','Max','Col4'])df2 = df2.append({'Mean': (df['Purchase'].mean()),'Median':df['Purchase'].median(),'Max':(df['Purchase'].max()),'Col4':(df2[['Mean','Median']].min(axis=1))}, ignore_index=True)Output obtained Mean Median Max Col4 3.66 3.5 7 Series([], dtype: float64)Output expected Mean Median Max Col4 3.66 3.5 7 3.5 #Value in Col4 is Min(Mean, Median of df2)Can anyone help? | Use np.minimum and passed mean with median:df2 = pd.DataFrame(columns=['Mean','Median','Max','Col4'])df2 = (df2.append({'Mean': df['Purchase'].mean(), 'Median':df['Purchase'].median(), 'Max': df['Purchase'].max(), 'Col4': np.minimum(df['Purchase'].mean(), df['Purchase'].median())}, ignore_index=True))print (df2) Mean Median Max Col40 3.666667 3.5 7.0 3.5Or better is use Series.agg with new value of min in next step, last create one row DataFrame:s = df['Purchase'].agg(['mean','median','max'])s.loc['col4'] = s[['mean','median']].min()df = s.to_frame(0).Tprint (df) mean median max col40 3.666667 3.5 7.0 3.5 |
Expected a list of dataframes but got just one dataframe I am trying to convert a list of sheets from an Excel file into CSVs, so beginning with the following code I want to read the file first, but I only get the first sheet and the rest are lost:import pandas as pddef accept_xcl_file(file): xcl_file = pd.ExcelFile(file) sheets = xcl_file.sheet_names file = xcl_file.parse(sheet_names=sheets) return file, sheetsfile, sheet = accept_xcl_file('Companies.xlsx')sheet >> this is the output from sheet['companies', 'fruits', 'vehicles', 'sales', 'P&L', 'price', 'clubs', 'countries', 'housing', 'life-expectancy']file['fruits'] >> I get a KeyError when I try to index the file, but when I use the 'companies' key I get the correct data. Going by the documentation I should expect a DataFrame or dict of DataFrames. Any help? | The culprit is the keyword argument: ExcelFile.parse takes sheet_name, not sheet_names, so your list is silently swallowed and only the default first sheet is parsed. That means file is a single DataFrame, and file['fruits'] is a column lookup on it, hence the KeyError. The simplest fix is pandas' read_excel with sheet_name=None, which loads every sheet at once:import pandas as pdfile = pd.read_excel('Companies.xlsx', sheet_name=None)# file is now a dict object# keys are the sheet names as strings# values are the pd.DataFrame objects containing each sheet's data
Combining list of numbers and strings in python As an R user, I know very little about python. I am using python moviepy to pick up a long list of photos to generate a video in RStudio notebook. What I did previously was to use R to generate the list of photos.R code:v_list <- c(paste0("v_", c(1:10, rep(10, 5), rep(11, 10)), ".jpg"))and then in python chunk to convert this list to python, v_list = r.v_list.I wonder if there is an easy way to generate the list directly in python. It appears there are many questions on this topic. Through those answers, I managed to produce the following code:Python:v_list = ["v_" + str(x) + ".jpg" for x in range(1, 10)]+["v_" + str(x) + ".jpg" for x in [10]*5] + ["v_" + str(x) + ".jpg" for x in [11]*10]My question: is it possible to make this code simpler? | How aboutv_list = [f"v_{x}.jpg" for x in list(range(1, 10)) + [10] * 5 + [11] * 10] |
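A runnable version, with one subtlety worth flagging: R's 1:10 includes 10, whereas Python's range(1, 10) stops at 9, so to mirror the original R vector exactly, use range(1, 11):

```python
v_list = [f"v_{x}.jpg" for x in list(range(1, 11)) + [10] * 5 + [11] * 10]

print(len(v_list))             # 25, matching length(v_list) in R
print(v_list[0], v_list[-1])   # v_1.jpg v_11.jpg
```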
Recursively copy the secrets from one VAULT path to another I am trying to copy all the secrets along with the subfolders from one VAULT path to another.Example:source = "/path/namespace/TEAM1/jenkins"(note: the above source path consists of subfolders like job1,job2,job3... and all these subfolders contains the respective secrets in the form of key-value pairs)destination="/path/namespace/team1/jenkins"I could able to manually copy each secret to the destination folder, but wondering any code snippet would help me here to achieve this. Like recursively copy all the secrets along with the respective sub-folders to the destination PATH. | Taking vault secret backup from one path to another like.input_path: secret/tmp1output_path: secret/tmp2so now with this python script you can sync all secret from secret/tmp1 to secret/tmp2Need to add input_path and output_path in python script then just run.Link for python script.https://github.com/vinamra1502/vault-backup-restoreWith this script you can copy all secrets along with the subfolders from one vault path to others.Ex. secret/tmp1 secret copy to secret/tmp2 path. |
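If you'd rather write the recursion yourself than adapt that script, the core is a depth-first walk: list a path, recurse into sub-folders, copy leaf secrets. The sketch below simulates the KV store with nested dicts so it runs standalone; in real code the list/read/write steps would map to your Vault client's KV calls (e.g. hvac's list_secrets / read_secret_version / create_or_update_secret — that mapping is an assumption, so check your client's API):

```python
# toy stand-in for the KV tree under the source path: folders are dicts of dicts,
# leaf secrets are dicts of plain key/value pairs
source_tree = {
    "jenkins": {
        "job1": {"user": "a", "token": "1"},
        "job2": {"user": "b", "token": "2"},
        "nested": {"job3": {"user": "c", "token": "3"}},
    }
}

def copy_secrets(node, dest, prefix=""):
    """Recursively copy every leaf secret into dest, keyed by its sub-path."""
    for name, value in node.items():
        path = f"{prefix}{name}"
        if all(not isinstance(v, dict) for v in value.values()):
            dest[path] = dict(value)               # leaf: copy the key/value pairs
        else:
            copy_secrets(value, dest, path + "/")  # folder: recurse deeper

destination = {}
copy_secrets(source_tree, destination)
print(sorted(destination))  # ['jenkins/job1', 'jenkins/job2', 'jenkins/nested/job3']
```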
How do I fix this speed varible writing back to file? I've been writing a program, I've run into an error. My current code is: import tkinter as tkspeed = 80def onKeyPress(event, value): global speed text.delete("%s-1c" % 'insert', 'insert') text.insert('end', 'Current Speed: %s\n\n' % (speed, )) with open("speed.txt", "r+") as p: speed = p.read() speed = int(speed) speed = min(max(speed+value, 0), 100) with open("speed.txt", "r+") as p: p.writelines(str(speed)) print(speed) if speed == 100: text.insert('end', 'You have reached the speed limit') if speed == 0: text.insert('end', 'You can not go any slower')speed = 80root = tk.Tk()root.geometry('300x200')text = tk.Text(root, background='black', foreground='white', font=('Comic Sans MS', 12))text.pack()# Individual key bindingsroot.bind('<KeyPress-w>', lambda e: onKeyPress(e, 1)) root.bind('<KeyPress-s>', lambda e: onKeyPress(e, -1)) #root.mainloop()I believe speed = min(...) is causing the error. However do you guys have any idea? | One problem (I guess it's the problem you're having) is that you are trying to overwrite the content of file speed.txt, however, the value you are writing contains fewer characters than already contained in the file.This can lead to unexpected values winding up in your file, e.g. if the file contains10Consider what happens if you try to decrement the value by 1 (user hit the s key):with open('speed.txt', 'r+') as p: speed = int(p.read())speed -= 1 # speed is now 9with open("speed.txt", "r+") as p: p.writelines(str(speed))speed.txt now contains:90Instead of decreasing the speed to 9, it has actually been increased to 90! If the speed was already 100 and you tried to decrement it, you would end up with 990 in the file.This is because opening the file with mode r+ opens the file for reading and writing and positions the file pointer at the beginning of the file. A write will only overwrite the first n characters where n is the length of the data written. 
Hence you can get the sort of corruption shown above. You can fix this by opening the file with mode 'w' for the second open(): 'w' truncates the file, so the new value completely overwrites the old one. And you don't need writelines() for a single value; just use write().
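Putting the answer's advice together, a minimal sketch of the corrected read-modify-write cycle (a temp file is used so the snippet is self-contained):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "speed.txt")
with open(path, "w") as p:          # seed the file with an initial value
    p.write("10")

def change_speed(path, delta):
    with open(path) as p:           # read-only mode is enough for the read step
        speed = int(p.read())
    speed = min(max(speed + delta, 0), 100)
    with open(path, "w") as p:      # 'w' truncates, so no stale digits survive
        p.write(str(speed))
    return speed

print(change_speed(path, -1))  # 9, and the file now holds "9", not "90"
```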
Pix2pix program terminates after giving Thread warning of Tensorflow I am trying to run https://github.com/eriklindernoren/Keras-GAN/blob/master/pix2pix/pix2pix.pypython pix2pix.pyExecution terminates giving following messageUsing TensorFlow backend.WARNING:tensorflow:From C:\Users\kulkarni\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.Instructions for updating:Colocations handled automatically by placer.2019-05-29 14:43:23.767965: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX22019-05-29 14:43:23.770965: I tensorflow/core/common_runtime/process_util.cc:71] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.Tried following solution given at Why Keras model on "bare" CPU is faster? but no luck.I am running this on Windows 7 Intel i3 CPU 64-bit machine.How to do proper settings to get the code running? | It's not throwing any error. So I'm guessing the script isn't finding the training dataset. Try downloading the dataset and try running it again.bash download_dataset.sh facadespython pix2pix.py |
SSL error with Python requests despite up-to-date dependencies I am getting an SSL "bad handshake" error. Most similar responses to this problem seem to stem from old libraries, 1024bit cert. incompatibility, etc... I think i'm up to date, and can't figure out why i'm getting this error.SETUP:requests 2.13.0 certifi 2017.01.23'OpenSSL 1.0.2g 1 Mar 2016'I'm hitting this API (2048bit certificate key): https://api.sidecar.io/rest/v1/provision/application/device/count/And getting this error: requests.exceptions.SSLError: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",)See l.44 of https://github.com/sidecar-io/sidecar-python-sdk/blob/master/sidecar.pyIf I turn verify=False in requests, I can bypass, but i'd rather figure out why the certification is failing.Any help is greatly appreciated; thanks! | The validation fails because the server you access is setup improperly, i.e. it is not a fault of your setup or code. Looking at the report from SSLLabs you see This server's certificate chain is incomplete. Grade capped to B.This means that the server sends a certificate chain which is missing an intermediate certificate to the trusted root and thus your client can not build the trust chain. Most desktop browsers work around this problem by trying to get the missing certificate from somewhere else but normal TLS libraries will fail in this case. You would need to explicitly add the missing chain certificate as trusted to work around this problem:import requestsrequests.get('https://api.sidecar.io', verify = 'mycerts.pem')mycerts.pem should contain the missing intermediate certificate and the trusted root certificate. A tested version for mycerts.pem can be found in http://pastebin.com/aZSKfyb7. |
How to get default browser name using python? Following solutions (actually it is only one) doesn't work to me : How to get a name of default browser using python How to get name of the default browser in windows using python?Solution was:from _winreg import HKEY_CURRENT_USER, OpenKey, QueryValue# In Py3, this module is called winreg without the underscorewith OpenKey(HKEY_CURRENT_USER, r"Software\Classes\http\shell\open\command") as key: cmd = QueryValue(key, None)But unfortunately, in Windows 10 Pro I don't have targeted registry value. I've tried to find alternative keys in Regedit, but no luck.Please take a look, what my registry virtually contains: | The following works for me on Windows 10 pro:from winreg import HKEY_CURRENT_USER, OpenKey, QueryValueExreg_path = r'Software\Microsoft\Windows\Shell\Associations\UrlAssociations\https\UserChoice'with OpenKey(HKEY_CURRENT_USER, reg_path) as key: print(QueryValueEx(key, 'ProgId'))Result (first with Chrome set as default, then with IE):$ python test.py('ChromeHTML', 1)$ python test.py('IE.HTTPS', 1) |
Dump pandas DataFrame to SQL statements I need to convert pandas DataFrame object to a series of SQL statements that reproduce the object.For example, suppose I have a DataFrame object:>>> df = pd.DataFrame({'manufacturer': ['Audi', 'Volkswagen', 'BMW'], 'model': ['A3', 'Touareg', 'X5']})>>> df manufacturer model0 Audi A31 Volkswagen Touareg2 BMW X5I need to convert it to the following SQL representation (not exactly the same):CREATE TABLE "Auto" ("index" INTEGER, "manufacturer" TEXT, "model" TEXT);INSERT INTO Auto (manufacturer, model) VALUES ('Audi', 'A3'), ('Volkswagen', 'Touareg'), ('BMW', 'X5');Luckily, pandas DataFrame object has to_sql() method which allows dumping the whole DataFrame to a database through SQLAlchemy engine. I decided to use SQLite in-memory database for this:>>> from sqlalchemy import create_engine>>> engine = create_engine('sqlite://', echo=False) # Turning echo to True just logs SQL statements, I'd avoid parsing this logs>>> df.to_sql(name='Auto', con=engine)I'm stuck at this moment. I can't dump SQLite in-memory database to SQL statements either I can't find sqlalchemy driver that would dump SQL statements into a file instead of executing them.Is there a way to dump all queries sent to SQLAlchemy engine as SQL statements to a file? My not elegant solution so far:>>> from sqlalchemy import MetaData>>> meta = MetaData()>>> meta.reflect(bind=engine)>>> print(pd.io.sql.get_schema(df, name='Auto') + ';')CREATE TABLE "Auto" ("manufacturer" TEXT, "model" TEXT);>>> print('INSERT INTO Auto ({}) VALUES\n{};'.format(', '.join([repr(c) for c in df.columns]), ',\n'.join([str(row[1:]) for row in engine.execute(meta.tables['Auto'].select())])))INSERT INTO Auto ('manufacturer', 'model') VALUES('Audi', 'A3'),('Volkswagen', 'Touareg'),('BMW', 'X5');I would actually prefer a solution that does not require building the SQL statements manually. | SQLite actually allows one to dump the whole database to a series of SQL statements with dump command. 
This functionality is also available in Python's DB-API interface for SQLite, sqlite3, specifically through the connection object's iterdump() method. As far as I know, SQLAlchemy does not provide this functionality. Thus, to dump a pandas DataFrame to a series of SQL statements one needs to first dump it to an in-memory SQLite database, and then dump that database using the iterdump() method: from sqlalchemy import create_engine engine = create_engine('sqlite://', echo=False)df.reset_index().to_sql(name=table_name, con=engine) # reset_index() is needed to preserve the index column in the dumped datawith engine.connect() as conn: for line in conn.connection.iterdump(): stream.write(line) stream.write('\n')engine.connect().connection allows one to get the raw DBAPI connection.
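The iterdump() step can be sketched with nothing but the stdlib sqlite3 module — the pandas to_sql() call only populates the same kind of in-memory table first:

```python
import sqlite3

# Build the same kind of in-memory table that df.to_sql() would create.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "Auto" ("manufacturer" TEXT, "model" TEXT)')
conn.executemany(
    'INSERT INTO "Auto" VALUES (?, ?)',
    [("Audi", "A3"), ("Volkswagen", "Touareg"), ("BMW", "X5")],
)
conn.commit()

# iterdump() yields the CREATE TABLE and INSERT statements one by one.
dump = "\n".join(conn.iterdump())
print(dump)
```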
Python requests, how to send json request without " " my code looks like data = { "undelete_user":'false' } data_json = json.dumps(data) print(data_json)Output is: {"undelete_user": "false"}i need output to be without "" so it can look like {"undelete_user": false}otherwise when i send request, i will get "failed to decode JSON" error | import jsondata = { "undelete_user": False}data_json = json.dumps(data)print(data_json)All you had to do was remove 'false' and put False, because you're considering your false as a string, and it should be a boolean. I hope it helped! |
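A short demonstration of the difference; the string-to-bool conversion at the end is a hypothetical helper for when the value genuinely arrives as text:

```python
import json

# A quoted 'false' is a string; the Python boolean False serializes as bare false.
print(json.dumps({"undelete_user": "false"}))  # {"undelete_user": "false"}
print(json.dumps({"undelete_user": False}))    # {"undelete_user": false}

# Hypothetical helper for when the value really does arrive as text:
raw = "false"
payload = {"undelete_user": raw.strip().lower() == "true"}
print(json.dumps(payload))                     # {"undelete_user": false}
```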
For loop not iterating? I am a python newbie and I seem to be having an issue and I can't see what I am doing wrong. I am trying to make it so that when I enter a string it turns the string into pig latin. The issue is that when I do this it only prints out the first word in the string converted. Would anyone be able to point me in the right direction? Cheers def pig_latin(data): words = data.split() piglatin = [] vowels = ["a", "i", "e", "u", "o", "1", "2", "3", "4", "5", "6", "7", "8", "9", "0"] for word in words: if word[0] in vowels: word = word + "way" else: word = word.replace(word[0],"") + word[0] + "ay" word = word.lower() piglatin.append(word) piglatin = "".join(piglatin) return piglatin | Your return statement is inside the for loop due to bad indentation, so obviously it will return after one iteration. Here is the code that will fix this, along with some other changes: def pigetize(text, vowels): return ((text + "way") if text[0] in vowels else (text[1:]+text[0]+"ay")).lower() def pig_latin(data): words = data.split() piglatin = [] vowels = ["a", "i", "e", "u", "o"] + [str(x) for x in range(10)] for word in words: piglatin.append(pigetize(word, vowels)) return "".join(piglatin)
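For reference, a self-contained sketch of the intended behavior, with the return moved after the loop and slicing used instead of replace() (which would strip every occurrence of the first letter, not just the leading one); the words are joined with spaces here for readability:

```python
def pig_latin(data):
    vowels = set("aeiou0123456789")
    piglatin = []
    for word in data.split():
        if word[0] in vowels:
            word = word + "way"
        else:
            # slicing keeps later occurrences of the first letter,
            # unlike word.replace(word[0], "") which strips them all
            word = word[1:] + word[0] + "ay"
        piglatin.append(word.lower())
    return " ".join(piglatin)  # the return runs once, after the loop

print(pig_latin("hello all"))  # ellohay allway
```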
Python, append within a loop So I need to save the results of a loop and I'm having some difficulty. I want to record my results to a new list, but I get "string index out of range" and other errors. The end goal is to record the products of digits 1-5, 2-6, 3-7 etc, eventually keeping the highest product. def product_of_digits(number): d= str(number) for integer in d: s = 0 k = [] while s < (len(d)): j = (int(d[s])*int(d[s+1])*int(d[s+2])*int(d[s+3])*int(d[s+4])) s += 1 k.append(j) print(k)product_of_digits(n) | Similar question some time ago. Hi Chauxvive. This is because you are checking up to the last index of d as s and then doing d[s+4] and so on... Instead, you should change your while loop to: while s < (len(d)-4):
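A sketch of the complete task with the corrected loop bound, keeping the highest product; the sample number here is arbitrary:

```python
def max_product_of_digits(number, window=5):
    d = str(number)
    products = []
    # the last valid start index is len(d) - window, so every slice
    # d[s:s + window] has exactly `window` digits
    for s in range(len(d) - window + 1):
        prod = 1
        for ch in d[s:s + window]:
            prod *= int(ch)
        products.append(prod)
    return max(products)

print(max_product_of_digits(3675356291))  # 3150, from the digits 6*7*5*3*5
```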
How to serialize and deserialize objects with cbor2? I'm trying to serialize and deserialize objects using cbor2 but even after following the documentation I cannot properly do it. Let's suppose I have the following two classes: class A(object): def __init__(self): self.a = 5 self.b = set() def a(self): return self.aclass B(object): def __init__(self, a): self._a = a def a(self): return aa = A()b = B(a)Can anyone show me how to do it for the object a please? Thanks | Sorry for the late answer. CBOR2 is currently missing support for serializing sets, which could be stored as tagged arrays. There is a ticket for adding support here: https://github.com/agronholm/cbor2/issues/14
Pandas: Incrementally count occurrences in a column I have a DataFrame (df) which contains a 'Name' column. In a column labeled 'Occ_Number' I would like to keep a running tally on the number of appearances of each value in 'Name'. For example:Name Occ_Number abc 1 def 1 ghi 1 abc 2 abc 3 def 2 jkl 1 jkl 2I've been trying to come up with a method using>df['Name'].value_counts()but can't quite figure out how to tie it all together. I can only get the grand total from value_counts(). My process thus far involves creating a list of the 'Name' column string values which contain counts greater than 1 with the following code:>things = df['Name'].value_counts()>things = things[things > 1]>queries = things.index.valuesI was hoping to then somehow cycle through 'Name' and conditionally add to Occ_Number by checking against queries, but this is where I'm getting stuck. Does anybody know of a way to do this? I would appreciate any help. Thank you! | You can use cumcount to avoid a dummy column:>>> df["Occ_Number"] = df.groupby("Name").cumcount()+1>>> df Name Occ_Number0 abc 11 def 12 ghi 13 abc 24 abc 35 def 26 jkl 17 jkl 2 |
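The running tally that cumcount produces can also be sketched without pandas, using a stdlib Counter over the question's Name values:

```python
from collections import Counter

names = ["abc", "def", "ghi", "abc", "abc", "def", "jkl", "jkl"]
seen = Counter()
occ_number = []
for name in names:
    seen[name] += 1            # running tally per name
    occ_number.append(seen[name])

print(occ_number)  # [1, 1, 1, 2, 3, 2, 1, 2]
```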
Connection of Event hubs to Azure Databricks I want to add libraries in Azure Databricks for connecting to Event Hubs. I will be writing notebooks in python. So which library should I add for connecting to Event Hubs? As per my search till now I got a Spark connector library in Maven coordinates. But I don't think I will be able to import it in python. | Structured streaming integration for Azure Event Hubs is ultimately run on the JVM, so you'll need to import the libraries from the Maven coordinate below: groupId = com.microsoft.azure artifactId = azure-eventhubs-spark_2.11 version = 2.3.10. Note: For Python applications, you need to add the above library and its dependencies when deploying your application. For more details, refer to "Structured streaming + Event Hubs Integration Guide for PySpark" and "Attach libraries to Spark Cluster". And also, you may refer to this SO thread, which addresses a similar issue. Hope this helps.
Having trouble comparing a variable to an input in a while loop I'm having some trouble working on a basic program I'm making whilst I try and learn Python, the problem is I am trying to compare a users input to a variable that I have set and it is not working when I try and compare them.This is the loop in question: if del_question == "1": symbol = input("What symbol would you like to change?: ") while len(symbol) != 1 or symbol not in words: print("Sorry, that is not a valid symbol") symbol = input("What symbol would you like to change?: ") letter = input("What would you like to change it to?: ") while letter in words and len(letter) != 1: print("Sorry, that is not a valid letter") letter = input("What letter would you like to change?: ") dictionary[symbol] = letter words = words.replace(symbol, letter) print("Here is your new code: \n", words)The game is about breaking the code by pairing letters and symbols, this is where the letters and symbols are paired but on the letter input when I try and make it so that you are unable to pair up the same letter twice it simply bypasses it. It is working on the symbol input but I'm not sure on why it's not working here.Here is the text file importing:code_file = open("words.txt", "r")word_file = open("solved.txt", "r")letter_file = open("letter.txt", "r")and:solved = word_file.read()words = code_file.read()clue = clues_file.read()This is the contents of the words file:#+/084&"#3*#%#+8%203:,1$&!-*%.#7&33&#*#71%&-&641'2#))859&330* | Your bug is a simple logic error. You have an and conditional when you really want an or conditional. Change your second while statement to:while letter in words or len(letter) != 1 |
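A minimal sketch of why the original condition lets duplicate letters through; used stands in for the letters already paired:

```python
used = "abc"  # hypothetical letters that are already paired

def is_invalid_and(letter):
    # the original condition: both tests must fail for a rejection
    return letter in used and len(letter) != 1

def is_invalid_or(letter):
    # the fixed condition: either test failing rejects the input
    return letter in used or len(letter) != 1

print(is_invalid_and("a"))   # False -- a used letter slips through (the bug)
print(is_invalid_or("a"))    # True  -- already used, so rejected
print(is_invalid_or("z"))    # False -- fresh single letter, accepted
print(is_invalid_or("zz"))   # True  -- not one character, rejected
```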