Python won't assign the right value in simple "if" statement

I have a Django project where users can create reservations. I am using Tastypie for the API, to which I submit a simple POST request in order to create a reservation in the database. Let's say we have a request like this (from the Chrome Network tab):

```
{passenger_name: "John", service_time_start: "12:12:12", flight_time: "12:12:12",…}
car_type: false
passenger_email: "john@gmail.com"
passenger_lastname: "Smith"
passenger_name: "John"
passenger_number: "5"
route_end: "Los Angeles"
route_start: "New York"
```

I'm using this code (models.py) on the server side. I want to assign a car_type to the instance with Django signals.

```python
''' Change car type for new reservation '''
@receiver(pre_save, sender=Reservation)
def smart_car_options(sender, instance, *args, **kwargs):
    print 'SIGNAL: pass_no: ', instance.passenger_number
    print 'SIGNAL: car_type: ', type(instance.car_type)
    if instance.car_type == False:
        if instance.passenger_number <= 3:
            instance.car_type = 'CAR'
        if 4 < instance.passenger_number <= 8:
            instance.car_type = 'VAN'
        if instance.passenger_number > 8:
            instance.car_type = 'BUS'
    print "--------------------------------------------"
    print 'SIGNAL: pass_no: ', instance.passenger_number
    print 'SIGNAL: car_type: ', instance.car_type
```

As you can see, instance.car_type is being set to 'BUS' when it should be set to 'VAN'. What am I doing wrong? Also, here is my console output for the request:

```
[20/Mar/2015 02:10:21] "POST /api/reservation/ HTTP/1.1" 201 736
[20/Mar/2015 02:10:40] "GET /forms/as_shuttle/ HTTP/1.1" 200 16237
[20/Mar/2015 02:10:52] "GET /static/debug_toolbar/css/toolbar.css HTTP/1.1" 304 0
SIGNAL: pass_no:  5
SIGNAL: car_type:  <type 'bool'>
--------------------------------------------
SIGNAL: pass_no:  5
SIGNAL: car_type:  BUS
[20/Mar/2015 02:11:06] "POST /api/reservation/ HTTP/1.1" 201 732
```

It looks like passenger_number is being set correctly, but it still won't assign the right value to the instance. It's always assigning the last value, 'BUS'.

PS: The same code works fine in the Django admin.
It may be that you are not converting passenger_number to an integer, so the comparison operators are comparing a string with an integer:

```python
>>> "5" < 8
False
>>> "5" > 8
True
```

So try something like num_passengers = int(instance.passenger_number) and then do your comparisons against that instead.
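Applied to the question's conditions, the fix might look like this (a sketch with a hypothetical helper; it also closes the gap at exactly 4 passengers, which the original `4 < … <= 8` condition missed):

```python
def pick_car_type(passenger_number):
    # passenger_number arrives as a string from the POST body,
    # so convert it before comparing.
    n = int(passenger_number)
    if n <= 3:
        return 'CAR'
    elif n <= 8:
        return 'VAN'
    return 'BUS'

print(pick_car_type("5"))  # VAN, not BUS
```

In the signal, `instance.car_type = pick_car_type(instance.passenger_number)` would replace the chain of `if` statements.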
django admin sort foreign key field list

Is there an option in the Django admin view for ordering foreign key fields? I.e. I have a foreign key to a "School" model, which shows as a dropdown sorted on pk; I would like it to be sorted alphabetically.
Sure - you can:

- Set ModelAdmin-specific ordering via formfield_for_foreignkey
- Set default global ordering via a Manager
- Set default ordering via the model Meta class
- Set form-specific ordering, then pass the form to your ModelAdmin

ModelAdmin-specific method (the other methods are in my answer in the post linked to above):

```python
class MyModelAdmin(admin.ModelAdmin):
    def formfield_for_foreignkey(self, db_field, request, **kwargs):
        if db_field.name == "school":
            kwargs["queryset"] = School.objects.order_by('name')
        return super(MyModelAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs)
```

Examples for the other 3 non-admin-specific methods are in my post linked above.
How do you use "NextToken" in AWS API calls

I've run into a little issue that I am really struggling to understand. I have a tool I am writing that basically does a describe-organization to collect all the accounts in our AWS organization. Per the documentation here, it responds with a JSON of the accounts, which in my case will be hundreds and hundreds of accounts. So I wrote some very simple code to switch roles into our master account and make the call:

```python
import boto3
import uuid
import pprint

iam_client = boto3.client('iam')
sts_client = boto3.client('sts')
org_client = boto3.client('organizations')

print("Starting in account: %s" % sts_client.get_caller_identity().get('Account'))

assumedRoleObject = sts_client.assume_role(
    RoleArn="arn:aws:iam::123456xxx:role/MsCrossAccountAccessRole",
    RoleSessionName="MasterPayer")

credentials = assumedRoleObject['Credentials']

org_client = boto3.client(
    'organizations',
    aws_access_key_id = credentials['AccessKeyId'],
    aws_secret_access_key = credentials['SecretAccessKey'],
    aws_session_token = credentials['SessionToken'],
)

getListAccounts = org_client.list_accounts(
    NextToken='string')
```

But when I execute the code, I get the following error:

```
botocore.errorfactory.InvalidInputException: An error occurred (InvalidInputException) when calling the ListAccounts operation: You specified an invalid value for nextToken. You must get the value from the response to a previous call to the API.
```

I'm really stumped on what that means. I see the NextToken, and I can find many references to it in the AWS documentation, but I can't figure out how to actually USE it. Like, what do I need to do with it?
Don't take the boto3 examples literally (they are not actual examples). Here is how this works:

1) The first time you make a call to list_accounts you do it without the NextToken, so simply:

```python
getListAccounts = org_client.list_accounts()
```

2) This returns a JSON response which looks roughly like this (this is what is saved in your getListAccounts variable):

```
{
    "Accounts": [<lots of accounts information>],
    "NextToken": <some token>
}
```

Note that the NextToken is only returned in case you have more accounts than one list_accounts call can return, usually 100 (the boto3 documentation does not state how many by default). If all accounts were returned in one call, there is no NextToken in the response!

3) So if and only if not all accounts were returned in the first call, you now want to return more accounts and will have to use the NextToken in order to do this:

```python
getListAccountsMore = org_client.list_accounts(NextToken=getListAccounts['NextToken'])
```

4) Repeat until no NextToken is returned in the response anymore (then you have retrieved all accounts).

This is how the AWS SDK handles pagination in many cases. You will see the usage of the NextToken in other service clients as well.
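The repeat-until-no-NextToken loop can be sketched generically. Here a stand-in `list_accounts` function (an assumption, simulating the API with canned pages) plays the role of `org_client.list_accounts`:

```python
def list_accounts(NextToken=None):
    # Stand-in for org_client.list_accounts: serves canned pages of "accounts".
    pages = {
        None: {"Accounts": [1, 2], "NextToken": "t1"},
        "t1": {"Accounts": [3, 4], "NextToken": "t2"},
        "t2": {"Accounts": [5]},  # last page: no NextToken key
    }
    return pages[NextToken]

accounts = []
response = list_accounts()            # first call: no NextToken
accounts.extend(response["Accounts"])
while "NextToken" in response:        # keep paging while a token comes back
    response = list_accounts(NextToken=response["NextToken"])
    accounts.extend(response["Accounts"])

print(accounts)  # [1, 2, 3, 4, 5]
```

With the real client, boto3 also offers `org_client.get_paginator('list_accounts')`, which handles this loop for you.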
IMDB web crawler - Scrapy - Python

```python
import scrapy
from imdbscrape.items import MovieItem

class MovieSpider(scrapy.Spider):
    name = 'movie'
    allowed_domains = ['imdb.com']
    start_urls = ['https://www.imdb.com/search/title?year=2017,2018&title_type=feature&sort=moviemeter,asc']

    def parse(self, response):
        urls = response.css('h3.lister-item-header > a::attr(href)').extract()
        for url in urls:
            yield scrapy.Request(url=response.urljoin(url), callback=self.parse_movie)
        nextpg = response.css('div.desc > a::attr(href)').extract_first()
        if nextpg:
            nextpg = response.urljoin(nextpg)
            yield scrapy.Request(url=nextpg, callback=self.parse)

    def parse_movie(self, response):
        item = MovieItem()
        item['title'] = self.getTitle(response)
        item['year'] = self.getYear(response)
        item['rating'] = self.getRating(response)
        item['genre'] = self.getGenre(response)
        item['director'] = self.getDirector(response)
        item['summary'] = self.getSummary(response)
        item['actors'] = self.getActors(response)
        yield item
```

I have written the above code for scraping all IMDB movies from 2017 to date, but it only scrapes 100 movies. Please help.
I believe the issue is with:

```python
nextpg = response.css('div.desc > a::attr(href)').extract_first()
```

On this page, https://www.imdb.com/search/title?year=2017,2018&title_type=feature&sort=moviemeter,asc, the code for the next-page link is this:

```html
<div class="desc">
    <span class="lister-current-first-item">1</span> to <span class="lister-current-last-item">50</span> of 24,842 titles
    <span class="ghost">|</span>
    <a href="?year=2017,2018&title_type=feature&sort=moviemeter,asc&page=2&ref_=adv_nxt" class="lister-page-next next-page" ref-marker="adv_nxt">Next »</a>
</div>
```

Your code grabs the href of the link with the anchor text Next », which is https://www.imdb.com/search/title?year=2017,2018&title_type=feature&sort=moviemeter,asc&page=2&ref_=adv_nxt. You go to that page and scrape the next 50 movies. However, the html in the div with a class of desc has TWO links in it, not one like the first page. The first link is the previous link, not the next link:

```html
<div class="desc">
    <span class="lister-current-first-item">51</span> to <span class="lister-current-last-item">100</span> of 24,842 titles
    <span class="ghost">|</span>
    <a href="?year=2017,2018&title_type=feature&sort=moviemeter,asc&page=1&ref_=adv_prv" class="lister-page-prev prev-page" ref-marker="adv_nxt">« Previous</a>
    <span class="ghost">|</span>
    <a href="?year=2017,2018&title_type=feature&sort=moviemeter,asc&page=3&ref_=adv_nxt" class="lister-page-next next-page" ref-marker="adv_nxt">Next »</a>
</div>
```

What I would do is set a counter to 0 and increment it on each successful scrape. If the counter is greater than 0, grab the second link, go to it, and scrape the results on that page. If the counter is not greater than 0, grab the first link, go to it, and scrape the results on that page.
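An alternative to the counter: both pages mark the forward link with the class `lister-page-next`, so selecting by class (in Scrapy, `response.css('a.lister-page-next::attr(href)')`) is unambiguous regardless of how many links the div contains. A minimal stdlib sketch of the same idea, run against a stripped-down version of the second-page HTML above (hypothetical hrefs for brevity):

```python
from html.parser import HTMLParser

SNIPPET = '''<div class="desc">
  <a href="?page=1&ref_=adv_prv" class="lister-page-prev prev-page">Previous</a>
  <a href="?page=3&ref_=adv_nxt" class="lister-page-next next-page">Next</a>
</div>'''

class NextLinkFinder(HTMLParser):
    """Collect the href of the first <a> carrying the lister-page-next class."""
    def __init__(self):
        super().__init__()
        self.next_href = None

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == 'a' and 'lister-page-next' in d.get('class', ''):
            self.next_href = d.get('href')

finder = NextLinkFinder()
finder.feed(SNIPPET)
print(finder.next_href)  # ?page=3&ref_=adv_nxt
```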
how do you write out "A is greater than B, C, and D" in an efficient manner? How can I make this statement shorter?
A simple solution is to combine B, C and D into one "value" using max, so you only perform one explicit test. For example, instead of:

```python
if A > B and A > C and A > D:
```

just write:

```python
if A > max(B, C, D):
```

An alternative but functionally similar approach would be to use all with a generator expression:

```python
if all(A > x for x in (B, C, D)):
```

In theory, the generator expression approach is "faster", in that it performs fewer total comparisons and can short-circuit. That said, generator expressions have some overhead in terms of per-loop Python byte code execution, so odds are the two perform similarly in practice, unless the objects in question have very expensive implementations of __gt__ (the overload for >).

Note that if you don't need a single unique maximum, a slightly different behavior, but with far fewer tests, could be achieved:

```python
maxval = max(A, B, C, D)
if maxval is A:
    ...
elif maxval is B:
    ...
elif maxval is C:
    ...
elif maxval is D:
    ...
```

This does differ behaviorally though; unlike the other code, a tie here is resolved semi-arbitrarily with only one element "winning", where the code you describe would treat a tie as "no winner". Replacing is with == and elif with if would treat a tie as "all tied for max are winners". It all depends on your desired behavior.
macOS vim locale different than shell

In my macOS environment, my locale environment variables include an encoding:

```
$ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL="en_US.UTF-8"
```

However, if I open vim then run locale, the encoding is missing!

```
:!locale
LANG="en_US"
LC_COLLATE="en_US"
LC_CTYPE="en_US"
LC_MESSAGES="en_US"
LC_MONETARY="en_US"
LC_NUMERIC="en_US"
LC_TIME="en_US"
LC_ALL="en_US"

Press ENTER or type command to continue
```

This causes a problem when running python programs from vim:

```
return io.open(self.dotenv_path)
E   LookupError: unknown encoding:
```

Looking for ideas on how to fix vim's behavior so that it no longer strips the encoding information from the environment.
Disregard, it looks like I had some settings in my vimrc that were clobbering the environment settings. Everything is OK once I remove the following from vimrc:

```vim
try
    lang en_US
catch
endtry
```
Why does this not add the recursive values?

I learned I can fix this problem by making c global, but I still do not understand why c does not add the values when the function is called from inside the function.

```python
def a(b, c):
    for n in b:
        #print n
        c += str(n)
        #c += "\n"
        if type(n) is tuple:
            a(n, c)
    return c

b = ((1,2,3),(4,5,6),(7,8,9))
print a(b, c)
```

It returns:

```
(1, 2, 3)(4, 5, 6)(7, 8, 9)
```

and I want:

```
(1, 2, 3)123(4, 5, 6)456(7, 8, 9)789
```
Assuming the other logic is correct, you're discarding the recursive return results. You can fix that with:

```python
c = a(n, c)
```
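Applied to the function above, the complete fix might look like this (a sketch, ported to Python 3 print syntax). Strings are immutable, so `c += str(n)` inside the recursive call rebinds a local name; the caller only sees the accumulated result if it captures the return value:

```python
def a(b, c):
    for n in b:
        c += str(n)
        if type(n) is tuple:
            c = a(n, c)  # keep the recursive result instead of discarding it
    return c

b = ((1, 2, 3), (4, 5, 6), (7, 8, 9))
print(a(b, ""))  # (1, 2, 3)123(4, 5, 6)456(7, 8, 9)789
```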
Make a dictionary from two functions (Python)

I need to make a dictionary from two functions (ZoekAccesieCode + ZoekOrganisme). The function ZoekAccesieCode returns lines like "Q6GZX2" and ZoekOrganisme like "Frog virus 3 (isolate Goorha)". ZoekAccesieCode needs to be the key and ZoekOrganisme needs to be the value. Here is my code:

```python
import re

file = open("ploop.txt")
text = file.read()
file.close()

def main():
    hits = VindHits()
    accesie = ZoekAccesieCode(hits)
    organisme = ZoekOrganisme(hits, accesie)
    MaakDict(accesie, organisme)

def VindHits():
    eiwitten = text.split("\n\n")[1:]
    eiwitHits = []
    for eiwit in eiwitten:
        if re.search(r"[AG].{4}GK[ST]", eiwit):
            eiwitHits.append(eiwit)
    return(eiwitHits)

def ZoekAccesieCode(hits):
    for eiwit in hits:
        accesieCode = re.findall(r">sp\|(.{6})", eiwit)[0]
    return accesieCode

def ZoekOrganisme(hits, accesie):
    for eiwit in hits:
        organisme = re.findall(r"\n.+?\[(.+?)\]", eiwit)[0]
    return organisme

def MaakDict(accesie, organisme):

main()
```

Some sample data from the file:

```
Hits for PS00017|ATP_GTP_A (pattern)  ATP/GTP-binding site motif A (P-loop) : [occurs frequently]
Pattern: [AG]-x(4)-G-K-[ST]
Approximate number of expected random matches in ~ 100'000 sequences (50'000'000 residues): 3371

>sp|Q6GZX2|003R_FRG3G (438 aa)
Uncharacterized protein 3R. [Frog virus 3 (isolate Goorha) (FV-3)]
MARPLLGKTSSVRRRLESLSACSIFFFLRKFCQKMASLVFLNSPVYQMSNILLTERRQVDRAMGGSDDDGVMVVALSPSDFKTVLGSALLAVERDMVHVVPKYLQTPGILHDMLVLLTPIFGEALSVDMSGATDVMVQQIATAGFVDVDPLHSSVSWKDNVSCPVALLAVSNAVRTMMGQPCQVTLIIDVGTQNILRDLVNLPVEMSGDLQVMAYTKDPLGKVPAVGVSVFDSGSVQKGDAHSVGAPDGLVSFHTHPVSSAVELNYHAGWPSNVDMSSLLTMKNLMHVVVAEEGLWTMARTLSMQRLTKVLTDAEKDVMRAAAFNLFLPLNELRVMGTKDSNNKSLKTYFEVFETFTIGALMKHSGVTPTAFVDRRWLDNTIYHMGFIPWGRDMRFVVEYDLDGTNPFLNTVPTLMSVKRKAKIQEMFDNMVSRMVTS
 2 - 9:  ArpllGKT

>sp|Q6GZX1|004R_FRG3G (60 aa)
Uncharacterized protein 004R. [Frog virus 3 (isolate Goorha) (FV-3)]
MNAKYDTDQGVGRMLFLGTIGLAVVVGGLMAYGYYYDGKTPSSGTSFHTASPSFSSRYRY
 33 - 40:  GyyydGKT

>sp|Q6GZW0|015R_FRG3G (322 aa)
Uncharacterized protein 015R. [Frog virus 3 (isolate Goorha) (FV-3)]
MEQVPIKEMRLSDLRPNNKSIDTDLGGTKLVVIGKPGSGKSTLIKALLDSKRHIIPCAVVISGSEEANGFYKGVVPDLFIYHQFSPSIIDRIHRRQVKAKAEMGSKKSWLLVVIDDCMDNAKMFNDKEVRALFKNGRHWNVLVVIANQYVMDLTPDLRSSVDGVFLFRENNVTYRDKTYANFASVVPKKLYPTVMETVCQNYRCMFIDNTKATDNWHDSVFWYKAPYSKSAVAPFGARSYWKYACSKTGEEMPAVFDNVKILGDLLLKELPEAGEALVTYGGKDGPSDNEDGPSDDEDGPSDDEEGLSKDGVSEYYQSDLDD
 34 - 41:  GkpgsGKS

>sp|P32234|128UP_DROME (368 aa)
GTP-binding protein 128up. [Drosophila melanogaster (Fruit fly)]
MSTILEKISAIESEMARTQKNKATSAHLGLLKAKLAKLRRELISPKGGGGGTGEAGFEVAKTGDARVGFVGFPSVGKSTLLSNLAGVYSEVAAYEFTTLTTVPGCIKYKGAKIQLLDLPGIIEGAKDGKGRGRQVIAVARTCNLIFMVLDCLKPLGHKKLLEHELEGFGIRLNKKPPNIYYKRKDKGGINLNSMVPQSELDTDLVKTILSEYKIHNADITLRYDATSDDLIDVIEGNRIYIPCIYLLNKIDQISIEELDVIYKIPHCVPISAHHHWNFDDLLELMWEYLRLQRIYTKPKGQLPDYNSPVVLHNERTSIEDFCNKLHRSIAKEFKYALVWGSSVKHQPQKVGIEHVLNDEDVVQIVKKV
 71 - 78:  GfpsvGKS

>sp|P05080|194K_TRVSY (1707 aa)
Replicase large subunit. [Tobacco rattle virus (strain SYM)]
MANGNFKLSQLLNVDEMSAEQRSHFFDLMLTKPDCEIGQMMQRVVVDKVDDMIRERKTKDPVIVHEVLSQKEQNKLMEIYPEFNIVFKDDKNMVHGFAAAERKLQALLLLDRVPALQEVDDIGGQWSFWVTRGEKRIHSCCPNLDIRDDQREISRQIFLTAIGDQARSGKRQMSENELWMYDQFRKNIAAPNAVRCNNTYQGCTCRGFSDGKKKGAQYAIALHSLYDFKLKDLMATMVEKKTKVVHAAMLFAPESMLVDEGPLPSVDGYYMKKNGKIYFGFEKDPSFSYIHDWEEYKKYLLGKPVSYQGNVFYFEPWQVRGDTMLFSIYRIAGVPRRSLSSQEYYRRIYISRWENMVVVPIFDLVESTRELVKKDLFVEKQFMDKCLDYIARLSDQQLTISNVKSYLSSNNWVLFINGAAVKNKQSVDSRDLQLLAQTLLVKEQVARPVMRELREAILTETKPITSLTDVLGLISRKLWKQFANKIAVGGFVGMVGTLIGFYPKKVLTWAKDTPNGPELCYENSHKTKVIVFLSVVYAIGGITLMRRDIRDGLVKKLCDMFDIKRGAHVLDVENPCRYYEINDFFSSLYSASESGETVLPDLSEVKAKSDKLLQQKKEIADEFLSAKFSNYSGSSVRTSPPSVVGSSRSGLGLLLEDSNVLTQARVGVSRKVDDEEIMEQFLSGLIDTEAEIDEVVSAFSAECERGETSGTKVLCKPLTPPGFENVLPAVKPLVSKGKTVKRVDYFQVMGGERLPKRPVVSGDNSVDARREFLYYLDAERVAQNDEIMSLYRDYSRGVIRTGGQNYPHGLGVWDVEMKNWCIRPVVTEHAYVFQPDKRMDDWSGYLEVAVWERGMLVNDFAVERMSDYVIVCDQTYLCNNRLILDNLSALDLGPVNCSFELVDGVPGCGKSTMIVNSANPCVDVVLSTGRAATDDLIERFASKGFPCKLKRRVKTVDSFLMHCVDGSLTGDVLHFDEALMAHAGMVYFCAQIAGAKRCICQGDQNQISFKPRVSQVDLRFSSLVGKFDIVTEKRETYRSPADVAAVLNKYYTGDVRTHNATANSMTVRKIVSKEQVSLKPGAQYITFLQSEKKELVNLLALRKVAAKVSTVHESQGETFKDVVLVRTKPTDDSIARGREYLIVALSRHTQSLVYETVKEDDVSKEIRESAALTKAALARFFVTETVLXRFRSRFDVFRHHEGPCAVPDSGTITDLEMWYDALFPGNSLRDSSLDGYLVATTDCNLRLDNVTIKSGNWKDKFAEKETFLKPVIRTAMPDKRKTTQLESLLALQKRNQAAPDLQENVHATVLIEETMKKLKSVVYDVGKIRADPIVNRAQMERWWRNQSTAVQAKVVADVRELHEIDYSSYMYMIKSDVKPKTDLTPQFEYSALQTVVYHEKLINSLFGPIFKEINERKLDAMQPHFVFNTRMTSSDLNDRVKFLNTEAAYDFVEIDMSKFDKSANRFHLQLQLEIYRLFGLDEWAAFLWEVSHTQTTVRDIQNGMMAHIWYQQKSGDADTYNANSDRTLCALLSELPLEKAVMVTYGGDDSLIAFPRGTQFVDPCPKLATKWNFECKIFKYDVPMFCGKFLLKTSSCYEFVPDPVKVLTKLGKKSIKDVQHLAEIYISLNDSNRALGNYMVVSKLSESVSDRYLYKGDSVHALCALWKHIKSFTALCTLFRDENDKELNPAKVDWKKAQRAVSNFYDW
 904 - 911:  GvpgcGKS

>sp|P03589|1A_AMVLE (1126 aa)
Replication protein 1a. [Alfalfa mosaic virus (strain 425 / isolate Leiden)]
MNADAQSTDASLSMREPLSHASIQEMLRRVVEKQAADDTTAIGKVFSEAGRAYAQDALPSDKGEVLKISFSLDATQQNILRANFPGRRTVFSNSSSSSHCFAAAHRLLETDFVYRCFGNTVDSIIDLGGNFVSHMKVKRHNVHCCCPILDARDGARLTERILSLKSYVRKHPEIVGEADYCMDTFQKCSRRADYAFAIHSTSDLDVGELACSLDQKGVMKFICTMMVDADMLIHNEGEIPNFNVRWEIDRKKDLIHFDFIDEPNLGYSHRFSLLKHYLTYNAVDLGHAAYRIERKQDFGGVMVIDLTYSLGFVPKMPHSNGRSCAWYNRVKGQMVVHTVNEGYYHHSYQTAVRRKVLVDKKVLTRVTEVAFRQFRPNADAHSAIQSIATMLSSSTNHTIIGGVTLISGKPLSPDDYIPVATTIYYRVKKLYNAIPEMLSLLDKGERLSTDAVLKGSEGPMWYSGPTFLSALDKVNVPGDFVAKALLSLPKRDLKSLFSRSATSHSERTPVRDESPIRCTDGVFYPIRMLLKCLGSDKFESVTITDPRSNTETTVDLYQSFQKKIETVFSFILGKIDGPSPLISDPVYFQSLEDVYYAEWHQGNAIDASNYARTLLDDIRKQKEESLKAKAKEVEDAQKLNRAILQVHAYLEAHPDGGKIEGLGLSSQFIAKIPELAIPTPKPLPEFEKNAETGEILRINPHSDAILEAIDYLKSTSANSIITLNKLGDHCQWTTKGLDVVWAGDDKRRAFIPKKNTWVGPTARSYPLAKYERAMSKDGYVTLRWDGEVLDANCVRSLSQYEIVFVDQSCVFASAEAIIPSLEKALGLEAHFSVTIVDGVAGCGKTTNIKQIARSSGRDVDLILTSNRSSADELKETIDCSPLTKLHYIRTCDSYLMSASAVKAQRLIFDECFLQHAGLVYAAATLAGCSEVIGFGDTEQIPFVSRNPSFVFRHHKLTGKVERKLITWRSPADATYCLEKYFYKNKKPVKTNSRVLRSIEVVPINSPVSVERNTNALYLCHTQAEKAVLKAQTHLKGCDNIFTTHEAQGKTFDNVYFCRLTRTSTSLATGRDPINGPCNGLVALSRHKKTFKYFTIAHDSDDVIYNACRDAGNTDDSILARSYNHNF
 838 - 845:  GvagcGKT

>sp|Q9AT00|TGD3_ARATH (345 aa)
Protein TRIGALACTOSYLDIACYLGLYCEROL 3, chloroplastic. [Arabidopsis thaliana (Mouse-ear cress)]
MLSLSCSSSSSSLLPPSLHYHGSSSVQSIVVPRRSLISFRRKVSCCCIAPPQNLDNDATKFDSLTKSGGGMCKERGLENDSDVLIECRDVYKSFGEKHILKGVSFKIRHGEAVGVIGPSGTGKSTILKIMAGLLAPDKGEVYIRGKKRAGLISDEEISGLRIGLVFQSAALFDSLSVRENVGFLLYERSKMSENQISELVTQTLAAVGLKGVENRLPSELSGGMKKRVALARSLIFDTTKEVIEPEVLLYDEPTAGLDPIASTVVEDLIRSVHMTDEDAVGKPGKIASYLVVTHQHSTIQRAVDRLLFLYEGKIVWQGMTHEFTTSTNPIVQQFATGSLDGPIRY
 117 - 124:  GpsgtGKS
```

Can someone help me out with the right code?
Going off your barely readable code:

```python
def make_dict(a, b):
    return {a: b}
```
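Note that as written, ZoekAccesieCode and ZoekOrganisme each return only the value from the last hit, because the `return` sits after the loop; to get one dictionary entry per hit, each function would need to return a list, and the two lists can then be zipped together. A sketch of that last step (the lists here are hypothetical stand-ins for what the fixed functions would return):

```python
def maak_dict(codes, organismen):
    # Pair each accession code (key) with its organism (value), one per hit.
    return dict(zip(codes, organismen))

codes = ["Q6GZX2", "Q6GZX1"]
organismen = ["Frog virus 3 (isolate Goorha) (FV-3)",
              "Frog virus 3 (isolate Goorha) (FV-3)"]
print(maak_dict(codes, organismen))
```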
How can I catch the resulting page of a post request using requests?

I'm using Python with Requests and I'm trying to send data to a form for a website that shortens links. I want to store the link to their shortened link in a variable once I use requests.post. I've read about status codes, but I don't quite know how to use them. Here's what the form for input looks like:

```html
<form method="post" action="paste.php">
    <div class="row">
        <div class="12u">
            <input type="text" name="name" id="name" placeholder="Name of paste" />
        </div>
    </div>
    <div class="row">
        <div class="12u">
            <textarea name="paste" id="paste" placeholder="Paste contents"></textarea>
        </div>
    </div>
    <div class="row">
        <div class="12u">
            <a type="submit" class="button form-button-submit">Upload</a>
            <a href="#" class="button button-alt form-button-reset">Undo</a>
        </div>
    </div>
</form>
```

Here's what I have so far:

```python
def create_paste(url):
    title = 'title'  # Define your title.
    paste = 'paste'  # Define your paste content.
    try:
        durk = requests.post(url, data={'name': title, 'paste': paste})
        if durk.status_code == 302:
            print('Pasted!')
        else:
            print('Not pasted!')
    except:
        raise
```

How can I go about doing this? Thanks.

EDIT: Solved the problem by re-reading the Requests documentation. If I allow redirects and use r.url, I can show the paste link.

```python
def derp():
    title = 'adsfjkl'  # Define your title.
    paste = 'asdfasdhfasdflkjashdfkjasht'  # Define your paste content.
    r = requests.post('http://website.com/paste.php',
                      data={'name': title, 'paste': paste},
                      allow_redirects=True)
    if r.status_code == 200:
        print('Pasted! %s' % (r.url))
    else:
        print('Not pasted!')
    print r.url
```
The easiest way would be to indeed use requests.post and then pass the result to BeautifulSoup and parse it. For instance:

```python
import requests
from bs4 import BeautifulSoup as BS

response = requests.get('http://stackoverflow.com/')
bs = BS(response.text)
# do some parsing using BeautifulSoup to get the link
```
How to output in a different directory?

I have this:

```python
from os import path

base_path = "C:\\texts\\*.txt"
for file in files:
    with open(file) as in_file, open(path.join(base_path, "%s_tokenized.txt" % file), "w") as out_file:
        data = in_file.readlines()
        for line in data:
            words = line.split()
            str1 = ','.join(words)
            out_file.write(str1)
            out_file.write("\n")
```

It produces tokenized files in the same directory it reads from. How can I output those out_files to a different directory such as "C:\\texts\\Tokenized"? I know there are ways to move the new files to another directory after producing them, but what I want to know is whether there is any way to write the new files to the other directory at the same time they are produced in the above code.
Is this what you're looking for:

```python
import os
import glob

source_pattern = 'c:/texts/*.txt'
output_directory = 'c:/texts/tokenized'

# Iterate over files matching source_pattern
for input_file in glob.glob(source_pattern):
    # build the output filename
    base, ext = os.path.splitext(os.path.basename(input_file))
    output_file = os.path.join(output_directory, base + '_tokenized' + ext)
    with open(input_file) as in_file, open(output_file, 'w') as out_file:
        for line in in_file:
            out_file.write(','.join(line.split()) + '\n')
```
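One caveat worth adding (not in the original answer): `open(output_file, 'w')` raises FileNotFoundError if the output directory does not exist yet, so it is worth creating it before the loop. A small sketch, using a temporary path so it runs anywhere (your real path would be `c:/texts/tokenized`):

```python
import os
import tempfile

# Example path only; substitute the real output directory.
output_directory = os.path.join(tempfile.gettempdir(), 'tokenized')

# exist_ok=True means no error if the directory is already there.
os.makedirs(output_directory, exist_ok=True)
print(os.path.isdir(output_directory))  # True
```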
Python 3: End of File error with no message

I'm having some trouble figuring out how to handle an EOFError without printing something after it. This is the section of the program I'm having trouble with:

```python
def main():
    try:
        k = float(input("Number? "))
        newton(k)
        print("The approximate square root of", k, "is:", newton(k))
        print("The error is:", (newton(k)) - (math.sqrt(k)))
    except EOFError:
        print("End of File")
```

I'm trying to make it so that the program doesn't print anything after the user presses Ctrl+D; it should be killed right after Ctrl+D. I've tried doing print("") but that creates an extra blank line. Thanks in advance.
```python
def main():
    try:
        k = float(input("Number? "))
        newton(k)
        print("The approximate square root of", k, "is:", newton(k))
        print("The error is:", (newton(k)) - (math.sqrt(k)))
    except EOFError:
        pass
```

As a separate note, I noticed that you are using 2 spaces for code indentation. It's good practice to use 4 spaces instead.
libvlc and dbus interface

I'm trying to create a basic media player using libvlc which will be controlled through dbus. I'm using the gtk and libvlc bindings for python. The code is based on the official example from the vlc website. The only thing I modified is adding the dbus interface to the vlc instance:

```python
# Create a single vlc.Instance() to be shared by (possible) multiple players.
instance = vlc.Instance()
print vlc.libvlc_add_intf(instance, "dbus")  # this is what I added.
                                             # returns 0, which is ok
```

All is well, the demo works and plays any video files, but for some reason the dbus control module doesn't work (I can't believe I just said the dreaded "doesn't work" words). I already have working client dbus code which binds to the MPRIS 2 interface. I can control a normal instance of the VLC media player - that works just fine - but with the above example nothing happens. The dbus control module is loaded properly, since libvlc_add_intf doesn't return an error and I can see the MPRIS 2 service in D-Feet (org.mpris.MediaPlayer2.vlc). Even in D-Feet, calling any of the methods of the dbus vlc object returns no error, but nothing happens.

Do I need to configure something else to make the dbus module control the libvlc player? Thanks

UPDATE

It seems that creating the vlc instance with a higher verbosity shows that the DBus calls are received, but they have no effect whatsoever on the player itself. Also, adding the RC interface to the instance instead of DBus has some problems too: when I run the example from the command line it drops me into the RC interface console where I can type control commands, but it has the same behaviour as DBus - nothing happens, no error, nada, absolutely nothing. It ignores the commands completely.

Any thoughts?

UPDATE 2

Here is the code that uses libvlc to create a basic player:

```python
from dbus.mainloop.glib import DBusGMainLoop
import gtk
import gobject
import sys
import vlc

from gettext import gettext as _

# Create a single vlc.Instance() to be shared by (possible) multiple players.
instance = vlc.Instance("--one-instance --verbose 2")

class VLCWidget(gtk.DrawingArea):
    """Simple VLC widget.

    Its player can be controlled through the 'player' attribute, which
    is a vlc.MediaPlayer() instance.
    """
    def __init__(self, *p):
        gtk.DrawingArea.__init__(self)
        self.player = instance.media_player_new()
        def handle_embed(*args):
            if sys.platform == 'win32':
                self.player.set_hwnd(self.window.handle)
            else:
                self.player.set_xwindow(self.window.xid)
            return True
        self.connect("map", handle_embed)
        self.set_size_request(640, 480)

class VideoPlayer:
    """Example simple video player.
    """
    def __init__(self):
        self.vlc = VLCWidget()

    def main(self, fname):
        self.vlc.player.set_media(instance.media_new(fname))
        w = gtk.Window()
        w.add(self.vlc)
        w.show_all()
        w.connect("destroy", gtk.main_quit)
        self.vlc.player.play()
        DBusGMainLoop(set_as_default = True)
        gtk.gdk.threads_init()
        gobject.MainLoop().run()

if __name__ == '__main__':
    if not sys.argv[1:]:
        print "You must provide at least 1 movie filename"
        sys.exit(1)
    if len(sys.argv[1:]) == 1:
        # Only 1 file. Simple interface
        p = VideoPlayer()
        p.main(sys.argv[1])
```

The script can be run from the command line like:

```
python example_vlc.py file.avi
```

The client code which connects to the vlc dbus object is too long to post, so instead pretend that I'm using D-Feet to get the bus connection and post messages to it. Once the example is running, I can see the player's dbus interface in D-Feet, but I am unable to control it. Is there anything else that I should add to the code above to make it work?
I can't see your implementation of your event loop, so it's hard to tell what might be causing commands to not be recognized or to be dropped. Is it possible your threads are losing the stack trace information and are actually throwing exceptions? You might get more responses if you added either a pseudo-code version of your event loop and DBus command parsing, or a simplified version.
Configuring Flask to correctly load Bootstrap js and css files

How can you use the "url_for" directive in Flask to correctly set things up so an html page that uses Bootstrap and RGraph works? Say my html page looks like this (partial snippet):

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <link href="scripts/bootstrap/dist/css/bootstrap.css" rel="stylesheet">
    <title>HP Labs: Single Pane Of Glass (Alpha)</title>
    <script src="scripts/RGraph/libraries/RGraph.common.core.js"></script>
    <script src="scripts/RGraph/libraries/RGraph.line.js"></script>
    <script src="scripts/RGraph/libraries/RGraph.common.effects.js"></script>
    <script src="scripts/RGraph/libraries/RGraph.line.js"></script>
    ......
</html>
```

Here's what I've done/want to do:

- Created a "templates" directory alongside my Flask module and placed this html file in it.
- Created a "static" directory alongside my Flask module, but am unsure where and how many "url_for" type statements to use and where they should go. So currently the "scripts" directory is a sub-directory of the "templates" directory (I know this is incorrect).
- I'd like to be able to reference all the Bootstrap and RGraph js and css correctly (right now I'm seeing lots of 404s).

Can anyone direct me on how to correctly configure Flask (running the dev server) to do this? Right now the js and css don't load. Thanks!
Put the scripts directory in your static subdirectory, then use:

```html
<link href="{{ url_for('static', filename='scripts/bootstrap/dist/css/bootstrap.css') }}" rel="stylesheet">
```

The pattern here is:

```
{{ url_for('static', filename='path/inside/the/static/directory') }}
```

which will be replaced with the correct URL for static resources, even if you ever switched all these files to different hosting (like a CDN).
OpenCV darken oversaturated webcam image

I have a (fairly cheap) webcam which produces images that are far lighter than they should be. The camera does have brightness correction - the adjustments are obvious when moving from light to dark - but it is consistently far too bright. I am looking for a way to reduce the brightness without iterating over the entire frame (OpenCV Python bindings on a Raspberry Pi). Does that exist? Or better, is there a standard way of sending hints to a webcam to reduce the brightness?

```python
import cv2

# create video capture
cap = cv2.VideoCapture(0)
window = cv2.namedWindow("output", 1)

while True:
    # read the frames
    _, frame = cap.read()
    cv2.imshow("output", frame)
    if cv2.waitKey(33) == 27:
        break

# Clean up everything before leaving
cv2.destroyAllWindows()
cap.release()
```
I forgot Raspberry Pi is just running a regular OS. What an awesome machine. Thanks for the code, which confirms that you just have a regular cv2 image. Simple vectorized scaling (without touching each pixel in Python) should be simple. The snippet below just scales every pixel; it would be easy to add a few lines to normalize the image if it has a major offset.

```python
import numpy
# ...
scale = 0.5  # whatever scale you want
frame_darker = (frame * scale).astype(numpy.uint8)
# ...
```

Does that look like the start of what you want?
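A related option, beyond the answer above: plain scaling darkens shadows as much as highlights, whereas a gamma curve compresses the bright end harder while leaving black and white in place. A minimal NumPy sketch:

```python
import numpy as np

def darken_gamma(frame, gamma=2.0):
    # gamma > 1 darkens midtones; 0 stays 0 and 255 stays 255.
    normalized = frame.astype(np.float64) / 255.0
    return (255.0 * normalized ** gamma).astype(np.uint8)

pixels = np.array([0, 128, 255], dtype=np.uint8)
print(darken_gamma(pixels))  # midtone 128 drops to 64; extremes unchanged
```

For the "hints to the webcam" part of the question, OpenCV also exposes capture properties, e.g. `cap.set(cv2.CAP_PROP_BRIGHTNESS, value)`, though whether that has any effect depends on the camera driver.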
Need help for scrapy with CrawlSpider

I am new to scrapy and got stuck when trying to extract data from multiple pages using CrawlSpider. Here is my code:

```python
class ivwSpider(CrawlSpider):
    name = "ivw-online"
    allowed_domains = ["ausweisung.ivw-online.de/"]
    start_urls = ["http://ausweisung.ivw-online.de/index.php?i=1161&a=o44847"]

    pagelink = LinkExtractor(allow=('index.php?i=1161&a=o\d{5}'))
    print(pagelink)

    rules = (Rule(pagelink, callback='parse_item', follow=True), )

    def parse_item(self, response):
        sel = Selector(response)
        item = IVWItem()
        item["Type"] = sel.xpath('//div[@class ="statistik"]//tr[1]//td/text()')[0].extract()
        item["Zeitraum"] = sel.xpath('//div[@class ="tabelle"]//tr[1]//div[@style="width:210px; text-align:center;"]/text()')[0].extract()
        item["Company"] = sel.xpath('//div[@class ="stammdaten"]//tr//td/text()').extract()[-1]
        item["Video_PIs"] = sel.xpath('//div[@class ="tabelle"]//tr[11]//td[@class ="z5"]/text()').extract()
        item["Video_Visits"] = sel.xpath('//div[@class ="tabelle"]//tr[11]//td[@class ="z4"]/text()').extract()
        item["PIs"] = sel.xpath('//div[@class ="statistik"]//tr[3]//td/text()')[1].extract()
        item["Visits"] = sel.xpath('//div[@class ="statistik"]//tr[1]//td/text()')[1].extract()
        return item
```

When the code is executed, it returns nothing. Is it a problem with the rules definition? Any help here is really appreciated!
Since the start_url is already a detail page on which I could not find a list of the other competitors, I went up one level in the website hierarchy to the url http://ausweisung.ivw-online.de/index.php?i=116 as the start. There is a table with a long list of competitors. From this start_url you can fetch the urls of all the companies and create requests directly with your callback, like so:

```python
class ivwSpider(scrapy.Spider):
    name = "ivw-online"
    allowed_domains = ["ausweisung.ivw-online.de"]
    start_urls = ["http://ausweisung.ivw-online.de/index.php?i=116"]

    def parse(self, response):
        sel_rows = response.xpath('//div[@class="daten"]/div[@class="tabelle"]//tr')
        for sel_row in sel_rows:
            url_detail = sel_row.xpath('./td[@class="a_main_txt"][1]/a/@href').extract_first()
            if url_detail:
                url = response.urljoin(url_detail)
                # print url
                yield scrapy.Request(url, callback=self.parse_item)

    def parse_item(self, response):
        sel = Selector(response)
        item = IVWItem()
        item["Type"] = sel.xpath('//div[@class ="statistik"]//tr[1]//td/text()')[0].extract()
        item["Zeitraum"] = sel.xpath('//div[@class ="tabelle"]//tr[1]//div[@style="width:210px; text-align:center;"]/text()')[0].extract()
        item["Company"] = sel.xpath('//div[@class ="stammdaten"]//tr//td/text()').extract()[-1]
        item["Video_PIs"] = sel.xpath('//div[@class ="tabelle"]//tr[11]//td[@class ="z5"]/text()').extract()
        item["Video_Visits"] = sel.xpath('//div[@class ="tabelle"]//tr[11]//td[@class ="z4"]/text()').extract()
        item["PIs"] = sel.xpath('//div[@class ="statistik"]//tr[3]//td/text()')[1].extract()
        item["Visits"] = sel.xpath('//div[@class ="statistik"]//tr[1]//td/text()')[1].extract()
        yield item
```

Please note that the base class is no longer CrawlSpider but Spider.
How to calculate shifted columns over Groups in Python Pandas I have the following pandas dataframe:

Circuit-ID  DATETIME          LATE?
78899       07/06/2018 15:30  1
78899       08/06/2018 17:30  0
78899       09/06/2018 20:30  1
23544       12/07/2017 23:30  1
23544       13/07/2017 19:30  0
23544       14/07/2017 20:30  1

And I need to calculate the shifted value for the DATETIME and LATE? columns to get the following result:

Circuit-ID  DATETIME          LATE?  DATETIME-1        LATE-1
78899       07/06/2018 15:30  1      NA                NA
78899       08/06/2018 17:30  0      07/06/2018 15:30  1
78899       09/06/2018 20:30  1      08/06/2018 17:30  0
23544       12/07/2017 23:30  1      NA                NA
23544       13/07/2017 19:30  0      12/07/2017 23:30  1
23544       14/07/2017 20:30  1      13/07/2017 19:30  0

I tried the following code:

df.groupby(['Circuit-ID', 'DATETIME', 'LATE?']) \
    .apply(lambda x: x.sort_values(by=['Circuit-ID', 'DATETIME', 'LATE?'], ascending=[True, True, True]))['LATE?'] \
    .transform(lambda x: x.shift()) \
    .reset_index(name='LATE-1')

But I keep getting erroneous results on some rows where the first shifted value is different from NaN. Could you please indicate a cleaner way to get the desired result?
Use groupby and shift, then join it back:

df.join(df.groupby('Circuit-ID').shift().add_suffix('-1'))

   Circuit-ID          DATETIME  LATE?        DATETIME-1  LATE?-1
0       78899  07/06/2018 15:30      1               NaN      NaN
1       78899  08/06/2018 17:30      0  07/06/2018 15:30      1.0
2       78899  09/06/2018 20:30      1  08/06/2018 17:30      0.0
3       23544  12/07/2017 23:30      1               NaN      NaN
4       23544  13/07/2017 19:30      0  12/07/2017 23:30      1.0
5       23544  14/07/2017 20:30      1  13/07/2017 19:30      0.0

A similar solution uses concat for joining:

pd.concat([df, df.groupby('Circuit-ID').shift().add_suffix('-1')], axis=1)

   Circuit-ID          DATETIME  LATE?        DATETIME-1  LATE?-1
0       78899  07/06/2018 15:30      1               NaN      NaN
1       78899  08/06/2018 17:30      0  07/06/2018 15:30      1.0
2       78899  09/06/2018 20:30      1  08/06/2018 17:30      0.0
3       23544  12/07/2017 23:30      1               NaN      NaN
4       23544  13/07/2017 19:30      0  12/07/2017 23:30      1.0
5       23544  14/07/2017 20:30      1  13/07/2017 19:30      0.0
Python Decorator in Inheritance I have the following code:

class Foo:
    iterations = 3

class Bar(Foo):
    @test_decorator(<????>)
    def hello(self):
        print("Hello world!")

def test_decorator(input):
    def my_decorator(func):
        def wrapper(*args, **kwargs):
            print("Something is happening before the function is called.")
            for _ in range(input):
                func(*args, **kwargs)
            print("Something is happening after the function is called.")
        return wrapper
    return my_decorator

I would like to pass my iterations variable, which is in the parent class, to the decorator test_decorator in my child class, instead of <????>. I tried the following ways:

self.iterations doesn't work since we don't have access to self.
Foo.iterations doesn't work because it will act as a constant: if we change iterations, "Hello world!" will be displayed only 3 times instead of 5 (as in the example below).

Example:

b = Bar()
b.iterations = 5
b.hello()
# "Hello world!" will be displayed 3 times

Is there a way to do this, or is it an anti-pattern in Python?
I found a solution to your problem. The idea is to write your own decorator. Since that decorator is a function wrapping your method, it has access to the class instance through the first element of *args. From there, you can access the iterations variable:

def decorator(iterations, source):
    def inner_transition(func):
        return func
    return inner_transition

def custom_transition(source):
    def inner_custom_transition(func):
        def wrapper(*args, **kwargs):
            iterations = args[0].iterations
            return decorator(iterations=iterations, source=source)(func)(*args, **kwargs)
        return wrapper
    return inner_custom_transition

class Foo:
    iterations = 3

class Bar(Foo):
    @custom_transition(source="my_source")
    def hello(self, string_to_show, additional_string="default"):
        print(string_to_show)
        print(additional_string)

bar = Bar()
bar.hello("hello", additional_string="++++")

Result:

hello
++++
TensorFlow: Running the DNN Iris Example I am attempting to run the example provided on the official TensorFlow website found here: https://www.tensorflow.org/versions/r0.10/tutorials/tflearn/index.html For completeness, the code in question that I am running is the following:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
import numpy as np

# Data sets
IRIS_TRAINING = "iris_training.csv"
IRIS_TEST = "iris_test.csv"

# Load datasets.
training_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TRAINING, target_dtype=np.int)
test_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TEST, target_dtype=np.int)

# Build 3 layer DNN with 10, 20, 10 units respectively.
classifier = tf.contrib.learn.DNNClassifier(hidden_units=[10, 20, 10], n_classes=3, model_dir="/tmp/iris_model")

# Fit model.
classifier.fit(x=training_set.data, y=training_set.target, steps=2000)

# Evaluate accuracy.
accuracy_score = classifier.evaluate(x=test_set.data, y=test_set.target)["accuracy"]
print('Accuracy: {0:f}'.format(accuracy_score))

# Classify two new flower samples.
new_samples = np.array([[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=float)
y = classifier.predict(new_samples)
print('Predictions: {}'.format(str(y)))

Now, it seems when I run this example, I am encountering the following error:

WARNING:tensorflow:Change warning: `feature_columns` will be required after 2016-08-01.
Instructions for updating:
Pass `tf.contrib.learn.infer_real_valued_columns_from_input(x)` or `tf.contrib.learn.infer_real_valued_columns_from_input_fn(input_fn)` as `feature_columns`, where `x` or `input_fn` is your argument to `fit`, `evaluate`, or `predict`.
WARNING:tensorflow:Setting feature info to TensorSignature(dtype=tf.float32, shape=TensorShape([Dimension(None), Dimension(4)]), is_sparse=False)
WARNING:tensorflow:Setting targets info to TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(None)]), is_sparse=False)
E tensorflow/core/client/tensor_c_api.cc:485] Tensor name "hiddenlayer_2/weights/Adagrad" not found in checkpoint files /tmp/iris_model/model.ckpt-16000-?????-of-00001
     [[Node: save/restore_slice_18 = RestoreSlice[dt=DT_FLOAT, preferred_shard=0, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/restore_slice_18/tensor_name, save/restore_slice_18/shape_and_slice)]]
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 730, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 712, in _run_fn
    status, run_metadata)
  File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__
    next(self.gen)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors.py", line 450, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.NotFoundError: Tensor name "hiddenlayer_2/weights/Adagrad" not found in checkpoint files /tmp/iris_model/model.ckpt-16000-?????-of-00001
     [[Node: save/restore_slice_18 = RestoreSlice[dt=DT_FLOAT, preferred_shard=0, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/restore_slice_18/tensor_name, save/restore_slice_18/shape_and_slice)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 26, in <module>
    steps=2000)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 240, in fit
    max_steps=max_steps)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 578, in _train_model
    max_steps=max_steps)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/graph_actions.py", line 276, in _supervised_train
    scaffold=scaffold) as super_sess:
  File
"/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/supervised_session.py", line 212, in __init__
    self._sess = recoverable_session.RecoverableSession(self._create_session)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/recoverable_session.py", line 46, in __init__
    WrappedSession.__init__(self, sess_factory())
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/supervised_session.py", line 232, in _create_session
    init_fn=self._scaffold.init_fn)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/session_manager.py", line 164, in prepare_session
    max_wait_secs=max_wait_secs, config=config)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/session_manager.py", line 224, in recover_session
    saver.restore(sess, ckpt.model_checkpoint_path)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1129, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 382, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 655, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 723, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 743, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.NotFoundError: Tensor name "hiddenlayer_2/weights/Adagrad" not found in checkpoint files /tmp/iris_model/model.ckpt-16000-?????-of-00001
     [[Node: save/restore_slice_18 = RestoreSlice[dt=DT_FLOAT, preferred_shard=0, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/restore_slice_18/tensor_name, save/restore_slice_18/shape_and_slice)]]
Caused by op 'save/restore_slice_18', defined at:
  File "test.py", line 26, in <module>
    steps=2000)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 240, in fit
    max_steps=max_steps)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 578, in _train_model
    max_steps=max_steps)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/graph_actions.py", line 252, in _supervised_train
    keep_checkpoint_max=keep_checkpoint_max)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/supervised_session.py", line 152, in __init__
    lambda: training_saver.Saver(sharded=True,
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/supervised_session.py", line 164, in _get_or_default
    op = default_constructor()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/supervised_session.py", line 153, in <lambda>
    max_to_keep=keep_checkpoint_max))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 861, in __init__
    restore_sequentially=restore_sequentially)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 515, in build
    filename_tensor, per_device, restore_sequentially, reshape)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 312, in _AddShardedRestoreOps
    name="restore_shard"))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 272, in _AddRestoreOps
    values = self.restore_op(filename_tensor, vs, preferred_shard)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 187, in restore_op
    preferred_shard=preferred_shard)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/io_ops.py", line 203, in _restore_slice
    preferred_shard, name=name)
  File
"/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 359, in _restore_slice
    preferred_shard=preferred_shard, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 703, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2310, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1232, in __init__
    self._traceback = _extract_stack()

However, if I change the "hidden_units" variable to [10] instead of [10, 20, 10], the code runs (albeit still with the warnings, but it runs without error). Is there some reason that adding other layers to this network results in the errors I am receiving? Any input would be helpful, thanks!
You need to change your model_dir if you update the model parameters. Likewise, deleting what was in the /tmp/iris_model folder had the same effect: the estimator keeps the model state there and will try to update it when you re-fit, or fail if you change the parameters.
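For example, the deletion step could look like the following shell one-liner (the path is taken from the question's model_dir; this is just an illustration of the workaround, not part of the original answer):

```shell
# Delete the stale checkpoint saved with the old hidden_units;
# the next fit() with hidden_units=[10, 20, 10] then starts clean.
rm -rf /tmp/iris_model
```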
Numpy outer addition of subarrays Is there a way, in numpy, to perform what amounts to an outer addition of subarrays? That is to say, I have 2 arrays of the form 2x2xNxM, which may each be considered a stack of 2x2 matrices N high and M wide. I would like to add each of these matrices to each matrix from the other array, to form a 2x2xNxMxNxM array in which the last four indices correspond to the indices in my initial two arrays so that I can index output[:,:,x1,y1,x2,y2] == a1[:,:,x1,y1] + a2[:,:,x2,y2]. If these were arrays of scalars, it would be trivial, all I'd have to do is:

A, B = a.ravel(), b.ravel()
four_D = (a[..., np.newaxis] + b).reshape(*a1.shape, *a2.shape)
for (x1, y1, x2, y2), added in np.ndenumerate(four_D):
    assert added == a1[x1, y1] + a2[x2, y2]

However, this doesn't work for the case where a and b consist of matrices. I could, of course, use nested for loops, but my dataset is going to be fairly large, and I'm expecting to run this over multiple datasets. Is there an efficient way to do this?
Extend arrays to have more dimensions and then leverage broadcasting:

output = a1[...,None,None] + a2[...,None,None,:,:]

Sample run:

In [38]: # Setup input arrays
    ...: N = 3
    ...: M = 4
    ...: a1 = np.random.rand(2,2,N,M)
    ...: a2 = np.random.rand(2,2,N,M)
    ...:
    ...: output = np.zeros((2,2,N,M,N,M))
    ...: for x1 in range(N):
    ...:     for x2 in range(N):
    ...:         for y1 in range(M):
    ...:             for y2 in range(M):
    ...:                 output[:,:,x1,y1,x2,y2] = a1[:,:,x1,y1] + a2[:,:,x2,y2]
    ...:
    ...: output1 = a1[...,None,None] + a2[...,None,None,:,:]
    ...:
    ...: print np.allclose(output, output1)
True
Printing dictionary in a While Loop I have been looking around to see if anyone has actually done it but couldn't find it, so hoping I can get some help here.

newDict = {'Jan':31, 'Feb':29, 'Mar':31, 'Apr':30, 'May':31, 'Jun':30, 'Jul':31, 'Aug':30}

I created this dict and I want to use a while loop to output it this way:

Jan 31
Feb 29
Mar 31
Apr 30
May 31
Jun 30
Jul 31
Aug 30

I am able to do it with a for loop, just curious how it can be done with a while loop.
You can make your dictionary an iterator by calling iteritems (Python 2.x), or iter on the items() (Python 3.x):

# Python 2.x
from __future__ import print_function
items = newDict.iteritems()

# Python 3.x
items = iter(newDict.items())

while True:
    try:
        item = next(items)
        print(*item)
    except StopIteration:
        break

Note: We're importing print_function on Python 2.x because print would be a statement instead of a function, and hence the line print(*item) would actually fail.
IndentationError: expected an indented block after 'if' statement on line 32 The error points to line 34 where finally is, but I tried to reorganize it in many different positions and it insists that the indentation is incorrect. Though as far as I learned, the try, except and finally blocks should be aligned together. Can anyone help me figure out what the hell it wants from me?

try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(('10.0.2.15', 666))
    while True:
        data = sock.recv(1024)
        url = data[:data.find('|')]
except:
    if url.count("/") == 2:
finally:
The error is actually the lack of a statement suite on line 33, after the if statement on line 32. The omission is detected on line 34 because line 33 is blank and finally: is not the required statement or block of statements. The finally: itself would be fine if it also were followed by a suite.

except:
    if url.count("/") == 2:
        # You need a statement suite here!!!
finally:
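A minimal runnable sketch of the corrected layout (the socket work from the question is replaced here with a plain string slice so the structure can be tested in isolation):

```python
def parse_url(data):
    """Corrected layout: every `if`, `except:`, and `finally:` clause
    owns an indented suite; `pass` is the minimal legal one."""
    url = ""
    try:
        url = data[:data.find('|')]
    except Exception:
        if url.count("/") == 2:
            pass  # a suite is required here; this was the missing block
    finally:
        pass  # finally: also needs its own suite
    return url

print(parse_url("http://example.com/path|rest"))  # -> http://example.com/path
```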
Need help installing Anaconda and Spyder so I'm having some bad issues with my conda installation. I'm running this on MacOS Monterey 12.4. I installed the latest version "Anaconda3-2022.05-MacOSX-x86_64.sh". I then installed an env for spyder:

conda create -n spyder spyder

Everything seems to go ok, however every time I attempt to run my code I get a spyder internal problem.

Traceback (most recent call last):
  File "/Users/person.blank/anaconda3/envs/spyder/lib/python3.9/site-packages/qtconsole/base_frontend_mixin.py", line 138, in _dispatch
    handler(msg)
  File "/Users/person.blank/anaconda3/envs/spyder/lib/python3.9/site-packages/spyder/plugins/ipythonconsole/widgets/debugging.py", line 278, in _handle_input_request
    return super(DebuggingWidget, self)._handle_input_request(msg)
  File "/Users/person.blank/anaconda3/envs/spyder/lib/python3.9/site-packages/qtconsole/frontend_widget.py", line 512, in _handle_input_request
    self._readline(msg['content']['prompt'], callback=callback, password=msg['content']['password'])
  File "/Users/person.blank/anaconda3/envs/spyder/lib/python3.9/site-packages/qtconsole/console_widget.py", line 2422, in _readline
    self._show_prompt(prompt, newline=False, separator=False)
TypeError: _show_prompt() got an unexpected keyword argument 'separator'

After this I removed and recreated the spyder env with:

conda create -n spyder -c conda-forge spyder

When I attempt to launch spyder it never connects to the kernel in the IPython Console. So at this point I tried doing another:

conda update --all

This updates the files, but when I launch spyder again I get missing dependencies:

Mandatory:
qtpy >=2.1.0 : 2.0.1 (NOK)

Should it be this difficult to get anaconda and spyder working? I've always had it working without issue in the past. Anyway any help in solving this would be gratefully appreciated.
So after researching this extensively I thought I would provide the solution that ultimately worked. To resolve this issue you need to create an env from the conda-forge repo, in order to get the compatible versions of spyder and the spyder kernels.

conda create -n spyder-cf -c conda-forge spyder jupyter_client=7.3.1

So just breaking this down for completeness:

Creates an env with the name spyder-cf:
    create -n spyder-cf
Use the conda-forge repo:
    -c conda-forge
Install spyder and the jupyter client from conda-forge:
    spyder jupyter_client=7.3.1

Once done you can activate the spyder-cf env and run spyder. However if you wish to use alternative envs then simply create the env you like and open it from spyder. For example, if I want a python 3.10 env I would do the following:

conda create -n env310 -c conda-forge python=3.10 spyder-kernels

It's important to install this from conda-forge so that you get the correct spyder kernel headers.

Change to the spyder-cf env:
    activate spyder-cf
Activate your python 3.10 env:
    spyder activate env310
len function not behaving as expected I'm building a function that's applying an additional tag to an aws instance based on a dict of tags that are passed in.

Expected behavior: When more than one TEST_CUSTOMER_ID is passed in, the function should return the following dictionary of tags:

{'foo': 'bar', 'Account': 'shared'}

The current behavior of the unit test function is only returning:

{'foo': 'bar', 'Account': [['test_customer_id1', 'test_customer_id2']]}

How can I fix this?

def get_acct_value(tags, customer_id, ceid):
    customer_id = [customer_id]
    if "Account" in tags:
        if tags["Account"] == customer_id:
            pass
        else:
            if len(customer_id) > 1:
                tags["Account"] = "shared"
            elif len(customer_id) == 1:
                tags["Account"] = customer_id
            else:
                raise exceptions.CustomerNotFoundError(f"No customer(s) found on {ceid}")
    else:
        if len(customer_id) > 1:
            tags["Account"] = "shared"
        elif len(customer_id) == 1:
            tags["Account"] = customer_id
        else:
            raise exceptions.CustomerNotFoundError(f"No customer(s) found on {ceid}")
    return tags

Unit test:

TAGS_NO_VALUE = {'foo': 'bar'}
TEST_CUSTOMER_ID_LIST = ["test_customer_id1", "test_customer_id2"]
TEST_CEID = "test_ceid"

def test_get_account_value_customer_list():
    response = tagging.get_acct_value(TAGS_NO_VALUE, TEST_CUSTOMER_ID_LIST, TEST_CEID)
    print(response)

Other unit tests. All three tests should return {'Account': customer_id, 'foo': 'bar'}:

TEST_CUSTOMER_ID = "test_customer_id"
TAGS_UNEXPECTED_VALUE = {'Account': '', 'foo': 'bar'}
TAGS_EXPECTED_VALUE = {'Account': customer_id, 'foo': 'bar'}

def test_get_acct_value_no_value():
    response = tagging.get_acct_value(TAGS_NO_VALUE, TEST_CUSTOMER_ID, TEST_CEID)
    print(response)

def test_get_acct_value_unexpected_value():
    response = tagging.get_acct_value(TAGS_UNEXPECTED_VALUE, TEST_CUSTOMER_ID, TEST_CEID)
    print(response)

def test_get_acct_value_expected_value():
    response = tagging.get_acct_value(TAGS_EXPECTED_VALUE, TEST_CUSTOMER_ID, TEST_CEID)
    print(response)
I think you're complicating yourself a great deal here. Let's break out this function differently:

def get_acct_value(tags, customer_ids, ceid):
    if len(customer_ids) == 0:
        raise exceptions.CustomerNotFoundError(f"No customer(s) found on {ceid}")
    tag = "shared" if len(customer_ids) > 1 else customer_ids[0]
    tags["Account"] = tag
    return tags

First, we know that if customer_ids, a list, is empty, we should raise an exception. Do this first. It's labelled 'bounds checking', and should be done before you try to process any data; that way you don't have to redo it on every branch of your code. Secondly, we know that if the list is greater than one, we want our tag to be 'shared', meaning we have more than one customer id. Let's set a temporary variable with the name tag to 'shared' if we have a list greater than one. If the list is exactly one, we set it to the only available customer id. Finally, we do the actual work: setting the account to the tag we have selected. Lines 4 and 5 could be combined to tags["Account"] = "shared" if len(customer_ids) > 1 else customer_ids[0].

Notably, your proximal issue is that the type of customer_ids being passed in must be a list. If it is a solitary value then you'll have an issue.
You try to solve this by just casting it to a list, but if you want to accept either a list or a single value, you're better off doing something like this:

customer_ids = customer_ids if isinstance(customer_ids, list) else [customer_ids]

This would result in something like:

def get_acct_value(tags, customer_ids, ceid):
    customer_ids = list() if customer_ids is None else customer_ids
    customer_ids = customer_ids if isinstance(customer_ids, list) else [customer_ids]
    print(f"type={type(customer_ids)} {customer_ids=}")
    if len(customer_ids) == 0:
        raise exceptions.CustomerNotFoundError(f"No customer(s) found on {ceid}")
    tags["Account"] = "shared" if len(customer_ids) > 1 else customer_ids[0]
    return tags

I've added an initial check for customer_ids to ensure it is not None, which would break your second check (to convert the value to a list if it is not one), since list(None) throws a TypeError.

Note that I would sooner name this function update_account_tags() or something like that, since it returns no value, just a dictionary of tags, which has an updated value for account.

Some guidance: if you find yourself doing a check, if a in b, where b is a dictionary, and you're planning to do something with a, the best thing to do is use the dictionary's function get():

v = b.get(a, my_default)

my_default here can be whatever you want, and by default is None. So these are equivalent:

v = b.get(a)
v = b[a] if a in b else None

Secondly, if you find yourself in a situation where you're doing a check like this:

if tags["Account"] == customer_id:
    pass
else:
    tags["Account"] = customer_id

You might as well simply do this:

tags["Account"] = customer_id

The result is the same, and it's equally computationally complex. (If customer_id is replaced with a function like get_customer_id() this may not be entirely true, but as a first instinct it'll do you well.)
Setting of Server Side Cookie is failing - Python Tornado Framework I am trying to set server side cookies (sessions) on the client side, but it is failing. The process is as follows:

1) Initiate a call to an API residing on abc.com from blog.abc.com.
2) The API from abc.com is going to return a response object to blog.abc.com.

I am getting the set_cookie in the response object from the API, but on the client side it is not being set! The following are the request and response headers from abc.com when I called the API from blog.abc.com:

Request Headers:
    Host: abc.com
    Connection: keep-alive
    Accept: */*
    Origin: http://blog.abc.com
    Referer: http://blog.abc.com/

Response Headers:
    Connection: keep-alive
    Set-Cookie: session_token="2|1:0|10:1474371987|13:session_token|44:NDFjZjNiZjFiYjcwNDk4ZTk4NDllYmRhNmNkYzFjZTA=|f60c277db4f5f38ec31f55bc581ddcf997f38fa493276508156f983500de4b25"; expires=Wed, 21 Sep 2016 11:46:27 GMT; Path=/
    Access-Control-Allow-Credentials: true
    Access-Control-Allow-Origin: *
    X-Frame-Options: DENY

But on the client side (blog.abc.com), there is no cookie ("session_token"). Can anyone please help. Thanks in advance.
The cookie should still be getting set correctly on abc.com, but you can't see that cookie from javascript on blog.abc.com. This is part of the same-origin policy: you can't see cookies that are set for another domain (regardless of access-control-allow-*). If you need this information in the client, it needs to be in the body of the response.
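As a sketch of that suggestion (the helper name here is hypothetical, not Tornado API): instead of relying on Set-Cookie, serialize the token into the JSON response body, which the cross-origin JavaScript on blog.abc.com can read once CORS allows the request:

```python
import json

def make_token_response(session_token):
    # Hypothetical helper: put the token in the response body instead of
    # a cookie, since blog.abc.com's JS cannot read abc.com's cookies.
    return json.dumps({"session_token": session_token})

body = make_token_response("example-token")
print(json.loads(body)["session_token"])  # -> example-token
```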
Stepping through entities in Python 3.3 html.parser I have the following Parser:

class Parser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.tableCount = 0

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            for attr in attrs:
                if attr[0] == "class" and attr[1] == "space":
                    ## need to do some processing here

In place of the comment, what I need to do is step through all the HTML entities after this point until the end of the table tag (this code is only run when tag == table as shown above). How would I do that? I cannot see any way to step through all the tags under this tag. Please note I cannot use any external library such as BeautifulSoup (just the Python standard library).
class Parser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.inTable = False

    def handle_starttag(self, tag, attrs):
        if tag == "table" and ('class', 'space') in attrs:
            self.inTable = True
        if self.inTable:
            doSomething()

    def handle_endtag(self, tag):
        if tag == "table":
            self.inTable = False

I guess xml.etree.ElementTree might be easier to use for this kind of situation.
Recommended way to share pandas data frames I am working on a set of visualization tools. All the data sources have different characteristics, and the data is useful by itself, so I'm building individual front ends. Each tool stores its data in HDF5 via pandas. However, I do want the tools to cooperate, so that if they do have timeframes and data sources that overlap, the data can be included in other tools. Is there a recommended way of sharing pandas dataframes, so that when I export it on one system and import it into another, nothing is lost? to_json doesn't work, based on https://github.com/pydata/pandas/issues/9146. There are some references to in other places saying that CSVs aren't guaranteed to export/import equivalently, either. I'm looking at to_html, possibly, but some of the dataframes have multiindexes, and I'm not sure how well that will work. But I don't see any recommended ways of sharing data in the pandas doc, so I'm not sure if there is something people have found to work better than others.
For anyone reading this in the future, I ended up going with to_msgpack; right now, you can't write a dataframe out to a file handle (see https://github.com/pydata/pandas/issues/10491). There is a workaround available, but I'd prefer not to have to use that when I can use to_msgpack directly to a file handle.
Can I manipulate how sorted() organizes things? I'm working with a large amount of data (a list of tuples) that I would like to organize. To be more specific:

# my characters for the items in the strings are 1-9,a-e
# the results of my previous program produce a list of tuples
# e.g. ('string', int), where int is the count of occurrence of that string in my data
# my program currently lists them by count order, starting highest to lowest

>>> print results  # results from the previous part of my code
[('7b7', 23522), ('dcd', 23501) ... ('ccc', 1)]
>>> for three_grams in results:
        print (sorted(three_grams))
[23522, '7b7']
[23501, 'dcd']
...
[1, 'ccc']

I'm not sure exactly why it switches the int and the string... but I want to sort them just the opposite way. Ideally,

[('111', 803), ('112', 2843), ('113', 10) ... ('fff', 12)]

Is there a way to manipulate how the sorted() function is sorting? Can I make it sort by 1-9a-e in the string bits of the tuples instead? (Also, my previous program for generating these results does not print results with zero counts, and I'd like some help with this. Not sure if I should post that here or make another discussion question with my entire code there? What would be stackoverflow etiquette? I'm still new.)
You are sorting the individual result. You need to sort all the results. sorted can take a key parameter. From the documentation: key specifies a function of one argument that is used to extract a comparison key from each list element: key=str.lower. The default value is None (compare the elements directly). We will use result[0] as the key for comparison, i.e. '7b7', 'dcd', and 'ccc':

>>> results = [('7b7', 23522), ('dcd', 23501), ('ccc', 1)]
>>> sorted(results, key=lambda result: result[0])
[('7b7', 23522), ('ccc', 1), ('dcd', 23501)]

If you don't like the lambda, you can use itemgetter:

>>> from operator import itemgetter
>>> sorted(results, key=itemgetter(0))
[('7b7', 23522), ('ccc', 1), ('dcd', 23501)]
Determine a bounding rectangle around a diagonal line A user will define a line on screen which will have, when drawn, a given thickness (or width).I now need to be able to determine the coordinates of a bounding rectangle around this.I have the coordinates A and B, along with the line thickness (W).How can I calculate the coordinates A1, A2, B1 and B2.I searched but was unable to find a question corresponding to this already asked.
Dx = Xb - Xa
Dy = Yb - Ya
D = sqrt(Dx * Dx + Dy * Dy)
Dx = 0.5 * W * Dx / D
Dy = 0.5 * W * Dy / D

This computes (Dx, Dy), a vector of length W/2 in the direction of AB. Then (-Dy, Dx) is the perpendicular vector.

Xmin = min(Xa, Xb) - abs(Dy)
Xmax = max(Xa, Xb) + abs(Dy)
Ymin = min(Ya, Yb) - abs(Dx)
Ymax = max(Ya, Yb) + abs(Dx)

Update: I answered for the AABB by mistake. For the four corners of the stroke:

Xa - Dy, Ya + Dx
Xa + Dy, Ya - Dx
Xb - Dy, Yb + Dx
Xb + Dy, Yb - Dx
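The corner formulas above translate directly into a small Python helper (the function name is mine, not from the answer):

```python
import math

def stroke_corners(xa, ya, xb, yb, w):
    """Return the four corners of the segment from A to B drawn with
    width w: (dx, dy) is a vector of length w/2 along AB, and
    (-dy, dx) is its perpendicular."""
    dx, dy = xb - xa, yb - ya
    d = math.hypot(dx, dy)
    dx, dy = 0.5 * w * dx / d, 0.5 * w * dy / d
    return [(xa - dy, ya + dx), (xa + dy, ya - dx),
            (xb - dy, yb + dx), (xb + dy, yb - dx)]

# Horizontal segment of width 2: corners offset by +/-1 in y.
print(stroke_corners(0, 0, 10, 0, 2))
```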
How to make an array of buttons which change text upon clicking using Tkinter I'm trying to implement a simple GUI for the game of Tic Tac Toe using Tkinter. As a first step, I'm trying to make an array of buttons which change from being unlabeled to having the "X" label when clicked. I've tried the following:

import Tkinter as tk

class ChangeButton:
    def __init__(self, master, grid=np.diag(np.ones(3))):
        frame = tk.Frame(master)
        frame.pack()
        self.grid = grid
        self.buttons = [[tk.Button()]*3]*3
        for i in range(3):
            for j in range(3):
                self.buttons[i][j] = tk.Button(frame, text="", command=self.toggle_text(self.buttons[i][j]))
                self.buttons[i][j].grid(row=i, column=j)

    def toggle_text(self, button):
        if button["text"] == "":
            button["text"] = "X"

root = tk.Tk()
root.title("Tic Tac Toe")
app = ChangeButton(root)
root.mainloop()

However, the resulting window looks like this [screenshot omitted], and the buttons don't change when clicked. Any ideas why this does not work?
The primary problem is that with command=self.toggle_text(self.buttons[i][j]), you invoke the callback function and bind its result to command. Instead, you have to bind the function itself to command, or a lambda that will invoke that function with the right parameters. A naive way of doing this would look like this:

command=lambda: self.toggle_text(self.buttons[i][j])  # won't work!

But this will not work inside the loop, as the variables inside the lambda are evaluated when the function is executed, i.e. after the loop, i.e. i and j will take on the last value for each of the functions. For a more detailed explanation, see e.g. here. One way to fix this is to declare those variables as parameters to the lambda, and at the same time use the current values from the loop as their default values, i.e. lambda i=i, j=j: .... This way, i and j are evaluated when the function is declared, not when it is called. Your command and the surrounding loop would then look like this:

for i in range(3):
    for j in range(3):
        self.buttons[i][j] = tk.Button(frame, text="",
                command=lambda i=i, j=j: self.toggle_text(self.buttons[i][j]))
        self.buttons[i][j].grid(row=i, column=j)

And there is another problem, unrelated to the first, with the way you initialize the self.buttons list. By doing [[tk.Button()]*3]*3, the list will hold three references to the same list, each holding three references to the same button. See e.g. here for a more in-depth discussion. Also, you do not need to initialize the buttons in the list at all, as you set those afterwards, in the loop. I'd suggest using a nested list comprehension instead:

self.buttons = [[None for _ in range(3)] for _ in range(3)]
Append a series of strings to a Pandas column I'm a Pandas newbie and have written some code that should append a dictionary to the last column in a row.The last column is named "Holder"Part of my code, which offends the pandas engine is shown belowdf.loc[df[innercat] == -1, 'Holder'] += str(odata)I get the error messageTypeError: ufunc 'add' did not contain a loop with signature matching types dtype('S75') dtype('S75') dtype('S75')When I run my code replacing the "+=" with "=" the code runs just fine although I only get part of the data I want.What am I doing wrong? I've tried removing the str() cast and it still works as an assignment, not an append.Further clarification:Math1 Math1_Notes Physics1 Physics1_Notes Chem1 Chem1_Notes Bio1 Bio1_Notes French1 French1_Notes Spanish1 Spanish1_Notes Holder-1 Gr8 student 0 0 0 0 -1 Foo NaN0 0 0 0 0 -1 Good student NaN0 0 -1 So so 0 0 0 NaN0 -1 Not serious -1 Hooray -1 Voila 0 NaNMy original dataset contains over 300 columns of data, but I've created an example that captures the spirit of what I'm trying to do. Imagine a college with 300 departments each offering 1(or more) courses. The above data is a micro-sample of that data. So for each student, next to their name or admission number, there is a "-1" indicating that they took a certain course. And in addition, the next column USUALLY contains notes from that department about that student.Looking at the 1st row of the data above, we have a student who took Math & Spanish and each department added some comments about the student. For each row, I want to add a dict that summarises the data for each student. Basically a JSON summary of each departments entry. Assuming a string of the general formjson_string = {"student name": a, "data": {"notes": b, "Course name": c}}I intend my code to read my csv, form a dict for each department and APPEND it to Holder column. 
Thus for the above student(1st row), there will be 2 dicts namely{"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}}{"student name": "Peter", "data": {"notes": "Foo", "Course name": "Spanish1"}}and the final contents of Holder for row 1 will be{"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}} {"student name": "Peter", "data": {"notes": "Foo", "Course name": "Spanish1"}}when I can successfully append the data, I will probably add a comma or '|' in between the seperate dicts. The line of code that I have written is df.loc[df[innercat] == -1, 'Holder'] = str(odata)whether or not I cast the above line as str(), writing the assignment instead of the append operator appears to overwrite all the previous values and only write the last value into Holder, something like-1 Gr8 student 0 0 0 0 -1 Foo {"student name": "Peter", "data": {"notes": "Foo", "Course name": "Spanish1"}}while I want-1 Gr8 student 0 0 0 0 -1 Foo {"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}} {"student name": "Peter", "data": {"notes": "Foo", "Course name": "Spanish1"}}For anyone interested in reproducing what I have done, the main part of my code is shown belowcount = 0substrategy = 0for cat in col_array: count += 1 for innercat in cat: if "Notes" in innercat: #b = str(df[innercat]) continue substrategy += 1 c = count a = substrategy odata = {} odata['did'] = a odata['id'] = a odata['data'] = {} odata['data']['notes'] = b odata['data']['substrategy'] = a odata['data']['strategy'] = c df.loc[df[innercat] == -1, 'Holder'] += str(odata)
is that what you want?In [190]: d1 = {"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}}In [191]: d2 = {"student name": "Peter", "data": {"notes": "Foo", "Course name": "Spanish1"}}In [192]: import jsonIn [193]: json.dumps(d1)Out[193]: '{"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}}'In [194]: dfOut[194]: Investments_Cash Holder0 0 NaN1 0 NaN2 -1 NaNIn [196]: df.Holder = ''In [197]: df.ix[df.Investments_Cash == -1, 'Holder'] += json.dumps(d1)In [198]: df.ix[df.Investments_Cash == -1, 'Holder'] += ' ' + json.dumps(d2)In [199]: dfOut[199]: Investments_Cash Holder0 01 02 -1 {"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}} {"student name": "Peter", "data": {"notes": "Foo", "Course nam...NOTE: it will be really painful to work / parse your Holder column in future, because it's not standard - you won't be able to parse it back without additional preprocessing (for example splitting using complex RegEx'es, etc.)So i would strongly recommend you to convert a list of dicts to JSON - you'll be able to read it back using json.loads() method:In [201]: df.ix[df.Investments_Cash == -1, 'Holder'] = json.dumps([d1, d2])In [202]: dfOut[202]: Investments_Cash Holder0 01 02 -1 [{"student name": "Peter", "data": {"notes": "Gr8 student", "Course name": "Math1"}}, {"student name": "Peter", "data": {"notes": "Foo", "Course n...parse it back:In [204]: lst = json.loads(df.ix[2, 'Holder'])In [205]: lstOut[205]:[{'data': {'Course name': 'Math1', 'notes': 'Gr8 student'}, 'student name': 'Peter'}, {'data': {'Course name': 'Spanish1', 'notes': 'Foo'}, 'student name': 'Peter'}]In [206]: lst[0]Out[206]:{'data': {'Course name': 'Math1', 'notes': 'Gr8 student'}, 'student name': 'Peter'}In [207]: lst[1]Out[207]: {'data': {'Course name': 'Spanish1', 'notes': 'Foo'}, 'student name': 'Peter'}
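The JSON round-trip recommended above can be sketched without pandas at all; only the stdlib json module is involved (the dicts below are the answer's own examples):

```python
import json

# Store one *valid* JSON string per cell, then parse it back losslessly.
d1 = {"student name": "Peter",
      "data": {"notes": "Gr8 student", "Course name": "Math1"}}
d2 = {"student name": "Peter",
      "data": {"notes": "Foo", "Course name": "Spanish1"}}

cell = json.dumps([d1, d2])   # what you would put in the Holder column
parsed = json.loads(cell)     # no regex splitting needed to read it back
```

Appending raw str(dict) strings instead would leave you with Python-repr text that json.loads cannot parse.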
writing to csv, Python, different data types I'm new to Python and would like to write data of different types to the columns of a csv file.I have two lists and one ndarray. I would like to have these as the three columns with the first row being the variable names.Is there are a way to do this in one line or does one have to first convert to arrays?len(all_docs.data)Out[34]: 19916In [35]: type(all_docs.data)Out[35]: listIn [36]: len(all_docs.target)Out[36]: 19916In [37]: type(all_docs.target)Out[37]: numpy.ndarrayIn [38]: id = range(len(all_docs.target)
You could convert it all over to a numpy array and save it with savetxt, but why not just do it directly? You can iterate through the array just like you'd iterate through a list. Just zip them together.

with open('output.csv', 'w') as outfile:
    outfile.write('Col1name, Col2name, Col3name\n')
    for a, b, c in zip(col1, col2, col3):
        outfile.write('{}, {}, {}\n'.format(a, b, c))

Or, if you'd prefer, you can use the csv module. If you have to worry about escaping ,'s, it's quite useful.

import csv

with open('output.csv', 'w') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(['Col1name', 'Col2name', 'Col3name'])
    for row in zip(col1, col2, col3):
        writer.writerow(row)
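To check the csv variant without touching the filesystem, the same loop can write into an in-memory buffer; the column data below is made up to stand in for all_docs.data, all_docs.target and the id range:

```python
import csv
import io

# hypothetical stand-ins for the three columns
col1 = ["a", "b"]
col2 = [1, 2]
col3 = [0.5, 0.25]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Col1name", "Col2name", "Col3name"])  # header row
for row in zip(col1, col2, col3):
    writer.writerow(row)  # csv.writer stringifies ints/floats for you

lines = buf.getvalue().splitlines()
print(lines)  # ['Col1name,Col2name,Col3name', 'a,1,0.5', 'b,2,0.25']
```

Swapping io.StringIO for open('output.csv', 'w', newline='') gives the on-disk version.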
Is there any way to stop Python concatenating space-delimited strings? Recently we found a couple of bugs in our code base because a developer forgot to add a comma in the middle of a list of strings and Python just concatenated the strings. Look below:

The intended list was:
["abc", "def"]

The developer wrote:
["abc" "def"]

and we got:
["abcdef"]

Now I am concerned about similar mistakes in other parts of the code. Is this functionality a core part of Python? Is it possible to disable it?
Yes, this is a core part of python: Multiple adjacent string literals (delimited by whitespace), possibly using different quoting conventions, are allowed, and their meaning is the same as their concatenation. Thus, "hello" 'world' is equivalent to "helloworld".I don't think there is a way to disable it, short of hacking Python itself.However, you could use the script below to tokenize your code and warn you when it finds multiple adjacent strings:import tokenizeimport tokenimport ioimport collectionsclass Token(collections.namedtuple('Token', 'num val start end line')): @property def name(self): return token.tok_name[self.num]def check(codestr): lastname = None for tok in tokenize.generate_tokens(io.BytesIO(codestr).readline): tok = Token(*tok) if lastname == 'STRING' and lastname == tok.name: print('ADJACENT STRINGS: {}'.format(tok.line.rstrip())) else: lastname = tok.namecodestr = ''''hello'\'world'for z in ('foo' 'bar', 'baz'): x = ["abc" "def"] y = [1, 2, 3]'''check(codestr)yieldsADJACENT STRINGS: 'hello''world'ADJACENT STRINGS: for z in ('foo' 'bar', 'baz'):ADJACENT STRINGS: x = ["abc" "def"]
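On Python 3 the same tokenize idea can be written as a small helper (function name is mine; note that tokenize.generate_tokens wants a text readline, so io.StringIO rather than io.BytesIO):

```python
import io
import token
import tokenize

def find_adjacent_strings(source):
    """Return source lines that contain two adjacent string literals."""
    hits = []
    last_type = None
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == token.STRING and last_type == token.STRING:
            hits.append(tok.line.rstrip())
        last_type = tok.type
    return hits

code = 'x = ["abc" "def"]\ny = "a" + "b"\n'
print(find_adjacent_strings(code))  # ['x = ["abc" "def"]']
```

Explicit concatenation with + (the second line) is not flagged, so the check only catches the accidental missing-comma case.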
Python file manipulation Assume I have such folders rootfolder | / \ \ 01 02 03 .... | 13_itemname.xmlSo under my rootfolder, each directory represents a month like 01 02 03 and under these directories I have items with their create hour and item name such as 16_item1.xml, 24_item1.xml etc, as you may guess there are several items and each xml created every hour.Now I want to do two things:I need to generate a list of item names for a month, ie for 01 I have item1, item2 and item3 inside.I need to filter each item, such as for item1: i want to read each from 01_item1.xml to 24_item1.xml. How can I achieve these in Python in an easy way?
Here are two methods doing what you ask (if I understood it properly). One with regex, one without. You choose which one you prefer ;)One bit which may seem like magic is the "setdefault" line. For an explanation, see the docs. I leave it as "an exercise to the reader" to understand how it works ;)from os import listdirfrom os.path import joinDATA_ROOT = "testdata"def folder_items_no_regex(month_name): # dict holding the items (assuming ordering is irrelevant) items = {} # 1. Loop through all filenames in said folder for file in listdir( join( DATA_ROOT, month_name ) ): date, name = file.split( "_", 1 ) # skip files that were not possible to split on "_" if not date or not name: continue # ignore non-.xml files if not name.endswith(".xml"): continue # cut off the ".xml" extension name = name[0:-4] # keep a list of filenames items.setdefault( name, set() ).add( file ) return itemsdef folder_items_regex(month_name): import re # The pattern: # 1. match the beginnning of line "^" # 2. capture 1 or more digits ( \d+ ) # 3. match the "_" # 4. capture any character (as few as possible ): (.*?) # 5. match ".xml" # 6. match the end of line "$" pattern = re.compile( r"^(\d+)_(.*?)\.xml$" ) # dict holding the items (assuming ordering is irrelevant) items = {} # 1. 
Loop through all filenames in said folder for file in listdir( join( DATA_ROOT, month_name ) ): match = pattern.match( file ) if not match: continue date, name = match.groups() # keep a list of filenames items.setdefault( name, set() ).add( file ) return itemsif __name__ == "__main__": from pprint import pprint data = folder_items_no_regex( "02" ) print "--- The dict ---------------" pprint( data ) print "--- The items --------------" pprint( sorted( data.keys() ) ) print "--- The files for item1 ---- " pprint( sorted( data["item1"] ) ) data = folder_items_regex( "02" ) print "--- The dict ---------------" pprint( data ) print "--- The items --------------" pprint( sorted( data.keys() ) ) print "--- The files for item1 ---- " pprint( sorted( data["item1"] ) )
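The setdefault grouping at the heart of both methods can be exercised without a real directory tree; a small sketch with made-up filenames:

```python
# Group "<hour>_<item>.xml" filenames by item name, as folder_items_no_regex does.
files = ["01_item1.xml", "16_item1.xml", "02_item2.xml", "notes.txt", "readme"]

items = {}
for name in files:
    if "_" not in name or not name.endswith(".xml"):
        continue                      # skip non-matching files
    hour, item = name.split("_", 1)
    item = item[:-4]                  # cut off ".xml"
    # setdefault returns the existing set for `item`, or installs a new one
    items.setdefault(item, set()).add(name)

print(sorted(items))  # ['item1', 'item2']
```

sorted(items["item1"]) then gives the hourly files for one item, which answers part 2 of the question.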
Find nearest neighbors I have a large dataframe of the form: user_id time_interval A B C D E F G H ... Z0 12166 2.0 3.0 1.0 1.0 1.0 3.0 1.0 1.0 1.0 ... 0.01 12167 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 ... 0.02 12168 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 ... 0.03 12169 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 ... 0.04 12170 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 ... 0.0... ... ... ... ... ... ... ... ... ... ... ... ...I would like to find, for each user_id, based on the columns A-Z as coordinates,the closest neighbors within a 'radius' distance r. The output should look like, for example, for r=0.1:user_id neighbors12166 [12251,12345, ...]12167 [12168, 12169,12170, ...]... ...I tried for-looping throughout the user_id list but it takes ages.I did something like this:import scipyneighbors = []for i in range(len(dataframe)): user_neighbors = [dataframe["user_id"][j] for j in range(i+1,len(dataframe)) if scipy.spatial.distance.euclidean(dataframe.values[i][2:],dataframe.values[j][2:])<0.1] neighbors.append([dataframe["user_id"][i],user_neighbors])and I have been waiting for hours.Is there a pythonic way to improve this?
Here's how I've done it using apply method.The dummy data consisting of columns A-D with an added column for neighbors:print(df)user_id time_interval A B C D neighbors0 12166 2 3 2 2 3 NaN1 12167 0 1 4 3 3 NaN2 12168 0 4 3 3 1 NaN3 12169 0 2 2 3 2 NaN4 12170 0 3 3 1 1 NaNthe custom function:def func(row): r = 2.5 # the threshold out = df[(((df.iloc[:, 2:-1] - row[2:-1])**2).sum(axis=1)**0.5).le(r)]['user_id'].to_list() out.remove(row['user_id']) df.loc[row.name, ['neighbors']] = str(out)df.apply(func, axis=1)the output: print(df): user_id time_interval A B C D neighbors 0 12166 2 3 2 2 3 [12169, 12170] 1 12167 0 1 4 3 3 [12169] 2 12168 0 4 3 3 1 [12169, 12170] 3 12169 0 2 2 3 2 [12166, 12167, 12168] 4 12170 0 3 3 1 1 [12166, 12168]Let me know if it outperforms the for-loop approach.
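The row-by-row apply can also be replaced with one broadcasted distance computation; a NumPy-only sketch using the same five dummy points and radius as above (row indices stand in for user_ids 12166 + i):

```python
import numpy as np

# same A-D coordinates as the dummy data above; row i is user 12166 + i
coords = np.array([
    [3., 2., 2., 3.],
    [1., 4., 3., 3.],
    [4., 3., 3., 1.],
    [2., 2., 3., 2.],
    [3., 3., 1., 1.],
])
r = 2.5

# all pairwise Euclidean distances at once: (n,1,d) - (1,n,d) -> (n,n,d)
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

neighbors = [
    [j for j in range(len(coords)) if j != i and dist[i, j] <= r]
    for i in range(len(coords))
]
print(neighbors)  # [[3, 4], [3], [3, 4], [0, 1, 2], [0, 2]]
```

This matches the apply-based output above, but does all pairs in one vectorized pass; for very large frames the (n, n, d) intermediate gets big, at which point a spatial index such as scipy's cKDTree is the usual next step.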
Create a list copy having distanced duplicate elements I have a list containing integers, I would like to create a copy of it such that duplicate elements are at least some distance apart. I am aware that it is necessary to have "enough" different elements and a sufficiently "long" starting list but I would like to create that copy or return a message that it is not possible (for that distance).Here is a python "possible" implementation but sometimes this program creates an infinite loop.import randomout = []pbs = [1, 2, 3, 1, 2, 3, 5, 8]l = len(pbs)step = 3while l > 0: pb = random.choice(pbs) if pb in out: lastindex = out[::-1].index(pb) if (len(out) - lastindex) < step: continue pbs.remove(pb) out.append(pb) l += -1print(out)Thank you for your help.
Based on my current understanding of your problem:(Not sure how to name the operation yet;using 'takeEvery' as a placeholder.Probably there is an algorithm for that somewhere.)def takeEvery(list, step): out = [] for pb in pbs: if pb in out: lastindex = out.index(pb) # keeping 'out' in reverse if lastindex < step: continue out.insert(0, pb) # insert in reverse (easier search) out.reverse() return outpbs = [1, 2, 3, 1, 2, 3, 5, 8]step = 3out = takeEvery(pbs, step)print('Step:', step)print('Original:', pbs)print('Result: ', out)
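One way to report impossibility instead of looping forever is a greedy pass that always places the most frequent value not used in the last few slots; a sketch (greedy, so it may miss valid arrangements for some tricky inputs, but it always terminates):

```python
from collections import Counter

def spaced_ok(seq, step):
    """True if equal elements in seq are at least `step` positions apart."""
    last_seen = {}
    for idx, val in enumerate(seq):
        if val in last_seen and idx - last_seen[val] < step:
            return False
        last_seen[val] = idx
    return True

def rearrange(items, step):
    """Greedy arrangement; returns None instead of looping forever."""
    counts = Counter(items)
    out = []
    while len(out) < len(items):
        recent = set(out[-(step - 1):]) if step > 1 else set()
        candidates = [v for v in counts if counts[v] > 0 and v not in recent]
        if not candidates:
            return None           # no valid arrangement found by this greedy order
        pick = max(candidates, key=lambda v: counts[v])
        out.append(pick)
        counts[pick] -= 1
    return out

print(rearrange([1, 2, 3, 1, 2, 3, 5, 8], 3))
print(rearrange([1, 1, 1], 2))  # None: three equal values cannot be spaced out
```

Picking the most frequent eligible value first keeps the scarce values for the slots where nothing else fits, which is what makes the greedy pass usually succeed when a solution exists.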
swig python interfacing to function using void ** BACKGROUND. I have an API (third party provided) consisting of C header files and a shared library. I have managed to create a shell script for the build environment, along with a simple interface file for swig. I am trying to make this API accessible to an IPython environment such that I don't have to compile C code all the time to communicate with the associated hardware that leverages this API for I/O. PROBLEM. The first function call I need to do creates a board handle (some arbitrary "object" that is used for all other function calls in the C-side. The function accepts a void **, assuming the underlying function is probably malloc-ing memory, has some sort of internal structure, and allows accessing this memory by some of the other functions. Anyhow, I can't seem to properly interface to this from Python due to the lack of support for void * and receive a typeError. The offending C code snippet, with typedef's/defines extracted from the underlying header files is: #define WD_PVOID void* typedef WD_PVOID WD_BOARD; typedef WD_UINT32 WD_RetCode; #define WD_EXPORT extern #define WD_CHAR8 char #define WD_UINT32 unsigned int #--------------------------------------- //prototype WD_EXPORT WD_RetCode wd_CreateBoardHandle( WD_BOARD *pBoardHandle, const WD_CHAR8 *pUrl ); //interpreted prototype //extern unsigned int wd_CreateBoardHandle( void* *pBoardHandle, const char *pUrl );A third party provided provided example (written in C) uses the function as so (I removed superfluous stuff) : int main(int argc, char *argv []) { WD_RetCode rc; Vhdl_Example_Opts cmdOpts = VHDL_EXAMPLE_DEFAULTS; char urlBoard[VHDL_SHORT_STRING_LENGTH]; WD_BOARD BoardHandle; sprintf(urlBoard, "/%s/%s/wildstar7/board%d", cmdOpts.hostVal, cmdOpts.boardDomain, cmdOpts.boardSlot); rc = wd_CreateBoardHandle(&BoardHandle,urlBoard); }and lastly, my watered down swig interface file (I have been trying swig typedef's and *OUTPUT with no success): %module 
wdapi %{ #include "wd_linux_pci.h" #include "wd_types.h" #include "wd_errors.h" %} %import "wd_linux_pci.h" %import "wd_types.h" %import "wd_errors.h" %include <typemaps.i> WD_EXPORT WD_RetCode wd_CreateBoardHandle( WD_BOARD *pBoardHandle, const WD_CHAR8 *pUrl ); WD_EXPORT WD_RetCode wd_OpenBoard( WD_BOARD BoardHandle );What I would like to be able to do is to call that function in python as so: rslt,boardHandle = wdapi.wd_CreateBoardHandle("/foo/bar/etc")Please let me know if I can provide any other information and I greatly appreciate your help/guidance towards a solution! I have spent days trying to review other similar issues posted.EDIT. I manipulated some typedefs from other posts with similar issues. I am able to now call the functions and receive both a value in rslt and boardHandle as an object; however, it appears the rslt value is gibberish. Here is the new swig interface file (any thoughts as to the problem?): %module wdapi %{ #include "wd_linux_pci.h" #include "wd_types.h" #include "wd_errors.h" %} %import "wd_linux_pci.h" %import "wd_types.h" %import "wd_errors.h" %include <python/typemaps.i> %typemap(argout) WD_BOARD *pBoardHandle { PyObject *obj = PyCObject_FromVoidPtr( *$1, NULL ); $result = PyTuple_Pack(2, $result, obj); } %typemap(in,numinputs=0) WD_BOARD *pBoardHandle (WD_BOARD temp) { $1 = &temp; } %typemap(in) WD_BOARD { $1 = PyCObject_AsVoidPtr($input); } WD_EXPORT WD_RetCode wd_CreateBoardHandle( WD_BOARD *pBoardHandle, const WD_CHAR8 *pUrl ); WD_EXPORT WD_RetCode wd_OpenBoard( WD_BOARD BoardHandle ); WD_EXPORT WD_RetCode wd_DeleteBoardHandle( WD_BOARD BoardHandle ); WD_EXPORT WD_RetCode wd_IsBoardPresent( const WD_CHAR8 *pUrl, WD_BOOL *OUTPUT );
I resolved my own question. The edited swig interface file, listed above in my original post, turned out to correct my issue. Turns out that somewhere along the way, I mangled the input to my function call in python and the error code returned was "undefined" from the API. On another note, while investigating other options, I also found "ctypes" which brought me to a solution first. Rather than dealing with wrapper code and building a 2nd shared library (that calls another), ctypes allowed me to access it directly and was much easier. I will still evaluate which I will move forward with. ctypes python code is listed below for comparison (look at the c-code example I listed in the original post) : from ctypes import cdll from ctypes import CDLL from ctypes import c_void_p from ctypes import addressof from ctypes import byref import sys #Update Library Path for shared API library sys.path.append('/usr/local/lib'); #Load the API and make accessible to Python cdll.LoadLibrary("libwdapi.so") wdapi = CDLL("libwdapi.so") #Create the url for the board urlBoard='/<server>/<boardType>/<FGPAType>/<processingElement>' #Lets create a void pointer for boardHandle object pBoardHandle=c_void_p() #now create & open the board rtn = wdapi.wd_CreateBoardHandle(byref(pBoardHandle),urlBoard) if (rtn) : print "Error" else : print "Success"
Monitoring file updates using C++ In C++, using the Windows API, how do I monitor file change events? For example, "this_program.py" is updating a text file:

outfile.open("some_file_1.txt", ios::out);

It then edits "some_file_1.txt", and "some_file_1.txt" triggers some window event. I want to monitor and log who is updating "some_file_1.txt" (e.g. that it is being updated from "this_program.py", etc.), so I can monitor file changes on the filesystem using the Win32 API or MFC in C++, or Python. Which event do I monitor, and how do I set the hook to find the source?

outfile.close();
There is no specific MFC option for this (as far as I know). You can use FindFirstChangeNotification to monitor the entire folder for changes. If change is detected then your file is possibly changed (or maybe it was another file that was changed). Read the date/time stamp on your file to see if change occured. Another function is ReadDirectoryChanges which has more options. It doesn't tell you who changed the file.HWND hMainWnd;FILETIME SaveFileTime;DWORD WINAPI checkfolder(void* arg){ wchar_t folder[MAX_PATH]; lstrcpy(folder, (const wchar_t*)arg); for (;;) { HANDLE hfolder = FindFirstChangeNotification(folder, FALSE, FILE_NOTIFY_CHANGE_LAST_WRITE | FILE_NOTIFY_CHANGE_FILE_NAME); WaitForSingleObject(hfolder, INFINITE); if (!::IsWindow(hMainWnd)) break; PostMessage(hMainWnd, WM_COMMAND, ID_MY_MESSAGE, 0); FindCloseChangeNotification(hfolder); } return 0;}int main(...){ //save last write time WIN32_FIND_DATA data; HANDLE h = FindFirstFile(L"c:\\test\\file.txt", &data); if (h != INVALID_HANDLE_VALUE) SaveFileTime = data.ftLastWriteTime; FindClose(h); //watch for changes CreateThread(NULL, 0, checkfolder, L"c:\\test", 0, NULL);}void OnMyMessage(){ WIN32_FIND_DATA data; HANDLE handle = FindFirstFile(L"c:\\test\\file.txt", &data); if (handle != INVALID_HANDLE_VALUE) { FindClose(handle); if (CompareFileTime(&data.ftLastWriteTime, &SaveFileTime) != 0) OutputDebugStringA("file.txt was modified\n"); else OutputDebugStringA("Another file in the same directory was modified\n"); } else { OutputDebugStringA("file.txt was deleted, or directory was removed/renamed\n"); }}
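The date-stamp comparison the answer does with ftLastWriteTime can be mirrored portably from Python by polling os.stat; like the Win32 approach, it still cannot tell you who changed the file (function names are mine):

```python
import os

def snapshot(path):
    """Record the file's current last-write time, like saving ftLastWriteTime."""
    return os.stat(path).st_mtime_ns

def changed_since(path, stamp):
    """True if the file's last-write time differs from the saved stamp."""
    return os.stat(path).st_mtime_ns != stamp
```

In a loop you would sleep between checks; a real deployment would rather use ReadDirectoryChangesW (or a wrapper library around it) than polling, for the same reasons given above.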
Calculating Manhattan distance in Python without result I have these two data frames in python and I'm trying to calculate the Manhattan distance and later on the Euclidean distance, but I'm stuck in this Manhattan distance and can't figure it out what is going wrong. Here is what I have tried so far:ratings = pd.read_csv("toy_ratings.csv", ",")person1 = ratings[ratings['Person'] == 1]['Rating']person2 = ratings[ratings['Person'] == 2]['Rating']ratings.head() Person Movie Rating0 1 11 2.51 1 12 3.52 1 15 2.53 3 14 3.54 2 12 3.5Here is data inside the person1 and person2print("*****person1*****")print(person1)*****person1*****0 2.51 3.52 2.55 3.022 3.523 3.036 5.0print("*****person2*****")print(person2)*****person2*****4 3.56 3.08 1.59 5.011 3.024 3.5This was the function that I have tried to build without any luck:def ManhattanDist(person1, person2): distance = 0 for rating in person1: if rating in person2: distance += abs(person1[rating] - person2[rating]) return distanceThe thing is that the function gives 0 back and this is not correct, when I debug I can see that it never enters the second loop. How can I perform a check to see the both rows has a value and loop?
I think the function should give back (= return) the distance in any case: either the distance is zero as initialized, or it is something else. So the return statement must sit at function level, outside the loop:

def ManhattanDist(person1, person2):
    distance = 0
    for rating in person1:
        if rating in person2:
            distance += abs(person1[rating] - person2[rating])
    return distance

I think the distance should be built from two vectors of the same length (at least I cannot imagine anything else). If this is the case you can do (without your function):

import numpy as np

p1 = np.array(person1)
p2 = np.array(person2)

# --- scalar product as similarity indicator
dist1 = np.dot(p1, p2)

# --- Euclidean distance
dist2 = np.linalg.norm(p1 - p2)

# --- Manhattan distance
dist3 = np.sum(np.abs(p1 - p2))
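Note also that person1 and person2 in the question have different lengths (7 vs. 6 ratings), so an elementwise distance is only defined after aligning them on the movies both persons rated. With two equal-length vectors the measures work out like this (the numbers are illustrative, not the question's raw series):

```python
import numpy as np

# illustrative, equal-length rating vectors; the question's raw series
# would first need aligning on their common index
p1 = np.array([2.5, 3.5, 2.5, 3.0, 3.5, 3.0])
p2 = np.array([3.5, 3.0, 1.5, 5.0, 3.0, 3.5])

manhattan = np.sum(np.abs(p1 - p2))   # sum of absolute differences
euclidean = np.linalg.norm(p1 - p2)   # square root of summed squares

print(manhattan)  # 5.5
print(euclidean)
```

With pandas series, person1.align(person2, join='inner') is one way to get two vectors of matching length before computing either distance.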
Flask add DB entry using a form I'm a beginner learning FLASK. I'm making an app and for it I've created a DB model User, and an HTML/ JS form that takes input. What I want is to use the form information to create a new entry in the database but I am unsure on how to do it. I tried to do this @app.route('/add_to_db')def add_to_db(): email = request.form['email'] activated = 0; user = models.User(email= email, activated = 0) db.session.add(user) db.session.commit()HTML Code:<form onsubmit="return validateEmail(document.getElementById('email').value)" action="{{ url_for("add_to_db") }}" method="post"> Please input your email adress: <input id="email"> <input type="submit"></form><script> function validateEmail(email) { var re = /^([\w-]+(?:\.[\w-]+)*)@((?:[\w-]+\.)*\w[\w-]{0,66})\.([a-z]{2,6}(?:\.[a-z]{2})?)$/i; return(re.test(email)); }</script>But this gave me a 405 Method not allowed error.
A 405 error means "method not allowed". As you are sending form data, you are using a POST request and need to allow POST requests. By default only GET requests are allowed. Change the line @app.route('/add_to_db') to @app.route('/add_to_db', methods=['POST']).
Python smtplib sometimes fails sending I wrote a simple "POP3S to Secure SMTP over TLS" MRA script in Python (see below).It works fine, but sometimes it returns "Connection unexpectedly closed" while trying to send via SMTP. Running the script again will deliver that message successfully.Please give me some suggestions why it would fail to deliver a message sometimes but at the next run it delivers exactly this message successfully!?#! /usr/bin/env pythonimport poplibimport emaildef forward_pop3_smtp( smtp_credentials, pop3_credentials, forward_address): pop3_server = pop3_credentials[0] pop3_port = pop3_credentials[1] pop3_user = pop3_credentials[2] pop3_password = pop3_credentials[3] message_recipient = forward_address server = poplib.POP3_SSL( pop3_server, pop3_port) server.user( pop3_user) server.pass_( pop3_password) for messages_iterator in range( len( server.list()[1])): message_list = server.retr( messages_iterator + 1)[1] message_string = '' for message_line in message_list: message_string += message_line + '\n' message_message = email.message_from_string( message_string) message_message_as_string = message_message.as_string() message_sender = message_message[ 'From'] print( 'message_sender = ' + message_sender) smtp_return = send_smtp( smtp_credentials, message_sender, message_recipient, message_message_as_string) print( 'smtp_return = ' + str(smtp_return)) if smtp_return == 0: print( 'Deleting message ' + message_message[ 'Subject'] + ':\n') return_delete = server.dele( messages_iterator + 1) print( 'return_delete = \n' + str(return_delete)) print( '\n') server.quit()def send_smtp( smtp_credentials, message_sender, message_recipient, message_message_as_string): smtp_server = smtp_credentials[0] smtp_port = smtp_credentials[1] smtp_user = smtp_credentials[2] smtp_password = smtp_credentials[3] import smtplib exception = 0 try: server = smtplib.SMTP( smtp_server) server.starttls() server.login( smtp_user, smtp_password) smtp_sendmail_return = 
server.sendmail( message_sender, message_recipient, message_message_as_string) server.quit() except Exception, e: exception = 'SMTP Exception:\n' + str( e) + '\n' + str( smtp_sendmail_return) return exceptionif __name__ == '__main_': print( 'This module needs to be imported!\n') quit()
Use port 587 for TLS. I don't see the script use smtp_port. Use it like:

server = smtplib.SMTP(smtp_server, int(smtp_port))

For secure SMTP (SMTP + SSL), use smtplib.SMTP_SSL.
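Wired into the script, the fix is just to pass the port through; a sketch (function name is mine, and server/port values are whatever your provider specifies, typically 587 for STARTTLS and 465 for implicit SSL):

```python
import smtplib

def open_smtp(smtp_server, smtp_port, use_ssl=False):
    """Sketch: return a started (but not yet logged-in) SMTP connection."""
    if use_ssl:
        # implicit SSL from the first byte (usually port 465)
        return smtplib.SMTP_SSL(smtp_server, int(smtp_port))
    # plain connection upgraded via STARTTLS (usually port 587)
    conn = smtplib.SMTP(smtp_server, int(smtp_port))
    conn.starttls()
    return conn
```

Omitting the port makes smtplib default to 25, where some providers intermittently drop connections, which would match the "sometimes fails, works on retry" symptom.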
python list to dataframe object When I try to convert the list to pandas dataframe, I get the entire line as a single cell.pdlist=['From: 2012-11-07 19:16:07, To: 2012-11-07 19:21:07, Downtime: 0h 05m 00s', 'From: 2012-11-13 06:16:07, To: 2012-11-13 06:21:07, Downtime: 0h 05m 00s', 'From: 201=4-10-19 18:10:57, To: 2014-10-19 18:25:57, Downtime: 0h 15m ']import pandas as pdpd.DataFrame(pdlist)Expected output would be 3 columns with the first 2 being date-time.
You need to split the items on the basis of commas. Here's a method:

pdlist2 = []
for item in pdlist:
    pdlist2.append(item.split(','))
pd.DataFrame(pdlist2)

Using a list comprehension:

pdlist2 = [item.split(',') for item in pdlist]
my_dataframe = pd.DataFrame(pdlist2)

Update: Since you need 3 different columns without "From:", "To:" and "Downtime:", this should work. It isn't the best method but does the job.

import re
import pandas as pd

dict2 = {'From': [], 'To': [], 'Downtime': []}  # initialize dictionary with keys and empty values
for item in pdlist2:
    a = re.sub('From: ', '', item[0])  # remove From:
    dict2['From'].append(a)
    b = re.sub('To: ', '', item[1])  # remove To:
    dict2['To'].append(b)
    c = re.sub('Downtime: ', '', item[2])  # remove Downtime:
    dict2['Downtime'].append(c)

my_dataframe = pd.DataFrame(dict2)  # convert dict to dataframe with dict keys as column names

Note: the re.sub expressions will only work if all observations start the same way.

In case you want the columns in the order "From", "To", "Downtime", you can do:

my_dataframe_new = my_dataframe[['From', 'To', 'Downtime']]
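A pandas-side sketch of the same idea that also parses the first two columns as datetimes, as the question asked (assuming pandas is available; the malformed third row from the question is left out):

```python
import pandas as pd

pdlist = [
    'From: 2012-11-07 19:16:07, To: 2012-11-07 19:21:07, Downtime: 0h 05m 00s',
    'From: 2012-11-13 06:16:07, To: 2012-11-13 06:21:07, Downtime: 0h 05m 00s',
]

# split once, then strip the "Label: " prefix column by column
df = pd.DataFrame([item.split(', ') for item in pdlist],
                  columns=['From', 'To', 'Downtime'])
for col in df.columns:
    df[col] = df[col].str.replace(col + ': ', '', regex=False)

# first two columns become proper datetimes
df['From'] = pd.to_datetime(df['From'])
df['To'] = pd.to_datetime(df['To'])
print(df.dtypes)
```

Splitting on ', ' (comma plus space) also avoids the stray leading spaces that plain split(',') leaves in the second and third columns.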
How do I fix this syntax error? I am confused Here is all my coding that I did but I keep getting this syntax error. It will be explained more at the bottom.def main(): ActualValue() AssessedValue() printResult()def ActualValue() global actual_value actual_value = float(input("Enter actual value:\t"))def AssessedValue() global assessed_value global property_tax assessed_value = 0.6 * actual_value property_tax = assessed_value / 100 * 0.64def printResult(): print "n\For a property valued at $", actual_value print "The assessed value is $", assessed_value print "The property tax is $", property_taxactual_value = Noneassessed_value = Noneproperty_tax = Nonemain()That is my code:It keeps saying that I have a syntax error:def printResult(): print "n\For a property valued at $", actual_value print "The assessed value is $", assessed_value print "The property tax is $", property_tax
You have the \n escape sequence backwards. Also, you need to make sure all your function definitions have a colon on the end of the line. Also, print is a function in Python 3.
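Putting the three fixes together (a colon after every def, "\n" instead of "n\", and print() as a function), a corrected sketch of the computation:

```python
def report(actual_value):          # note the colon on every def line
    assessed_value = 0.6 * actual_value
    property_tax = assessed_value / 100 * 0.64
    return ("\nFor a property valued at $%.2f"      # "\n", not "n\"
            "\nThe assessed value is $%.2f"
            "\nThe property tax is $%.2f"
            % (actual_value, assessed_value, property_tax))

print(report(100000))  # Python 3: print is a function, so use parentheses
```

Passing the values in and returning a string also removes the need for the global variables in the original code.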
nginx+uwsgi+django, there seems to be some strange cache in uwsgi, help me This is uwsgi config:[uwsgi] uid = 500listen=200master = true profiler = true processes = 8 logdate = true socket = 127.0.0.1:8000 module = www.wsgi pythonpath = /root/www/pythonpath = /root/www/www pidfile = /root/www/www.pid daemonize = /root/www/www.log enable-threads = truememory-report = truelimit-as = 6048This is Nginx config:server{ listen 80; server_name 119.254.35.221; location / { uwsgi_pass 127.0.0.1:8000; include uwsgi_params; } }The django works ok, but modifed pages can't be seen unless i restart uwsgi.(what's more, as i config 8 worker process, i can see the modified page when i press on ctrl+f5 for a while, seems that only certain worker can read and response the modified page, but others just shows the old one, who caches the old page? i didn't config anything about cache)I didn't config the django, and it works well with "python manager runserver ...", but havfe this problem when working with nginx+uwsgi. (the nginx and uwsgi are both new installation, i'm sure nothing else is configed here..)
uwsgi does not reload your code automatically; only the development server does. runserver is for debug purposes; uwsgi and nginx are for production. In production you can restart uwsgi with service uwsgi restart or via an init.d script. There is an even better way to reload uwsgi, using touch-reload. Usually there is no need to clean up .pyc files; that only happens when timestamps on files are wrong (I've seen it only a couple of times in my entire career).
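The touch-reload mechanism mentioned above is a single line in the ini file (the trigger path below is illustrative):

```ini
[uwsgi]
; ... existing options from the config above ...
; restart all workers whenever this file's timestamp changes:
touch-reload = /root/www/reload.trigger
```

After each deploy, running touch /root/www/reload.trigger makes every worker pick up the new code, so no worker keeps serving the old pages.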
Moving a Python environment over to a new OS install I have reinstalled my operating system (moved from windows XP to Windows 7).I have reinstalled Python 2.7.But i had a lot of packages installed in my old environment.(Django, sciPy, jinja2, matplotlib, numpy, networkx, to name just a view)I still have my old Python installation lying around on a data partition, so i wondered if i can just copy-paste the old Python library folders onto the new installation?Or do i need to reinstall every package?Do the packages keep any information in registry, system variables or similar?Does it depend on the package?
This is the point where you must be able to lay out your project, which calls for special tools.

Normally, Python packages do not do such weird things as dealing with the registry (unless they are packaged via an MSI installer). The problems may start with packages that contain C extensions, since moving to another OS version or from a 32-bit to a 64-bit architecture will require recompiling/rebuilding those. So it would be much better to reinstall all packages on the new system, as described below.

Your demands may vary, but you definitely must choose a way of building your environment. If you don't have, and don't plan to have, a large variety of projects, you may consider the first approach below; the second approach is more suitable for setting up development environments for different projects or for different versions of the same project.

Global environment (your system's Python installation along with installed packages). Here you can consider using pip. In this case your project can have a requirements file containing all the packages your project needs. Basically, a requirements file is a text file containing package names (on PyPI) and their versions.

Isolated environment. This can be achieved using special tools or a specially organized path. Here pip can be gracefully combined with virtualenv. This way is highly recommended by a lot of developers (I must note that Python 3.3, which will soon be released, contains virtualenv as part of the standard library). This approach assumes creating a virtual shell with its own instance of the Python interpreter and installed packages.

Another popular tool for achieving an isolated environment is called buildout. It lays out your project source and dependencies in one path, so you achieve the same effect as virtualenv creates.
The great advantage of buildout is that it's built upon the idea of pluggable recipes (pieces of code implementing different common project deployment tasks), and there are hundreds of stable and reliable recipes on the Internet. Both virtualenv and buildout help you to remove the headache of installing dependencies and solve the problem of different versions of the same package kept on a single machine. Choose your destiny...
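As a concrete companion to the requirements-file suggestion, here is a small sketch of my own (it uses the modern importlib.metadata API, which did not exist in the Python 2.7 era the question describes) that dumps the packages visible to the current interpreter in pip's requirements.txt format:

```python
# Sketch: list the distributions installed in the current environment
# as pip-style "name==version" lines, so the same set can be
# reinstalled on the new OS with `pip install -r requirements.txt`.
from importlib.metadata import distributions

def freeze_requirements():
    """Return sorted 'name==version' lines for installed distributions."""
    lines = set()
    for dist in distributions():
        name = dist.metadata["Name"]
        if name:  # skip entries with broken metadata
            lines.add("%s==%s" % (name, dist.version))
    return sorted(lines)

if __name__ == "__main__":
    for line in freeze_requirements():
        print(line)
```

Redirect the output to a file and feed it to pip inside a fresh virtualenv on the new machine.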
Access the first column in each row in a list Python I have the following code, in which the comments explain the output and the desired output. I don't seem to be able to access (or understand the logic behind how to access) the different fields in the list.

def viewrecs(username):
    username = (username + ".txt")
    with open(username, "r") as f:
        fReader = csv.reader(f)
        for row in fReader:
            for field in row:
                #print(field)    # this prints everything in the file
                #print(row)      # this also prints everything in the file
                #print(row[0])   # this also prints everything!!!!!!
                #print(field[0]) # this prints the first ELEMENT in each row, namely the "["
                # How do I access the first field of each column (the genre, like Sci-Fi or Romance)?

The file in question is a text file based on the user's username. The contents are:

"['Sci-Fi', 'Blade Runner']"
"['Sci-Fi', 'Star Trek']"
"['Sci-Fi', 'Solaris']"
"['Sci-Fi', 'Cosmos']"
"['Drama', ' The English Patient']"
"['Sci-Fi', 'Out of the Silent Planet']"
"['Drama', ' The Pursuit of Happiness']"
"['Drama', ' The English Patient']"
"['Drama', ' Benhur']"
"['Drama', ' Benhur']"

An answer as to how to access the column field (e.g.
Sci-Fi, Drama etc) AND an explanation would be much appreciated. I want to print the entire first column, i.e. Sci-Fi, Sci-Fi, Sci-Fi, Drama, etc. Ideally these should be read into a list so they can be manipulated afterwards.

Based on one of the answers below, in case the problem lies in the way the data has been written to the file in the first place, the function that writes this data to the file is below:

def viewfilmfunction(x, username):
    # open the film list and ask the user to confirm the viewing
    print(username, ":You are about to view Film:", x,
          "Enter the selection ID number of the film again to confirm viewing")
    with open("films.txt", "r") as filmsfile:
        # prompt the user to enter the ID number they require
        idnumber = input("Enter the ID number you require:")
        # call upon our reader (this allows us to work with our file)
        filmsfileReader = csv.reader(filmsfile)
        for row in filmsfileReader:
            for field in row:
                # if the field is equal to the ID number being searched for
                if field == idnumber:
                    # create a list containing the relevant fields (genre and title)
                    viewedlist = [row[1], row[2]]
                    print("You have viewed:", viewedlist)
                    with open("fakeflixfile.txt", "r") as membersfile:
                        membersfileReader = csv.reader(membersfile)
                        for row in membersfileReader:
                            for field in row:
                                if field == username:
                                    # append to the file stored under that username
                                    with open("%s.txt" % username, "a") as memberfile:
                                        membersfileWriter = csv.writer(memberfile)
                                        # append the viewedlist to the member's user file
                                        membersfileWriter.writerow([viewedlist])

FAKEFLIXFILE.txt contents:

username13022,Username@123,user,name1,2/02/3022,user Road,US455P,Mle,Nothing,username1@hotmail.com
AugustineSalins1900.txt,Augstine@123,Augustine,Salins,5/02/1900,Hutchins Road,CRBBAAA,Male,Theology,augustine@hotmail.com
JosieBletchway3333,Bb@123,Josie,Bletchway,29/02/3333,Bletch Park,BB44AA,Female,Encryption,blecth@hotmail.com
JoeBloggs.0.0,Joe@Bloggs12,Joe,Bloggs,0.0.0.0,0 Road,00000,Male,Joe Blogs,joe@hotmail.com

UPDATE: I have now changed the txt file (based on username) so that the output generated doesn't produce a list:

Drama, The English Patient
Drama, Benhur
Sci-Fi,Cosmos

However, on running:

def viewrecs(username):
    username = (username + ".txt")
    with open(username, "r") as f:
        fReader = csv.reader(f)
        for row in fReader:
            print(eval(row)[0])

the error persists:

    print(eval(row)[0])
TypeError: eval() arg 1 must be a string, bytes or code object
Because your file is not a valid CSV file - it looks more like a series of JSON objects. Each line in your file is enclosed in double quotes, so the CSV reader treats it as a single column; that's why you get the whole line in row[0].

This is caused by the way you are writing your file. The line below tells the CSV writer to write one object that is a list containing two values:

membersfileWriter.writerow([viewedlist])

What you really want is to tell the CSV writer to write multiple objects. Change the line to this - there is no need to enclose it in square brackets:

membersfileWriter.writerow(viewedlist)

Then you can just use this to read your file:

with open(username, "r") as f:
    fReader = csv.reader(f)
    for row in fReader:
        print(row)     # print entire row
        print(row[0])  # print just the first field
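A self-contained demonstration of the difference (using io.StringIO in place of real files, so nothing is written to disk):

```python
# writerow([viewedlist]) stores the whole list as ONE quoted field,
# while writerow(viewedlist) stores each element as its own field.
import csv
import io

viewedlist = ["Sci-Fi", "Blade Runner"]

wrong = io.StringIO()
csv.writer(wrong).writerow([viewedlist])   # one field: the list's repr

right = io.StringIO()
csv.writer(right).writerow(viewedlist)     # two fields: genre, title

wrong_row = next(csv.reader(io.StringIO(wrong.getvalue())))
right_row = next(csv.reader(io.StringIO(right.getvalue())))

print(wrong_row)  # ["['Sci-Fi', 'Blade Runner']"]  -- a single field
print(right_row)  # ['Sci-Fi', 'Blade Runner']      -- genre is row[0]
```

With the second form, `row[0]` is the genre, exactly what the question asks for.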
Create and call Variables In a for loop I am trying to make it so that a user inputs a number, and that number of buttons is created using Tkinter. I have tried the following, where the buttons are successfully created; however I am struggling with calling them in order to place/display them on the grid. (randint is added to simulate user input; the user input is not limited to 9 and may be as high as 40.)

from tkinter import *
from random import randint

inputValue = randint(3, 9)
print(inputValue)
root = Tk()

while inputValue > 0:  # for every number in inputted value
    inputValue = int(inputValue) - 1  # take one
    # Create the button in the format 'Sailor{Inputnumber}'
    globals()['Sailor%s' % inputValue] = Button(root, text="Lap :" + str(inputValue),
                                                command=lambda: retrieve_input())
    # Place the button (doesn't work)
    ('Sailor%s' % inputValue).grid(row=inputValue, column=1, columnspan=2)

root.mainloop()  # Does work (required)

However the following does not work (it is meant to place the button):

('Sailors%s' % inputValue).grid(row=inputValue, column=1, columnspan=2)  # Place the button (Doesn't work)

Can you think of a method I can use in order to create and place that amount of buttons? Thanks in advance
You should never create dynamic variable names like you are attempting to do. It adds a lot of complexity, reduces clarity, and provides no real benefit. Instead, use a dictionary or list to keep track of the buttons. In your case, however, since you're never using the buttons anywhere but in the loop, you can just use a local variable.

Example using a local variable, in case you never need to access the button in code after you create it:

for count in range(inputValue):
    button = Button(...)
    button.grid(...)

Here's how you do it if you need to access the buttons later in your code:

buttons = []
for count in range(inputValue):
    button = Button(...)
    button.grid(...)
    buttons.append(button)

With the above you can iterate over all of the buttons in buttons:

for button in buttons:
    button.configure(state='disabled')

If you need to configure a single button, use its index:

buttons[0].configure(...)
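A runnable, display-free sketch of the same pattern - FakeButton is a hypothetical stand-in for tkinter.Button so the example works without a Tk display:

```python
# Sketch of the list-of-widgets pattern above, using a stand-in class
# (FakeButton is illustrative, not a tkinter API) so it runs headless.
class FakeButton:
    def __init__(self, text):
        self.text = text
        self.row = None

    def grid(self, row):
        self.row = row  # record where the widget was placed

buttons = []
for count in range(5):                        # e.g. the user asked for 5 buttons
    button = FakeButton(text="Lap :%d" % count)
    button.grid(row=count)                    # place it immediately
    buttons.append(button)                    # keep a handle for later use

print([b.text for b in buttons])  # ['Lap :0', 'Lap :1', 'Lap :2', 'Lap :3', 'Lap :4']
print(buttons[2].row)             # 2
```

Swapping FakeButton for tkinter.Button (and grid(row=...) for the real geometry call) gives exactly the structure from the answer.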
python calculate slope raster from DEM I need the slope for 4 points in Austria. I have the coordinates and the DEM (from open data Austria). I found this tutorial (https://www.earthdatascience.org/tutorials/get-slope-aspect-from-digital-elevation-model/) and tried the code. But the results I get are values like 30000.788... those cannot be degrees, can they? Am I missing a transformation into meters? I use this code:

import richdem as rd

shasta_dem = rd.LoadGDAL(f'{project_data}dem_au.tif')
slope = rd.TerrainAttribute(shasta_dem, attrib='slope_riserun')
rd.SaveGDAL(f'{project_data}dem_slope.tif', slope)

The picture looks great, but I do not understand the values! The DEM has values from 105 to 3736; if those were meters it would be correct for Austria, but the description says the DEM scale units are degrees? Can someone explain that to me, please?
I managed to calculate the slope! The problem was the DEM. It had strange dimensions; with another, more precise DEM from open data it worked fine : )
Asyncify string joining in Python I have the following code snippet which I want to transform into asynchronous code (data tends to be a large iterable):

transformed_data = (do_some_transformation(d) for d in data)
stacked_jsons = "\n\n".join(json.dumps(t, separators=(",", ":")) for t in transformed_data)

I managed to rewrite the do_some_transformation function to be async so I can do the following:

transformed_data = (await do_some_transformation(d) for d in data)
async_generator = (json.dumps(t, separators=(",", ":")) async for t in transformed_data)
stacked_jsons = ???

What's the best way to incrementally join the jsons produced by the async generator so that the joining process is also asynchronous? This snippet is part of a larger I/O-bound application which has many asynchronous components and thus would profit from asynchifying everything.
The point of str.join is to transform an entire list at once.[1] If items arrive incrementally, it can be advantageous to accumulate them one by one.

async def join(by: str, _items: 'AsyncIterable[str]') -> str:
    """Asynchronously joins items with some string"""
    result = ""
    async for item in _items:
        if result and by:  # only add the separator between items
            result += by
        result += item
    return result

The async for loop is sufficient to let the async iterable suspend between items so that other tasks may run. The primary advantage of this approach is that even for very many items, this never stalls the event loop for longer than adding the next item. This utility can directly digest the async generator:

stacked_jsons = await join("\n\n", (json.dumps(t, separators=(",", ":")) async for t in transformed_data))

When it is known that the data is small enough that str.join runs in adequate time, one can directly convert the data to a list instead and use str.join:

stacked_jsons = "\n\n".join([json.dumps(t, separators=(",", ":")) async for t in transformed_data])

The [... async for ...] construct is an asynchronous list comprehension. This internally works asynchronously to iterate, but produces a regular list once all items are fetched - only this resulting list is passed to str.join and can be processed synchronously.

[1] Even when joining an iterable, str.join will internally turn it into a list first.
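A runnable sketch of the accumulator in use (the trivial async generator here is a stand-in for the real transformed_data pipeline):

```python
# Self-contained demo of the incremental async join described above.
import asyncio
import json

async def join(by, items):
    """Asynchronously join string items with a separator."""
    result = ""
    async for item in items:
        if result and by:
            result += by
        result += item
    return result

async def main():
    async def transformed_data():
        for d in [{"a": 1}, {"b": 2}]:   # stand-in for the real pipeline
            await asyncio.sleep(0)       # yield control, as real I/O would
            yield d

    return await join(
        "\n\n",
        (json.dumps(t, separators=(",", ":")) async for t in transformed_data()),
    )

stacked_jsons = asyncio.run(main())
print(stacked_jsons)  # '{"a":1}' and '{"b":2}' separated by a blank line
```

Note that join is a coroutine, so the call site must await it (or run it via asyncio.run as here).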
No handlers could be found for logger "elasticsearch.trace" Updated: Turns out, this is not a function of cron. I get the same behavior when running the script from the command line, if it in fact has a record to process and communicates with ElasticSearch.

I have a cron job that runs a python script which uses pyelasticsearch to index some documents in an ElasticSearch instance. The script works fine from the command line, but when run via cron, it results in this error:

No handlers could be found for logger "elasticsearch.trace"

Clearly there's some logging configuration issue that only crops up when run under cron, but I'm not clear what it is. Any insight?
I solved this by explicitly configuring a handler for the elasticsearch.trace logger, as I saw in examples from the pyelasticsearch repo. After importing pyelasticsearch, set up a handler like so:

tracer = logging.getLogger('elasticsearch.trace')
tracer.setLevel(logging.INFO)
tracer.addHandler(logging.FileHandler('/tmp/es_trace.log'))

I'm not interested in keeping the trace logs, so I used the near-at-hand Django NullHandler.

from django.utils.log import NullHandler

tracer = logging.getLogger('elasticsearch.trace')
tracer.setLevel(logging.INFO)
tracer.addHandler(NullHandler())
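Note that since Python 2.7 the standard library ships its own logging.NullHandler, so the Django import is not required; a minimal sketch:

```python
# Silence the "No handlers could be found" warning using only the
# standard library; no Django dependency is needed for NullHandler.
import logging

tracer = logging.getLogger('elasticsearch.trace')
tracer.setLevel(logging.INFO)
tracer.addHandler(logging.NullHandler())

tracer.info("this record is swallowed silently")  # no warning, no output
```

This keeps the cron environment free of the warning without creating any log file.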
python __init__ vs class attributes I am very new to programming. I just started a couple of weeks ago. I have spent hours reading about classes but I am still confused. I have a specific question. I am confused about when to use class attributes, and when to use the initializer (__init__). I understand that when using __init__, I don't assign any value immediately, but only need to assign values when I create an object from that class. And class attributes automatically belong to any object created from that class. But in terms of practical use, do they accomplish the same thing? Are they just two different ways to do the same thing? Or does __init__ do something that class attributes can't do? I did some testing with the code below and the results are the same. I am confused about when to use which. To me class attributes look more convenient to use.

# use class attributes for class Numbers_1
class Numbers_1:
    one = 1
    two = 2
    three = 3
    six = two * three

    def multiply(self):
        return self.six * self.two * self.three

# use initializer for class Numbers_2
class Numbers_2:
    def __init__(self, num10, num20, num30, num600):
        self.num10 = num10
        self.num20 = num20
        self.num30 = num30
        self.num600 = num600

    def multiply(self):
        return self.num600 * self.num20 * self.num30

# Now I run some tests to compare the two classes...
x = Numbers_1()
y = Numbers_2(10, 20, 30, 20*30)
print(x.one)     # prints 1
print(y.num10)   # prints 10
print(x.six)     # prints 6
print(y.num600)  # prints 600

# assign attributes to each object
x.eighteen = x.six * x.three
y.num18000 = y.num600 * y.num30
print(x.eighteen)   # prints 18
print(y.num18000)   # prints 18000

# try printing methods of each object
print(x.multiply())  # prints 36
print(y.multiply())  # prints 360000

# try reassigning values of attributes in each object
x.one = 100
y.num10 = 1000
print(x.one)    # prints 100
print(y.num10)  # prints 1000
You got everything right - except that class attributes also function like static variables in python. Note however that everything in the class scope is run immediately upon parsing by the python interpreter.

# file1.py
def foo():
    print("hello world")

class Person:
    first_name = foo()  # runs when the class body is executed, i.e. at import
    last_name = None

    def __init__(self):
        self.last_name = "augustus"
        print("good night")

# file2.py
import file1
>>> "hello world"
x = file1.Person()
>>> "good night"
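A runnable sketch of the practical difference: a class attribute lives on the class and is shared by all instances, while attributes assigned in __init__ belong to one instance (the Counter class here is illustrative):

```python
# Class attributes are shared state on the class; instance attributes
# set in __init__ are per-object state.
class Counter:
    kind = "counter"          # class attribute, shared by all instances

    def __init__(self, start):
        self.value = start    # instance attribute, per object

a = Counter(1)
b = Counter(10)

Counter.kind = "tally"        # changes what BOTH instances see
print(a.kind, b.kind)         # tally tally

a.value = 99                  # only touches a
print(a.value, b.value)       # 99 10
```

This is the one thing __init__ gives you that class attributes cannot: per-instance state determined at creation time.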
Building a multivariate, multi-task LSTM with Keras Preamble I am currently working on a Machine Learning problem where we are tasked with using past data on product sales in order to predict sales volumes going forward (so that shops can better plan their stocks). We essentially have time series data, where for each and every product we know how many units were sold on which days. We also have information like what the weather was like, whether there was a public holiday, whether any of the products were on sale, etc. We've been able to model this with some success using an MLP with dense layers, just using a sliding-window approach to include sales volumes from the surrounding days. However, we believe we'll be able to get much better results with a time-series approach such as an LSTM.

Data The data we have essentially is as follows: (EDIT: for clarity the "Time" column in the picture above is not correct. We have inputs once per day, not once per month. But otherwise the structure is the same!)

So the X data is of shape:

(numProducts, numTimesteps, numFeatures) = (50 products, 1096 days, 90 features)

And the Y data is of shape:

(numProducts, numTimesteps, numTargets) = (50 products, 1096 days, 3 binary targets)

So we have data for three years (2014, 2015, 2016) and want to train on this in order to make predictions for 2017. (That's of course not 100% true, since we actually have data up to Oct 2017, but let's just ignore that for now.)

Problem I would like to build an LSTM in Keras that allows me to make these predictions. There are a few places where I am getting stuck though. So I have six concrete questions (I know one is supposed to try to limit a Stackoverflow post to one question, but these are all intertwined).

Firstly, how would I slice up my data for the batches? Since I have three full years, does it make sense to simply push through three batches, each time of size one year?
Or does it make more sense to make smaller batches (say 30 days) and also to use sliding windows? I.e. instead of 36 batches of 30 days each, I use 36 * 6 batches of 30 days each, each time sliding by 5 days? Or is this not really the way LSTMs should be used? (Note that there is quite a bit of seasonality in the data, so I need to catch that kind of long-term trend as well.)

Secondly, does it make sense to use return_sequences=True here? In other words, I keep my Y data as is (50, 1096, 3) so that (as far as I've understood it) there is a prediction at every time step for which a loss can be calculated against the target data? Or would I be better off with return_sequences=False, so that only the final value of each batch is used to evaluate the loss (i.e. if using yearly batches, then in 2016 for product 1, we evaluate against the Dec 2016 value of (1,1,1))?

Thirdly, how should I deal with the 50 different products? They are different, but still strongly correlated, and we've seen with other approaches (for example an MLP with simple time-windows) that the results are better when all products are considered in the same model. Some ideas that are currently on the table are:

- change the target variable to be not just 3 variables, but 3 * 50 = 150; i.e. for each product there are three targets, all of which are trained simultaneously.
- split up the results after the LSTM layer into 50 dense networks, which take as input the outputs from the LSTM, plus some features that are specific to each product - i.e. we get a multi-task network with 50 loss functions, which we then optimise together. Would that be crazy?
- consider a product as a single observation, and include product-specific features already at the LSTM layer. Use just this one layer followed by an output layer of size 3 (for the three targets). Push through each product in a separate batch.

Fourthly, how do I deal with validation data?
Normally I would just keep out a randomly selected sample to validate against, but here we need to keep the time ordering in place. So I guess the best is to just keep a few months aside?

Fifthly, and this is the part that is probably the most unclear to me - how can I use the actual results to perform predictions? Let's say I used return_sequences=False and I trained on all three years in three batches (each time up to Nov) with the goal of training the model to predict the next value (Dec 2014, Dec 2015, Dec 2016). If I want to use these results in 2017, how does this actually work? If I understood it correctly, the only thing I can do in this instance is to then feed the model all the data points for Jan to Nov 2017 and it will give me back a prediction for Dec 2017. Is that correct? However, if I were to use return_sequences=True, then trained on all data up to Dec 2016, would I then be able to get a prediction for Jan 2017 just by giving the model the features observed in Jan 2017? Or do I need to also give it the 12 months before Jan 2017? What about Feb 2017 - do I in addition need to give the value for Jan 2017, plus a further 11 months before that? (If it sounds like I'm confused, it's because I am!)

Lastly, depending on what structure I should use, how do I do this in Keras? What I have in mind at the moment is something along the following lines (though this would be for only one product, so it doesn't solve having all products in the same model):

Keras code

trainX = trainingDataReshaped   # Data for Product 1, Jan 2014 to Dec 2016
trainY = trainingTargetReshaped
validX = validDataReshaped      # Data for Product 1, for ??? Maybe for a few months?
validY = validTargetReshaped

numSequences = trainX.shape[0]
numTimeSteps = trainX.shape[1]
numFeatures = trainX.shape[2]
numTargets = trainY.shape[2]

model = Sequential()
model.add(LSTM(100, input_shape=(None, numFeatures), return_sequences=True))
model.add(Dense(numTargets, activation="softmax"))
model.compile(loss=stackEntry.params["loss"], optimizer="adam", metrics=['accuracy'])

history = model.fit(trainX, trainY,
                    batch_size=30, epochs=20, verbose=1,
                    validation_data=(validX, validY))

predictX = predictionDataReshaped  # Data for Product 1, Jan 2017 to Dec 2017
prediction = model.predict(predictX)
So: Firstly, how would I slice up my data for the batches? Since I have three full years, does it make sense to simply push through three batches, each time of size one year? Or does it make more sense to make smaller batches (say 30 days) and also to use sliding windows? I.e. instead of 36 batches of 30 days each, I use 36 * 6 batches of 30 days each, each time sliding by 5 days? Or is this not really the way LSTMs should be used? (Note that there is quite a bit of seasonality in the data, so I need to catch that kind of long-term trend as well.)

Honestly - modeling such data is something really hard. First of all - I wouldn't advise you to use LSTMs, as they are designed to capture a slightly different kind of data (e.g. NLP or speech, where it's really important to model long-term dependencies - not seasonality), and they need a lot of data in order to be learned. I would rather advise you to use either GRU or SimpleRNN, which are way easier to learn and should be better for your task.

When it comes to batching - I would definitely advise you to use a fixed-window technique, as it will end up producing way more data points than feeding a whole year or a whole month. Try to set the number of days as a meta-parameter, which will also be optimized by using different values in training and choosing the most suitable one.

When it comes to seasonality - of course, this is a case here, but:

- You might have way too few data points and years collected to provide a good estimate of seasonal trends,
- Using any kind of recurrent neural network to capture such seasonalities is a really bad idea.

What I advise you to do instead is:

try adding seasonal features (e.g.
the month variable, day variable, a variable which is set to true if there is a certain holiday that day, or how many days there are to the next important holiday - this is an area where you could be really creative)

Use aggregated last year's data as a feature - you could, for example, feed last year's results or aggregations of them like a running average of the last year's results, maximum, minimum, etc.

Secondly, does it make sense to use return_sequences=True here? In other words, I keep my Y data as is (50, 1096, 3) so that (as far as I've understood it) there is a prediction at every time step for which a loss can be calculated against the target data? Or would I be better off with return_sequences=False, so that only the final value of each batch is used to evaluate the loss (i.e. if using yearly batches, then in 2016 for product 1, we evaluate against the Dec 2016 value of (1,1,1)).

Using return_sequences=True might be useful, but only in the following cases:

- When a given LSTM (or another recurrent layer) will be followed by yet another recurrent layer.
- In a scenario where you feed a shifted original series as the output, by which you are simultaneously learning a model over different time windows, etc.

The way described in the second point might be an interesting approach, but keep in mind that it might be a little bit hard to implement, as you will need to rewrite your model in order to obtain a production result. What also might be harder is that you'll need to test your model against many types of time instabilities - and such an approach might make this totally unfeasible.

Thirdly, how should I deal with the 50 different products? They are different, but still strongly correlated and we've seen with other approaches (for example an MLP with simple time-windows) that the results are better when all products are considered in the same model. Some ideas that are currently on the table are: change the target variable to be not just 3 variables, but 3 * 50 = 150; i.e.
for each product there are three targets, all of which are trained simultaneously. split up the results after the LSTM layer into 50 dense networks, which take as input the outputs from the LSTM, plus some features that are specific to each product - i.e. we get a multi-task network with 50 loss functions, which we then optimise together. Would that be crazy? consider a product as a single observation, and include product-specific features already at the LSTM layer. Use just this one layer followed by an output layer of size 3 (for the three targets). Push through each product in a separate batch.

I would definitely go for the first choice, but before providing a detailed explanation I will discuss the disadvantages of the 2nd and 3rd ones:

- In the second approach: it wouldn't be mad, but you will lose a lot of correlations between product targets,
- In the third approach: you'll lose a lot of interesting patterns occurring in dependencies between different time series.

Before getting to my choice - let's discuss yet another issue - redundancies in your dataset. I guess that you have two kinds of features:

- product-specific ones (let's say there are 'm' of them),
- general features - let's say there are 'n' of them.

Now you have a table of size (timesteps, m + n, products). I would transform it into a table of shape (timesteps, products * m + n), as the general features are the same for all products. This will save you a lot of memory and also make it feasible to feed to a recurrent network (keep in mind that recurrent layers in keras have only one feature dimension - whereas you had two - product and feature ones).

So why is the first approach the best in my opinion? Because it takes advantage of many interesting dependencies in the data. Of course - this might harm the training process - but there is an easy trick to overcome this: dimensionality reduction. You could e.g.
train PCA on your 150-dimensional vector and reduce its size to a much smaller one - thanks to which you have your dependencies modeled by PCA and your output has a much more feasible size.

Fourthly, how do I deal with validation data? Normally I would just keep out a randomly selected sample to validate against, but here we need to keep the time ordering in place. So I guess the best is to just keep a few months aside?

This is a really important question. From my experience - you need to test your solution against many types of instabilities in order to be sure that it works fine. So a few rules which you should keep in mind:

- There should be no overlap between your training sequences and test sequences. If there were, you would have valid values from the test set fed to the model while training,
- You need to test model time stability against many kinds of time dependencies.

The last point might be a little bit vague - so to provide you some examples:

- year stability - validate your model by training it using each possible combination of two years and test it on a held-out one (e.g. 2015, 2016 against 2017; 2015, 2017 against 2016; etc.) - this will show you how year changes affect your model,
- future prediction stability - train your model on a subset of weeks/months/years and test it using the following week/month/year result (e.g. train it on January 2015, January 2016 and January 2017 and test it using February 2015, February 2016, February 2017 data, etc.),
- month stability - train the model while keeping a certain month in the test set.

Of course - you could try yet other hold-outs.

Fifthly, and this is the part that is probably the most unclear to me - how can I use the actual results to perform predictions? Let's say I used return_sequences=False and I trained on all three years in three batches (each time up to Nov) with the goal of training the model to predict the next value (Dec 2014, Dec 2015, Dec 2016).
If I want to use these results in 2017, how does this actually work? If I understood it correctly, the only thing I can do in this instance is to then feed the model all the data points for Jan to Nov 2017 and it will give me back a prediction for Dec 2017. Is that correct? However, if I were to use return_sequences=True, then trained on all data up to Dec 2016, would I then be able to get a prediction for Jan 2017 just by giving the model the features observed at Jan 2017? Or do I need to also give it the 12 months before Jan 2017? What about Feb 2017, do I in addition need to give the value for 2017, plus a further 11 months before that? (If it sounds like I'm confused, it's because I am!)

This depends on how you've built your model:

- if you used return_sequences=True, you need to rewrite it to have return_sequences=False, or just take the output and consider only the last step of the result,
- if you used a fixed window - then you just need to feed a window before the prediction to the model,
- if you used a varying length - you could feed any number of timesteps preceding your prediction period that you want (but I advise you to feed at least the 7 preceding days).

Lastly, depending on what structure I should use, how do I do this in Keras? What I have in mind at the moment is something along the following lines: (though this would be for only one product, so doesn't solve having all products in the same model)

Here - more info on what kind of model you've chosen is needed.
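The fixed-window batching recommended above can be sketched with plain numpy; the function name and the window/step values are illustrative, and the window length is exactly the meta-parameter suggested for tuning:

```python
# Sketch: cut a (numProducts, numTimesteps, numFeatures) array into
# fixed-length overlapping windows along the time axis, as suggested
# for batching above.
import numpy as np

def sliding_windows(data, window=30, step=5):
    """Return overlapping windows along the time axis.

    data: shape (products, timesteps, features)
    out:  shape (products * n_windows, window, features)
    """
    products, timesteps, features = data.shape
    starts = range(0, timesteps - window + 1, step)
    out = np.stack([data[:, s:s + window, :] for s in starts], axis=1)
    return out.reshape(-1, window, features)

X = np.random.rand(50, 1096, 90)      # the shapes from the question
batches = sliding_windows(X, window=30, step=5)
print(batches.shape)                  # (10700, 30, 90): 50 products * 214 windows
```

Each row of the result is one 30-day training sample, giving far more data points than one-batch-per-year slicing.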
Comparing same index in 2 lists I don't have code for this because I have no idea how to do it, and couldn't find much help on Google. Is there a way to find out whether the elements at the same indexes of 2 lists are the same? For example:

x_list = [1, 2, 3, 4, 5]
y_list = [1, 2, A, B, 5]

I want to know whether the first element of X is the same as the first element of Y, the second element of X is the same as the second element of Y, etc. I could do it like:

x_list[0] == y_list[0]

But I need a solution that generalizes to lists of any length.
zip the lists and evaluate the test (which returns a boolean) for each pair:

[i == j for i, j in zip(x_list, y_list)]

If you don't need the individual values, you could use any to quickly check whether any pair differs (meaning the lists are not element-wise equal):

any(i != j for i, j in zip(x_list, y_list))

The any version stops as soon as a mismatch is found, meaning you may not have to traverse the whole lists except in the worst case.
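A runnable demonstration (strings stand in for the undefined names A and B from the question):

```python
# Element-wise comparison of two lists via zip, as described above.
x_list = [1, 2, 3, 4, 5]
y_list = [1, 2, "A", "B", 5]

pairwise = [i == j for i, j in zip(x_list, y_list)]
print(pairwise)        # [True, True, False, False, True]

has_mismatch = any(i != j for i, j in zip(x_list, y_list))
print(has_mismatch)    # True

all_equal = all(i == j for i, j in zip(x_list, y_list))
print(all_equal)       # False
```

The all variant is the natural complement when you want a single "are they identical position by position" answer.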
PyOpenCL | Fail to launch a Kernel When I'm using PyOpenCL to run the kernel "SIMPLE_JOIN" it fails.

HEADER OF THE KERNEL IN .CL FILE

void SIMPLE_JOIN(__global const int* a, int a_col, __global const int* a_valuesPic,
                 __global const int* b, int b_col, __global const int* b_valuesPic,
                 __global const int* join_valuesPic, __global int* current,
                 const int maxVars, __global int* buffer, int joinVar)

THE EXECUTION IN PyOpenCL

program.SIMPLE_JOIN(context, (a_col, b_col), None, \
    buffer_a, np.int32(a_col), buffer_a_valPic, \
    buffer_b, np.int32(b_col), buffer_b_valPic, \
    buffer_join_valPic, buffer_current, np.int32(maxVars), \
    buffer_result, np.int32(joinList[0]))

THE ERROR IN THE COMMAND LINE

Traceback (most recent call last):
  File "C:\Program Files (x86)\JetBrains\PyCharm Community Edition 4.0.2\helpers\pydev\pydevd.py", line 2199, in <module>
    globals = debugger.run(setup['file'], None, None)
  File "C:\Program Files (x86)\JetBrains\PyCharm Community Edition 4.0.2\helpers\pydev\pydevd.py", line 1638, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:/Users/οΏ½οΏ½/Documents/οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½/GAP+/gapQueryTree.py", line 213, in <module>
    res1_array, res1_ValsPic = gpu.gpu_join(p[0], p1_ValsPic, friend[0], friend1_ValsPic)
  File "C:/Users/οΏ½οΏ½/Documents/οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½/GAP+\gapPyOpenCl.py", line 107, in gpu_join
    buffer_result, np.int32(joinList[0]))
  File "C:\Python27\lib\site-packages\pyopencl\__init__.py", line 515, in kernel_call
    global_offset, wait_for, g_times_l=g_times_l)
Boost.Python.ArgumentError: Python argument types in
    pyopencl._cl.enqueue_nd_range_kernel(Context, Kernel, tuple, NoneType, NoneType, NoneType)
did not match C++ signature:
    enqueue_nd_range_kernel(class pyopencl::command_queue {lvalue} queue, class pyopencl::kernel {lvalue} kernel, class boost::python::api::object global_work_size, class boost::python::api::object local_work_size, class boost::python::api::object global_work_offset=None, class boost::python::api::object wait_for=None, bool g_times_l=False)

Process finished with exit code -1
The first argument to your kernel invocation should be the command queue, not the context:

program.SIMPLE_JOIN(queue, (a_col, b_col), None, \
    ...
local postgresql setup problems occurring when running python file

Traceback (most recent call last):
  File "app.py", line 14, in <module>
    app.config.from_object(os.environ['APP_SETTINGS'])
  File "/Users/nihit/Desktop/flask-intro/venv/lib/python2.7/UserDict.py", line 23, in __getitem__
    raise KeyError(key)
KeyError: 'APP_SETTINGS'

I get this error when I try to run my app.py. I am learning from an online tutorial using Flask; these are the first few lines of code in my app folder:

# import the Flask class from the flask module
from flask import Flask, render_template, redirect, \
    url_for, request, session, flash
from functools import wraps
from flask.ext.sqlalchemy import SQLAlchemy

# create the application object
app = Flask(__name__)

# config
import os
app.config.from_object(os.environ['APP_SETTINGS'])

# create the sqlalchemy object
db = SQLAlchemy(app)

# import db schema
from models import *

My config.py:

import os

# default config
class BaseConfig(object):
    DEBUG = False
    # shortened for readability
    SECRET_KEY = 'secret key'
    SQLALCHEMY_DATABASE_URI = os.environ['DATABASE_URL']

class DevelopmentConfig(BaseConfig):
    DEBUG = True

class ProductionConfig(BaseConfig):
    DEBUG = False

I hope someone can help me to identify what I am doing wrong. I am following this online tutorial to learn, so most of the things are there: https://www.youtube.com/watch?v=Up3p20rgWCw&list=PLLjmbh6XPGK5e0IbpMccp7NmJHnN8O1ng&index=28
You are missing the APP_SETTINGS environment variable. Define that and you'll be good to go.export APP_SETTINGS=config.DevelopmentConfigI'm not sure where this is first discussed in the tutorial, but you can see it about 4 minutes into the video you linked to when the author discusses adding the DATABASE_URL environment variable.
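While setting the variable permanently is the real fix, a small fallback can make the failure less abrupt during development. A minimal sketch, assuming the config.DevelopmentConfig class from the question's config.py:

```python
import os

# Hypothetical fallback: default to the development config when the
# APP_SETTINGS environment variable is not set.
settings = os.environ.get("APP_SETTINGS", "config.DevelopmentConfig")

# app.config.from_object(settings)   # as in the question's app.py
print(settings)
```

With the variable exported as shown in the answer, os.environ.get picks up the exported value instead of the fallback.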
re.sub on a match.group

for element in f:
    galcode_scan = re.search(ur'blah\.blah\.blah\(\'\w{5,10}', element)

If I try to perform re.sub and remove the blahs with something else and keep the last bit, the \w{5,10} becomes literal. How do I retain the characters that are taken up by that chunk of the regular expression?

EDIT: Here is the complete code

for element in f:
    galcode_scan = re.search(ur'Imgur\.Util\.triggerView\(\'\w{5,10}', element)
    galcode_scan = re.sub(r'Imgur\.Util\.triggerView\(\'\w{5,10}', 'blah\.\w{5,10}', ur"galcode_scan\.\w{5,10}")
    print galcode_scan
You can use positive lookahead ((?=...)) to not to match when replacing but matching as a whole pattern:re.sub("blah\.blah\.blah\(\'(?=\w{5,10})", "", "blah.blah.blah('qwertyu")'qwertyu'If you want to replace you match, just add it to the replacement parameter:re.sub("blah\.blah\.blah\(\'(?=\w{5,10})", "pref:", "blah.blah.blah('qwertyu")'pref:qwertyu'You can also do it by capturing the pattern ((..)) and back-referencing it (\1 .. \9):re.sub("blah\.blah\.blah\(\'(\w{5,10})", "pref:\\1", "blah.blah.blah('qwertyu")'pref:qwertyu'UpdateA more precise pattern for the provided exmples:re.sub("Imgur\.Util\.triggerView'(?=\w{5,10})", "imgurl.com/", "Imgur.Util.triggerView'B1ahblA4")'imgurl.com/B1ahblA4'The pattern here is a simple string, so whatever you need to make dynamic you can use a variable for it. For example to use different mappings:map = { 'Imgur\.Util\.triggerView\'': 'imgurl.com/', 'Example\.Util\.triggerView\'': 'example.com/'}items = [ "Imgur.Util.triggerView'B1ahblA4", "Example.Util.triggerView'FooBar"]for item in items: for old, new in map.iteritems(): pattern = old + '(?=\w{5,10})' if re.match(pattern, item): print re.sub(pattern, new, item)imgurl.com/B1ahblA4example.com/FooBar
Convert numpy.nd array to json I have a data frame genre_rail in which one column contains a numpy.ndarray. The dataframe looks as given below (screenshot not reproduced); the array in it looks like this:

['SINGTEL_movie_22906' 'SINGTEL_movie_22943' 'SINGTEL_movie_24404' 'SINGTEL_movie_22924' 'SINGTEL_movie_22937' 'SINGTEL_movie_22900' 'SINGTEL_movie_24416' 'SINGTEL_movie_24422']

I tried with the following code:

import json
json_content = json.dumps({'mydata': [genre_rail.iloc[i]['content_id'] for i in range(len(genre_rail))]})

But got an error: TypeError: array is not JSON serializable. I need output as:

{"Rail2_contend_id":["SINGTEL_movie_22894","SINGTEL_movie_22898","SINGTEL_movie_22896","SINGTEL_movie_24609","SINGTEL_movie_2455","SINGTEL_movie_24550","SINGTEL_movie_24548","SINGTEL_movie_24546"]}
How about you convert the array using the .tolist method. Then you can write it to json like:

import json
import codecs

np_array_to_list = np_array.tolist()
json_file = "file.json"
json.dump(np_array_to_list, codecs.open(json_file, 'w', encoding='utf-8'), sort_keys=True, indent=4)
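Put together as a self-contained sketch (assuming the column holds a numpy string array like the one in the question):

```python
import json
import numpy as np

ids = np.array(['SINGTEL_movie_22906', 'SINGTEL_movie_22943',
                'SINGTEL_movie_24404', 'SINGTEL_movie_22924'])

# .tolist() turns the ndarray (and its numpy scalar elements) into plain
# Python types that the json module knows how to serialize.
json_content = json.dumps({'Rail2_contend_id': ids.tolist()})
print(json_content)
```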
aggregate by group and subgroup I have a dataframe that looks like this:

Id Country amount
1  AT      10
2  BE      20
3  DE      30
1  AT      10
1  BE      20
3  DK      30

What I want to do is aggregate amount by Id and Country, so my df should look like:

Id Country amount AT_amount BE_amount DE_amount DK_amount
1  AT      10     20        20        0         0
2  BE      20     0         20        0         0
3  DE      30     0         0         30        30
1  AT      10     20        20        0         0
1  BE      20     20        20        0         0
3  DK      30     0         0         30        30

I tried working with groupby, but using:

df['AT_amount'] = df.groupby(['Id', 'Country']).sum(amount)

will not work, since then I will not get the values for all Id==1, but only for Id==1, and it will give me a value regardless of the country. I could first set the values to 0 if Country != 'AT' and then take a groupby maximum, but this seems a bit of a long way around. To get these values for all countries it seems I will have to write a loop, or is there a quick way to create a new variable for all subgroup countries?
I think you can use pivot_table, add_suffix and last merge:df1 = df.pivot_table(index='Id', columns='Country', values='amount', fill_value='0', aggfunc=sum).add_suffix('_amount').reset_index()print df1 Country Id AT_amount BE_amount DE_amount DK_amount0 1 20 20 0 01 2 0 20 0 02 3 0 0 30 30print pd.merge(df,df1, on='Id', how='left') Id Country amount AT_amount BE_amount DE_amount DK_amount0 1 AT 10 20 20 0 01 2 BE 20 0 20 0 02 3 DE 30 0 0 30 303 1 AT 10 20 20 0 04 1 BE 20 20 20 0 05 3 DK 30 0 0 30 30
count occurrences of number entered by the user I need to count the occurrences of the min number from the input that has been entered by the user. This is what I have so far; it is displaying the max and min numbers, but I don't know how to count the occurrences using ELIF, e.g. "the smallest number occurs 'x' times". Only a beginner at Python, please help.

max = 0
min = 0
count_l = 0
count_s = 0

while True:
    inp = raw_input ("Enter a number\n: ")
    if inp == '0':
        break
    try:
        num = float (inp)
    except:
        print 'Please enter a valid number'
        continue
    if min == 0 or num < min:
        min = num
    if max == 0 or num > max:
        max = num

def result (max, min):
    print ('Largest Number Entered\n:') , max
    print ('Smallest Number Entered\n:'), min
    print ('Occurence of largest number is: '), count_l
    print ('Occurence of smallest number is: '), count_s

result (max, min)
Well, you change the smallest number dynamically. This means that the count should be reset every time you change the number. The same goes for the maximum number.Examplemax = float("-inf")min = float("inf")count_l = 0count_s = 0def safecast(cast_type, value, default=None): try: default = cast_type(value) finally: return defaultdef input_until(prompt, cast_type, value, default=None): while True: ret = safecast(cast_type, raw_input(prompt), default=default) if ret != value: yield ret else: breakfor num in input_until("Enter a number\n:", float, 0): if num != 0: if num < min: min = num count_s = 1 elif num > max: max = num count_l = 1 elif num == min: count_s += 1 elif num == max: count_l += 1def result(max, min): print('Largest Number Entered:') , max print('Smallest Number Entered:'), min print('Occurance of largest number is: '), count_l print('Occurance of smallest number is: '), count_sresult (max, min)Also, in the future, don't use variables called "max" or "min" because these are python builtin functions.
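An alternative sketch of just the counting logic, separated from the input loop so it is easy to test (plain Python 3; the variable names deliberately differ from the question's, since min and max shadow built-ins):

```python
def min_max_counts(numbers):
    """Return (smallest, largest, count_smallest, count_largest) for a
    non-empty sequence, resetting each count whenever a new extreme appears."""
    smallest = largest = numbers[0]
    count_s = count_l = 1
    for num in numbers[1:]:
        if num < smallest:
            smallest, count_s = num, 1      # new minimum: restart its count
        elif num == smallest:
            count_s += 1
        if num > largest:
            largest, count_l = num, 1       # new maximum: restart its count
        elif num == largest:
            count_l += 1
    return smallest, largest, count_s, count_l

print(min_max_counts([3.0, 1.0, 4.0, 1.0, 5.0]))  # → (1.0, 5.0, 2, 1)
```

Note this also avoids the bug in the question where entering only negative numbers would never update min (because min starts at 0).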
Charge Account Validation Python project Design a program that asks the user to enter a charge account number. The program should determine whether the number is valid by comparing it to the following list of valid charge account numbers:

5658845 4520125 7895122 8777541 8451277 1302850
8080152 4562555 5552012 5050552 7825877 1250255
1005231 6545231 3852085 7576651 7881200 4581002

These numbers should be stored in an array. Use the sequential search algorithm to locate the number entered by the user. If the number is in the array, the program should display a message indicating the number is valid. If the number is not in the array, the program should display a message indicating the number is invalid.

1. Create a data file, valid_numbers.txt, containing the valid charge account numbers as listed in the book.
2. Create a data file, possible_valid numbers, containing a list of possible valid numbers (such as those entered by the user). You will create this file. Include at least 10 numbers, with approximately half valid and half invalid.
3. Compare each charge account number from the file, possible valid numbers, to see if it is listed as a valid number in the file valid_numbers.txt.
4. Create an output file, results.txt, which lists the possible valid numbers and the result of validity checking. Create a list of numbers, followed by "VALID" or "INVALID". Space and align neatly.
5. Place your name and student ID at the top of the output file.
6. The output should look similar to: [image: What output should look like]

Below is my code:

ValidNumbers = open("possible_valid numbers.txt", "r")
Account_Number = int(input("Please enter your charge acount number "))
flag = 0
with open('valid_numbers.txt') as f:
    lines = (f.read().splitlines())
numbers = [int(e.strip()) for e in lines]
for eachelement in numbers:
    if eachelement == Account_Number:
        print ('The number is valid')
        flag = 1
        break;
if (flag == 0):
    print ('The number is invalid')
ValidNumbers.close()

I don't know how to complete part 4
Your code is a bit off from the homework assignment, I suggest that you use search engines to research code sniplets on how to complete the tasks of the assignment:Step 1 -- this is a manual process, no code requiredStep 2 -- this is a manual process, no code requiredStep 3 -- you need to write code to: read file created in Step 1 read file created in Step 2 then compare the two, keep the valid checksStep 4 -- you need to:create a new filewrite a header (which is specified in Step 5)write the results from Step 3 Step 5 -- actually done in Step 4Step 6 -- this is a manual step, no code requiredGood Luck !
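A rough sketch of Steps 3 and 4, assuming both data files already exist with one number per line; the header text is a placeholder to replace with your own name and student ID (the tiny sample files here are only so the sketch runs on its own):

```python
# Create tiny sample data files so the sketch is runnable on its own.
with open("valid_numbers.txt", "w") as f:
    f.write("5658845\n4520125\n7895122\n")
with open("possible_valid numbers.txt", "w") as f:
    f.write("5658845\n1111111\n7895122\n")

def check_numbers(valid_path, possible_path, out_path):
    # Step 3: read both files and compare each possible number
    with open(valid_path) as f:
        valid = set(line.strip() for line in f if line.strip())
    with open(possible_path) as f:
        possible = [line.strip() for line in f if line.strip()]
    # Step 4: write the aligned results file with a header
    with open(out_path, "w") as out:
        out.write("Your Name    Student ID\n\n")
        for number in possible:
            status = "VALID" if number in valid else "INVALID"
            out.write("%-10s %s\n" % (number, status))

check_numbers("valid_numbers.txt", "possible_valid numbers.txt", "results.txt")
print(open("results.txt").read())
```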
Problems with writing and reading files in python I need to write and read multiple variables in one text file:

myfile = open ("bob.txt","w")
myfile.write(user1strength)
myfile.write("\n")
myfile.write(user1skill)
myfile.write("\n")
myfile.write(user2strength)
myfile.write("\n")
myfile.write(user2skill)
myfile.close()

At the moment it comes up with this error:

Traceback (most recent call last):
  File "D:\python\project2\project2.py", line 70, in <module>
    myfile.write(user1strength)
TypeError: must be str, not float
If you are using python3 use the print function instead.with open("bob.txt", "w") as myfile: print(user1strength, file=myfile) print(user1skill, file=myfile) print(user2strength, file=myfile) print(user2skill, file=myfile)The print function takes care of converting to str for you, and automatically adds the \n for you as well. I also used a with block which will automatically close the file for you.If you are on python2.6 or python2.7, you can get access to the print function with from __future__ import print_function.
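Reading the values back works the same way in reverse: each line holds one number, so convert it with float(). A self-contained round-trip sketch (the example values are made up):

```python
# Example values (the question's variables are floats)
user1strength, user1skill = 10.5, 7.0
user2strength, user2skill = 8.0, 9.5

# Write: print() converts each float to str and appends the newline
with open("bob.txt", "w") as myfile:
    print(user1strength, file=myfile)
    print(user1skill, file=myfile)
    print(user2strength, file=myfile)
    print(user2skill, file=myfile)

# Read back: one value per line, converted with float()
with open("bob.txt") as myfile:
    values = [float(line) for line in myfile]

print(values)  # → [10.5, 7.0, 8.0, 9.5]
```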
Azure Python SDK retrieve Key Vault secret for storage account I have the following code for a key vault to retrieve the secret and be able to use them in a storage account backup. The following code for the key vault is the followingkeyvault_name = f'keyvault-link'KeyVaultName = "name"credential = DefaultAzureCredential()client = SecretClient(vault_url=keyvault_name, credential=credential)And on the other hand I have a storage account code as follow:connection_string = client.get_secret("GATEWAY-Connection-String") # The connection string for the source containeraccount_key = client.get_secret("GATEWAY-Account-Key") # The account key for the source container# source_container_name = 'newblob' # Name of container which has blob to be copiedtable_service_out = TableService(account_name=client.get_secret("GATEWAY-Account-Name-out"), account_key=client.get_secret("GATEWAY-Account-Key-out"))table_service_in = TableService(account_name=client.get_secret("GATEWAY-Account-Name-in"), account_key=client.get_secret("GATEWAY-Account-Key-in"))# Create clientclient = BlobServiceClient.from_connection_string(connection_string) client = BlobServiceClient.from_connection_string(connection_string)all_containers = client.list_containers(include_metadata=True)for container in all_containers: # Create sas token for blob sas_token = generate_account_sas( account_name = client.account_name, account_key = account_key, resource_types = ResourceTypes(object=True, container=True), permission= AccountSasPermissions(read=True,list=True), # start = datetime.now(), expiry = datetime.utcnow() + timedelta(hours=24) # Token valid for 4 hours ) print("==========================") print(container['name'], container['metadata']) # print("==========================") container_client = client.get_container_client(container.name) # print(container_client) blobs_list = container_client.list_blobs() for blob in blobs_list: # Create blob client for source blob source_blob = BlobClient( client.url, 
container_name = container['name'], blob_name = blob.name, credential = sas_token ) target_connection_string = client.get_secret("GATEWAY-Target-Connection-String") target_account_key = client.get_secret("GATEWAY-Target-Account-Key") source_container_name = container['name'] target_blob_name = blob.name target_destination_blob = container['name'] + today print(target_blob_name) # print(blob.name) target_client = BlobServiceClient.from_connection_string(target_connection_string) container_client = target_client.get_container_client(target_destination_blob) if not container_client.exists(): container_client.create_container() new_blob = target_client.get_blob_client(target_destination_blob, target_blob_name) new_blob.start_copy_from_url(source_blob.url) print("COPY TO: " + target_connection_string) print(f"TRY: saving blob {target_blob_name} into {target_destination_blob} ") # except: # # Create new blob and start copy operation. # new_blob = target_client.get_blob_client(target_destination_blob, target_blob_name) # new_blob.start_copy_from_url(source_blob.url) # print("COPY TO: " + target_connection_string) # print(f"EXCEPT: saving blob {target_blob_name} into {target_destination_blob} ") #query 100 items per request, in case of consuming too much menory load all data in one timequery_size = 1000#save data to storage2 and check if there is lefted data in current table,if yes recurrence#save data to storage2 and check if there is lefted data in current table,if yes recurrencedef queryAndSaveAllDataBySize(source_table_name, target_table_name,resp_data:ListGenerator ,table_out:TableService,table_in:TableService,query_size:int): for item in resp_data: tb_name = source_table_name + today #remove etag and Timestamp appended by table service del item.etag del item.Timestamp print("INSERT data:" + str(item) + "into TABLE:"+ tb_name) table_in.insert_or_replace_entity(target_table_name,item) if resp_data.next_marker: data = 
table_out.query_entities(table_name=source_table_name,num_results=query_size,marker=resp_data.next_marker) queryAndSaveAllDataBySize(source_table_name, target_table_name, data,table_out,table_in,query_size)tbs_out = table_service_out.list_tables()print(tbs_out)for tb in tbs_out: table = tb.name + today #create table with same name in storage2 table_service_in.create_table(table_name=table, fail_on_exist=False) #first query data = table_service_out.query_entities(tb.name,num_results=query_size) queryAndSaveAllDataBySize(tb.name, table,data,table_service_out,table_service_in,query_size)Normally in this linetable_service_out = TableService(account_name=client.get_secret("GATEWAY-Account-Name-out"), account_key=client.get_secret("GATEWAY-Account-Key-out"))the params take the value as a string, and the secret retrieve it comes back as a string so I thought that would have worked, but But when I run the code, I get this following errorTraceback (most recent call last): File "/Users/user/Desktop/AzCopy/blob.py", line 1581, in <module> table_service_out = TableService(account_name=table_out, account_key=table_out_key) File "/Users/user/miniforge3/lib/python3.9/site-packages/azure/cosmosdb/table/tableservice.py", line 173, in __init__ service_params = _TableServiceParameters.get_service_parameters( File "/Users/user/miniforge3/lib/python3.9/site-packages/azure/cosmosdb/table/common/_connection.py", line 116, in get_service_parameters params = _ServiceParameters(service, File "/Users/user/miniforge3/lib/python3.9/site-packages/azure/cosmosdb/table/common/_connection.py", line 70, in __init__ self.account_key = self.account_key.strip()AttributeError: 'KeyVaultSecret' object has no attribute 'strip'
Thank you Gaurav Mantri. Posting your suggestion as an answer to help other community members. get_secret() returns a KeyVaultSecret object rather than a plain string, which is why TableService fails with 'KeyVaultSecret' object has no attribute 'strip'. Access the secret's string through its .value attribute: client.get_secret("your-key").value
Axios blocked by CORS but HTMLParser isn't? (Web Browser Scraper) I have a Python web scraper using the HTMLParser module. The website it scrapes is http://consulta.siiau.udg.mx/wco/sspseca.consulta_oferta?ciclop=202120&cup=D&mostrarp=100000&ordenp=2

Now I need to do the same but web-browser based using JavaScript, so I tried fetching the raw HTML using axios, but I keep getting 'Access to XMLHttpRequest has been blocked by CORS policy'. What I have tried is

axios.post('http://consulta.siiau.udg.mx/wco/sspseca.consulta_oferta', {
    ciclop: '202120',
    cup: 'D',
    mostrarp: 10000,
    ordennp: 2,
  })
  .then((response) => {
    console.log(response)
  })

And

axios.get('http://consulta.siiau.udg.mx/wco/sspseca.consulta_oferta?ciclop=202120&cup=D&mostrarp=100000&ordenp=2', { crossdomain: true })
  .then((response) => {
    console.log(response)
  })

I am aware that in JavaScript they normally use a headless browser like the one inside Puppeteer, but since this project is a website I can't use Node.js modules. Right now the solution I implemented is to have a server running a Flask API that handles the HTML fetching and then sends it back to the client for processing, but it would be a relief for my server's performance if the client could do this on his side.
In short, you can't use the fetch API or XMLHTTPRequest to access resources that aren't allowed by the browser's cross-origin policy.For security reasons, browsers restrict HTTP requests initiated from scripts. A web application can only request resources from the same origin the application was loaded from unless the response from other origins includes the right CORS headers.You can use a headless browser (puppeteer) to access a web page, for example using: await page.goto('https://example.com');This is equivalent to putting http://example.com in the browser bar and pressing ENTER. Any scripts on that page are subject to the same cross-origin restrictions as before. However, telling the browser to visit http://example.com isn't a cross-origin request in itself.
Image pre-processing parameters for tensorflow models I have a basic question about how to determine the image pre-processing parameters like - "IMAGE_MEAN", "IMAGE_STD" for various tensorflow pre-trained models. The Android sample applications for TensorFlow provides these parameters for a certain inception_v3 model in the ClassifierActivity.java (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/ClassifierActivity.java) as shown below -"If you want to use a model that's been produced from the TensorFlow for Poets codelab, you'll need to set IMAGE_SIZE = 299, IMAGE_MEAN = 128, IMAGE_STD = 128"How do I determine these parameters for other TF modelsAlso, while converting the TF model to CoreML model, to be used on iOS, there are additional image pre-processing parameters that need to be specified (like - red_bias, green_bias, blue_bias and image_scale) as shown in the code segment below. The below parameters are for inception_v1_2016.pb model. If I want to use another pre-trained model like - ResNet50, MobileNet, etc how do I determine these parameterstf_converter.convert(tf_model_path = 'inception_v1_2016_08_28_frozen.pb', mlmodel_path = 'InceptionV1.mlmodel', output_feature_names = ['InceptionV1/Logits/Predictions/Softmax:0'], image_input_names = 'input:0', class_labels = 'imagenet_slim_labels.txt', red_bias = -1, green_bias = -1, blue_bias = -1, image_scale = 2.0/255.0 )Any help will be greatly appreciated
Unfortunately, the preprocessing requirements of various ImageNet models are still under-documented. ResNet and VGG models both use the same preprocessing parameters. You can find biases for each of the color channels here: https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py#L11 The preprocessing for Inception_V3, MobileNet, and other models can be found in the individual model files of this repo: https://github.com/fchollet/deep-learning-models When converting to Core ML you always need to specify preprocessing biases on a per-channel basis. So in the case of VGG-type preprocessing, you can just copy each channel's biases directly from the code linked to above. It's super important to note the order of operations: the image_scale is applied first and the biases are added afterwards (out = in * image_scale + bias), which is how the Inception parameters above map [0, 255] pixels to [-1, 1]. You can read more about setting the proper values here: http://machinethink.net/blog/help-core-ml-gives-wrong-output/ The conversion code you posted looks good for MobileNet or Inception_V3 models, but would not work for VGG or ResNet. For those you'd need:

tf_converter.convert(...
    red_bias=-123.68,
    green_bias=-116.78,
    blue_bias=-103.94)

No scaling is required.
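A tiny sketch of the arithmetic, assuming the scale-then-bias ordering, so you can sanity-check a parameter set before converting:

```python
def preprocess(pixel, image_scale, bias):
    # Assumed ordering: scale the pixel first, then add the per-channel bias
    return pixel * image_scale + bias

# Inception/MobileNet-style parameters from the question: maps [0, 255] -> [-1, 1]
scale = 2.0 / 255.0
print(preprocess(0.0, scale, -1.0))
print(preprocess(255.0, scale, -1.0))

# VGG/ResNet-style: no scaling, only per-channel mean subtraction (red channel)
print(preprocess(255.0, 1.0, -123.68))
```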
Understanding the shape of tensorflow placeholders I am reading this code and I would like to understand about its implementation.One of the first things that I would like to know, is that what is the shape of some tensor objects (placeholders) such as x_init, xs, h_init, y_init, y_sample, etc. I wrote a line of code such as print(xs.shape) but it wont work.How can I understand the shape of these parameters (tensors)? And can I write something like the following in NumPy?The part of code that defines these tensors look like this:x_init = tf.placeholder(tf.float32, shape=(args.init_batch_size,) + obs_shape)xs = [tf.placeholder(tf.float32, shape=(args.batch_size, ) + obs_shape) for i in range(args.nr_gpu)]# if the model is class-conditional we'll set up label placeholders +# one-hot encodings 'h' to condition on if args.class_conditional:num_labels = train_data.get_num_labels()y_init = tf.placeholder(tf.int32, shape=(args.init_batch_size,))h_init = tf.one_hot(y_init, num_labels)y_sample = np.split( np.mod(np.arange(args.batch_size * args.nr_gpu), num_labels), args.nr_gpu)h_sample = [tf.one_hot(tf.Variable( y_sample[i], trainable=False), num_labels) for i in range(args.nr_gpu)]
The shape is assembled from different command line parameters:obs_shape is the shape of the input images, e.g., (32, 32, 3)args.init_batch_size and args.batch_size are the values from command line. It could be for example 30 and 40.Then shape of x_init is the concatenation of init_batch_size and obs_shape: (30, 32, 32, 3). Correspondingly, the shape of each item in xs is (40, 32, 32, 3).You couldn't evaluate xs.shape, because xs is a list of placeholders. You can evaluate xs[0].shape instead.y_sample and h_sample are the lists of tensors as well. The first one contains (batch_size, num_labels) tensors, the second one (num_labels, ).
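The concatenation is plain tuple arithmetic, so you can check the resulting shapes directly with the example values above:

```python
# Example values from the answer: init_batch_size=30, batch_size=40,
# and obs_shape=(32, 32, 3) for 32x32 RGB images.
obs_shape = (32, 32, 3)
init_batch_size, batch_size = 30, 40

x_init_shape = (init_batch_size,) + obs_shape   # shape of x_init
xs_item_shape = (batch_size,) + obs_shape       # shape of each item in xs

print(x_init_shape)   # → (30, 32, 32, 3)
print(xs_item_shape)  # → (40, 32, 32, 3)
```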
how to capture the effective date on which a value changes in a dataframe I start with a file in which I have daily data from a group of people, and I would like to capture when a value in one column changes, if it did change. The dataframe's structure looks like the one below:

id name  startdate  filedate   value
1  Sta   10-12-2019 24-04-2021 1
1  Sta   10-12-2019 25-04-2021 0.5
1  Sta   10-12-2019 26-04-2021 0.5
1  Sta   10-12-2019 27-04-2021 0.9
2  Danny 20-03-2020 24-04-2021 1
2  Danny 20-03-2020 25-04-2021 1
2  Danny 20-03-2020 26-04-2021 0.3
2  Danny 20-03-2020 27-04-2021 0.3
3  Elle  14-08-2020 24-04-2021 1
3  Elle  14-08-2020 25-04-2021 1
3  Elle  14-08-2020 26-04-2021 1
3  Elle  14-08-2020 27-04-2021 1

My goal is to set the first effective date of a person to the startdate and then set the effective date to the filedate when the value changes, getting a dataframe like this one:

id name  effective date value
1  Sta   10-12-2019     1
1  Sta   25-04-2021     0.5
1  Sta   27-04-2021     0.9
2  Danny 20-03-2020     1
2  Danny 26-04-2021     0.3
3  Elle  14-08-2020     1
Comapre for not equal values per groups by DataFrameGroupBy.shift, filter by boolean indexing and replace first values per names by Series.mask with DataFrame.duplicated, last rename and remove column:df = df[df['value'].ne(df.groupby('name')['value'].shift())].copy()df['startdate'] = df['startdate'].mask(df.duplicated('name'), df['filedate'])df = df.rename(columns={'startdate':'effective date'}).drop('filedate', axis=1)print (df) id name effective date value0 1 Sta 10-12-2019 1.01 1 Sta 25-04-2021 0.53 1 Sta 27-04-2021 0.94 2 Danny 20-03-2020 1.06 2 Danny 26-04-2021 0.38 3 Elle 14-08-2020 1.0
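For intuition, the same keep-the-first-of-each-run logic in plain Python (the pandas version above is what you'd use in practice; rows follow the question's columns):

```python
# (id, name, startdate, filedate, value) rows, as in the question
rows = [
    (1, 'Sta',   '10-12-2019', '24-04-2021', 1.0),
    (1, 'Sta',   '10-12-2019', '25-04-2021', 0.5),
    (1, 'Sta',   '10-12-2019', '26-04-2021', 0.5),
    (1, 'Sta',   '10-12-2019', '27-04-2021', 0.9),
    (2, 'Danny', '20-03-2020', '24-04-2021', 1.0),
    (2, 'Danny', '20-03-2020', '26-04-2021', 0.3),
    (3, 'Elle',  '14-08-2020', '24-04-2021', 1.0),
    (3, 'Elle',  '14-08-2020', '25-04-2021', 1.0),
]

result, last_value = [], {}
for rid, name, start, filed, value in rows:
    if name not in last_value:
        result.append((rid, name, start, value))   # first row: use startdate
    elif value != last_value[name]:
        result.append((rid, name, filed, value))   # value changed: use filedate
    last_value[name] = value

for row in result:
    print(row)
```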
Sqlite database backup and restore in flask sqlalchemy I am using flask sqlalchemy to create db which in turns create a app.db files to store tables and data. Now for the backup it should be simple to just take a copy of app.db somewhere in the server. But suppose while the app is writing data to app.db and we make a copy at that time then we might have inconsistent app.db file.How do we maintain the consistency of the backup. I can implement locks to do so. But I was wondering on standard and good solutions for database backup and how is this implemented in python.
SQLite has the backup API for this; the built-in Python driver only exposes it from Python 3.7 onwards (as sqlite3.Connection.backup). You could
use the APSW library for the backup; or
execute the .backup command in the sqlite3 command-line shell; or
run BEGIN IMMEDIATE to prevent other connections from writing to the DB, and copy the file(s).
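A minimal sketch using sqlite3.Connection.backup, which is available in the standard sqlite3 module since Python 3.7 (the file names follow the question's app.db):

```python
import sqlite3

# Connection.backup takes a consistent snapshot even while other
# connections are writing to the source database.
src = sqlite3.connect("app.db")
dst = sqlite3.connect("app_backup.db")
with dst:
    src.backup(dst)
dst.close()
src.close()
```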
Django Static files URL,ROOT,DIR confusion I am using Django v1.11.In the setting file I have set like thisSTATIC_URL = '/static/'STATIC_ROOT = os.path.join(BASE_DIR, "e","static","static_root")STATICFILES_DIRS = [os.path.join(BASE_DIR, "e","static","static_dir"), ]Firstly I copied all my css,js,img file in static_dir folder.Then I run the command python manage.py collectstaticWhich copied all the files from static_dir to static_root.As I can understand now all my css files should be loaded from static_root. But I can see that css files are being loaded from static_dir. So can anyone please explain it to me what is happening ? Why should I use static_root ? I can not find any use of static_root
The whole explanation can be found thereSTATIC_ROOT provides a convenience management command for gathering static files in a single directory so you can serve them easily. When DEBUG is False, set a path to it before you use collectstaticIn addition to using a static/ directory inside your apps, you can define a list of directories (STATICFILES_DIRS) in your settings file where Django will also look for static files. For example:STATICFILES_DIRS = [ os.path.join(BASE_DIR, "static"), '/var/www/static/',]
Deeplab test with a local image file I am trying to run deeplab on my local computer. I installed deeplab on my computer and defined paths. Deeplab is running successfully on my local computer. This is the last block of deeplab_demo.ipynbSAMPLE_IMAGE = 'image1' # @param ['image1', 'image2', 'image3']IMAGE_URL = '' #@param {type:"string"}_SAMPLE_URL = ('https://github.com/tensorflow/models/blob/master/research/' 'deeplab/g3doc/img/%s.jpg?raw=true')def run_visualization(url): """Inferences DeepLab model and visualizes result.""" try: f = urllib.request.urlopen(url) jpeg_str = f.read() original_im = Image.open(BytesIO(jpeg_str)) except IOError: print('Cannot retrieve image. Please check url: ' + url) return print('running deeplab on image %s...' % url) resized_im, seg_map = MODEL.run(original_im) vis_segmentation(resized_im, seg_map)image_url = IMAGE_URL or _SAMPLE_URL % SAMPLE_IMAGErun_visualization(image_url)I am running it jupyter notebook. It works fine for '_SAMPLE_URL' variable or any link that includes an image file. I would like to test deeplab with a local image file.Note: Environment variables have been defined. So i can access my local files from this notebook. All libraries are working. But i don't want to test an image at URL, just local image files.
Change the URL to a local file:// path:

_SAMPLE_URL = ('https://github.com/tensorflow/models/blob/master/research/'
               'deeplab/g3doc/img/%s.jpg?raw=true')

--->

_SAMPLE_URL = ('file:///path to/tensorflow/models/blob/master/research/'
               'deeplab/g3doc/img/%s.jpg')
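If the path should be built programmatically, pathlib can produce a valid file:// URL, so urllib.request.urlopen inside the notebook's run_visualization keeps working unchanged (the image path below is just an example):

```python
import pathlib

local_image = pathlib.Path("g3doc/img/image1.jpg")  # example local path
image_url = local_image.resolve().as_uri()          # e.g. file:///.../image1.jpg
print(image_url)

# run_visualization(image_url)   # the demo function from the notebook
```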
How to define my own continuous wavelet by using Python? As the title shows, I want to define my own continuous wavelet by Python. However, I don't know how to achieve this exactly.The formula of my wavelet mother function is belowIt looks something like the Mexican hat wavelet, but they are different.So how can I define such a custom wavelet by Python, then CWT can be performed using this wavelet?
Per this you need a function that takes a number of points and a scale to provide as a wavelet argumentSo we define it as such:import mathimport numpy as npfrom scipy import signalimport matplotlib.pyplot as pltmother_wavelet = lambda z : np.exp(-z*z/4)*(2-z*z)/(4*math.sqrt(math.pi))def mexican_hat_like(n,scale): x = np.linspace(-n/2,n/2,n) return mother_wavelet(x/scale)Let's test it. We note that in fact something that looks very similar to yours is available. The difference is in scaling a and also the constant is front looks slightly different. Note math.sqrt(2) scaling for the Ricker waveletpoints = 100a = 4.0vec_ours = mexican_hat_like(points, a)vec_theirs = signal.ricker(points, a*math.sqrt(2))plt.plot(vec_ours, label = 'ours')plt.plot(vec_theirs, label = 'ricker')plt.legend(loc = 'best')plt.show()Here is the graph:
Dimension Problem in Keras Multilabel Classification with Word Embeddings

I am currently solving an exercise which involves reading in TED talks, labelling them according to the topics they are about, and training a feed-forward NN in Keras that can label new talks accordingly, using pre-trained word embeddings.

Depending on what the talk is about (technology, education or design, or multiple of those topics), it can have one of the following labels:

    labels_dict = {'txx': 0, 'xex': 1, 'xxd': 2, 'tex': 3, 'txd': 4, 'xed': 5, 'ted': 6, 'xxx': 7}

I load the talks like this:

    def load_talks(path):
        tree = et.parse(path)
        root = tree.getroot()
        for file in root:
            label = ''
            keywords = file.find('head').find('keywords').text.lower()
            if 'technology' in keywords:
                label += 't'
            else:
                label += 'x'
            if 'education' in keywords:
                label += 'e'
            else:
                label += 'x'
            if 'design' in keywords:
                label += 'd'
            else:
                label += 'x'
            talk = file.find('content').text
            talk = process_text(talk)
            texts.append(talk)
            labels.append(labels_dict[label])

I then calculate TF-IDF scores for the tokens in the texts:

    tf_idf_vect = TfidfVectorizer()
    tf_idf_vect.fit_transform(texts)
    tf_idf_vectorizer_tokens = tf_idf_vect.get_feature_names()

Then I use a tokenizer to assign the tokens in the texts to indexes:

    t = Tokenizer()
    t.fit_on_texts(texts)
    vocab_size = len(t.word_index) + 1
    encoded_texts = t.texts_to_sequences(texts)
    print('Padding the docs')
    # pad documents to a max length of 4 words
    max_length = max(len(d) for d in encoded_texts)
    padded_docs = pad_sequences(encoded_texts, maxlen=max_length, padding='post')

Next, I compute the embedding matrix:

    def compute_embedding_matrix(word_index, embedding_dim):
        embedding_matrix = np.zeros((len(word_index) + 1, embedding_dim))
        for word, i in word_index.items():
            embedding_vector = word_embeddings.get(word)
            if embedding_vector is not None and get_tf_idf_score(word) > TF_IDF_THRESHOLD:
                # words not found in embedding index and with a too low tf-idf score will be all-zeros.
                embedding_matrix[i] = embedding_vector
        return embedding_matrix

    embedding_dim = load_word_embeddings('word_embeddings/de/de.tsv') + 1
    embedding_matrix = compute_embedding_matrix(t.word_index, embedding_dim)

I then prepare the labels and split the data into training and testing sets:

    labels = to_categorical(np.array(labels))
    X_train, X_test, y_train, y_test = train_test_split(padded_docs, labels, test_size=0.1, random_state=0)

The following prints output this:

    print(X_train.shape)
    print(X_test.shape)
    print(y_train.shape)
    print(y_test.shape)

    (1647, 6204)
    (184, 6204)
    (1647, 8)
    (184, 8)

I then prepare my model like this:

    e = Embedding(input_dim=vocab_size, weights=[embedding_matrix], input_length=max_length,
                  output_dim=embedding_dim, trainable=False)
    print('Preparing the network')
    model = models.Sequential()
    model.add(e)
    model.add(layers.Flatten())
    model.add(layers.Dense(units=100, input_dim=embedding_dim, activation='relu'))
    model.add(layers.Dense(len(labels), activation='softmax'))
    # compile the model
    model.compile(loss='binary_crossentropy', metrics=['acc'])
    # summarize the model
    print(model.summary())
    # fit the model
    print('Fitting the model')
    model.fit(X_train, y_train, epochs=10, verbose=0, batch_size=500)
    # evaluate the model
    loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
    print('Accuracy: %f' % (accuracy*100))

This outputs the following error:

    embedding_1 (Embedding)      (None, 6204, 301)         47951106
    _________________________________________________________________
    flatten_1 (Flatten)          (None, 1867404)           0
    _________________________________________________________________
    dense_1 (Dense)              (None, 100)               186740500
    _________________________________________________________________
    dense_2 (Dense)              (None, 1831)              184931
    =================================================================
    Total params: 234,876,537
    Trainable params: 186,925,431
    Non-trainable params: 47,951,106
    _________________________________________________________________
    None
    Fitting the model
        batch_size=batch_size)
      File "/Users/tim/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 789, in _standardize_user_data
        exception_prefix='target')
      File "/Users/tim/anaconda3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 138, in standardize_input_data
        str(data_shape))
    ValueError: Error when checking target: expected dense_2 to have shape (1831,) but got array with shape (8,)

    Process finished with exit code 1

Can someone point me in the right direction about how to go about fitting the dimensions of this model?
I found the issue: the last Dense layer should have 8 units, as I have 8 labels. It was built with model.add(layers.Dense(len(labels), activation='softmax')), but after to_categorical, len(labels) is the number of samples (1831), not the number of classes.
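The mismatch is easy to reproduce without Keras: the one-hot target array has shape (n_samples, n_classes), so the final layer's unit count must equal the second dimension, not len(labels). A minimal numpy sketch with hypothetical label values:

```python
import numpy as np

# hypothetical integer labels for 5 talks, 8 possible classes
labels = np.array([0, 7, 3, 3, 1])
n_classes = 8

# one-hot encode, as keras.utils.to_categorical would
one_hot = np.eye(n_classes)[labels]

print(one_hot.shape)     # (5, 8): (n_samples, n_classes)
print(len(one_hot))      # 5 -- the sample count, NOT the right unit count
print(one_hot.shape[1])  # 8 -- what the final Dense layer needs
```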
How can I run a Bokeh (version 0.13) server as a background service in Linux?

Currently I am running the Bokeh server using this command in Linux:

    bokeh serve DashboardDCD/ --port 5007 --allow-websocket-origin=52.171.38.120:5007

In this case I have to keep the terminal open. I want to run it in the background as a daemon. How can we do that? Are there any workarounds?
To keep a Linux process running after exiting the terminal, we can use the disown command. It is used after a process has been launched and put in the background; its job is to remove a shell job from the shell's active job list.

In your case:

    $ sudo bokeh serve DashboardDCD/ --port 5007 --allow-websocket-origin=52.172.38.117:5007 &
    $ jobs
    $ disown -h %1
    $ jobs

The output should be something like this:

    $ sudo bokeh serve DashboardDCD/ --port 5007 --allow-websocket-origin=52.172.38.117:5007 &
    [1] Some ID number
    $ jobs
    [1]  Running    bokeh serve DashboardDCD/ --port 5007 --allow-websocket-origin=52.172.38.117:5007 &
    $ disown -h %1
    $ jobs
    [1]  Running    bokeh serve DashboardDCD/ --port 5007 --allow-websocket-origin=52.172.38.117:5007 &

Keep in mind this will make the process run in the background, but it won't make it restart if it crashes.
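If restart-on-crash matters, a systemd service is the usual alternative; a sketch of a unit file (the unit name, paths, and user are assumptions for illustration, not from the answer):

```ini
# /etc/systemd/system/bokeh-dashboard.service  (hypothetical name)
[Unit]
Description=Bokeh dashboard server
After=network.target

[Service]
ExecStart=/usr/local/bin/bokeh serve /opt/DashboardDCD --port 5007 --allow-websocket-origin=52.171.38.120:5007
Restart=on-failure
User=bokeh

[Install]
WantedBy=multi-user.target
```

After writing the file, `sudo systemctl enable --now bokeh-dashboard` starts it at boot and keeps it supervised.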
Tensorflow Lite on MLKit giving this error: : #vk Got 1 class(es) for output index 0, expected 2 according to the label map After adding metadata to my tflite file to use it in ML Kit, I get the error Calculator::Open() for node "ClassifierClientCalculator" failed: #vk Got 1 class(es) for output index 0, expected 2 according to the label map. I have edited the number of classes in the metadata as well as the number of classes in the label file, and I still get the same error. My model is a binary image classification model, so there should be 2 classes in the first place.
Based on https://developers.google.com/ml-kit/custom-models#model-compatibility, the output should be (1 x 2) or (1 x 1 x 1 x 2) if the output contains two classes. Could you double-check the shape of your output layer?
Seaborn regplot fit line does not match calculated fit from stats.linregress or statsmodels

I am trying to fit an xlog-linear regression. I used Seaborn regplot to plot the fit, which looks like a good fit (green line). Then, because regplot does not provide the coefficients, I used stats.linregress to find the coefficients. However, that plotted line (purple) does not match the fit from Seaborn regplot. I also used statsmodels to get the coefficients, which matched the linregress output. Is there a better way to get the coefficients that match the regplot line? I am unable to reproduce the Seaborn regplot line, and I need the coefficients to report the fit for the model.

    import seaborn as sns
    import matplotlib.pyplot as plt
    from scipy import stats

    sns.regplot(x, y, x_bins=100, logx=True, n_boot=2000,
                scatter_kws={"color": "black"}, ci=None,
                label='logfit', line_kws={"color": "green"})

    # Find the coefficients: slope and intercept
    slope, intercept, r_value, pv, se = stats.linregress(y, np.log10(x))
    yy = np.linspace(-.01, 0.05, 400)
    xx = 10**(slope*yy + intercept)
    plt.plot(xx, yy, marker='.', color='purple')

    # Label figure
    plt.tick_params(labelsize=18)
    plt.xlabel('insitu', fontsize=22)
    plt.ylabel('CI', fontsize=22)

I also used statsmodels for the fit and got the same coefficients as stats.linregress. I'm unable to reproduce the Seaborn regplot line.

    import statsmodels as sm
    import statsmodels.formula.api as smf

    results = smf.ols('np.log10(x) ~ (y)', data=df_data).fit()
    # Inspect the results
    print(results.summary())
There are two issues with your attempt to recreate what seaborn is doing:

- you have the arguments to stats.linregress backwards
- that's not how yhat is computed

Here's how you could recreate the seaborn logx regression line:

    diamonds = sns.load_dataset("diamonds").sample(500, random_state=0)
    x = diamonds["price"]
    y = diamonds["carat"]
    ax = sns.regplot(x=x, y=y, logx=True, line_kws=dict(color="g", lw=10))
    fit = stats.linregress(np.log(x), y)
    grid = np.linspace(x.min(), x.max())
    ax.plot(grid, fit.intercept + fit.slope * np.log(grid), color="r", lw=5)
ModuleNotFoundError: No module named 'flask' in VS Code

When I run it in VS Code, it works with no errors:

    from werkzeug.wrappers import Request, Response
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello World!"

    if __name__ == '__main__':
        from werkzeug.serving import run_simple
        run_simple('localhost', 9000, app)

See here how it's working in my terminal:

    PS C:\Users\ASUS\t81_558_deep_learning> & C:/Users/ASUS/Anaconda3/Anaconda/python.exe c:/Users/ASUS/t81_558_deep_learning/py/vs.py
     * Running on http://localhost:9000/ (Press CTRL+C to quit)
    127.0.0.1 - - [21/May/2020 17:45:41] "GET / HTTP/1.1" 200 -
    127.0.0.1 - - [21/May/2020 17:45:42] "GET /favicon.ico HTTP/1.1" 404 -

However, when I tried it in the VS Code terminal...

    PS C:\Users\ASUS\t81_558_deep_learning> python
    Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18) [MSC v.1900 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import flask
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ModuleNotFoundError: No module named 'flask'
    >>> from flask import Flask
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ModuleNotFoundError: No module named 'flask'

The same command in Conda:

    (base) C:\Windows\system32>python
    Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from flask import Flask, request, jsonify
    >>> import flask
    >>>

Why is this happening and how can I solve this?
VS Code is probably not using the correct Python interpreter in its integrated terminal: the editor runs your script with the Anaconda interpreter (where Flask is installed), while the terminal's plain python resolves to a different installation. You can choose the conda interpreter via the interpreter selector at the bottom left of the VS Code window (the "Python: Select Interpreter" command).
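A quick way to confirm which interpreter a given terminal actually runs, and whether Flask is importable from it (a diagnostic sketch, not from the answer):

```python
import importlib.util
import sys

print(sys.executable)                     # full path of the interpreter in use
print(importlib.util.find_spec('flask'))  # None means flask is not importable here
```

Run this once in the VS Code terminal and once in the Anaconda prompt; if the two paths differ, that is the whole problem.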
Python - Create plot with percentage of occurrence

I have a dataframe which contains orders. Each product has a color. I want to create a (line) plot of monthly data and show the occurrence of colors throughout the month.

A snippet of the current dataframe:

                          Color
    2021-08-25 17:43:30   Blue
    2021-08-25 17:26:34   Blue
    2021-08-25 17:15:51   Green
    2021-09-02 14:23:19   Blue
    2021-09-04 18:11:17   Yellow

I thought I needed to create an extra column with the percentage of occurrence throughout the month first. I tried using:

    df.groupby(['Color']).Color.agg([('Color_count', 'count')]).reset_index()

Which gave me:

       Color  Color_count
    0   Blue            2
    1  Green            1

The desired output should give me columns with all the colors and the percentage of occurrence per month, something like:

                Blue  Green  Yellow
    2021-08-31  0.73   0.24    0.00
    2021-09-30  0.66   0.29    0.01

With those percentages I can make a plot to show monthly data of the colors. Thank you in advance.
Use Grouper with SeriesGroupBy.value_counts and Series.unstack:

    df1 = (df.groupby(pd.Grouper(freq='M'))['Color']
             .value_counts(normalize=True)
             .unstack(fill_value=0)
             .rename_axis(None, axis=1))
    print (df1)
                    Blue     Green  Yellow
    2021-08-31  0.666667  0.333333     0.0
    2021-09-30  0.500000  0.000000     0.5
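A runnable version of the same pipeline, built from the sample rows in the question, showing the month-end rows and per-color columns that can then be plotted:

```python
import pandas as pd

df = pd.DataFrame(
    {"Color": ["Blue", "Blue", "Green", "Blue", "Yellow"]},
    index=pd.to_datetime([
        "2021-08-25 17:43:30", "2021-08-25 17:26:34", "2021-08-25 17:15:51",
        "2021-09-02 14:23:19", "2021-09-04 18:11:17",
    ]),
)

df1 = (df.groupby(pd.Grouper(freq="M"))["Color"]
         .value_counts(normalize=True)
         .unstack(fill_value=0)
         .rename_axis(None, axis=1))

print(df1)
# df1.plot() would then draw one line per color with months on the x-axis
```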
How to extract a table value using BeautifulSoup

I am trying to extract the Solar Longitude value from this table. I am using this code to look at the structure of the table:

    import requests
    from bs4 import BeautifulSoup

    URL = 'https://viewer.mars.asu.edu/viewer/themis#P=V77388006&T=2'
    page = requests.get(URL)
    soup = BeautifulSoup(page.content, 'html.parser')
    print(soup.prettify())

However, when I look at the output and try to find the Solar Longitude, it is not there. I even tried saving the output of the code as a .txt file and got the same result. I did notice that my output is a lot shorter than the actual HTML code I see in the browser.

Am I missing something?
You may not get all the content back with requests, because it is served dynamically by the website, but you can use selenium to fix that.

Example:

    from selenium import webdriver
    from bs4 import BeautifulSoup

    driver = webdriver.Chrome(executable_path=r'C:\Program Files\ChromeDriver\chromedriver.exe')
    url = 'https://viewer.mars.asu.edu/viewer/themis#P=V77388006&T=2'
    driver.get(url)
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    driver.close()

    soup.select_one('[data-field="Solar Longitude"]').parent.nextSibling.get_text()

Output:

    30.528906°
Why does installing xlwings over the terminal produce an error?

I am relatively new to Python and I am just doing a private project right now. For that I want to install xlwings to be able to run Python code from Excel. However, it seems I cannot install it. I try to install via:

    C:\Users\Rafi>python -m pip install --user xlwings

as I installed all my other stuff. Now when I put that into the terminal it shows me the following (long) error. I can't find a solution, whatever I try; maybe you guys have some suggestions?

    Running setup.py install for comtypes ... error
    ERROR: Command errored out with exit status 1:
     command: 'C:\Users\Rafi\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\Rafi\AppData\Local\Temp\pip-install-xfdrq0ph\comtypes\setup.py'"'"'; __file__='"'"'C:\Users\Rafi\AppData\Local\Temp\pip-install-xfdrq0ph\comtypes\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Rafi\AppData\Local\Temp\pip-record-qtr4pqm3\install-record.txt' --single-version-externally-managed --compile --user --prefix=
     cwd: C:\Users\Rafi\AppData\Local\Temp\pip-install-xfdrq0ph\comtypes\
    Complete output (276 lines):
    running install
    running build
    running build_py
    creating build
    creating build\lib
    creating build\lib\comtypes
    copying comtypes\automation.py -> build\lib\comtypes
    copying comtypes\connectionpoints.py -> build\lib\comtypes
    copying ...
    .
    .
    .
    error: error in setup script: command 'bdist_wininst' has no such option 'install_script'
    ERROR: Command errored out with exit status 1: 'C:\Users\Rafi\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7qbz5n2kfra8p0\python.exe' -u -c '...' install --record 'C:\Users\Rafi\AppData\Local\Temp\pip-record-qtr4pqm3\install-record.txt' --single-version-externally-managed --compile --user --prefix= Check the logs for full command output.
This error comes from installing a dependency of xlwings: comtypes.

A quick google search revealed that your issue could be caused by an old version of wheel. Upgrade wheel like this and try again:

    pip install --upgrade wheel
Python scrapy with authentication or cookies

I have the following web crawler script, which works correctly. What I need is a way to integrate authentication, or to send cookies with each request.

    import scrapy
    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor

    class TheFriendlyNeighbourhoodSpider(CrawlSpider):
        name = 'TheFriendlyNeighbourhoodSpider'
        allowed_domains = ['one.google.com']
        start_urls = ['https://one.google.com/about']
        custom_settings = {
            'LOG_LEVEL': 'INFO'
        }
        rules = (
            Rule(LinkExtractor(), callback='parse_item', follow=True),
        )

        def parse_item(self, response):
            print(response.url)
    request_with_cookies = Request(url="http://www.example.com",
                                   cookies=[{'name': 'currency',
                                             'value': 'USD',
                                             'domain': 'example.com',
                                             'path': '/currency'}])

For more info: https://docs.scrapy.org/en/latest/topics/request-response.html
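If the site uses HTTP Basic auth instead of cookies, Scrapy's HttpAuthMiddleware can send credentials (via the http_user/http_pass spider attributes); the header it sends is just base64 of user:password, which you can build by hand for any client. A stdlib-only sketch:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # RFC 7617: "Basic " + base64("user:password")
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth_header("alice", "secret"))  # Basic YWxpY2U6c2VjcmV0
```

The resulting string goes into the request's Authorization header; the credentials here are obviously placeholders.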
How to replace values of an array with another array using numpy in Python

I want to place the array B (without loops) on the array A, with starting index A[0,0]:

    A = np.empty((3,3))
    A[:] = np.nan
    B = np.ones((2,2))

The result should be:

    array([[  1.,   1.,  nan],
           [  1.,   1.,  nan],
           [ nan,  nan,  nan]])

I tried numpy.place(arr, mask, vals) and numpy.put(a, ind, v, mode='raise'), but I have to find the mask or all the indexes. How can I do that?
Assign it where you want using indexing:

    import numpy as np

    A = np.empty((3,3))
    A[:] = np.nan
    B = np.ones((2,2))
    A[:B.shape[0], :B.shape[1]] = B

    array([[1.00000000e+000, 1.00000000e+000,             nan],
           [1.00000000e+000, 1.00000000e+000,             nan],
           [            nan,             nan,             nan]])
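The same slice assignment works for any placement, not just the top-left corner; a quick check of the result above (using np.full, which is equivalent to np.empty plus the nan fill):

```python
import numpy as np

A = np.full((3, 3), np.nan)        # 3x3 array of nan
B = np.ones((2, 2))
A[:B.shape[0], :B.shape[1]] = B    # paste B starting at A[0, 0]

print(A)
# the cells outside the pasted block stay nan
```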
Debugging Django in VSCode fails on imp.py

I am unable to debug my Django app. I am using virtualenv and have configured my VSCode workspace to point to the absolute path of the Python binary within my virtual environment:

    "python.pythonPath": "/Users/Me/PyProjs/proj_env/bin/python"

When trying to debug, however, the editor jumps to the imp.py file (which is located at ~/proj_env/lib/python3.4) and fails at the new_module() method:

    def new_module(name):
        """**DEPRECATED**

        Create a new module.

        The module is not entered into sys.modules.

        """
        return types.ModuleType(name)  # Editor breaks here.

Inspecting the name variable, I see it is set to "__main__". When stepping through, the editor exits debug mode and no errors or exceptions are logged in the Debug Console.

Anybody know what my issue could possibly be? I just want to debug my application!
You probably have stopOnEntry set to true in your launch.json. Try setting it to false:

    {
        "name": "Python: Django",
        "type": "python",
        "request": "launch",
        "stopOnEntry": false,
        "pythonPath": "${config:python.pythonPath}",
        "program": "${workspaceFolder}/manage.py",
        "cwd": "${workspaceFolder}",
        "args": [
            "runserver",
            "--noreload",
            "--nothreading"
        ],
        "env": {},
        "envFile": "${workspaceFolder}/.env",
        "debugOptions": [
            "RedirectOutput",
            "DjangoDebugging"
        ]
    },
How to determine which instance of an object I'm dealing with?

I'm making a text-based dungeon crawler type game. In this game I have an enemy class and a player class. If I instantiate several enemies and place them in a grid structure along with my player, how can I detect which instance of the enemy class the player has encountered, so that I can pass the instance to various functions as a parameter? So for example:

    class Enemy:
        def __init__(self, name, x, y):
            self.name = name
            self.x = x
            self.y = y

    e = Enemy('goblin', 0, 0)
    e1 = Enemy('troll', 0, 1)
    enemies = (e, e1)
    grid = [[' '] * 10 for i in range(10)]
    grid[e.x][e.y] = e.name
    grid[e1.x][e1.y] = e1.name

    someFunction(player, enemy)

As you can see, for the function to have the enemy instance at this point, I would have to manually pass in the instance. But that would mean writing if/elif statements galore to determine which instance it is, which is obviously not efficient or good, considering I would like to have 20+ instances of Enemy.

So is there a method or function that would allow me to detect/get hold of the instance to pass around? Thanks in advance for your time and expertise. Hopefully this question isn't off topic or too vague.
Well, I did some messing around outside of my project on the online Python tutor. I found that if I iterated over my list of enemies I could check their coordinates against the player's, and if they matched I could pass the match on as a variable, in this case currentEnemy, and it seems to work. Here's the code:

    class Player:
        def __init__(self, name, x, y):
            self.name = name
            self.x = x
            self.y = y

    class Enemy:
        def __init__(self, name, x, y):
            self.name = name
            self.x = x
            self.y = y

    def combat(player, enemy):
        print('%s dies.' % (enemy.name))

    p = Player('tom', 0, 1)
    e = Enemy('goblin', 0, 0)
    e1 = Enemy('troll', 0, 1)
    currentEnemy = None
    enemies = (e, e1)
    grid = [[' '] * 10 for i in range(10)]
    grid[e.x][e.y] = e.name
    grid[e1.x][e1.y] = e1.name

    for i in enemies:
        if i.x == p.x and i.y == p.y:
            currentEnemy = i

    combat(p, currentEnemy)

Which left me with the output I expected, i.e. 'troll dies.'

Hope this is helpful to anyone else with this problem.
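Scanning every enemy works, but with many enemies a dictionary keyed by position gives the same lookup in constant time; a sketch of the idea (variable names are hypothetical):

```python
class Enemy:
    def __init__(self, name, x, y):
        self.name = name
        self.x = x
        self.y = y

# build the index once: (x, y) -> Enemy instance
enemies = [Enemy('goblin', 0, 0), Enemy('troll', 0, 1)]
enemy_at = {(e.x, e.y): e for e in enemies}

player_pos = (0, 1)
current_enemy = enemy_at.get(player_pos)  # None if the tile is empty
print(current_enemy.name)                 # troll
```

The index needs to be updated whenever an enemy moves or dies, but lookups no longer depend on the number of enemies.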
Return an element in Pandas DataFrame with original data type

I have a DataFrame with one column as int and one column as float:

    In [79]: data = pd.DataFrame(dict(a = np.arange(100), b = np.arange(100.1,200.0)))

    In [80]: data.head()
    Out[80]:
       a      b
    0  0  100.1
    1  1  101.1
    2  2  102.1
    3  3  103.1
    4  4  104.1

I want to return the 3rd row, column a as an integer. I need the native Python integer because it needs to be hashable. I have tried the following and they don't work.

Gets cast to float because .iloc returns a Series:

    In [82]: data.iloc[3]['a']
    Out[82]: 3.0

Returns a Series:

    In [85]: d.iloc[[3]]['a']
    Out[85]:
    3    3
    Name: a, dtype: int64

This is what I want, but it's really ugly:

    In [88]: int(d.iloc[[3]]['a'].values)
    Out[88]: 3

Is there a smarter way?
I think if you index the column first and then the row, you'll get what you want.

    In [6]: data.iloc[3]['a']
    Out[6]: 3.0

    In [7]: data['a'].iloc[3]
    Out[7]: 3
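Selecting the column first keeps the int64 dtype because a single column is a homogeneous Series, whereas extracting a row mixes int and float and upcasts everything to float. A runnable check, with .item() as the extra step if a plain Python int is strictly required:

```python
import numpy as np
import pandas as pd

data = pd.DataFrame(dict(a=np.arange(100), b=np.arange(100.1, 200.0)))

row_first = data.iloc[3]['a']   # the row Series was upcast to float
col_first = data['a'].iloc[3]   # the column Series stays integer

print(type(row_first))   # a float type
print(type(col_first))   # a numpy integer type
print(col_first.item())  # native Python int, always hashable
```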
Flask-SQLAlchemy many-to-many ordered relationship in hybrid_property

I am trying to get the first object out of an ordered many-to-many relationship using Flask-SQLAlchemy. I would like to accomplish this using hybrid properties, so I can reuse my code in a clean way.

Here is the code, with some comments:

    class PrimaryModel2Comparator(Comparator):
        def __eq__(self, other):
            return self.__clause_element__().model2s.is_first(other)

    class model2s_comparator_factory(RelationshipProperty.Comparator):
        def is_first(self, other, **kwargs):
            return isinstance(other, Model2) & \
                (db.session.execute(select([self.prop.table])).first().id == other.id)

    model1_model2_association_table = db.Table('model1_model2_association',
        db.Column('model1_id', db.Integer, db.ForeignKey('model1s.id')),
        db.Column('model2_id', db.Integer, db.ForeignKey('model2s.id')),
    )

    class Model1(db.Model):
        __tablename__ = 'model1s'
        id = db.Column(db.Integer, primary_key=True, autoincrement=True)
        model2s = db.relationship('Model2',
                                  order_by=desc('Model2.weight'),
                                  comparator_factory=model2s_comparator_factory,
                                  secondary=model1_model2_association_table,
                                  backref=db.backref('model1s', lazy='dynamic'),
                                  lazy='dynamic')

        @hybrid_property
        def primary_model2(self):
            return self.model2s.order_by('weight desc').limit(1).one()

        @primary_model2.comparator
        def primary_model2(cls):
            return PrimaryModel2Comparator(cls)

    class Model2(db.Model):
        __tablename__ = 'model2s'
        id = db.Column(db.Integer, primary_key=True, autoincrement=True)
        weight = db.Column(db.Integer, nullable=False, default=0)

And the usage:

    Model1.query.filter(Model1.primary_model2 == Model2.query.get(1))

The problems are:

- In my comparator factory's is_first method I can't get the actual instance, so I don't know which Model2s are associated.
- In the same method I want to order my select against the weight attribute of Model2, and then take the first.

Something is not clear in my head; maybe there's a simpler solution?
Perhaps looking at a working many-to-many example might help. Flask-Security is a great one.https://pythonhosted.org/Flask-Security/quickstart.html#sqlalchemy-application
Extracting data from two dataframes to create a third

I am using Python Pandas for the following. I have three dataframes, df1, df2 and df3. Each has the same dimensions, index and column labels. I would like to create a fourth dataframe that takes elements from df1 or df2 depending on the values in df3:

    df1 = pd.DataFrame(np.random.randn(4, 2), index=list('0123'), columns=['A', 'B'])
    df1
    Out[67]:
              A         B
    0  1.335314  1.888983
    1  1.000579 -0.300271
    2 -0.280658  0.448829
    3  0.977791  0.804459

    df2 = pd.DataFrame(np.random.randn(4, 2), index=list('0123'), columns=['A', 'B'])
    df2
    Out[68]:
              A         B
    0  0.689721  0.871065
    1  0.699274 -1.061822
    2  0.634909  1.044284
    3  0.166307 -0.699048

    df3 = pd.DataFrame({'A': [1, 0, 0, 1], 'B': [1, 0, 1, 0]})
    df3
    Out[69]:
       A  B
    0  1  1
    1  0  0
    2  0  1
    3  1  0

The new dataframe, df4, has the same index and column labels, and takes an element from df1 if the corresponding value in df3 is 1. It takes an element from df2 if the corresponding value in df3 is a 0.

I need a solution that uses generic references (e.g. ix or iloc) rather than actual column labels and index values, because my dataset has fifty columns and four hundred rows.
df4 = df1.where(df3.astype(bool), df2) should do it.

    import pandas as pd
    import numpy as np

    df1 = pd.DataFrame(np.random.randint(10, size = (4,2)))
    df2 = pd.DataFrame(np.random.randint(10, size = (4,2)))
    df3 = pd.DataFrame(np.random.randint(2, size = (4,2)))

    df4 = df1.where(df3.astype(bool), df2)

    print df1, '\n'
    print df2, '\n'
    print df3, '\n'
    print df4, '\n'

Output:

       0  1
    0  0  3
    1  8  8
    2  7  4
    3  1  2

       0  1
    0  7  9
    1  4  4
    2  0  5
    3  7  2

       0  1
    0  0  0
    1  1  0
    2  1  1
    3  1  0

       0  1
    0  7  9
    1  8  4
    2  7  4
    3  1  2
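A deterministic check of the same idea (Python 3 syntax, fixed values instead of random ones, using the df3 mask from the question):

```python
import pandas as pd

df1 = pd.DataFrame({"A": [10, 11, 12, 13], "B": [20, 21, 22, 23]})
df2 = pd.DataFrame({"A": [0, 1, 2, 3],     "B": [5, 6, 7, 8]})
df3 = pd.DataFrame({"A": [1, 0, 0, 1],     "B": [1, 0, 1, 0]})

# keep df1 where df3 is 1, fall back to df2 where it is 0
df4 = df1.where(df3.astype(bool), df2)
print(df4)
```

`where` aligns on both index and columns, so it needs no explicit labels regardless of how many rows and columns the real data has.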
Flask setting cookies

I try to set cookies in Flask, but I don't get what I want. Instead of getting the username, I get a response attached to my URL. My routes.py:

    @app.route('/login', methods=['GET', 'POST'])
    def login():
        if current_user.is_authenticated:
            return redirect(url_for('index'))
        form = LoginForm()
        if form.validate_on_submit():
            user = User.query.filter_by(username=form.username.data).first()
            if user is None or not user.check_password(form.password.data):
                flash('Invalid username or password')
                return redirect(url_for('login'))
            login_user(user, remember=form.remember_me.data)
            userCookie = request.form['username']
            resp = make_response(render_template('index.html'))
            resp.set_cookie('user', userCookie)
            next_page = request.args.get('next')
            if not next_page or url_parse(next_page).netloc != '':
                next_page = url_for('index', resp=resp)
            return redirect(next_page)
        return render_template('login.html', title='Sign In', form=form)

And I want to display the content of the cookie in index.html:

    {% for r in resp %}
        {{ r }}
    {% endfor %}

Instead I get:

    index?resp<Response+1250+bytes+[200+OK]>

What am I doing wrong?

[EDIT - logout method]

That's my method before adding cookies:

    @app.route('/logout')
    def logout():
        logout_user()
        return redirect(url_for('index'))

So if I added cookies:

    @app.route('/logout')
    def logout():
        resp = make_response(redirect('/login'))
        resp.delete_cookie('user')
        return resp

And if I restart the server and log in, the cookie is created, but after logout I can even go to the endpoint /login.
Cookies are set in one request and can be used in another request. To overcome this, use redirect in make_response. I have attached an example of login/logout functionality using cookies.

app.py:

    from flask import Flask, render_template, request, make_response, flash, redirect

    app = Flask(__name__)
    app.config['SECRET_KEY'] = 'SUPER SECRET'

    @app.route('/', methods = ['GET'])
    def home():
        username = request.cookies.get('username')
        if username:
            return render_template('home.html', username=username)
        return render_template('home.html')

    @app.route('/login', methods = ['GET','POST'])
    def login():
        username = request.cookies.get('username')
        if username:
            return render_template('login.html', username=username)
        if request.method=='POST':
            username = request.form.get('username')
            password = request.form.get('password')
            if username=='admin' and password=='admin':
                flash("Successful login", "success")
                resp = make_response(redirect('/'))
                resp.set_cookie('username', username)
                return resp
            else:
                flash("Wrong username or password", "danger")
        return render_template('login.html')

    @app.route('/logout', methods = ['GET'])
    def logout():
        resp = make_response(redirect('/'))
        resp.delete_cookie('username')
        return resp

    app.run(debug=True)

home.html:

    <html>
        <head>
            <title>Home</title>
        </head>
        <body>
            {% with messages = get_flashed_messages() %}
                {% if messages %}
                    <ul class=flashes>
                    {% for message in messages %}
                        <li>{{ message }}</li>
                    {% endfor %}
                    </ul>
                {% endif %}
            {% endwith %}
            {% if username %}
                Welcome {{ username }}.
                <a href="{{ url_for('logout') }}">Click here</a> to logout.
            {% else %}
                You are not logged in.
                <a href="{{ url_for('login') }}">Click here</a> to login.
            {% endif %}
        </body>
    </html>

login.html:

    <html>
        <head>
            <title>Login</title>
        </head>
        <body>
            {% with messages = get_flashed_messages() %}
                {% if messages %}
                    <ul class=flashes>
                    {% for message in messages %}
                        <li>{{ message }}</li>
                    {% endfor %}
                    </ul>
                {% endif %}
            {% endwith %}
            {% if username %}
                You are already logged in as {{ username }}.
                <a href="{{ url_for('home') }}">Click here</a> to go to home.
                <a href="{{ url_for('logout') }}">Click here</a> to logout.
            {% else %}
                <form method="post" action="">
                    <label for="username">Username</label>
                    <input type="text" name="username" id="username"/>
                    <br/>
                    <label for="password">Password</label>
                    <input type="password" name="password" id="password"/>
                    <br/>
                    <input type="submit" name="submit" id="submit" value="Login"/>
                </form>
            {% endif %}
        </body>
    </html>

Screenshots:

1. Before login (no cookie)
2. Login (no cookie)
3. After login (received cookie)
4. After logout (no cookie)
Convert multiple dictionaries to a single dictionary in Python

I have multiple dictionaries with their keys and values, and I want to assign (transfer) all of them to a new, empty dictionary while keeping all keys and values.

Note: the other questions I checked have dictionaries of the same size.

    n = {}
    x = {'six': 6, 'thirteen': 13, 'fifty five': 55}
    y = {'two': 2, 'four': 4, 'three': 3, 'one': 1, 'zero': 0, 'ten': 10}
    z = {'nine': 9, 'four': 4, 'three': 3, 'eleven': 11, 'zero': 0, 'seven': 7}
ChainMap

For many use cases, collections.ChainMap suffices and is efficient (assumes Python 3.x):

    from collections import ChainMap

    n = ChainMap(x, y, z)
    n['two']       # 2
    n['thirteen']  # 13

If you need a dictionary, just call dict on the ChainMap object:

    d = dict(n)

Dictionary unpacking

With Python 3.5+ (PEP 448), you can unpack your dictionaries as you define a new dictionary:

    d = {**x, **y, **z}

Related: How to merge two dictionaries in a single expression?
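A quick check with the question's dictionaries. Note the one behavioral difference: for duplicate keys, ChainMap keeps the value from the first mapping that has the key, while {**x, **y, **z} keeps the last one written (here the duplicates 'four', 'three' and 'zero' happen to carry the same values, so both approaches agree):

```python
from collections import ChainMap

x = {'six': 6, 'thirteen': 13, 'fifty five': 55}
y = {'two': 2, 'four': 4, 'three': 3, 'one': 1, 'zero': 0, 'ten': 10}
z = {'nine': 9, 'four': 4, 'three': 3, 'eleven': 11, 'zero': 0, 'seven': 7}

chained = dict(ChainMap(x, y, z))   # first mapping wins on duplicates
unpacked = {**x, **y, **z}          # last unpacking wins on duplicates

print(len(chained))          # 12 distinct keys
print(chained == unpacked)   # True for this data
```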
Python - Printing duplicates after getting previous and next element in a list

Sorry for the beginner question -- in running this code, it prints the output twice rather than printing once and then continuing on to the next iteration of the loop. I'm sure this is simply a formatting error, but I can't seem to spot it... Thanks!

    myList = [1, 1, 1, 0.5, 1, 1, 2, 1, 0.5, 1, 3]

    for thisThing in myList:
        baseIndex = myList.index(thisThing)
        if thisThing == 0.5:
            get_previous = myList[baseIndex - 1]
            get_next = myList[baseIndex + 1]
            T2 = thisThing * 2
            if T2 == get_previous and T2 == get_next:
                print("Success at index " + str(myList.index(thisThing)))
                continue

OUTPUT:

    Success at index 3
    Success at index 3
index returns the index of the first occurrence of the given item, in this case 3. You could fix the code by changing it to iterate over indexes instead:

    myList = [1, 1, 1, 0.5, 1, 1, 2, 1, 0.5, 1, 3]

    for baseIndex in range(1, len(myList) - 1):
        thisThing = myList[baseIndex]
        if thisThing == 0.5:
            get_previous = myList[baseIndex - 1]
            get_next = myList[baseIndex + 1]
            T2 = thisThing * 2
            if T2 == get_previous and T2 == get_next:
                print("Success at index " + str(baseIndex))

Output:

    Success at index 3
    Success at index 8

Since the above only iterates the indexes 1-9, it will also work in case 0.5 is the first or last element.

You could also use enumerate and zip to iterate over tuples (prev, current, next):

    for i, (prev, cur, nxt) in enumerate(zip(myList, myList[1:], myList[2:]), 1):
        t2 = cur * 2
        if cur == 0.5 and prev == t2 and nxt == t2:
            print("Success at index " + str(i))
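Collecting the hits into a list instead of printing them makes the fix easy to verify:

```python
myList = [1, 1, 1, 0.5, 1, 1, 2, 1, 0.5, 1, 3]

# zip over (prev, cur, nxt) windows; enumerate(..., 1) gives cur's index
hits = [
    i
    for i, (prev, cur, nxt) in enumerate(zip(myList, myList[1:], myList[2:]), 1)
    if cur == 0.5 and prev == cur * 2 and nxt == cur * 2
]
print(hits)  # [3, 8]
```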
Trying to select element inside webelement, but i get object is not callable error I have been trying to select element inside of web element, and i get: "TypeError: 'WebElement' object is not callable"def get_engagmet(driver, time, a): engagment = {} body_element = driver.find_elements_by_xpath("//div[@class='_5pcr userContentWrapper']") link = body_element[a].find_element_by_xpath(".//a[@rel='theater']") print("this is link") print(link("href")) time.sleep(3)By all accounts this should work.
Replace

    print(link("href"))

with

    print(link.get_attribute("href"))

P.S. You might also share the URL you are trying to scrape, to check whether your XPath matches correctly.
How to print dictionary key and values using for loop in python #!/usr/bin/pythondict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}for items in dict: print items print value
To iterate over both the key and the value of a dictionary, you can use the items() method, or iteritems() for Python 2.x.

So the code you are looking for will be as follows:

    d = {'Name': 'Zara', 'Age': '7', 'Class': 'First'}
    # dict is the name of a built-in type, so you shouldn't shadow it with a variable

    for key, value in d.items():
        print(key, value)

And if you were to use Python 2.x rather than 3.x, you would use this line in the for loop:

    for key, value in d.iteritems():

I hope this has answered your question; just next time try to do a little more research, as there is probably another question answering this already.
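A quick check with the question's dictionary (Python 3, where dicts preserve insertion order since 3.7):

```python
d = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}

pairs = []
for key, value in d.items():
    print(key, value)
    pairs.append((key, value))

print(pairs[0])  # ('Name', 'Zara')
```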
Accounting for 'i' and 'j' dots in OCR python

I am trying to create an OCR system in Python - the first part involves extracting all characters from an image. This works fine and all characters are separated into their own bounding boxes. Code attached below:

    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.patches as mpatches
    from scipy.misc import imread, imresize
    from skimage.segmentation import clear_border
    from skimage.morphology import label
    from skimage.measure import regionprops

    image = imread('./ocr/testing/adobe.png', 1)
    bw = image < 120
    cleared = bw.copy()
    clear_border(cleared)
    label_image = label(cleared, neighbors=8)
    borders = np.logical_xor(bw, cleared)
    label_image[borders] = -1
    print label_image.max()

    fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(6, 6))
    ax.imshow(bw, cmap='jet')

    for region in regionprops(label_image):
        if region.area > 20:
            minr, minc, maxr, maxc = region.bbox
            rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,
                                      fill=False, edgecolor='red', linewidth=2)
            ax.add_patch(rect)

    plt.show()

However, since the letters i and j have 'dots' on top of them, the code takes the dots as separate bounding boxes. I am using the regionprops library. Would it also be a good idea to resize and normalise each bounding box?

How would I modify this code to account for i and j? My understanding is that I would need to merge the bounding boxes that are close by? Tried with no luck... Thanks.
Yes, you generally want to normalize the content of your bounding boxes to fit your character classifier's input dimensions (assuming you are working with character classifiers using explicit segmentation, not sequence classifiers segmenting implicitly).

For merging vertically separated CCs of the same letter, e.g. i and j, I'd try an anisotropic Gaussian filter (very small sigma in the x-direction, larger in the y-direction). The exact parameterization will depend on your input data, but it should be easy to find a suitable value experimentally such that all letters result in exactly one CC.

An alternative would be to analyze CCs which exhibit horizontal overlap with other CCs and merge those pairs where the overlap exceeds a certain relative threshold.

Illustrating on the given example:

```python
# Anisotropic Gaussian
from scipy.ndimage.filters import gaussian_filter
filtered = gaussian_filter(f, (2, 0))
plt.imshow(filtered, cmap=plt.cm.gray)

# Now threshold
bin = filtered < 1
plt.imshow(bin, cmap=plt.cm.gray)
```

It's easy to see that each character is now represented by exactly one CC. Now we pretty much only have to apply each mask and crop the white areas to end up with the bounding boxes for each character. After normalizing their size we can directly feed them to the classifier (consider that we lose ascender/descender line information as well as width/height ratio, though, and those may be useful features for the classifier; so it should help feeding them explicitly to the classifier in addition to the normalized bounding box content).
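The horizontal-overlap alternative mentioned above can be sketched without any imaging libraries. This is a hypothetical helper, not part of the question's code: boxes are `(minr, minc, maxr, maxc)` tuples as returned by `regionprops`, and two boxes are merged when their column ranges overlap by more than a threshold fraction of the narrower box (vertical proximity is ignored here for simplicity):

```python
def merge_vertical_components(boxes, min_overlap=0.5):
    """Merge bounding boxes (minr, minc, maxr, maxc) whose column ranges
    overlap by more than min_overlap of the narrower box's width."""
    merged = []
    for box in sorted(boxes, key=lambda b: b[1]):  # sort by left edge
        for i, other in enumerate(merged):
            # horizontal overlap of the two column ranges
            overlap = min(box[3], other[3]) - max(box[1], other[1])
            narrower = min(box[3] - box[1], other[3] - other[1])
            if narrower > 0 and overlap / narrower > min_overlap:
                # merge into the enclosing box
                merged[i] = (min(box[0], other[0]), min(box[1], other[1]),
                             max(box[2], other[2]), max(box[3], other[3]))
                break
        else:
            merged.append(box)
    return merged

# An 'i': a dot box above a body box sharing the same columns, plus one other letter
boxes = [(0, 10, 2, 12), (4, 10, 12, 13), (4, 20, 12, 23)]
print(merge_vertical_components(boxes))  # [(0, 10, 12, 13), (4, 20, 12, 23)]
```

The dot and the body collapse into one box, while the unrelated letter is left alone.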
How can jupyter access a new tensorflow module installed in the right path?

Where should I put the model folder? I'm confused because python imports modules from somewhere in anaconda (e.g. `import numpy`), but I can also import data (e.g. `file.csv`) from the folder in which my jupyter notebook is saved.

The TF-Slim image models library is not part of the core TF library, so I checked out the tensorflow/models repository with:

```
cd $HOME/workspace
git clone https://github.com/tensorflow/models/
```

I'm not sure what $HOME/workspace is. I'm running an ipython/jupyter notebook from users/me/workspace/, so I saved it to users/me/workspace/models.

In jupyter, I write:

```python
import tensorflow as tf
from datasets import dataset_utils

# Main slim library
slim = tf.contrib.slim
```

But I get an error: `ImportError: No module named datasets`. Any tips? I understand that my tensorflow code is stored in '/Users/me/anaconda/lib/python2.7/site-packages/tensorflow/__init__.pyc', so maybe I should save the new models folder (which contains models/datasets) there?
From the error `ImportError: No module named datasets`, it seems that no package named `datasets` is present. You need to install the `datasets` package and then run your script. Once you install it, you will find the package at "/Users/me/anaconda/lib/python2.7/site-packages/" or at "/Users/me/anaconda/lib/python2.7/".

Download the package from https://pypi.python.org/pypi/dataset and install it. This should work.
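For the TF-Slim case specifically, the `datasets` module ships inside the cloned repository rather than on PyPI, so another common fix is to put the slim directory on the interpreter's search path before importing. This is a sketch; the exact sub-directory depends on the repo revision (e.g. `models/slim` or `models/research/slim`), and the path below is a hypothetical clone location:

```python
import os
import sys

# Hypothetical path -- adjust to wherever the repo was actually cloned
slim_path = os.path.expanduser("~/workspace/models/slim")
if slim_path not in sys.path:
    sys.path.append(slim_path)

# After this, `from datasets import dataset_utils` resolves against
# the `datasets` sub-folder inside the slim directory.
print(slim_path in sys.path)  # True
```

In a notebook this cell has to run before the failing import; restarting the kernel resets `sys.path`.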
Adding a Root Node to Json output

I am trying to add a top-level element to a JSON output that I get from an API. The following example shows the JSON output that I get from the API:

```json
[{
    "Id": 1,
    "FirstName": "Ken",
    "LastName": "Sánchez",
    "Info": { "MiddleName": "J" }
}, {
    "Id": 2,
    "FirstName": "Terri",
    "LastName": "Duffy",
    "Info": { "MiddleName": "Lee" }
}, {
    "Id": 3,
    "FirstName": "Roberto",
    "LastName": "Tamburello"
}, {
    "Id": 4,
    "FirstName": "Rob",
    "LastName": "Walters"
}, {
    "Id": 5,
    "FirstName": "Gail",
    "LastName": "Erickson",
    "Info": { "Title": "Ms.", "MiddleName": "A" }
}]
```

And the following example shows the desired JSON output that contains the root element "info":

```json
{
    "info": [{
        "Id": 1,
        "FirstName": "Ken",
        "LastName": "Sánchez",
        "Info": { "MiddleName": "J" }
    }, {
        "Id": 2,
        "FirstName": "Terri",
        "LastName": "Duffy",
        "Info": { "MiddleName": "Lee" }
    }, {
        "Id": 3,
        "FirstName": "Roberto",
        "LastName": "Tamburello"
    }, {
        "Id": 4,
        "FirstName": "Rob",
        "LastName": "Walters"
    }, {
        "Id": 5,
        "FirstName": "Gail",
        "LastName": "Erickson",
        "Info": { "Title": "Ms.", "MiddleName": "A" }
    }]
}
```

I tried to append it; however, it gets included as another element at the end of the array. Could someone give me a hint on this?

I also tried the following code, but it breaks the JSON format:

```python
with open(file_name, 'w') as f:
    data2 = '{"info":' + str(data) + '}]'
    json.dump(data2, f)
```

Thanks!
It is confusing that you use `data` as both the name of the file and the data itself. Assuming `data` is already the parsed object returned by the API, you can do:

```python
with open(file_name, 'w') as f:
    json.dump({'info': data}, f)
```
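A minimal, self-contained version of that fix, with a hypothetical file name and a shortened stand-in for the API payload:

```python
import json

# Shortened stand-in for the API response
data = [
    {"Id": 1, "FirstName": "Ken", "LastName": "Sánchez"},
    {"Id": 2, "FirstName": "Terri", "LastName": "Duffy"},
]

# Wrap the list in a dict before dumping -- no string concatenation needed
with open("people.json", "w") as f:
    json.dump({"info": data}, f, ensure_ascii=False, indent=2)

# Read it back to show the structure is valid JSON with one root key
with open("people.json") as f:
    wrapped = json.load(f)

print(list(wrapped))              # ['info']
print(wrapped["info"][0]["Id"])   # 1
```

The key point is that the wrapping happens on the Python object, not on its string representation: `str(data)` produces Python repr syntax (single quotes), which is not valid JSON.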
How can I automatically fill fields for new users that register in Django?

I have got a problem with automatically filling fields for new users that register in Django. I don't know how to get some of the information.

So this is my Profile model:

```python
class Profile(models.Model):
    user = models.ForeignKey(MyUser, null=True, on_delete=models.CASCADE)
    name = models.CharField(max_length=45)
    last_name = models.CharField(max_length=60)
    profile_picture = models.ImageField(default='profile_pics/Default.jpg',
                                        upload_to='profile_pics')

    def __str__(self):
        return f'{self.user.first_name} {self.user.last_name}'
```

This is the function that automatically creates a Profile:

```python
@receiver(post_save, sender=MyUser)
def create_profile(sender, instance, created, **kwargs):
    if created:
        Profile.objects.create(user=instance)
```

And this is the MyUser model that I use:

```python
class CustomUserManager(BaseUserManager):
    def _create_user(self, email, password, first_name, last_name, mobile, **extra_fields):
        if not email:
            raise ValueError("Email must be provided")
        if not password:
            raise ValueError("Password is not provided")
        user = self.model(
            email=self.normalize_email(email),
            first_name=first_name,
            last_name=last_name,
            mobile=mobile,
            **extra_fields
        )
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_user(self, email, password, first_name, last_name, mobile, **extra_fields):
        extra_fields.setdefault('is_staff', True)
        extra_fields.setdefault('is_active', True)
        extra_fields.setdefault('is_superuser', False)
        return self._create_user(email, password, first_name, last_name, mobile, **extra_fields)

    def create_superuser(self, email, password, first_name, last_name, mobile, **extra_fields):
        extra_fields.setdefault('is_staff', True)
        extra_fields.setdefault('is_active', True)
        extra_fields.setdefault('is_superuser', True)
        return self._create_user(email, password, first_name, last_name, mobile, **extra_fields)


class MyUser(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField(db_index=True, unique=True, max_length=254)
    first_name = models.CharField(max_length=240)
    last_name = models.CharField(max_length=240)
    mobile = models.CharField(max_length=50)
    is_staff = models.BooleanField(default=True)
    is_active = models.BooleanField(default=True)
    is_superuser = models.BooleanField(default=False)

    objects = CustomUserManager()

    USERNAME_FIELD = 'email'
    REQUIRED_FIELDS = ['first_name', 'last_name', 'mobile']
```

I was wondering how I can get `first_name` and `last_name` from the user who registers and pass them into the `create_profile` function, so it adds them automatically to the Profile.
You can send `first_name` and `last_name` to `Profile` via your signal like this:

```python
@receiver(post_save, sender=MyUser)
def create_profile(sender, instance, created, **kwargs):
    if created:
        Profile.objects.create(user=instance,
                               name=instance.first_name,
                               last_name=instance.last_name)
```

Also, does the `Profile` model have a `ForeignKey` relationship on purpose? If not, I would consider changing it to `models.OneToOneField`; otherwise, it implies that one user can have multiple profiles:

```python
class Profile(models.Model):
    user = models.OneToOneField(MyUser, null=True, on_delete=models.CASCADE)
    ...
```
I am unable to label the data points on the graph using matplotlib

This is the code that I have been writing, but I am unable to add labels to the data points. I have tried multiple ways, but keep getting one error after another!

The dataset column read on the 9th line, 'country', is to be used for labelling. I want to label the first and last data points. Please help!

```python
import pandas as pd
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

data = pd.read_csv('happy_income1.csv')
happy = data['happyScore']
satis = data['avg_satisfaction']
country = data['country']

# Zipping 2 arrays together
satis_happy = np.column_stack((satis, happy))

# Sorting
data.sort_values('avg_satisfaction', inplace=True)  # Sorting data column

# Filtering
satisfied = data[data['avg_satisfaction'] > 4]  # Making selection as per requirement
print(satisfied)

# Making clusters as required
k_res = KMeans(n_clusters=3).fit(satis_happy)
cluster = k_res.cluster_centers_
print(cluster)

# Plotting
fig, week4 = plt.subplots()
week4.scatter(x=happy, y=satis)
week4.scatter(x=cluster[:, 0], y=cluster[:, 1], s=9999, alpha=0.25)
week4.set_xlabel('Happiness')
week4.set_ylabel('Satisfaction')
week4.set_title('Happiness versus Satisfaction')

# Labelling
# ----------------------------------------------

plt.show()
```

CSV File Link: Click Here
You can add these two additional lines after plotting the scatter plots. They will add the text to the first and last entries. You can do additional things like a background box, etc. if required; you can check the matplotlib documentation and examples here.

```python
offset = 0.05
week4.annotate(country[0], (happy[0] + offset, satis[0] + offset),
               color='red', weight='bold')
week4.annotate(country.iat[-1], (happy.iat[-1] + offset, satis.iat[-1] + offset),
               color='blue', weight='bold')
```

Output graph
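A self-contained sketch of the same idea, using made-up data in place of the CSV and matplotlib's Agg backend so it runs headless (country names and values below are invented for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

countries = ["Norway", "India", "Chad"]
happy = [7.5, 4.2, 2.9]
satis = [7.9, 4.5, 3.1]

fig, ax = plt.subplots()
ax.scatter(happy, satis)

offset = 0.05
# Label only the first and last data points
ax.annotate(countries[0], (happy[0] + offset, satis[0] + offset),
            color="red", weight="bold")
ax.annotate(countries[-1], (happy[-1] + offset, satis[-1] + offset),
            color="blue", weight="bold")

# Annotations land in ax.texts, so we can inspect what was labelled
labels = [t.get_text() for t in ax.texts]
print(labels)  # ['Norway', 'Chad']
```

Note that with a pandas Series, positional access like `[-1]` needs `.iat[-1]` (as in the answer above); plain lists take negative indices directly.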
Why can't set() be applied twice to my zip() result in Python?

To illustrate my question, here is minimal code:

```python
x = [1, 2, 3]
y = ['a', 'b', 'c']
m = zip(x, y)
print(set(m))
print(set(m))
```

I expected the second `print(set(m))` to generate the same result as the first, but here is what we got:

```
{(1, 'a'), (2, 'b'), (3, 'c')}
set()
```

If we replace `set` with `list`, the result is the same. Any suggestion on why this happens?
Since Python 3, `zip` returns an iterator, not a list. Your first `set` call 'unwinds' (consumes) the iterator referenced by `m`, so it is empty for the next call. To solve this, use:

```python
m = list(zip(x, y))
```
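A short demonstration of both behaviors side by side:

```python
x = [1, 2, 3]
y = ['a', 'b', 'c']

m = zip(x, y)            # an iterator: single pass only
first = set(m)           # consumes the iterator
second = set(m)          # nothing left to consume
print(first)   # {(1, 'a'), (2, 'b'), (3, 'c')}
print(second)  # set()

m = list(zip(x, y))      # materialize once, reuse freely
print(set(m) == set(m))  # True
```

The same single-pass behavior applies to `map`, `filter`, generator expressions, and file objects: any function that iterates over them drains them.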
Python error: "TypeError: Object of type 'NoneType' has no len()"

How can I fix this error? I am attempting to eliminate the number of names the user chooses.

```python
names = []

def eliminate():
    votes = int(input("How many people would you like voted off? "))
    popped = random.shuffle(names)
    for i in range(votes):
        names.pop(len(popped))
    print("The remaining players are" + names)

for i in range(0, 6):
    name = input("Give me a name: ")
    names.append(name)

eliminate()
```
`random.shuffle()` returns `None`, not the shuffled list, which is actually shuffled in place. Since you want to pop the last item in the list, you do not need to provide an index to `pop()`:

```python
random.shuffle(names)
for i in range(votes):
    names.pop()
```
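Putting that together, here is a corrected sketch of the question's function, rewritten to take its inputs as parameters so it runs without `input()` (the player names and seed are made up for the example):

```python
import random

def eliminate(names, votes):
    """Shuffle the list in place, then remove `votes` random names."""
    random.shuffle(names)   # shuffles in place, returns None
    for _ in range(votes):
        names.pop()         # pop() with no index removes the last item
    return names

random.seed(0)  # deterministic for the example
players = ["Ann", "Bob", "Cat", "Dan", "Eve", "Fay"]
remaining = eliminate(players, 2)
print(len(remaining))  # 4
```

Note also that the original `print("The remaining players are" + names)` would raise a `TypeError` of its own, since a string cannot be concatenated with a list; `print("The remaining players are", names)` avoids that.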
Barrier in Multiprocessing Python

I am working on multiprocessing in Python and I am stuck at this point. I want to put a barrier so that all previous activities are performed first. This pseudocode will be better for understanding:

```python
def func1():
    # does something and returns something

def func2():
    # does something and returns something

if __name__ == '__main__':
    p1 = Process(target=func1)
    result = []
    result.append(p1.start())
    p2 = Process(target=func2)
    result.append(p2.start())
    # A barrier here so that all the above statements are executed before the next one
    print(result)
```
What you're looking for is `Process.join()`. You would use it like this:

```python
def func1():
    # does something and returns something

def func2():
    # does something and returns something

if __name__ == '__main__':
    p1 = Process(target=func1)
    result = []
    result.append(p1.start())
    p2 = Process(target=func2)
    result.append(p2.start())
    p1.join()
    p2.join()
    print(result)
```

Also, the way you're capturing the results from the functions is incorrect: `p.start()` returns `None`, not the function's return value. Look at this answer for details on how to retrieve results from a process: How can I recover the return value of a function passed to multiprocessing.Process?
Pivoting a repeating Time Series Data

I am trying to pivot this data in such a way that I get columns like e.g. AK_positive, AK_probableCases, AK_negative, AL_positive, and so on. You can get the data here:

```python
df = pd.read_csv('https://covidtracking.com/api/states/daily.csv')
```
Just flatten the original MultiIndex column into tuples using `.to_flat_index()`, and rearrange the tuple elements into a new column name:

```python
df_pivoted.columns = [f"{i[1]}_{i[0]}" for i in df_pivoted.columns.to_flat_index()]
```

Result:

```python
# start from April
df_pivoted[df_pivoted.index >= 20200401].head(5)
```

```
          AK_positive  AL_positive  AR_positive  ...  WI_grade  WV_grade  WY_grade
date                                            ...
20200401        133.0       1077.0        584.0  ...       NaN       NaN       NaN
20200402        143.0       1233.0        643.0  ...       NaN       NaN       NaN
20200403        157.0       1432.0        704.0  ...       NaN       NaN       NaN
20200404        171.0       1580.0        743.0  ...       NaN       NaN       NaN
20200405        185.0       1796.0        830.0  ...       NaN       NaN       NaN
```
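The whole pipeline (pivot, then flatten the column names) can be shown end to end on a tiny stand-in for the covidtracking data; the numbers below mimic a couple of real rows but are hard-coded for illustration:

```python
import pandas as pd

# Tiny stand-in for the daily per-state data
df = pd.DataFrame({
    "date":     [20200401, 20200401, 20200402, 20200402],
    "state":    ["AK", "AL", "AK", "AL"],
    "positive": [133, 1077, 143, 1233],
    "negative": [500, 900, 600, 1000],
})

# Pivot: one row per date, a (metric, state) MultiIndex on the columns
wide = df.pivot(index="date", columns="state", values=["positive", "negative"])

# Flatten the MultiIndex columns to names like AK_positive
wide.columns = [f"{state}_{metric}" for metric, state in wide.columns.to_flat_index()]

print(sorted(wide.columns))  # ['AK_negative', 'AK_positive', 'AL_negative', 'AL_positive']
print(wide.loc[20200401, "AK_positive"])  # 133
```

Each `(metric, state)` tuple from `to_flat_index()` becomes one flat `state_metric` string, which is exactly the column shape the question asks for.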
Python pygame sprites collision detection. How to define which sprite within a group collided, and affect its attributes by reducing a point

What I am trying to do is create an Arkanoid game, where the bricks each have 3 points of strength and then they die. The issue is that instead of just the particular brick that gets hit losing points, the whole brick_sprites group loses points. And when one is supposed to die, all the previous ones in the list up to that one die. I think the issue is that it loops because of the update on line #240. Please check line 65, at `def collision(self):` under the Brick class. The issue is somewhere there.

```python
"""This is a simple version of arkanoid game"""
import sys
import pygame
import random

# Set colors R G B
white = (255, 255, 255)
black = (0, 0, 0)
orange = (255, 100, 10)
light_blue = (0, 144, 255)
shadow = (192, 192, 192)
purple = (152, 0, 152)

# Display
display_height = 999
display_width = 444
pygame.display.set_caption = ("Arkanoid 1.0")
FPS = 60

# Movement speed
speed = display_width // 60

# Movements
left = (-speed, 0)
right = (speed, 0)
up = (0, speed)
diagonal_left = [-speed, -speed]
diagonal_right = [speed, -speed]

# Game objects dimentions
base_dimentions = (display_width // 5, display_height // 100)
[brick_width, brick_height] = [display_width // 20 * 2, display_height // 100]
brick_dimentions = [brick_width, brick_height]
ball_dimentions = (display_height // 100, display_height // 100)

# Initializing text font
pygame.font.init()
txt_font = pygame.font.SysFont("Score: ", display_height // 44)

# Initializing sprite lists
all_sprites = pygame.sprite.Group()
brick_sprites = pygame.sprite.Group()


class Brick(pygame.sprite.Sprite):
    def __init__(self, point_value, center):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.Surface(brick_dimentions)
        self.image.fill(purple)
        self.rect = self.image.get_rect()
        self.rect.center = center
        self.point_value = point_value

    def update(self):
        self.collision()

    def collision1(self):  # This works, no issue.
        # If brick is hit, loses a point
        collision = pygame.sprite.spritecollide(ball, brick_sprites, True)
        return collision

    def collision(self):  # Here is the issue.
        # If brick is hit, loses a point
        collision = pygame.sprite.spritecollide(ball, brick_sprites, False)
        if collision:
            self.point_value -= 1
            if self.point_value == 0:
                self.kill()  ## BUGGISH ##


class Ball(pygame.sprite.Sprite):
    """Initiates a moving ball and its attributes"""
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.Surface(ball_dimentions)
        self.image.fill(light_blue)
        self.rect = self.image.get_rect()
        self.rect.center = self.init_position()
        self.direction = random.choice([diagonal_left, diagonal_right])
        self.score = 0

    def update(self):
        self.movement()

    def init_position(self):
        # Initialize position of the ball
        init_position = (board.rect.center[0],
                         (board.rect.center[1] - (base_dimentions[1] / 2) - (ball_dimentions[1] / 2)))
        return init_position

    def collision(self):
        # If hit bricks
        collision = pygame.sprite.spritecollideany(ball, brick_sprites)
        if collision:
            self.direction[1] *= -1
            self.score += 1

    def movement(self):
        self.containment()
        self.rect[1] += self.direction[1]
        self.rect[0] += self.direction[0]
        self.deflect()
        self.ball_loss()
        self.collision()

    def containment(self):
        if self.rect.right >= display_width or self.rect.left <= 0:
            self.direction[0] *= -1
        if self.rect.top <= 0:
            self.direction[1] *= -1

    def ball_loss(self):
        if self.rect.bottom >= display_height:
            self.reset()
            bricks_reset()

    def reset(self):
        self.rect.center = self.init_position()
        self.direction[1] *= -1
        self.score = 0

    def deflect(self):
        # If hit base_board, deflect
        if (self.rect.bottom == board.rect.top and
                (board.rect.left <= self.rect.left <= board.rect.right or
                 board.rect.left <= self.rect.right <= board.rect.right)):
            self.direction[1] *= -1
            self.board_ball_interaction()

    def board_ball_interaction(self):
        # When board is moving, affects ball's direction/speed
        keystate = pygame.key.get_pressed()
        if keystate[pygame.K_LEFT] and board.rect.left > 0:
            self.direction[0] -= speed // 2
        elif keystate[pygame.K_RIGHT] and board.rect.right < display_width:
            self.direction[0] += speed // 2


class Base_board(pygame.sprite.Sprite):
    """Initiates base_board class and its attributes"""
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.Surface(base_dimentions)
        self.image.fill(orange)
        self.rect = self.image.get_rect()
        self.rect.center = (display_width // 2, display_height - 2 * base_dimentions[1])
        self.x_direction = 0

    def update(self):
        # Updates the class's position according to user's input
        self.x_direction = 0
        self.movement()
        self.rect.x += self.x_direction

    def movement(self):
        # Creates movement and constrains object within screen dimentions
        keystate = pygame.key.get_pressed()
        if keystate[pygame.K_LEFT]:
            if not self.rect.left <= 0:
                self.x_direction = -speed
        elif keystate[pygame.K_RIGHT]:
            if not self.rect.right >= display_width:
                self.x_direction = speed

    def shoot(self):
        pass

    def enlogate(self):
        pass


def control():
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()


# Creating and adding all sprites to lists
board = Base_board()
ball = Ball()
all_sprites.add(board)
all_sprites.add(ball)


def bricks_list_creator():
    # Creates and adds bricks into a list
    i = 9
    point_value = 2  ####
    coordinates = [display_width // 20 + brick_width / 6, display_height // 20]
    while i > 0:
        brick = Brick(point_value, (coordinates))  ####
        coordinates[0] += brick_width * 1.1
        brick_sprites.add(brick)
        i -= 1
    return brick_sprites


def bricks_reset():
    # Reset brick list
    brick_sprites.empty()
    bricks_list_creator()
    return brick_sprites


def render_text(screen):
    text = txt_font.render("Score: {0}".format(ball.score), 1, (0, 0, 0))
    return screen.blit(text, (5, 10))


def render_main(screen):
    all_sprites.draw(screen)
    brick_sprites.draw(screen)
    render_text(screen)


# Game main
def main():
    pygame.init()
    clock = pygame.time.Clock()
    screen = pygame.display.set_mode((display_width, display_height))
    bricks_list_creator()
    while True:
        # Events
        clock.tick(FPS)
        control()
        # Update
        brick_sprites.update()
        all_sprites.update()
        # Render
        screen.fill(shadow)
        render_main(screen)
        pygame.display.flip()
        pygame.display.update()


main()
```
I think the issue is in the `update()` of your `Brick` class calling the collision check.

The sprite `update` function is typically used for changing the position or look of your sprite, and it is called every frame for every sprite in the group. So it's not a good place to check for collisions: every brick runs `spritecollide()` against the whole group, and every brick that sees any collision decrements its own `point_value`.

A `Brick` only needs to know its `point_value`; it doesn't move (AFAIK):

```python
class Brick(pygame.sprite.Sprite):
    def __init__(self, point_value, center):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.Surface(brick_dimentions)
        self.image.fill(purple)
        self.rect = self.image.get_rect()
        self.rect.center = center
        self.point_value = point_value

    def takeHit(self, ball_sprite):
        # the ball has collided with *this* brick
        self.point_value -= 1
        if self.point_value == 0:
            self.kill()
```

Then in `Ball.collision()` use `pygame.sprite.spritecollide()` to get the list of bricks the ball has collided with, and reduce their hit points:

```python
class Ball:
    # [...]
    def collision(self):
        # calculate the list of bricks hit
        hit_list = pygame.sprite.spritecollide(self, brick_sprites, False)
        for brick in hit_list:
            brick.takeHit(self)  # may (or may not) kill the brick
```

Most of the time the `hit_list` is going to be a single brick, but depending on the size of the ball, perhaps occasionally it's two bricks.
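The per-brick bookkeeping can be exercised without pygame at all. This stripped-down sketch (plain classes standing in for sprites, invented for illustration) shows why decrementing only the bricks in the hit list leaves the rest of the group untouched:

```python
class FakeBrick:
    def __init__(self, point_value):
        self.point_value = point_value
        self.alive = True

    def take_hit(self):
        self.point_value -= 1
        if self.point_value == 0:
            self.alive = False   # stand-in for sprite.kill()

bricks = [FakeBrick(3) for _ in range(5)]

# Simulate the ball hitting only brick 2, on three separate frames
for _ in range(3):
    hit_list = [bricks[2]]       # what spritecollide() would return here
    for brick in hit_list:
        brick.take_hit()

print([b.point_value for b in bricks])  # [3, 3, 0, 3, 3]
print([b.alive for b in bricks])        # [True, True, False, True, True]
```

Contrast this with the original code, where every brick's `update()` ran the group-wide check: all five bricks would have lost a point on every hit.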
Failed pycurl install on macOS using pip

I hope someone can help. I am currently building my Python environment on my 2015 MacBook Pro, which is running Sierra 10.12.6. I have stumbled across many issues downloading the modules needed to run my scripts for automating tasks at my job (such as automated emails etc.), but I have managed to overcome them. PyCurl, however, will not let me through.

The command I am using, along with various variants, is essentially:

```
sudo pip install pycurl
```

which returns the following:

```
Collecting pycurl
  Downloading https://files.pythonhosted.org/packages/e8/e4/0dbb8735407189f00b33d84122b9be52c790c7c3b25286826f4e1bdb7bde/pycurl-7.43.0.2.tar.gz (214kB)
    100% |████████████████████████████████| 215kB 5.7MB/s
  Complete output from command python setup.py egg_info:
    Using curl-config (libcurl 7.54.0)
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/private/tmp/pip-install-rSkgA_/pycurl/setup.py", line 913, in <module>
        ext = get_extension(sys.argv, split_extension_source=split_extension_source)
      File "/private/tmp/pip-install-rSkgA_/pycurl/setup.py", line 582, in get_extension
        ext_config = ExtensionConfiguration(argv)
      File "/private/tmp/pip-install-rSkgA_/pycurl/setup.py", line 99, in __init__
        self.configure()
      File "/private/tmp/pip-install-rSkgA_/pycurl/setup.py", line 316, in configure_unix
        specify the SSL backend manually.''')
    __main__.ConfigurationError: Curl is configured to use SSL, but we have not been able to determine which SSL backend it is using. Please see PycURL documentation for how to specify the SSL backend manually.

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-install-rSkgA_/pycurl/
```

The error at the end is really stumping me and the devs on my team. I really hope someone can help, as I have exhausted the resources in my office!

EDIT: the SSL backend issue is what I believe to be the overarching problem.
It seems that Apple stopped including OpenSSL headers since OS X 10.11 El Capitan. To fix this, let's install OpenSSL via Homebrew.

If OpenSSL is not installed, install it as below (if it is already on your Mac, you can skip this):

```
brew install openssl
```

You are getting SSL backend errors: in order to help pycurl find the OpenSSL headers, we need to tell setup.py which SSL backend to use and where OpenSSL can be found. Note: check the openssl directory location on your Mac and change it as needed.

```
pip uninstall pycurl
pip install --install-option="--with-openssl" --install-option="--openssl-dir=/usr/local/opt/openssl" pycurl
```

Use sudo if needed. Hope this helps.
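An alternative worth knowing is pycurl's `PYCURL_SSL_LIBRARY` environment variable, which its setup.py also honors for selecting the backend. This is a sketch, not a guaranteed recipe: the paths assume a default Homebrew install under `/usr/local/opt/openssl`, and your machine may differ.

```shell
# Remove any half-built copy first
pip uninstall -y pycurl

# Tell pycurl's setup.py which SSL backend curl was built against,
# and where Homebrew keeps the OpenSSL headers and libraries
export PYCURL_SSL_LIBRARY=openssl
export LDFLAGS="-L/usr/local/opt/openssl/lib"
export CPPFLAGS="-I/usr/local/opt/openssl/include"

# Force a fresh source build so the flags are actually used
pip install --no-cache-dir --compile pycurl
```

`--no-cache-dir` matters here: otherwise pip may reuse a cached wheel built without the SSL flags, and the original error comes back.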