Tue, Apr 6 Thu, Mar 25 Wed, Mar 17 I don't think we really have a tool for that. Mon, Mar 15 Feb 28 2021 Feb 25 2021 The problem came from an old corrupted file (the File page was still there but not the files themselves). Once these files were deleted, the error message disappeared. Maybe there are still some lost files somewhere but I think we can close this task. Feb 21 2021 Sadly I know the problem, not the solution... @Yug creating lexemes would be very nice but quite difficult. "language + form" is not enough; at least the lexical category is mandatory to create a Lexeme. Other data are needed too, to determine if the lexeme exists and is the same (for instance for cases like "fils" - threads L10371 - and "fils" - son L15917 - or "tour" L2330 and "tour" L2331). How could we solve these problems? Jan 29 2021 I also think a community consultation is not needed: this is an old historic file that makes no sense for most Wikisource projects. Jan 28 2021 Jan 24 2021 Yes, the problem is still here and open. Jan 23 2021 This is a great idea. Jan 13 2021 Jan 5 2021 I can confirm that it seems to be fixed. Nov 20 2020 Sep 30 2020 Sep 10 2020 Just a quick update: we definitely need the distinction between "ñ" and "n". It's quite rare but there are some pairs of words where the tilde is the only distinction, for instance mañ ("this") and man ("he/she stays" but also "moss"). Sep 4 2020 Aug 26 2020 Aug 21 2020 Maybe we can store the list somewhere online (on a wiki page? on a pad? either is good for me) and do a collective review (strike the words that are not really stopwords; for instance, all the numbers are not really stopwords, and there are adverbs too that I'm not sure we should keep). Jun 12 2020 Jun 4 2020 I strongly agree. May 7 2020 Hi @Aklapper, go on any page on any Wikimedia project and you'll see that some labels are not retrieved and the Wikidata Q identifier (or P identifier) is displayed instead. Apr 27 2020 Thanks for the merge @Charlotte Mar 12 2020 Feb 19 2020 @Gehel very true. That said, it won't hurt performance either, and what about all the other problems of not being able to stop a query? (I hate it when I have to restart my browser and/or computer that froze just because I dumbly forgot to remove a wdt:P279*) Feb 5 2020 Sorry about the ping then. Hello, is someone working on this? Thanks. Jan 11 2020 Dec 10 2019 More exactly: the code fr-ca works fine (everywhere AFAIK) but for some reason the name "français canadien" doesn't appear on the list (everywhere AFAIK). I think this is a different and separate bug and a new ticket would be more appropriate. Nov 24 2019 Nov 12 2019 To be precise: apparently (from what I've seen) it appears only in English and French and only for the P195 property. Nov 5 2019 Oct 7 2019 Sep 20 2019 Sep 18 2019 Aug 31 2019 Aug 15 2019 FYI, there is a table at the hackathon working on it right now, at least looking at the first possible obstacle. Aug 14 2019 And now it seems that http://wikidata.rawgraphs.io/ doesn't work at all any more :/ If it's not temporary then we should probably remove the link altogether :( Aug 7 2019 Jul 31 2019 It seems to be fixed, doesn't it? Jul 25 2019 Ok, I understand. I was just suggesting this because "incubated" wikis are the step just before "small" wikis. Could I suggest that some people look at/work on T212881? (not a "small wiki" per se, but a pre-small wiki ;) ) May 13 2019 May 3 2019 Apr 16 2019 Apr 15 2019 Hi, I just tested this new dashboard. The visualisations are great, but I'm more of a number cruncher myself.
Feb 24 2019 For the record, last December after https://www.wikidata.org/wiki/Wikidata:Property_proposal/Astronomical_coordinates, I added celestial coordinates on M31: https://www.wikidata.org/wiki/Q2469#P625 (2 months later, nobody seems to have complained) Feb 10 2019 Did someone do something? Jan 12 2019 Indeed, it's a bit strange; without two brackets, it could be - pizza (Italian: Italian dish) - pizza (Italian / Italian dish) - pizza (Italian dish)<sup>Italian</sup> Jan 11 2019 Jan 6 2019 I think the most important thing is the lemma, so I would put it first. But I'm not sure how not to mix the gloss and the language: - Mutter (German, female parent) Or maybe better: - Mutter (female parent) (German) Jan 4 2019 Adding the language would not entirely solve the problem, but I think it would be a good thing nonetheless. Dec 24 2018 I'm guessing that when Lea says Lexemes (with an uppercase L) she means Lexemes in Wikidata (which has been done partially already). Dec 11 2018 Oh that's strange, it was apparently just a temporary bug. Now it works. Nov 18 2018 Some of it happened for sure, but probably not all. Aug 31 2018 Other idea: pass the cursor over and get translations (for people learning a language) Aug 10 2018 Jun 21 2018 A small story to show why this is important (at least to me) and should be fixed quickly (in my opinion). Jun 14 2018 On Firefox 60: broken. On Chromium: everything works as expected. Jun 4 2018 Yes, I didn't think about it, but commas should be i18ned: "،" for Arabic and Persian, "、" for Chinese and Japanese (and maybe others for other languages but I don't know them; is there a list of i18n commas somewhere?). Jun 1 2018 As explained in my previous message, I agree we need to specify at least language and script. For the rest (country, orthography reform, ...), I think the best way to store this kind of information is to use properties in the lexeme itself. The advantage of properties is that they are really flexible, so we can decide a posteriori what kind of information we want to store in a lexeme. May 30 2018 See also the broader related ticket: T195740 May 28 2018 May 26 2018 Yes for a link (I forgot about it, shame on me) and yes too for multiple lexemes (especially "tour"@fr, which will probably have 3 lexemes with the same features, for the equivalents of "tower"@en, "round"@en and "pottery wheel"@en). Here is a proposal of what the warning message could look like: May 25 2018 May 24 2018 May 23 2018 Ideally we should have: label, language code and lexical category. For instance « gwez (br, noun, L62) » for Lexeme:L62 (or any other presentation and order, as long as the information is there). This is important because the same lemma often exists in multiple languages, and even inside one language the same lemma can have different categories. For instance, "best" in English could be an adjective, an adverb, a noun, and a verb, so at least 4 lexemes, if I understand correctly. Same problem in other languages: - French https://www.wikidata.org/wiki/Lexeme:L1?uselang=fr "sumérien" instead of "sumérien" - Spanish https://www.wikidata.org/wiki/Lexeme:L2?uselang=es "inglés" for "Inglés" - Catalan https://www.wikidata.org/wiki/Lexeme:L2?uselang=ca "anglès" for "anglès" May 22 2018 May 21 2018 May 15 2018 Apr 27 2018 It should be, but can someone check all the points on the checklist? Apr 19 2018 Not sure if it's already there; the Gerrit link seems to be for eu.wikiquote.org not eu.wikisource.org (or am I reading it wrong?) Mar 3 2018 Mar 2 2018 I agree this should be removed.
Links, labels and descriptions could maybe be kept, but at least adding an interwiki sitelink to a wiki that doesn't exist should not be possible. Jan 27 2018 Jan 24 2018 Here are my thoughts on your thoughts ;) @Theklan: I totally agree that the country criterion is not a good idea (especially as 'country' can be quite polysemic). And I agree that the Basque UG did a good job during WLM 2017 and that WLM should build on that.
OPCFW_CODE
My Xbox Gamertag Continuing the Cabinet projection effort from the last post let’s see if we can close out issue #1 on github. We need to add sides and make sure things look good under different circumstances. Continue reading I’ve copied several of the issues from the last post into the github project’s issues list. Herein, I work on issue #1: A Cabinet (Projection). Read on to see how we can make these little squares a little more cubey. Continue reading In the first article, I described the basics of wannabe: a simple graphics engine. I haven’t been letting it sit, I’ve added some small features to it and have been making plans. Here’s my list, from best to worst: - The Muppet Movie — It’s hard to argue with the first one. Lots of heart, fun introductions for all the characters. This was my introduction to the Muppets, I hadn’t really seen much of the series yet. - The Muppet Christmas Carol — Aside from the great story, a great juggling of traditional Muppet roles. Gonzo and Rizzo as narrators, lots of fun songs. Except “The Love Is Gone”, ugh, that one’s tedious (and only involves humans anyway). - Muppets From Space — Mega-funk soundtrack, and lots of chances for the new Muppets Tonight characters to really shine. Much love for Bobo. - Muppet Treasure Island — Tim Curry is always gold. Great songs and wackiness, but it just didn’t rise to the level of the Christmas Carol. - Muppets Most Wanted — a pretty good return to form after the disappointing The Muppets. Action, goofiness, good songs. Ty Burrell and Sam the Eagle had a fun subplot and one of the betters songs. - The Muppets — while it was great to see a return to the big screen, this film felt a little whiny and angsty. It seemed to have very few musical numbers, too. The opening bit was pretty good, though. And here’s the Miasma. I just don’t remember anything remarkable about these Muppet films; I should re-watch so I can insert in the above list. And isn’t there another one? I’m almost positive that in the opening act of Muppets Most Wanted they said that this was the eighth sequel. - The Muppets Take Manhattan - The Great Muppet Caper Three interesting (but not short) economic documents came across my desk over the last couple of months. Sort of natural, I guess, given that I work for Square, but I think they are all worth reading. That said, I’m about halfway through each. 😉 - A simple explanation of how money moves around the banking system — Ever Move money electronically? How does that work? - How the Bitcoin protocol actually works — pretty applicable to Dogecoin, too, a far more impressive currency. - The Economics of Star Trek — I often give some thought to this. This is a fun exploration of really doing it. The debate over gendered pronouns popped up in computer-science-land over a couple of commits for node.js. There are serious problems with the male-dominated culture in CS (and video games, for that matter), and this blog post by the corporate maintainers of node.js was really the wrong reaction on a couple of points. Continue reading After recently getting an Ouya device, I got inspired to make a game. Then I realized two things: I don’t have any skill with 3d, and I learn by doing. I have a large mix of games, many of the recent additions from yon Humble Indie Bundle. The great thing about Humble is you always get access to game soundtracks, so there’s a lot of top-quality music available. Here’s some of my favorites. Continue reading Tried to boot up my macbook after a long weekend. 
Got the inscrutable flashing folder with a question mark. Tried a couple of tips from the internet, like holding option or ‘c’ down, neither of which worked. Then, hooked up my TimeMachine backup and the recovery console came up on its own, and allowed me to fix the disk (partition table reported the wrong size). Everything was happy. Kudos, then, Mr. Mac. If you have a time machine handy, it works pretty well. The opposite of kudos for not really telling me that before or after, and for crashing in the first place! This was way harder than it needs to be, because MacOS X Lion ships with git 18.104.22.168 in /usr/bin. Here’s a quick sequence of steps: - Download and install latest version from http://git-scm.com/. This appears to work but won’t change anything from the command line. The new version is installed in /usr/local/git. - Reconfigure your path to put /usr/local/git/bin in front of /usr/bin. Something like this in your ~/.bash_profile: Several utilities that make life bearable on the mac: - KeyRemap4MacBook — awesome way to make sensible keyboard changes. Recommend the following settings: - Change Eject Key / Eject to Forward Delete - Change Fn Key / Fn+letter to Control_L+Letter (note: I use MacOS’s keyboard changes to change Control keys to Command keys) - Custom Shortcuts / Hold Command+Q to Quit Application - And my own private.xml file, with Change Cmd+H to Ctrl+H (For Eclipse), and remap Alt-F4 to Command+Q (not that I love windows, but I don’t want quitting to be easy) - Stay — $15, but worth it if you move between different monitor configurations. - Airfoil — $25, if you want to use AirPlay with external programs like Pandora or Spotify. - Jumpcut — clipboard history - Disk Inventory X — find out where your disk space is going - Bigger names, all cross-platform: Songbird, Firefox, Chrome, Steam, etc. - Finally, VirtualBox, so I can use real operating systems when I need them. No real editorial here, just capturing some metrics. Number of keys lost when coming out of screensaver or sleep to the unlock password prompt: - Windows XP: 1..n — You have to hit ctrl-alt-del anyway. And all keys between CAD and when the prompt shows are lost. - MacOS X (Snow, Lion): 0..1 — sometimes works, sometimes doesn’t. - Linux (Ubuntu 6.6 +): 0 keys lost. Always works just like you want. You can type your password before your screen powers on. This gallery contains 9 photos. As promised, some photos of the fun at the Supercross: And for your viewing pleasure, a moving picture, starting 10 seconds before the final race. Here you can see the insane fireballs and what-not. Git tries its darndest to be as hard as possible to use. More than anything, it suffers from too-many-options-itis and correspondingly confusing options, often a warning flag for a programming language or tool. But git’s core is good, and it has a lot of features that are quite endearing. I’m not an expert, but I’ll share what I know. Continue reading As mentioned in my Third Day post, I’m using a MacBook now, every day, as my work computer. And, well, I’ve not died nor been struck by lightning or anything. But am I a convert? Read my my ongoing discussion of my experiences with the platform. This week: the rest of the hardware. Continue reading As mentioned in my Third Day post, I’m using a MacBook now, every day, as my work computer. And, well, I’ve not died nor been struck by lightning or anything. But am I a convert? Read my ongoing discussion of my experiences with the platform. 
This week: the keyboard hardware and its use in software. Continue reading
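For the PATH change described in the git post above, the snippet meant to follow "Something like this in your ~/.bash_profile:" was presumably along these lines (a guess based on the surrounding text, since the original snippet didn't survive):
# Put the downloaded git (installed under /usr/local/git) ahead of Apple's /usr/bin/git
export PATH=/usr/local/git/bin:$PATH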
OPCFW_CODE
Show listbox outside of form (winforms) Is it at all possible to get my listbox to be shown outside of the form's bounds? One of the solutions is to make the form itself transparent and add a panel instead of the form for the background. But is there any other, more delightful way to do that? UPD: I need to make a custom autocomplete for a textbox, to support wildcards, so I want a listbox to be shown below the textbox. My form's size should be about the size of the textbox, so stretching the form vertically doesn't work in this case. Thx
Actually, it's possible. Here's the way:
public class PopupWindow : System.Windows.Forms.ToolStripDropDown
{
    private System.Windows.Forms.Control _content;
    private System.Windows.Forms.ToolStripControlHost _host;

    public PopupWindow(System.Windows.Forms.Control content)
    {
        // Basic setup...
        this.AutoSize = false;
        this.DoubleBuffered = true;
        this.ResizeRedraw = true;
        this._content = content;
        this._host = new System.Windows.Forms.ToolStripControlHost(content);

        // Positioning and sizing
        this.MinimumSize = content.MinimumSize;
        this.MaximumSize = content.Size;
        this.Size = content.Size;
        content.Location = Point.Empty;

        // Add the host to the item list
        this.Items.Add(this._host);
    }
}

var popup = new PopupWindow(listbox1);
popup.Show();
Could you use a second form that only contains the listbox? You'd need a little code to move it relative to the main form, but it should work... So, basically, there's no way to fit it all into the one form? No, I think the only way to get a control to be "graphically independent" of the main form is to put it in an owned form, perhaps with a transparent background. I did a bit of playing around with this when I wanted a couple of forms to have snap-lines to the edges of the screen. I ended up with a transparent form that I place under the windows and draw on with GDI. Not really, no. That runs counter to the fundamental winforms model. You can probably cheat with a lot of manual munging or interop, but that would hardly be worth the cost. Instead, ask yourself why you're doing this. It looks like you're trying to reimplement a combobox for the sole purpose of adding autocomplete. Perhaps you should simply subclass the combobox control to add your autocomplete functionality to the control that does all the hard stuff for you and already exists. In the course of my current job, I've seen at least three different home-grown comboboxes that were all broken in various ways, amounting to a lot of work with no real payoff. My favorite was the combobox whose dropdown listbox stole the owning form's focus. It was really funny watching the broken code cause anything that used it to flicker. Edit: Modifying the combobox to be a search/filter with wildcards is still possible via inheriting from ComboBox, and still easier than rolling your own combobox, but at that point I'd suggest considering a more appropriate UI paradigm. Comboboxes don't filter their drop-down list (unless you use lousy software as inspiration coughSAPcough). Well :) it's all clear, but the problem is not adding proper values into the textbox autocomplete. The problem is making it work with wildcards... if the text starts with "*", the textbox starts seeking with "*" ((( That's why I want to create a separate listbox for it...
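To position the popup, ToolStripDropDown.Show has an overload that takes a control and a point relative to that control, so the listbox can be dropped directly under the textbox. A minimal usage sketch (textBox1 and listBox1 are hypothetical control names, not from the question):
// Host the listbox in the PopupWindow from the answer above and
// show it just below the textbox; the point is relative to textBox1.
var popup = new PopupWindow(listBox1);
popup.Show(textBox1, new System.Drawing.Point(0, textBox1.Height));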
STACK_EXCHANGE
How to mail photos from Picasa 3.9 in Thunderbird? Currently, on Ubuntu 16.04 or 14.04, if we want to mail photos from Picasa 3.9 in Thunderbird 52, a new mail pops up but the photos are missing from the attachments. This is an old bug. There was a work-around with a script, but it no longer works with Picasa 3.9 and Thunderbird 52. It's very annoying, because old people using Picasa and Thunderbird really need to send their photos with Picasa and Thunderbird. Picasa 3.9 is installed with wine and works fine. How can I mail photos with Picasa 3.9 and Thunderbird 52? Send photos by email with Picasa: I assume that the picasa-hook-email.sh script is no longer used by Picasa 3.9 and is never invoked. Instead it most probably relies on the MAPI interface to send e-mails. The wine implementation of this interface, however, does not support attachments. It converts any request to send an e-mail into a mailto:-URL, and this does not support attachments. I'm now just starting to modify the MAPI in wine to use a direct call to Thunderbird using the -compose option. Let's see whether I'm successful - stay tuned! I will inform you about my success (or failure) here. For the wine MAPI source code see here: https://source.winehq.org/source/dlls/winemapi/sendmail.c. Look at line 157ff: attachments are explicitly ignored. Two days later: yes, it works! What I did:
- I patched the sendmail.c source file to directly invoke Thunderbird instead of creating a mailto:-URL
- I used the openSUSE Build Service to branch the official wine package and added the patch there
- I downloaded the created package, extracted winemapi.dll.so and put it in the correct location
But step by step. First have a look at https://build.opensuse.org/package/show/home:letsfindaway:branches:openSUSE:Leap:15.0/wine. This is where the branch is located. Everything is untouched; just the sendmail-thunderbird.patch was added and referenced in the wine.spec build file. You may have a look at the patch and apply it to the original source to see what I have changed. The builds themselves can be found when you click on "standard" below "wine" on the right hand side, or directly here: https://build.opensuse.org/package/binaries/home:letsfindaway:branches:openSUSE:Leap:15.0/wine/standard. Are you using a 64-bit wine or a 32-bit wine running in a 64-bit environment? Depending on that, download one of the following files:
- wine-3.7-lp150.<n>.1.x86_64.rpm for 64-bit wine
- wine-32bit-3.7-lp150.<n>.1.x86_64.rpm for 32-bit wine running in a 64-bit environment
- wine-3.7-lp150.<n>.1.i586.rpm for 32-bit wine running in a 32-bit environment
The number <n> is incremented each time I trigger a rebuild. Currently it should be "10". Then extract the file /usr/lib/wine/winemapi.dll.so from the rpm package file. Under Linux, most graphical archivers should be able to open the file, so it does not matter whether you're using openSUSE as I do. Even if you're using Ubuntu or any other distribution, you should be able to extract that file. It also (nearly) does not matter which wine version you're using; the sendmail.c source file has not been touched since wine 1.6. Now place that file in the corresponding location of your wine installation. Just to be sure, rename the original file first, so that you still have it. If you're using PlayOnLinux, then you might have more than one wine installation, located below ~/.PlayOnLinux/wine/. Be sure to do the replacement in the correct location! The patch will not only affect Picasa, but any program using the MAPI to send e-mails.
And it will of course never become an official patch, as it only works, when Thunderbird is installed as /usr/bin/thunderbird. Summary: This patch enables the "send e-mail" function in Picasa when running under wine and when using Thunderbird as the mail program. It works for a broad range of wine versions starting from 1.6 and almost any 32-bit or 64-bit Linux installation. Make sure that Thunderbird is installed as /usr/bin/thunderbird. Extract the correct version of winemapi.dll.so from one of the archives mentioned above and use it to substitute the official version. Note: Using the links above you hight have to create an account for the openSUSE Build Service first. The following link however takes you directly to the download area, which is accessible without any authentication: https://download.opensuse.org/repositories/home:/letsfindaway:/branches:/openSUSE:/Leap:/15.0/standard/x86_64/
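For the extraction and replacement steps described above, a rough command-line sketch (assuming the 64-bit package with <n> = 10 and a wine installation under /usr/lib/wine; adjust the paths for your distribution or PlayOnLinux prefix):
# Extract only the patched MAPI library from the downloaded rpm
rpm2cpio wine-3.7-lp150.10.1.x86_64.rpm | cpio -idmv ./usr/lib/wine/winemapi.dll.so
# Keep the original around, then drop in the patched build
sudo mv /usr/lib/wine/winemapi.dll.so /usr/lib/wine/winemapi.dll.so.orig
sudo cp ./usr/lib/wine/winemapi.dll.so /usr/lib/wine/winemapi.dll.so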
STACK_EXCHANGE
With Windows 11, Microsoft is pushing users to stick with the Edge browser: it has made the process of changing the default browser in Windows 11 more complicated than it was in Windows 10. Windows 11 keeps opening links in Microsoft Edge, the default browser that comes preinstalled in Windows 11. There are users who don’t use the Edge browser and want to open links in other browsers, for example Chrome, Firefox, and others. In this situation, the first thing you need to do is change the default browser of Windows 11 to Chrome, Firefox, or another preferred browser. Although Windows 11 makes it hard to change your default web browser, we have step-by-step instructions on how to do that. The majority of users report that even after setting a new default browser in Windows 11, various components of Windows 11 still open in the Edge browser: for example, the links that appear in information cards in the Widgets menu, Windows Search links, and more. Apart from that, the company has no plans to offer such an ability in the future. They are even doing their best to block every loophole that third-party tools use to enable this functionality. EdgeDeflector is software that can prevent this from happening: it redirects links that are force-opened in Microsoft Edge to the default browser you set. With the latest updates for Windows 11, it too stopped working, and the developer of the tool has confirmed that no future updates will be released. So the question is: how do you force Windows 11 to open links in your preferred browser? Fortunately, there are other developers who offer nifty utilities that allow users to launch all links in the browser of their choice. This post will cover two tools: ChrEdgeFkOff and MSEdgeRedirect. How to Force Windows 11 to Open Links in the Default Browser Officially it’s not possible, but the third-party utilities below let users launch all links in the browser of their choice. Use the ChrEdgeFkOff script: Run Windows PowerShell as an administrator. For that, press the Windows key and then search for Windows PowerShell in the Start Menu. When the option appears in the results, click on Run as Administrator. If the UAC dialog box appears, click Yes to continue. Now, click on this link; this will take you to the GitHub page. Here, copy the ChrEdgeFkOff command from lines 1-23. After doing so, paste the code you copied into Windows PowerShell. If you are asked to confirm pasting the code, click on Paste Anyway. Wait for the script to execute. A new Windows PowerShell window will instantly appear showing you the status Installed. You’re done! Note – After executing this command, the Edge browser will not open any links for you, even if you open the browser and enter the link manually. Now whenever you search for something in Windows Search, you will be redirected to the search result in the default browser rather than Microsoft Edge. If at any point you want to restore the functionality, launch a Windows PowerShell window again and execute the same command. This time, you will get the status Removed. Download the MSEdgeRedirect app from GitHub using this link and launch the app. To use it, right-click on the app icon in the notification area.
On the menu that opens up, click on the Start With Windows option. This will enable Windows 11 to open search links with the default browser. These are the two ways you can use to open Windows 11 search links in the default browser. If we find other alternatives to EdgeDeflector or the tools listed above, we will add them to the article.
OPCFW_CODE
Error: Secret should be set I ran 'npm install' late last night and now my server won't start with the following console error. It looks like this may be related to a recent mean.io update. Please help! My colleague is also running into the same issue after running 'npm install' this morning. /app/node_modules/mongoose/node_modules/mongodb/lib/mongodb/db.js:299 throw err; ^ Error: secret should be set at module.exports (/app/node_modules/meanio/node_modules/express-jwt/lib/index.js:20:42) You could try to find a previous version of meanio before the expressJWT addition. This is probably a good place to check out: https://github.com/linnovate/meanio/commit/8dd8773c95710a923a9bfa29dc407d13ab8d15e5 Instead of requiring linnovate/mean, see how you can add a reference to an older commit to your package.json - look at this for reference: http://stackoverflow.com/questions/14187956/npm-install-from-git-in-a-specific-version I created a tag (0.4.x) for meanio - but have not tested it - try to see if you can reference linnovate/meanio#0.4.x. An alternative would be to clone it from git in the meantime (until you upgrade to 0.5). Lior I'm running into the same issue... did anyone find the answer? Same issue here. Interestingly, I'm using my own implementation of JSON Web Tokens and have jsonSecret in my config. Probably not related, but I would like to use the expressJWT feature soon. Bump, also having this issue. Hey, just a heads up, I got through the 'Error: secret should be set' issue by adding a 'secret' key to the config/env/.js file: secret: "something secret". I then got another error about a logs folder not existing. So I created that and I'm back in business. Hope that helps someone! I fixed it. First, add secret: "something you want". Second:
//var logsOpt = accessLog(config);
//this.app.use(require('morgan')(logsOpt.format, logsOpt.options));
this.app.use(require('morgan')(config.format || 'dev', config.option || {}));
@liorkesos This was a linnovate/mean issue that's been fixed, if you want to close this. I am facing this issue too. This issue comes up when we don't provide a secret: exports.requireSignin = expressJwt({ secret: process.env.JWT_SECRET }); I triple-checked my .env file and JWT_SECRET was there. Then I figured out that in my main server.js the order of file execution was incorrect. First, you should import require('dotenv').config(); then you write the route to access the file: const authRoutes = require('./routes/auth'); Use version 5.3.1 of express-jwt and thank me later. express-jwt has since updated to 6.0.0, but it's difficult to figure out the update. If you can figure it out, I will thank you later. I have the same issue here... and none of the above solutions worked for me... I would appreciate it if someone could figure this out...
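For the dotenv-ordering fix described in the last comments, here is a minimal sketch of what the top of a server.js entry point could look like (the route path and port are hypothetical, and express-jwt 5.x is assumed since that is the version recommended above):
// Load environment variables BEFORE requiring anything that reads process.env
require('dotenv').config();

const express = require('express');
const expressJwt = require('express-jwt'); // default export in express-jwt 5.x

const app = express();

// JWT_SECRET is defined now because dotenv ran first
app.use(expressJwt({ secret: process.env.JWT_SECRET }));

// Only now require modules that depend on the env vars
const authRoutes = require('./routes/auth'); // hypothetical path from the comment above
app.use('/api', authRoutes);

app.listen(3000);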
GITHUB_ARCHIVE
If you see no change in your desktop after Windows loads, it's possible Adobe Gamma Loader is not loading the file that uses your ICC profiles. To correct this, find Adobe Gamma Loader.exe and then put it (or a shortcut) in C:\Documents and Settings\(username)\Start Menu\Programs\Startup.
CALIBRATE YOUR HARDWARE
Hardware calibration is more expensive, but correspondingly more accurate, than software calibration. If your work is color-critical, there is commercial hardware for monitor calibration to suit every wallet size. PANTONE offers an active calibrator called huey; the entry-level edition is about $80 and the tiny, unobtrusive hardware unit updates your monitor's settings as light changes throughout the day. Also check out www.drycreekphoto.com/Learn/monitor_calibration_tools.htm for more information on different types of hardware and software calibration products.
Get Photoshop to Use Color Settings
Now that you've calibrated your monitor, it's time to tell Photoshop how to use a color space for your imaging work. A fair analogy is that a color profile (which you just created for your system devices) is a set of instructions on how to build a house, whereas a color space determines how much real estate you have upon which to build your house. Choose Edit | Color Settings. By setting up color consistency and warnings, as described in this section using the Color Settings dialog box shown in Figure 4-1, you ensure color consistency and high-fidelity output when editing images. The first thing you might notice is that if you own more than one Adobe product (such as InDesign and Illustrator), you'll see one of two icons at the top left of the dialog box: Synchronized or Not synchronized. If this is your maiden voyage with color management, don't worry if you see the not synchronized icon, telling you that you're not using consistent color management between Adobe applications. If you've used color management in the past with other products, and you see the not synchronized icon, the following section describes how to get everything in sync. Once you've made and saved color settings, you can synchronize your color settings within the Adobe applications by setting and saving the color settings in one application and then choosing to use them in another. Applications made by companies other than Adobe sometimes can read and write color profiles, and many of today's inkjet printers understand ICC profiles. Figure 4-1: Use the Color Settings dialog box to allow color profiles a wide enough color space to properly display your images.
Set Up Working Spaces
Photoshop will operate flawlessly if you tell it to use large color spaces in which to edit your images. When changing colors or brightness in images, you really need a large working space, because from moment to moment your edits are stepping outside of the default color space for an image file. It's a similar theory to mixing a drink in a glass: if you have plenty of room in the glass (color space), you're less likely to spill any liquids outside of the glass while mixing. The metaphor of outside the glass is color gamut, the available digital space for color expression in an image. The following steps show you how to set up a color space for Photoshop to recognize and use:
1. Choose Edit | Color Settings to open the Color Settings dialog box (see Figure 4-1).
2. In the Working Spaces area, choose Adobe RGB (1998) from the RGB drop-down list.
Usually, it's best to work with an image and save an embedded color profile using the broadest possible color space. The sRGB color space is smaller than Adobe RGB and, as a consequence, some colors are clipped out of range if you work in this space, which is good for web posts but not for hi-fi imaging. If you own a wider color space setting than Adobe RGB (such as ProPhoto RGB or Bruce RGB), choose it instead. If you're printing to a home inkjet printer, you do not need to, or want to, print a CMYK version of your RGB image. Most of today's even moderately priced inkjets take RGB information and perform a better conversion than can be achieved through manual conversion. In fact, a CMYK mode image usually prints to inkjet with less color fidelity than an RGB image, even though the ink cartridges are CMY and K.
OPCFW_CODE
using memoize function with underscore.js I am trying to cache the result from an ajax call using the memoize function from Underscore.js. I am not sure of my implementation. Also, how do I retrieve the cached result data using the key? Below is my implementation. JavaScript code:
var cdata = $http
  .get(HOST_URL + "/v1/report/states")
  .success(function(data) {
    // put the result in the AngularJS scope object.
    $scope.states = data;
  });
// store the result in the cache.
var cachedResult = _.memoize(function() {
  return cdata;
}, "states");
Is my usage of memoize to store the result of the ajax call correct? Also, once it is put in the cache, how do I retrieve it based on the key, i.e. 'states'? Let us understand how _.memoize works: it takes a function which needs to be memoized as its first argument and caches the result the function returns for a given parameter. The next time the memoized function is invoked with the same argument, it will use the cached result and the execution time of the function can be avoided. So it is very useful for reducing computation time. As mentioned, the memoized fibonacci function (shown further down) works perfectly fine because the argument is a primitive type. The problem occurs when you have to memoize a function which accepts an object. To solve this, _.memoize accepts an optional argument hashFunction which will be used to hash the input. This way you can uniquely identify your objects with your own hash functions. The default hash function in _.memoize returns the first argument as-is, and in JavaScript an object used as a cache key becomes the string "[object Object]". So, for example:
var fn = function (obj) { /* some computation here... */ };
var memoizedFn = _.memoize(fn);
memoizedFn({"id": "1"}); // we get a result, and the result is cached now
memoizedFn({"id": "2"}); // we get the cached result, which is wrong, because the default hash function in _.memoize is function(x) { return x; }
The problem can be avoided by passing a hash function: _.memoize(fn, function(input){ return JSON.stringify(input); }); This was a real help for me when I was using _.memoize for a function that was working on array arguments. Hope this helps many people in their work. _.memoize takes a function:
var fibonacci = _.memoize(function(n) {
  return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
});
You should understand that this is just an extra wrapper function that makes the function you pass as an argument smarter (it adds an extra mapping object to it). In the example above, the function that computes Fibonacci numbers is wrapped with _.memoize. So on every function call (fibonacci(5) or fibonacci(55555)) the passed argument is mapped to the return value, so if you need to call fibonacci(55555) one more time it doesn't need to compute it again. It just fetches the value from the mapping object that _.memoize maintains internally. WARNING: the example has an infinite recursion when calling fibonacci(x)! If you are using AngularJS's $http, you probably just want to pass {cache: true} as a second parameter to the get method. To store values using key-value pairs, you may want to use $cacheFactory, as described in other answers like here. Basically:
var cache = $cacheFactory('cacheId');
cache.put('states', 'value');
cache.get('states');
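Putting the last suggestion into context, a small sketch of how the states request from the question might be cached and read back by key with $cacheFactory (HOST_URL is taken from the question; the cache and function names are only illustrative, and module/injection boilerplate is omitted):
// Create a named cache once, then store and retrieve the result by key.
var reportCache = $cacheFactory('reportCache');

function loadStates() {
  var cached = reportCache.get('states');
  if (cached) {
    $scope.states = cached;            // later calls hit the cache by key
    return;
  }
  $http.get(HOST_URL + "/v1/report/states").success(function (data) {
    reportCache.put('states', data);   // first call stores the data under 'states'
    $scope.states = data;
  });
}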
STACK_EXCHANGE
I don’t like writing controversial editorials. Controversy is an effective means to get a lot of accesses: most people seem to enjoy reading controversial articles, maybe because they like torturing themselves. (And yes, I used to read a lot of Maureen O’Gara’s articles myself!). Besides, controversy is a double edged sword: there’s very little chance that I would ever go back to those sites! And yet here I am. First of all: Red Hat was my first love, as far as GNU/Linux distributions are concerned. I was always frustrated by the many tgz files in slackware, and was ecstatic when I installed Red Hat 3.0.3. At that time, Red Hat was a tiny startup with a modem connection to the internet. It was based on RPM, a tool that made me finally feel in control of my system. Now, the key sentence: I became a user of Red Hat Linux for my desktop machine (and yes, it was a bit of a challenge!), and a couple of months later, when I had to choose what distribution I should use for my server, I chose the one I was most accustomed to: Red Hat Linux. A number of things happened in the following years (1997 to 2005). Here are a few of them, in chronological order: the packaged version of Red Hat Linux flopped (why would anybody buy it, if you can download it? Plus, yes, it was overpriced...). Red Hat went public, and started having a number of investors that wanted to see good, realistic plans to make money—which meant focusing more on the corporate market. Then, the split: Fedora came along, but it was underfunded and the “community involvement” was patchy and disorganised. Eventually, Red Hat effectively abandoned its desktop audience, to focus on the more lucrative corporate market. Then, a very smart man called Mark Shuttleworth made 500 million dollars in the .com boom, learned Russian from scratch, went to space, came back in one piece, funded several charities focussing on South Africa, and... oh yes, he created Ubuntu Linux. Mark accomplished three things with his move. First of all, he created tons and tons of work for himself. This isn’t really crucial to my point, but I think it’s important to mention it. He also gathered a community of hackers to create what is, in my humble opinion, the first desktop GNU/Linux done right. And I mean, really right. The third thing he did, was divert tons, and tons, and tons of GNU/Linux users away from Red Hat Linux, and towards Ubuntu Linux. A lot of those people—and this is the crucial piece of information—were system administrators, who in the last 12 months got more and more used to using Ubuntu Linux rather than Red Hat. And—guess what?—now they have Ubuntu Server, which—again, guess what?—is a GNU/Linux server system done right. I am convinced that Red Hat is now starting to realise that losing their desktop users didn’t just mean “losing the suckers who didn’t pay a cent anyway” (this is not a quote, by the way), because a lot of those “suckers” were system administrators, who will soon have to decide between Red Hat Linux and Ubuntu Server. And when you use Ubuntu Server as your home system, the choice really can go either way. By abandoning their desktop users, Red Hat has effectively shot itself in the foot. Funnily enough, they kept on chasing the mirage of thousands of soul-less corporate customers with the real money. However, the bleeding didn’t stop altogether, and behind those faceless corporations there are thousands of system administrators who now use Ubuntu Linux rather than Red Hat Linux. 
And they will want to continue to do so, as much as possible. Good luck, Red Hat. Thank you Mark for Ubuntu.
OPCFW_CODE
Does this work with static blocks? I have a static block from AT&T and was curious if this will work with static blocks at all. Yes it should. EAP is independent of DHCP. Also, even with a static block, doesn’t AT&T still issue it via DHCP? Surely they don’t expect you to manually configure your RG, do they? So you plug the static config into the RG. It itself doesn't get a static IP (as far as I know, maybe it's the static block's gateway?) and you statics are handed out via DHCP to your clients. DHCP is required for your clients though. You can't statically assign one of these addresses. Without a lease the RG won't pass traffic for it. I'm not entirely sure what exactly is going on there. Yes, you can get it to work. The following sets will use a VLAN to keep the traffic separated. Your network and devices must support this for it to work. You can change this around and not use VLANS but you are taking a security risk unless you configure firewall policies to do proper forwarding. I am using a firewall after the edgerouter, and would recommend doing the same. you may need to look at a subnet calculator to get the proper addresses if you do not know how to calculate. http://jodies.de/ipcalc Create an interface on you LAN with vlan 100 and set static IP address for the interface to the hostmax IP address in your block. Under Firewall -> NAT, you will need two rules. One for source and one for destination. Create a new source rule using the following settings. Outbound interface will be your WAN interface vlan Src Address is your network address/subnet Create a destination NAT rule. Inbound interface will be the WAN interface vlan again. translations address will be the hostmax addres. dest address will be your network address/subnet You can now set a device to the proper VLAN and static assign an ip within your static block. So your static IP has to be behind your street IP. This is opposite of what we normally think of static IP address but its how AT&T is configuring residential service. Routing: A&T Network - Router "Street IP" - Static Default Gateway (hostmax) - Static address. Just a note, you could set each static address as a vlan interface and do port forwarding to a local machine. Outgoing connections will still show as your public IP unless you configure the NAT rules. I have a VLAN capable switch and firewalls that would sit behind the ERL (They currently get statics from the RG) so I should be all set. This is perfect, thanks so much!! Thank you for the assist @thford89. All credit goes to mb300sd, his method doesn't involve using NAT twice. You add the interface (VLAN 300 in my case), then create a basic firewall rule to allow from WAN out to in VLAN300. He tested on pfSense, if anyone on here with Ubiquiti wants to test please do. I'm using Mikrotik, the rule looks like this: I don't mean to be such a bother, but I still don't understand how to apply this to my Edgerouter 4. I followed the instructions to a Tee and yet I'm still having a hard time routing one of my static ips to one of my devices using a VLAN capable switch and assigning a VLAN to it. (Using a TL-SG108E from TP-Link) I don't mean to be such a bother, but I still don't understand how to apply this to my Edgerouter 4. I followed the instructions to a Tee and yet I'm still having a hard time routing one of my static ips to one of my devices using a VLAN capable switch and assigning a VLAN to it. (Using a TL-SG108E from TP-Link) I can help you on discord screenshare or anydesk. 
anydesk 234 466 603 We got it working on ER4. Had to add /27 with not default gateway but ip before that example: if .75-100 was usable with 101 being gateway, we made VLAN100 71.299.200.100/27 (example IP) then made two exclude NAT rules for 71.299.200.100/27 and put to top of exclude rule lists for SRC and DST. Ramsaso figured out most of it all on his own, but I'm glad I was able to help. No double NATing needed for static IPv4! I read through this issue, however I'm still having some trouble getting my block of static ip's assigned. For reference I'm using a ER4 with eap_proxy in the configuration described in this projects README. Any thoughts and/or help is greatly appreciated! @nikolaishields -- I read through this issue, however I'm still having some trouble getting my block of static ip's assigned. For reference I'm using a ER4 with eap_proxy in the configuration described in this projects README. Any thoughts and/or help is greatly appreciated! I just configured an ER-4 today using the documentation in the README. I've got a /29 of static blocks, and I only had to make a minor change to the configuration to get it working: First, I ensure the RG gets an IP address to make it happy: set interfaces ethernet eth2 description 'AT&T router' set interfaces ethernet eth2 address <IP_ADDRESS>/29 set service dhcp-server disabled false set service dhcp-server hostfile-update disable set service dhcp-server shared-network-name rg_dhcp authoritative enable set service dhcp-server shared-network-name rg_dhcp subnet <IP_ADDRESS>/29 lease 1209600 set service dhcp-server shared-network-name rg_dhcp subnet <IP_ADDRESS>/29 default-router <IP_ADDRESS> set service dhcp-server shared-network-name rg_dhcp subnet <IP_ADDRESS>/29 dns-server <IP_ADDRESS> set service dhcp-server shared-network-name rg_dhcp subnet <IP_ADDRESS>/29 start <IP_ADDRESS> stop <IP_ADDRESS> set service dhcp-server static-arp disable Then you need to change the firewall so that only this network gets masqueraded: set service nat rule 5010 description 'masquerade for WAN' set service nat rule 5010 outbound-interface eth0.0 set service nat rule 5010 protocol all set service nat rule 5010 type masquerade set service nat rule 5010 source address <IP_ADDRESS>/29 Finally, all you need to do then is set eth2 to your /29 "gateway" address: set interfaces ethernet eth1 address 99.69.300.400/29 set interfaces ethernet eth1 description LAN I will note that, yes, you need to know what your public /29 network is. It should be visible in the RG configuration screens, somewhere. There is no NAT'ing at all of the public IPs. AT&T will just route the /29 to your DHCP address. There WOULD be double-NAT if someone used the private network from the RG. There may be a security issue with someone on your RG being able to access your /29, because the ER-4 (in my case) would happily just route between the /29 and the 192.168.3.x address of the RG. You could use VLANs (as suggested above) or a source-route firewall rule to block that. Personally I would go with the source-route rules, only because I don't want to have to reconfigure all my /29 devices to be on a VLAN -- it would make it harder to quickly revert back to using the AT&T RG if things break. PS: Sorry to touching such an old issue. @derekatkins Really appreciate you posting your configuration here. Wondered if I could ask you a couple questions? I've had eap_proxy running for a couple of years now on an ER6P and I just ordered a /29 from ATT today. 
Why is it necessary to give RG an IP address now? Is it just a coincidence that you created a /29 network for it now or is it related to the static block? Aren't we still only sending EAP packets there? I may have a fundamental misunderstanding of what's happening here too, but I'm confused why you're only Source NAT'ng that new network now. Then I think you have a typo when setting your public block address. This part also confuses me because you're putting the public block on your LAN interface, when I would have expected it on you WAN VIF interface eth0.0 Again, I may be totally misunderstanding how this works. I'd really appreciate any clarification you could give. @brettzink -- Why is it necessary to give RG an IP address now? Is it just a coincidence that you created a /29 network for it now or is it related to the static block? Aren't we still only sending EAP packets there? I give the RG an IP address so it actually shows a green service light. Is it required? Probably not. It's just a coincidence that I gave it a /29. I could have given it a /24, or even a /30. I chose a /29 because there is only one device on that network (the RG), but I did have a SECOND device on that network at one point (my laptop), so just to ensure the RG ALWAYS got an IP I made it a /29, but really there's no reason it can't be a /24. I updated my text above to make that more clear. I may have a fundamental misunderstanding of what's happening here too, but I'm confused why you're only Source NAT'ng that new network now. Because I don't want to NAT my public /29. Basically, if a packet originates on the box, it uses the (DHCP-provided) public IP If a packet comes from my LAN, it's using a public IP from my /29, so I just need to send that out. If a packet comes from the RG (on the private /29, or /24, or whatever), THEN I need to NAT it. Then I think you have a typo when setting your public block address. Yep, that's a typo. Fixed. This part also confuses me because you're putting the public block on your LAN interface, when I would have expected it on you WAN VIF interface eth0.0 Why would I do that? Then my LAN devices wouldn't have access to the Public IPs. The whole point here is that this box is acting as a router for my public /29 -- so my LAN devices ARE the public /29 devices. AT&T knows to route my public /29 to the public IP given by DHCP. Then I use that /29 internally, the same way you'd use it if you were sitting behind the RG. Again, I may be totally misunderstanding how this works. I'd really appreciate any clarification you could give. Hopefully what I said above helps clarify. @derekatkins Thanks for explaining. That makes perfect sense. I wasn't considering the public addresses getting applied to actual devices, I was only thinking about the opportunity to dNAT the addresses on the router. I really appreciate you taking the time to work through that with me.
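On the security note derekatkins raises, a rough EdgeOS sketch of a source-route style rule that keeps the RG's private network away from the public /29 (the firewall name, subnets, and the eth2 interface are placeholders to adapt to your own layout; this is not from the thread):
set firewall name RG_TO_LAN default-action accept
set firewall name RG_TO_LAN rule 10 action drop
set firewall name RG_TO_LAN rule 10 description 'Block RG subnet from reaching the public /29'
set firewall name RG_TO_LAN rule 10 source address <RG_PRIVATE_SUBNET>
set firewall name RG_TO_LAN rule 10 destination address <PUBLIC_STATIC_BLOCK>
set interfaces ethernet eth2 firewall in name RG_TO_LAN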
GITHUB_ARCHIVE
The Cointelegraph YouTube channel had a few of its wilder moments in 2020: from getting hit with the YouTube ban hammer to witnessing firsthand the antics of crypto’s most eccentric personalities. Listed here are a few of our favourite blooper moments from this year — we hope you enjoy them more than we did! Bitcoin Halving Party livestream YouTube strike 2020 has been a year of firsts for many: the first year of the new decade, an unprecedented global pandemic, and Cointelegraph’s first ever YouTube strike. And, honestly, we’re still fairly puzzled by it. The strike came in the final minutes of a seven-hour virtual livestream event hosted by Cointelegraph on May 11th to celebrate Bitcoin’s third halving. The stream featured industry experts, high-profile investors, and well-known personalities from all corners of the crypto sphere. Some notable moments: Everything was proceeding as smoothly as could be expected until the final panel of the day: the Crypto Influencers. This panel featured a number of prominent crypto personalities including Altcoin Daily, Bad Crypto Podcast, Altcoin Buzz, Naomi Brockwell, Bitboy Crypto, Layah Heilpurn, and notably Chico Crypto. After going live, the panel immediately descended into chaos. Halfway through the host’s introduction, Crypto Chico started yelling at the other guests, calling them shills and dropping a number of f-bombs. After finishing his tirade, he disappeared from the panel and was never heard from again — at least not on any of our channels. The panel resumed as best it could, but 20 minutes later YouTube abruptly and permanently removed the livestream and gave the Cointelegraph YouTube channel its first ever strike. According to the warning, the livestream was removed for violating YouTube’s community guidelines. The specific violation was for producing ‘harmful or dangerous content’. Cointelegraph’s appeal of the strike was rejected and no further explanation was received as to the exact reason behind the strike. Given the antics of Crypto Chico and their proximity to the issuance of the strike, it’s possible his behavior may have played a big role in YouTube’s decision. But it’s impossible to know for sure. Cringe and Craig Wright No year is complete without a substantial dose of cringeworthy content… and sadly Cointelegraph is no exception. The Cointelegraph YouTube channel had a strong start in this department with the publication of a video interview with Satoshi Nakamoto claimant Craig Wright. The video garnered over a thousand comments, with many remarking on the awkward interaction between the host (me) and Craig Wright. One of the most appreciated comments on the video came from a small-time YouTuber named River: This interview was awkward. There is no contesting that. As the host, I felt it quite strongly during the recording. However, the reason for the awkwardness was not what most people seemed to think. I don’t despise Craig Wright or feel any animosity towards him despite his bold claims. In fact, the interview was actually conducted using a well-known technique called mirroring. In order to pry more information and details out of him, I often paused after his answers or repeated back to him his most recent talking point.
The end result was an interview that came across as cringey and possibly even hostile, but was quite effective at revealing Wright's character traits and exposing several of his bolder claims.

John McAfee poses with his AK-47 for our former head of news

John McAfee is notorious around the world for his eccentric hobbies and larger-than-life character. In the crypto sphere, the American computer scientist is best known for his multiple presidential candidacies, outrageous price predictions, and a bet (now nicknamed 'The D*ckening') to eat his own genitalia on live television if Bitcoin fails to reach $500K by the end of 2020. He has even managed to stay active on Twitter despite currently residing in a Spanish jail, where he faces extradition to the US over tax evasion charges. Cointelegraph is therefore no stranger to covering McAfee's wild side. That's why nobody on the video team was surprised when McAfee (upon request) brandished his AK-47 during a virtual video interview with Cointelegraph's former head of news, Dylan Love. In the same interview, he claimed he's '99% certain' of Satoshi Nakamoto's identity and also described how he traumatized a visiting journalist by faking a game of Russian roulette. McAfee may be getting older, but his stories never do.

Thanks, to you!

To round off the year, the Cointelegraph video team would like to extend a huge thank you to you, the viewers, for supporting the channel and watching our content. In 2020, the Cointelegraph YouTube channel…
- Published 140 videos
- Had nearly 2 million unique views
- Gained 30,000 new subscribers
- Had almost 250,000 hours of watch time
From all of us on the Cointelegraph video team, we can't wait to see what next year brings and we hope you'll stick around to join us. As always, don't forget to like, subscribe, and hodl!
Advance apologies if this question has been addressed before. How is Yjs integrated with editors (say SlateJS) to enable collaborative editing? I assume the desired approach is to make Yjs the "system of record" and have editors act as UI clients that accept input from local users. Specifically, if there are two nodes, the editor of the first node will be connected to one instance of the Yjs doc, and the editor of the second node to the second instance of the Yjs doc. Local operations will be accepted by each editor and conveyed to its connected Yjs doc. Remote operations received by the Yjs doc are conveyed back to the editor.

Assuming the above setup is right, how are an editor and its corresponding Yjs doc kept in sync at all times? It is conceivable for a (human) user to be inserting characters into the editor at a certain position in the sequence while its Yjs doc is receiving concurrent operations from a remote node that affect the position where the new characters are being inserted. In other words, the editor also becomes a source of concurrent operations. I can imagine a few solutions:

1. Pause merges of remote operations at the Yjs doc when local operations are flowing in. This prioritizes local operations, but other than occasional jitteriness, positions implied in local operations are always meaningful because all remote operations seen by the Yjs doc are also seen by the editor (and therefore the human user). I believe this approach is the most practical; one could pause merges by "going offline" or find a way to somehow tell Yjs not to accept remote operations temporarily.

2. Replace the editor's internal data model with the Yjs doc's data model. This way, both user inputs and remote operations change the Yjs doc directly, and Yjs is already capable of handling concurrent operations. However, it is usually impractical to replace an editor's data model unless we have access to, and are willing to change, the editor's source code.

3. Allow users to enter characters via the editor, but intercept those events and send them directly to the Yjs doc instead of to the editor's internal data model. The output from the Yjs doc is what is fed to the editor model for purposes of displaying the text back to the user. This approach can be implemented in two ways, and both seem far-fetched. One implementation is to update the whole text in the editor by reading back the full string from the Yjs doc for each character change. The other is to calculate the positions where text was affected by the recent local/remote operations at the Yjs doc, and transform those Yjs operations into editor-specific operations to apply to the editor. The second implementation sounds a lot like a mini-OT solution, which is not preferable.

How is the solution implemented in practice? I appreciate your thoughts.
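Not an answer from the Yjs maintainers, just a rough sketch of the pattern editor bindings generally follow, closest to options 2/3 above: the shared document is the source of truth, local keystrokes are routed into it, and its change events are replayed onto the editor view as one ordered stream. Everything here is hypothetical (plain Python, invented class and method names, deliberately ignoring the real Yjs/SlateJS APIs).

    from typing import Callable, List, Tuple

    Delta = List[Tuple[str, int, str]]   # e.g. [("insert", index, text)]

    class SharedText:
        """Toy stand-in for a CRDT text type (hypothetical, not the Yjs API)."""
        def __init__(self) -> None:
            self.text = ""
            self._observers: List[Callable[[Delta], None]] = []

        def observe(self, callback: Callable[[Delta], None]) -> None:
            self._observers.append(callback)

        def insert(self, index: int, chunk: str) -> None:
            # A real CRDT would produce ops that merge with remote ops;
            # here we just mutate the string and notify observers.
            self.text = self.text[:index] + chunk + self.text[index:]
            for callback in self._observers:
                callback([("insert", index, chunk)])

    class EditorView:
        """Toy editor model: it only renders text, it is not the source of truth."""
        def __init__(self) -> None:
            self.rendered = ""

        def apply_insert(self, index: int, chunk: str) -> None:
            self.rendered = self.rendered[:index] + chunk + self.rendered[index:]

    class EditorBinding:
        """Routes local keystrokes into the shared doc and replays all changes
        (local and remote alike) back onto the editor view."""
        def __init__(self, shared: SharedText, editor: EditorView) -> None:
            self.shared = shared
            self.editor = editor
            shared.observe(self._on_shared_change)

        def on_local_keystroke(self, index: int, chunk: str) -> None:
            # Intercepted before it touches the editor's own model (option 3 above).
            self.shared.insert(index, chunk)

        def _on_shared_change(self, delta: Delta) -> None:
            # Real bindings also tag changes with an origin so they can skip
            # echoing their own editor-initiated changes; omitted here.
            for op, index, chunk in delta:
                if op == "insert":
                    self.editor.apply_insert(index, chunk)

    doc, view = SharedText(), EditorView()
    binding = EditorBinding(doc, view)
    binding.on_local_keystroke(0, "hello")   # local input
    doc.insert(5, ", world")                 # pretend this arrived from a remote peer
    print(view.rendered)                     # "hello, world"

The key point the sketch tries to convey is that, within one client, everything happens synchronously on a single stream of changes, so the editor view and the shared doc cannot diverge the way two remote peers can.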
We are an experienced website design and development firm in Bangalore, specialised in developing enterprise portfolio websites. We provide services like web development, online marketing, premium SEO and a number of other services to small and medium enterprises to help them grow their business in the market: website design business in Bangalore, web development business in Bangalore, web developer in Bangalore, web developers in Bangalore, website designer in Bangalore, website developer in Bangalore, best web developer in Bangalore, best website designers in Bangalore, best website designer in India, best web design company in Bangalore.

Info: This displays your web page's IP address and its locations (the physical location(s) where it is based). You may see a more detailed view in the map. HTML Facts /

For web development projects of any size, we can provide everything you require for online success. With years of commercial experience we are able to quickly understand the needs of your organisation. *The quantity is just for bidding; the exact amount will be confirmed after proper project requirements and client conversation.

SJ Online Solutions is an affordable website design, internet marketing and graphic design company in Delhi. We offer the best online and offline marketing services. We truly are a one-stop shop for your IT needs. We are dedicated to grow and excel in up-to-date technologies for the benefit of our client base.

Dedicated website brokers specializing in selling online website businesses & domain names. Let Website Properties help you sell your website business.

35. FlightAware – Get all the information you may ever want regarding your next flight. I use it to locate the incoming flight and better estimate my delays.

16. Trello – A simple project management system. Manage yourself or a team with an easy-to-use board of cards.

Hawk Website Design is a professional website company specializing in creating and marketing websites for a wide array of business professionals. Our list of satisfied customers includes all business types and business owners.

Information: This shows the information regarding the date that you obtained your domain name and its expiry date. / Archive.org Information

Producing websites is a passion in which I have broad experience. It will be a privilege to take care of this project, and by liaising closely with the requester I will be able to provide you with exactly what you need. Also, I will continue to tweak and evaluate until you are satisfied. A sample of my recent work can be seen at . I look forward to working with you to build something special to help take your organization to the next level.

Oven Creative is a branding, print & web design studio with enthusiasm. We add value to our customers' brands through clean, intelligent & memorable design. Free quotations
I warped off the site to get a probing ship ready to resolve the new wormhole. Jayne brought in an Osprey to boost up everyone's shields for storage, and then fleetwarped everyone back home. Oops. Um, did anyone happen to bookmark the site? No. Well, we didn't want that ISK anyway. Since the last time when I really wanted to get to a site which I had no way to get to, I have been stashing away personal bookmarks that are relatively distant from their nearby celestial and not in the direction of any other celestial. The idea here is to get a set of points that would allow me to construct all game-accessible locations in my system. I don't have total coverage even yet, but my coverage is very good. So, I figured that now, with 100m ISK or so of loot and salvage on the line, I might try it. The idea here is to get a set of bookmarks that, when viewed as vertices, define a convex polyhedron with the desired point on the interior. You can then warp between any two vertices on an edge, making a new bookmark along that edge. Consider the two new polyhedra which include the new vertex while removing one of the two vertices that it is on the line between. The two combined contain the same volume as the original, and so one of them must necessarily include the desired grid. So, determine which of the two it is, discard the superfluous point, and iterate. Each time you do this, you cut down on the search volume. If you do this long enough, you'll get on grid. |2D Analog to Search Geometry| Long story short, it's slow and very tedious. I started out with my surrounding points on average maybe 5 AU from the site. I got it down to about 1/10 AU or so, but that took about an hour. So 50x reduction in search space per hour. At that rate, to get the error down to grid size -- call it 200km -- it would take roughly another three hours. And since wrecks only last two hours, it was pretty clear that the whole project was going to fail. I gave up. So, right now the process of constructing an arbitrary grid seems pretty hard. I cannot do it in the timeframe of a wreck. On the other hand, it does seem like I am not that far off. If I could double the rate at which I refine the volume, I'd be fast enough. Having written this up, I've been forced to think about the mathematics of what I was trying to do. I've already gotten a better idea of what I should have been doing. I was using perhaps 6 points; I now think a tetrahedron is superior. And I was taking small slices off each iteration, whereas I now think that shooting for the midpoint would be faster even though it would often make it impossible to simply eyeball which polyhedron contains the target. So, until next time a juicy site is lost in plain sight... I am going to keep adding to my corpus of points.
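To make the geometry concrete, here is a toy sketch of the refinement loop described above, in plain Python with made-up coordinates and a hypothetical 200 km "grid" threshold: keep a tetrahedron of bookmarks around the target, bookmark the midpoint of its longest edge (one warp per step), and keep whichever of the two resulting tetrahedra still contains the target.

    import itertools
    import math

    def signed_volume(a, b, c, d):
        """Six times the signed volume of tetrahedron (a, b, c, d)."""
        m = [[b[i] - a[i] for i in range(3)],
             [c[i] - a[i] for i in range(3)],
             [d[i] - a[i] for i in range(3)]]
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    def contains(tet, p, eps=1e-9):
        """True if point p is inside (or on the boundary of) tetrahedron tet."""
        a, b, c, d = tet
        total = signed_volume(a, b, c, d)
        sign = math.copysign(1.0, total)
        for i in range(4):
            sub = list(tet)
            sub[i] = p
            if sign * signed_volume(*sub) < -eps * abs(total):
                return False
        return True

    def refine(tet, target, grid_km=200.0):
        """Shrink a tetrahedron of bookmarks around `target` by repeatedly
        bookmarking the midpoint of the longest edge and keeping whichever
        half still contains the target."""
        tet = list(tet)
        assert contains(tet, target), "target must start inside the bookmark tetrahedron"
        warps = 0
        while max(math.dist(p, q) for p, q in itertools.combinations(tet, 2)) > grid_km:
            (i, p), (j, q) = max(itertools.combinations(enumerate(tet), 2),
                                 key=lambda edge: math.dist(edge[0][1], edge[1][1]))
            mid = tuple((p[k] + q[k]) / 2.0 for k in range(3))
            for drop in (i, j):                      # the two candidate halves
                candidate = list(tet)
                candidate[drop] = mid
                if contains(candidate, target):
                    tet = candidate
                    break
            warps += 1
        return tet, warps

    # Roughly 5 AU of separation (coordinates in km), with a made-up target site.
    corners = [(0.0, 0.0, 0.0), (7e8, 0.0, 0.0), (0.0, 7e8, 0.0), (0.0, 0.0, 7e8)]
    site = (1.2e8, 2.5e8, 0.9e8)
    box, warps = refine(corners, site)
    print(warps, "midpoint bookmarks to get within one grid of the site")

Replacing one endpoint of an edge with its midpoint halves the enclosed volume on every warp, which is the main advantage over shaving thin slices: going from AU-scale separations down to grid size takes on the order of sixty to seventy midpoint bookmarks in this toy model.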
STL Vectors, pointers and classes

Let's say I have two classes:

    class Class1 {
    public:
        std::vector<CustomClass3*> mVec;
    public:
        Class1();
        ~Class1() {
            // iterate over all the members of the vector and delete the objects
        }
    };

    class InitializerClass2 {
    private:
        Class1* mPtrToClass1;
    public:
        InitializerClass2();
        void Initialize() {
            mPtrToClass1->mVec.push_back(new CustomClass3(bla bla parameters));
        }
    };

Will this work? Or might the memory allocated in the InitializerClass2::Initialize() method get corrupted after the method terminates? Thanks!

In short, this will work fine. The memory being allocated in Initialize is on the heap. This means that changes on the stack do not affect the contents of this memory.

So, this:

    void Initialize() {
        mPtrToClass1->mVec.push_back(new CustomClass3(bla bla parameters));
    }

is different from this:

    void Initialize() {
        CustomClass3* tempPtr = new CustomClass3(bla bla);
        mPtrToClass1->mVec.push_back(tempPtr);
    }

Because in the second case, I think, "tempPtr" gets deleted after the method's termination and the memory might get overwritten. Right?

@anubis9: wrong. In C++ there is no automated lifetime management of raw pointers. In neither of the cases will the object be deleted when the method exits.

One issue I see with Class1 is that it is not copy safe, yet the copy and assignment constructors have not been suppressed. This can cause a problem because the destructor of Class1 is noted as freeing the memory for all items in mVec. Using the implicit operator, this means you'd end up with two instances of Class1 pointing to the same CustomClass3 instances, and the second destructor would be double-deleting memory. For example:

    Class1 c1;
    c1.mVec.push_back(new CustomClass3(...));
    Class1 c2 = c1;

In this case the second destructor to run (c1) will be freeing an already-deleted CustomClass3 instance. You should disable copy construction and assignment for Class1 to prevent this:

    class Class1 {
        ...
    private:
        Class1(const Class1&);
        Class1& operator=(const Class1&);
    };

It should work (provided mPtrToClass1 is a valid pointer, of course). May I suggest that in your InitializerClass2 you change the constructor to the following:

    InitializerClass2() : mPtrToClass1(NULL) {}
    ~InitializerClass2() {
        if (mPtrToClass1 != NULL)
            delete mPtrToClass1;
    }
    void Initialize() {
        if (mPtrToClass1 == NULL) {
            mPtrToClass1 = new Class1();
        }
        mPtrToClass1->mVec.push_back(new CustomClass3(bla bla parameters));
    }

if you're not going to use RAII, so that you don't get issues in the destructor. As to your question, see where I added in the new operator: you're not initializing your variable.

Your suggestions are valid. However, the mPtrToClass1 initialization should probably happen in the constructor; the poster might be assuming that it does. Creating a new instance of Class1 in Initialize is a little strange, and calling Initialize repeatedly will cause a memory leak. The NULL check on mPtrToClass1 is unnecessary; delete is specified to handle that.

Actually yes, the constructors are all OK, I just did not write them here, because I'm interested only in the Initialize() method where I allocate the memory and push it into the vector of mPtrToClass1.

@Steve Fallows - depends on which compiler you're using.

You mean a standards-compliant one or not? :)
from __future__ import annotations import asyncio from functools import cached_property import typing as t __VERSION__ = "1.0.0" class CompoundException(Exception): """ Is used to aggregate several exceptions into a single exception, with a combined message. It contains a reference to the constituent exceptions. """ def __init__(self, exceptions: t.List[Exception]): self.exceptions = exceptions def __str__(self): return ( f"CompoundException, {len(self.exceptions)} errors [" + "; ".join( [ f"{i.__class__.__name__}: {i.__str__()}" for i in self.exceptions ] ) + "]" ) @cached_property def exception_types(self) -> t.List[t.Type[Exception]]: """ Returns the constituent exception types. Useful for checks like this: if TransactionError in compound_exception.exception_types: some_transaction_cleanup() """ return [i.__class__ for i in self.exceptions] class GatheredResults: # __dict__ is required for cached_property __slots__ = ("__results", "__dict__") def __init__(self, results: t.List[t.Any]): self.__results = results ########################################################################### @property def results(self): return self.__results @property def all(self) -> t.List[t.Any]: """ Just a proxy. """ return self.__results ########################################################################### @cached_property def exceptions(self) -> t.List[t.Type[Exception]]: """ Returns all exception instances which were returned by asyncio.gather. """ return [i for i in self.results if isinstance(i, Exception)] def exceptions_of_type( self, exception_type: t.Type[Exception] ) -> t.List[t.Type[Exception]]: """ Returns any exceptions of the given type. """ return [i for i in self.exceptions if isinstance(i, exception_type)] @cached_property def exception_types(self) -> t.List[t.Type[Exception]]: """ Returns the exception types which appeared in the response. """ return [i.__class__ for i in self.exceptions] @cached_property def exception_count(self) -> int: return len(self.exceptions) ########################################################################### @cached_property def successes(self) -> t.List[t.Any]: """ Returns all values in the response which aren't exceptions. """ return [i for i in self.results if not isinstance(i, Exception)] @cached_property def success_count(self) -> int: return len(self.successes) ########################################################################### def compound_exception(self) -> t.Optional[CompoundException]: """ Create a single exception which combines all of the exceptions. A function instead of a property to leave room for some extra args in the future. raise gathered_response.compound_exception() """ if not self.exceptions: return False return CompoundException(self.exceptions) async def gather(*coroutines: t.Sequence[t.Coroutine]) -> GatheredResults: """ A wrapper on top of asyncio.gather which makes handling the results easier. """ results = await asyncio.gather(*coroutines, return_exceptions=True) return GatheredResults(results)
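For context, a minimal usage sketch of the wrapper above. The `fetch` coroutine and its failure condition are invented for illustration, and this assumes the module above is in the same file or on the import path.

    import asyncio

    async def fetch(i: int) -> int:
        """Hypothetical workload: fails for one input to exercise the error handling."""
        if i == 3:
            raise ValueError(f"bad value: {i}")
        await asyncio.sleep(0)
        return i * 10

    async def main() -> None:
        results = await gather(*(fetch(i) for i in range(5)))
        print(results.success_count, results.successes)   # 4 [0, 10, 20, 40]
        print(results.exception_types)                     # [<class 'ValueError'>]
        error = results.compound_exception()
        if error:
            print(error)   # or `raise error` to surface all failures at once

    asyncio.run(main())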
Aug 25 2020 12:22 PM
I have a question on conditional formatting based on another cell's color. Say, for example, cells A2 to A5 are grouped with A1 being the parent. A2 to A5 are either green or red, and if any of those 4 cells are red then the parent, A1, needs to be red, but if A2 to A5 are all green then A1 will be green. I have not found a way without macros to automate this conditional formatting of the parent cell based on the color of the children cells. Is this even possible, with or without a macro? Thank you everyone in advance for your help with this!

Aug 25 2020 01:08 PM
Are A2 to A5 colored by the user? If so, you'd need VBA. Excel does not provide conditional formatting rules based on the color of cells. If, on the other hand, you have a conditional formatting rule to color A2 to A5, you can use the condition(s) of this rule to color A1.

Aug 25 2020 06:55 PM
Hey @Hans Vogelaar, whether A2 to A5 is green or red will be driven by the text in that cell, so there is a rule in the conditional formatting under the Home tab in the ribbon. How would you suggest writing the logic for A1? Using OR()?

Aug 25 2020 11:21 PM
Let's say that A2 to A5 are colored green if they contain Yes and red if they contain No.
- Set its color to red (this will be the default).
- On the Home tab of the ribbon, select Conditional Formatting > New Rule...
- Select 'Use a formula to determine which cells to format'.
- Enter the formula.
- Activate the Fill tab.
- Click OK, then click OK again.

Aug 26 2020 12:58 PM
@Hans Vogelaar, ahh I see what you're saying. The color in A1 would still be based off of the text in A2:A5. Let's assume that not all of the cells in A2:A5 have text. Say A2:A4 contain "Yes" and are colored green, while A5 has no text and is colored red. Because at least one of the cells in A2:A5 is red, that means A1 should be red; however, there is no text in A5. Let's assume A5 was manually colored red; in this instance, is there a way to conditionally format A1 to pick up on A5 as red (with no text) so that A1 will automatically be colored red? I have not been able to find a solution without VBA. Any thoughts? Thank you in advance. I definitely appreciate your feedback.

Aug 26 2020 02:21 PM
The method that I described should work. If it doesn't for you, could you attach a sample workbook without sensitive/proprietary data?

Aug 27 2020 06:31 AM
@Hans Vogelaar, I played around with what you had suggested and it worked perfectly. Appreciate it! I had also tried looking up a solution for the case where A2:A5 do not have text. Let's say cells A2:A5 are colored either red or green (manually, perhaps) without any text. In that scenario, is there a way to conditionally format A1 without the colors in A2:A5 being driven by the text in each cell?
View Issue Details

ID: 0034692 | Project: Lazarus | Category: LCL | View Status: public | Date Submitted: 2018-12-12 11:26 | Last Update: 2019-02-26 19:41
Reporter: Martin Friebe | Assigned To: Martin Friebe
Platform: 64bit Intel | OS: win 10
Product Version: 2.1 (SVN) | Target Version: 2.2 | Fixed in Version: 2.2

Summary: 0034692: TCustomForm.MakeFullyVisible not working (when form not yet visible / e.g., during start of IDE)

Description:
The IDE calls TCustomForm.MakeFullyVisible for all its windows while starting up. This does not work correctly in the case of a multi-monitor setup (at least on Windows, maybe others).

TCustomForm.MakeFullyVisible defers the task of finding the correct Monitor to the widgetset, which at least on Windows will use an OS function. But at the time of calling TCustomForm.MakeFullyVisible, the form is not yet visible. Therefore the bounds of the form are not yet known by the OS. TWinControl.DoSendBoundsToInterface does not send them:

    if (Parent = nil) and (not HandleObjectShouldBeVisible) then

That means the OS may return the wrong monitor, and the window is then moved onto that wrong monitor.

As a workaround (and proof of the above being correct) insert one line of code:

    procedure TCustomForm.MakeFullyVisible(AMonitor: TMonitor; UseWorkarea: Boolean = False);
      TWSWinControlClass(WidgetSetClass).SetBounds(Self, Left, Top, Width, Height); // insert as 1st line in func

This will force the coordinates to be sent, and the window will be kept on the right monitor, but moved to be fully visible.

Steps To Reproduce:
You need to find out which monitor your OS returns in the above case. Presumably the monitor with the coordinates (0,0). You may need to swap which monitor to use in the below.
- On a 2-monitor system, start the IDE. Assume the monitors are left/right of each other, e.g. A: (0,0, 2559,1439) and B: (2560,0, 5119,1439).
- Move the IDE main bar so its left and right are well within monitor B, but its top is slightly out (y = -20).
- Close the IDE, to save the coordinates (save your desktop if needed).
- Open the IDE.

The main bar is not fully visible, so it will be moved. It will be moved into screen A. Actually it will be moved half into screen A... Not sure why. But in any case the IDE believes it should be on screen A. With the above fix, it will be moved only the 20 pixels down, and keep its position.

To make matters more interesting, my screens are A: (0,0, 2559,1439) and B: (2560,150, 4159,1199). B is smaller, and does not have the same top. That means the desktop is (0,0, 4159,1439), but part of the desktop, on the right side, is not on any screen. (Coordinates below are guessed from what I see.) If the main bar was at x: 2600 (width: 900), y: 130 (that is -20 relative to the screen), then it will be moved to x = 2360, y = 130 (y is kept / x is moved into screen A, but a lot of the window is still on screen B). And besides being on the wrong screen, it is not even fully visible, because a big part is still on screen B, but partly above its top.

Tags: No tags attached.
Fixed in Revision: 60521

History:
2018-12-12 11:26 | Martin Friebe | New Issue
2019-02-26 19:35 | Martin Friebe | Fixed in Revision => 60521
2019-02-26 19:35 | Martin Friebe | Status: new => resolved
2019-02-26 19:35 | Martin Friebe | Fixed in Version => 2.2
2019-02-26 19:35 | Martin Friebe | Resolution: open => fixed
2019-02-26 19:35 | Martin Friebe | Assigned To => Martin Friebe
2019-02-26 19:35 | Martin Friebe | Target Version => 2.2
2019-02-26 19:41 | CudaText man | Note Added: 0114472
This update included a lot of maintenance work as well as some major overhauls and updates to various parts of the theme that we feel have provided some great improvements to how certain processes are handled. Most notably would be our changes in consolidating some settings in the Customizer for efficiency, the navigation (which now features new mobile styling that is more friendly to navigate on smaller devices and updated one page navigation functionality), a new one click demo content installer, right to left style updates, and a more optimized image generation system. Without further ado, lets dive right in! As previously mentioned, one of the biggest changes to this update is our consolidation of some repeated controls into a more centralized location in the Customizer. Upon updating and going into the Customizer for the first time, you should see the new Layout and Design section like so (this is an image of it expanded): You should notice that the options located within this section are similar to some of the first groupings of options previously available in the specific Stack sections. We have moved these options here in an effort to consolidate repeated items and make working with data in the Customizer more efficient. Your old options will be ported over to these new settings once you login to the admin area for the first time after updating, so you shouldn't have to change anything around or worry about anything breaking. As far as mobile navigation is concerned, dropdowns are now hidden by default, making for a much more engaging and easier to consume user experience. They are toggleable via the arrows on the right hand side of the menu item, which will slide to reveal the items beneath. This was a huge navigation overhaul and we feel it has brought a new level of ease to the theme that your users are sure to appreciate! Below is an example of what this might look like: Furthermore, we've improved upon how the “one page” navigation features work so that this functionality is more universally accessible no matter where your reference links are coming from. For example, you can now add links to sections on your page from content links such as a button, which will trigger the page scroll if desired. You can also link to a section of your one page site from another page (such as a blog post) and have the offset calculated and accounted for cross-page, which makes things much smoother for users. We've also included a new Demo Content section under Addons in the WordPress admin area, which allows users to setup example content based on our online demos with the click of a mouse! We know that for a long time many of our users have wanted a more automated process for this part of working with the theme and we are pleased to finally bring this exciting new feature to everyone. The interface is incredibly simple to navigate and utilize: Essentially, you can choose which demo you would like the importer to be based off of from our online demos by making a selection from the first dropdown. This will import the homepage content and Customizer settings for that installation to your local installation. For the homepage markup, you can select between our standard shortcodes or a Visual Composer compatible form. Please note, while no pages or posts you currently have will be altered in any way with the demo content, importing the demo content onto your website will overwrite your Customizer settings as previously mentioned. 
If you already have your site setup but are curious to try out this feature, please make sure that you first backup your Customizer Settings by going to the “Customizer Manager” tab under “Addons” and making a backup of your settings. Additionally, you can choose to import some demo posts and portfolio items should you desire. Doing so will setup some examples that showcase how to utilize the various features utilized for each post format or portfolio item type. The great thing about the importer is that it keeps things clean and doesn't mindlessly import posts and pages over and over. It will only import what you ask for, and if it is already present it will not duplicate anything. Menus are automatically setup as well for easy linking out to these new pages. For online demos with a blog page as the homepage, this will be setup accordingly. As previously stated, we're incredibly excited about this new feature and we feel it will assist all of our users greatly in learning more about X and how to utilize various features. For more detailed information about the demo content importer, you can go here. Also, this release saw a big overhaul of our right to left styles including massive improvements to the header styling and shortcodes throughout. If you use X in a right to left capacity, we hope you enjoy! And finally, we've also greatly improved our image generation systems after consolidating our options in the Customizer for the various Stacks. Since we used to have site dimension options located within each Stack's set of options, we would have to calculate and generate images to be used for all Stacks in case a user happened to switch back and forth. Consolidating our options in the Customizer has allowed us to reduce our number of required images by 75% down to only 4, making things in your uploads folder much more lightweight. As always, if you have changed your site's dimensions or have changed your Stack, it is a good idea to run the Force Regenerate Thumbnails plugin to ensure that your image dimensions are being calculated correctly. On top of those major updates, we've also included numerous optimizations and maintenance fixes to the theme as well as updating the integrated version of Revolution Slider to v4.6.5 and the integrated version of Visual Composer v4.3.5 along with WordPress v4.1 support! Our shortcodes plugin has received some updates regarding the output of various elements such as the column as well as some updated styling and functionality. Definitely make sure that you update both the theme and plugin together to ensure that everything is on an equal playing field and that nothing is missing in terms of compatible styling or functionality. Potential Layout Issues – As noted above, we have updated some of the column styling, navigation styling, et cetera in this update to make things a little more efficient. If you have any sort of browser cache or site caching setup for your installation, your layout may appear to be “broken” or certain features might not seem to work (such as dropdowns) as the old cached version is still being output while the new markup is present. To get around this, ensure that both the theme and shortcode plugin are updated and clear your browser cache as well as any caching that might be setup on your site via a caching plugin. This will ensure that both the proper markup and styling are being output to your installation and everything will be in order. 
Update: December 23, 2014 In v3.1.0 of X we updated the bundled version of Visual Composer to v4.3.5. One of the major differences between this version of Visual Composer and the version previously bundled with X was the addition of new hooks by the plugin that themes are required to hook into for certain features to be setup properly. Because of this, updating the theme first could potentially cause an update error as there is a momentary “gap” during this in-between period where the hook does not exist. Because of this, we have included a fix in this release of theme to get around this, but you may still encounter an issue based on various circumstances in your installation. If you are having a problem updating Visual Composer, please go through the following steps: - Logout of your WordPress admin area. - Log back in to your WordPress admin area. - Go to Dashboard → Updates - Click the Check Again button a few times to clear out the update cache. - Updating Visual Composer should now work as expected. If the update still does not work, deactivating it first might help as well. If these methods do not seem to be working for you, you can locate the plugin .zip file itself within the theme at /x/framework/plugins/js_composer.zip and install that manually via the WordPress admin panel (i.e. Plugins → Add New → Upload Plugin). - X 3.1.1 - December 23, 2014 - Bugfix: Released a quick fix regarding Visual Composer not updating properly. - Shortcodes 2.6.1 - December 23, 2014 - Bugfix: Released in conjunction with v3.1.1 of X to ensure everything is up to date. - X 3.1.0 - December 22, 2014 - Feature: New "Demo Content" page under "Addons" in the WordPress admin area, which allows users to setup example demo content based off of our online demos with a simple mouse click. Example posts and portfolio items can also be imported to see how various features work throughout the theme. - Feature: New mobile navigation, which is now collapsible and more easy to operate on smaller screens. - Updated: Consolidated numerous Customizer options into a new "Layout and Design" section. - Updated: Major improvements in image generation due to the consolidation of the Customizer settings mentioned above. - Updated: Overall navigation in removing third party libraries and replacing them with smaller, more efficient theme specific code. - Updated: One page navigation so that elements such as buttons will now trigger page scrolls and links from other pages are accounted for as well. - Updated: Major right to left style improvements. - Updated: WordPress v4.1 support. - Updated: Revolution Slider v4.6.5 support. - Updated: Visual Composer v4.3.5 support. - Bugfix: Addressed over 25 maintenance items. - Shortcodes 2.6.0 - December 22, 2014 - Updated: [column] shortcode markup and styling updates. - Updated: [recent_posts] image updates based on improvements in the theme.
I have been reading more complaints by the day, primarily from bloggers (some of whom I suspect are not even full-time Linux users) that KDE 4 was a complete mistake and should be scrapped. Some have likened it to Windows Vista. Others have even suggested that KDE should be forked so that the KDE 3.5 line will remain alive. As someone who has been using KDE 4.1 (a Beta version mind you) on my production system for quite a few weeks now, I am wondering what the big fuss is really about. No doubt, I agree with some in that KDE developers should have not called a “work in progress” “4.0”. That is misleading. Heck, Google keeps things in Beta for years. The word “beta” is in fashion to the point where they could just always have “Beta” after any KDE release and it would probably attract users. Therein lies the problem. This time around, “still in development” was a literal warning, not a modest display of humility. The KDE developers warned everyone that 4.0 was not ready for production. Certain distributions, such as Kubuntu, took heed and left KDE 3.5 as the main desktop, offering 4.0 as an option. If a distribution left KDE 4.0 as the only option, that means KDE 3.5 applications were no longer available. There is no question that certain applications are much further along than others. Dolphin, the file manager, for example is very stable and feature-rich, whereas most of the KDE-PIM applications are still under heavy development. With both desktops still installed, one can use KDE 4.0 as the desktop and still use some KDE 3.5 apps without any decrease in performance and with no difficulty. That is drastically different from an operating system upgrade. There is no problem with compatibility between 4.0 and 3.5 apps. Therefore, comparing it to Windows Vista carries little weight. I think those who hate KDE 4 can be placed into four categories: 1. Those who just dislike the new features (plasma, krunner, etc). That is fine. Everyone has their opinion. 2. Those who actually did not use KDE anyway and are just making noise for the sake of making noise. 3. Those who are not very patient and/or not very good at making things work for them (They were used to using KDE 3.5 which “just worked” out of the box — KDE 4 will eventually reach that stage, but it is not quite there yet). 4. Those who miss their favorite component fromKDE 3. Some people especially miss the “kicker” (KDE’s desktop panel), but a lot of them miss it for the strangest reasons. For example, someone mentioned that he did not like that plasma does not have the various options for different types of secondary panels. My response to that is, how many people actually used any of those secondary panels? I’ve seen plenty of KDE desktops and have seen very few, if any, who used the “Mac OS” menu bar or any of the other available features. Having a feature only for the sake of having it only leads to bloated software. I for one think KDE 4 is coming along nicely. I am in love with plasma. However, I will be the first to admit that it has some way to go, as did KDE 3 when it was first released. The truth is some people just hate change, but change is going to come with or without them. That is the nature of this world and is certainly the nature of technology.
Typographically inclined people can debate for hours the importance of font selection in the presentation of a message; a proper font(1) sets the right type of message.(2) With programs, data type is used to specify the range and resolution of the data, as well as determining the total memory use and speed of your program.

I is for integer, at least in FORTRAN

In modern programming languages there are three primary data formats: integer, fixed point(3) and floating point. The range of an integer data type is determined by the size of the data, e.g. 8, 16, 32 or 64 bits, and its signed / unsigned nature.(4) Floating-point data represents information as a combination of a sign bit, an exponent component and a fractional component. This gives floating-point data a greater range and resolution; however, it can lead to issues when performing operations on data of significantly different orders of magnitude. For an overview of floating-point data I would recommend this Wikipedia entry.(5)

What is the “point”?

Memory usage is easy to understand:(6) using all 32-bit integers in your program will use twice the memory of 16-bit integers, and the same idea applies to floating-point data types. The trade-off then comes with accuracy and speed. Integer-based arithmetic is computationally faster than floating point; likewise, 64-bit floating point takes longer than 32-bit. The objective should be to use the smallest data type that fully represents your data after completion of the operation.

Tips and tricks!
- Overflow, underflow and precision loss can be detected through the use of simulation or tools such as Polyspace
- Disable range checking and overflow options for generated code to create more efficient implementations

Footnotes:
1. Here in this blog I lack control over what font you see in your browser or in your email; for all I know you are using Wingdings (🕈︎♓︎■︎♑︎♎︎♓︎■︎♑︎⬧︎).
2. And of course typesetting is part of the act of printing, the precursor to publishing electronically.
3. Fixed point could be considered a special case of the integer data type; we will look at it in a future blog post.
4. E.g. an 8-bit unsigned integer has a range of 0 to 2^8 - 1 (0 to 255), while a signed 16-bit integer has a range of -2^15 to 2^15 - 1 (-32768 to 32767). In every case the resolution is 1.
5. There is a conceptual overlap between fixed-point data and the way floating-point operations work; this link provides a general overview of fixed-point data.
6. There is a caveat here: all hardware has a "smallest data type." If you specify a data type smaller than the smallest data type you do not see any memory savings.
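As a concrete illustration of those ranges, and of what overflow actually does in fixed-width arithmetic, here is a short sketch in plain Python; the helper names are mine, not from any particular tool.

    def int_range(bits: int, signed: bool) -> tuple:
        """Closed [min, max] range representable by an integer of `bits` bits."""
        if signed:
            return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
        return 0, 2 ** bits - 1

    def wrap_unsigned(value: int, bits: int) -> int:
        """What an unsigned fixed-width register would actually store."""
        return value % (2 ** bits)

    def wrap_signed(value: int, bits: int) -> int:
        """What a two's-complement fixed-width register would actually store."""
        v = value % (2 ** bits)
        return v - 2 ** bits if v >= 2 ** (bits - 1) else v

    print(int_range(8, signed=False))    # (0, 255)
    print(int_range(16, signed=True))    # (-32768, 32767)
    print(wrap_unsigned(255 + 1, 8))     # 0      (unsigned 8-bit overflow)
    print(wrap_signed(32767 + 1, 16))    # -32768 (signed 16-bit overflow)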
(Image gallery: assorted screenshots of Google Maps satellite view on various devices; only the image captions survived, no article text.)
Justifying applying for US visa in a third country I am currently a citizen of Russian Federation, but will be applying for a US B-type visa in Germany, because I need to attend a conference in the US in two months and visa waiting time in Russia is 300 days. Do I need to justify the reason for applying in Germany while taking the interview? If yes, would the explanation above be enough? 300 days wait for a visit visa, are you sure? @HankyPanky yes, the official info here says 300 days for Moscow That must be a typing mistake @HankyPanky: There's been a lot of news lately about deteriorating diplomatic relationships between the US and Russia, including restricting visa services and expelling consular officials. I would not be surprised if it has caused massive delays in visa wait time. But that same table says 39 days for other non immigrant visas, so how come it takes 300 for a visit one? Then for Vladivostok the waiting time is 90 days, why would it be 300 days for the capital AFAIK US officials explicitly stated that you can use consulates in other countries as failover. @HankyPanky US Consulate in Vladivostok no longer processes visa applications, just as two other US consulates in Russia. There was actually a US consulate in my home city (St. Petersburg), but it was closed last month, which resulted in more waiting time in Moscow. I see, 300 days for a visit visa appointment is quite intriguing, but i think that gives you a good enough reason to apply from another country @HankyPanky the three streams are separate. The values are often wildly different. In Tel Aviv, it's 37 days for visitor visas and 2 days for "all other nonimmigrant visas"; that's a factor of 18.5, while the Moscow factor is only 7.7. @HankyPanky It's not a mistake. A consulate was closed, diplomats expelled, and everything has been slowed the hell down. The general rule is that you should apply in your country of citizenship, or your country of residence. It sounds like in your case both of those are Russia. However the key word there is "should" - not "must". Most US consulates will accept applications for visas from people who reside in a different country, although sometimes with additional restrictions (eg, consulates in Canada generally have a longer waiting time for non-CA citizens/residents). You will need to show proof of your legal status in the country you are applying in (which in your case is likely a Schengen visa and entry stamp so fairly easy to provide). You may be questioned as to why you are applying in that country, in which case you should tell the truth (especially as the consulate staff will already almost certainly know the answer!) As a general rule, your chances of having your application rejected and/or put into administrative processing is higher when you apply outside of your home country. This could be due to any number of things, including language issue (both spoken and written) and the potential that you are applying in a different country to try and game the system somehow. Given your situation there's not really anything you can do to avoid this possibility... (I've applied for US visas on multiple occasions outside of my country of citizenship/residence and never had a problem - but as is always the case, it depends on your specific situation)
Workplace Sans is a TrueType font designed to resemble WarpSans to varying degrees (depending on configuration). It is recommended or required by various software, including http://OpenOffice.org, QT4, and the Mozilla family of applications. Version 0.91 is now available in Hobbes incoming, and at http://www.altsan.org/creative/fonts/workplace/ In the year since the last public release, I have started studying font design more seriously. Many of the principles that I've learned have been applied in this latest release. These include better optical balance, use of ink traps, improved counter shapes, and an overhaul of letter spacing according to recommended techniques. (All of these are mainly visible at larger sizes.) As usual, version 0.91 is released in two versions: a "plain" version and a "bitmap-enhanced" version. The latter version is recommended for people who prefer the appearance of the classic WarpSans bitmap font (bitmaps are included for four point sizes in the 7-12 point range depending on screen DPI). The plain version will show antialiased text at all sizes. Semi-detailed list of changes since version 0.90: 17 October 2012 (Alex Taylor) Workplace Sans 0.91 - (Light) Added vertical overshoots. - (Light) A few more characters now included in build. - (Bold) Lightened horizontal stems a little. - (Bold) Added various Baltic and other characters. - Increased horizontal overshoots in Regular and Bold weights. - Upped UPM to 1024 (from 1000). - Smoothed and reshaped counters throughout. - Many glyph tweaks & improvements. - Reworked glyph spacing according to Walter Tracy's method. 17 April 2012 (Alex Taylor) - not released - (Normal) Tweaked the tail of the 'g' slightly. - Fractionally increased the width of the 'N'. - Modified the 'w' and 'W' to bring the central vertex up to the full character height. (This is a slight departure from WarpSans but having them lower just looked ugly.) - Changed the 'a' (again). Now the cross-stroke is slightly higher and flows smoothly into the stem. The glyph shape has been changed in other subtle ways as well. [Moderator's note: All posts are sent without guarantee to the accuracy of the content. We try to verify details and URLs but this is an entirely volunteer run list, so 100% fact checking and the quality/useability of products announced here is impossible. If you respond to this post please remove the DESPAM from the poster's email addresses. Please do not send requests for information about a specific post to the moderator unless it is an update or I
What is the Microsoft Certified: Azure Administrator Associate certification? For the first time, Microsoft entered the Microsoft Certified Azure Administrator certification. There are several benefits that come with it: The Azure Certified Administrator is a professional, professional credential that is designed to cover the full spectrum of Azure management and data access. You can choose from a set of applications, set of resources, or set of solutions. Perform a set of roles that are specific to your environment. The Microsoft Certified Administrator allows you to make the most of your Azure account that is being used. You can also use the Microsoft Certified Administrator to make a variety of changes that are important to you. The Microsoft Certified Administrator is not a technical credential. It is the technical credential that is used to run Azure management software. It is a professional credential designed to be used for the complete management of Azure accounts and associated data. Once you are familiar with the Microsoft Certified Analyst, you can go to the Microsoft Certified Advisor page for more information. Microsoft Certified Administrator is based on the Microsoft Certified Architect. The Microsoft Certification Administrator is a more precise, professional credentials designed to deal with the administrative aspects of the Azure management software, the data access aspects of the data management and the data integrity and security aspects of the business process. There are several types of Certified Administrator. The Microsoft certified Administrator is used to make all the changes that are needed to the business processes. It is used for the same purpose that the Microsoft Certified Assessor is used to perform. Before you can join the Microsoft Certified Administrators, you will need to meet with your Microsoft Certified Administrator. How to Become a Microsoft Certified Administrator Microsoft certified administrator with a Microsoft Certified Architect Microsoft certification: A Professional The first step that you need to take in front of a Microsoft Certified Administrator is to become a Microsoft Certified Advisor. You can become a Microsoft Certification Advisor as the Microsoft Certified administrator. You can create an account, set up a session and perform a set of tasks. The Microsoft Component Administrator will be responsible for other tasks as well as the administration of the Azure Management System. Has Anyone Used Online Class Expert If you are new to the Microsoft Certification Administrator, the Microsoft Continued Specialist will cover the following topics: Azure Management System Azures Data Management Data Integration Data Protection Data Security Data Integrity Azured Data Azurizate Data Presentation Data Retrieval Execution Data Monitoring Data Analysis Data Collection Data Repository Data Rights Management The next step is to get the Microsoft Certified Admin Center. The Microsoft Certificate Administrator will be your next step. Your Microsoft Certified Administrator has the complete Microsoft Certified Administrator, including a Microsoft Certified Associate. This is a professional level credential. The Microsoft Administrator is a top level credential designed to cover all the aspects of business management, data access, and the Azure management system. When you join the Microsoft Certification Administrators, your Microsoft Certified Advisor will have the complete Microsoft Certification Administrator. 
Get a Microsoft Certified Administrator credential

Becoming a Microsoft Certified Administrator is an opportunity worth taking: certified administrators are among the first to get access to Microsoft's best services, and the credential is backed by a very large advisory and membership network. To prepare, Microsoft's consulting and advisory services can help you review and analyze your current business processes, and a Microsoft specialist can help in a number of ways. Through the Microsoft client portal you can get the latest information from clients, integrate certified systems with your current work, and make more efficient use of your time.

What is the Microsoft Certified: Azure Administrator Associate certification?

You can talk with a Microsoft certified application administrator to learn more about this certification. Certified application administrators are a highly valued part of Microsoft's business, have proven valuable at the highest levels of the industry, and are used across a wide range of applications, such as web sites, web applications, and services. The required skills are quite flexible, though a lack of experience makes it harder to help your team develop effective software configuration and security. A typical environment consists of three main components - a management console, a web application, and a database - each with its own responsibilities. The management console provides control of all of the applications and is responsible for creating and managing them; it requires a full understanding of how the application works, its architecture, and the features it supports. Administrators may also manage a database server, a database store, a programming console, a database programming environment, and more. This knowledge is useful training for team leaders and a valuable tool for the industry.

Requirements for the older MCA certification

The earlier Microsoft certification program sometimes abbreviated MCA, created by Microsoft in 2008, certified software developers and administrators with extensive knowledge of the Office suite, and its programs were designed to help team members develop and test new software. To begin, you needed to be a Microsoft certified developer working with the Office software and have a concentration in the Office suite. The required levels were roughly: first level - a master's degree in computer science or engineering, or a bachelor's in computer science, or both; second level - a certificate in Microsoft Office. Candidates also needed a background in software administration, good knowledge of Microsoft Office, a bachelor's or master's level of Office development experience, and several years as a Microsoft certified software developer.

Azure administration tooling

As an Azure administrator, you can use the Azure administration and PowerShell tooling to manage Microsoft 365 and Office 365, to get detailed information about these programs and how they work with Azure, and to access the Office 365 Azure account. The admin PowerShell tooling takes the power of PowerShell and provides it to your user account. To check whether your account is up to date, go to the Help Center, click on the "Software Updates" tab, and then click on the Advanced button. On a Windows 7 or 8 operating system, many of the setup steps include the following: create a new account, create a Microsoft Office 365 account, add the Office administration account, and then, in the Apps folder, open the Office app, add the Office 365 administration account, and click the Add button.

With the help of an Office application, you can access the Office 365 Azure administrator account; the administrator assistant tooling is also available in Windows 7, Windows 8, Windows 8.1, Windows 8 Enterprise, Windows 10, and Windows Server 2012. The Office admin program provides a set of commands to perform actions such as: verify the system has been created, verify that the Office 365 account has been created and the Office login is active, and manage the Office 365 logout. When you create a new account, you can click on the "Create a new instance" button to create a new instance, and the web application allows you to create and manage Office 365 accounts. For example, by adding an Office admin to your Office 365 account, you can access a user's Office account, add an administrator, create an email account, and attach an admin account to your Office 365 tenant.
PackageKit does not ask the user questions while a transaction is running. It also supports fire-and-forget method invocation, which means that a transaction has one calling method and many signals going back to the caller.

Each transaction is a new path on the service. To create a path you call CreateTransaction on the base interface, which creates the new D-Bus path and returns it for you to connect to. In the libpackagekit binding, PkControl handles the base interface and PkClient handles all the transaction interface work. The org.freedesktop.PackageKit.Transaction interface can be used on the newly created path, but only once; further method calls require a new transaction path (i.e. another call to CreateTransaction, which is synchronous and thus very fast).

A typical successful transaction emits many signals, such as ::StatusChanged(), ::Package(), ::ErrorCode() and ::Finished(). These are used to inform the client application of the current state, so widgets such as icons or description text can be updated. The different signals are needed for a few different reasons:

::StatusChanged(): The global state of the transaction, which will be useful for some GUIs. Examples include downloading or installing; this is designed to be a 40,000 ft view of what is happening.

::Package(): Used either to return a result (e.g. returning search results) or to return progress about a _specific_ package. For instance, emitting ::Package(downloading) and then ::Package(installing) for each package as it is processed in the transaction allows a GUI to position the cursor on the package being worked on and show the correct icon for that package.

::ErrorCode(): Used to show an error to the user about the transaction, which can be cleaned up before sending ::Finished().

::Finished(): Used to show that the transaction has finished and that others can be scheduled.

A typical transaction failure case is when there is no network available. The user is not given the chance to requeue the transaction, as it is a fatal error.

In a less trivial example, a local file install is being attempted. InstallFile is called with the only_trusted flag set. This will fail if the package does not have a valid GPG key, and ordinarily the transaction would fail. The client can then re-request the install without the trusted requirement. This will use a different PolicyKit authentication and allow the file install to succeed. So why do we bother asking for only_trusted in the first place? Well, the trusted PolicyKit authorization can be saved in the gnome-keyring, or could be tied to the user's password, as the GPG key is already trusted by the user; the non-trusted action would likely ask for the administrator password and not be allowed to be saved. This gives the user the benefit of installing trusted local files without a password (the common case) while requiring something stronger for untrusted files. If SimulateInstallFile is used first, the client may receive a signal informing it that the action would require the untrusted authentication type, which means the client does not attempt the trusted install at all. This ensures the user only has to authenticate once for the transaction, as the untrusted install may also require a password.

If the package is signed and a valid GPG signature is available, then we need to ask the user to import the key and re-run the transaction. This is done as three transactions, as other transactions may be queued with a higher priority, and to make sure that the transaction object is not reused. Keep in mind that PackageKit can only run one transaction at any one time.
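To make this flow concrete, here is a minimal sketch of driving a transaction over D-Bus with dbus-python. It is not from the original text: the bus and interface names come from the description above, but the method chosen and the exact signal signatures vary between PackageKit versions, so treat it as illustrative only.

```python
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()

# Base interface: ask the daemon for a fresh transaction path.
pk = dbus.Interface(
    bus.get_object('org.freedesktop.PackageKit', '/org/freedesktop/PackageKit'),
    'org.freedesktop.PackageKit')
tid = pk.CreateTransaction()  # synchronous, returns the new object path

txn = dbus.Interface(
    bus.get_object('org.freedesktop.PackageKit', tid),
    'org.freedesktop.PackageKit.Transaction')

loop = GLib.MainLoop()

# Connect the signals described above before starting any work.
txn.connect_to_signal('Package', lambda info, pkg_id, summary: print('Package', info, pkg_id))
txn.connect_to_signal('ErrorCode', lambda code, details: print('Error', code, details))
txn.connect_to_signal('Finished', lambda exit_code, runtime: loop.quit())

# Kick off some work on this (single-use) transaction path, e.g. refreshing caches.
txn.RefreshCache(False)
loop.run()
```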
If we had designed the PackageKit API to block and wait for user input, then no other transactions could be run whilst we are waiting for the user. This is best explained using an example: the user clicks "install vmware" followed by "confirm". The user walks away from the computer and takes a nap. A system upgrade is scheduled (300 MB of updates). The vmware package is downloaded, but cannot be installed until a EULA is agreed to. If we pause the transaction then we never apply the updates automatically and the computer is not kept up to date. The user would have to wait a substantial amount of time for the updates to download when returning from his nap after clicking "I agree" to the vmware EULA.

In the current system, where transactions cannot block, the first transaction downloads vmware, finishes, and puts up a UI for the user to click. In the meantime the second transaction (the update) is scheduled, downloaded and installed, and then finishes. The user returns, clicks "okay", and a third transaction is created that accepts the EULA, and a fourth that actually installs vmware. It seems complicated, but it is essential to make sure none of the callbacks block and stop other transactions from happening.

When the DownloadPackages() method is called on a number of packages, these are downloaded by the daemon into a temporary directory. This directory can only be written by the daemon's user (usually root) but can be read by all users. The files are not downloaded into any specific destination; instead a random temporary directory is created for them. The reason for this intermediate step is that the DownloadPackages() method does not take a destination directory, as the daemon is running as a different user to the client, and in a different SELinux context. To preserve the SELinux attributes and the correct user and group ownership of the newly created files, the client (running in the user session) has to copy the files from the temporary directory into the chosen destination. NOTE: this copy step is optional but recommended, as the files will otherwise remain in the temporary directory until the daemon times out and is restarted. As the client does not (intentionally) know the temporary directory or the filenames of the packages that are created, the ::Files() signal is emitted with the full path of the downloaded files. It is expected that the package_id parameter of ::Files() will be blank, although this is not mandated. Several ::Files() signals can be sent by the daemon, as the download operation may be pipelined, and the client should honour every signal by copying each file.

The PackageKit backend may support native localisation, which we should use where available. In the prior examples the SetLocale() method has been left out for brevity. If you are using the raw D-Bus methods to access PackageKit, you will also need to make a call to SetLocale() so the daemon knows what locale to assign to the transaction. If you are using libpackagekit to schedule transactions, then the locale will be set automatically in the PkControl GObject, and you do not need to call SetLocale() yourself.

If the package management system is damaged, a repair may be required. This is not done automatically before each transaction, as the user may have to verify destructive package actions or make manual changes to configuration files. This transaction sequence is not common and is not supported on many backends. It may be completely implemented in the frontend or not at all.
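As a rough illustration of the copy step described above (not from the original text; the destination directory, the package ID, and the exact method and signal signatures are assumptions that vary by PackageKit version), a client using dbus-python might do something like this:

```python
import shutil

DEST = '/home/user/Downloads'  # hypothetical destination chosen by the client

def on_files(package_id, file_list):
    # Each ::Files() signal carries full paths of files in the daemon's
    # temporary directory; copy every one so ownership and SELinux context
    # are re-created correctly for the calling user.
    for path in file_list:
        shutil.copy(path, DEST)

# 'txn' would be an org.freedesktop.PackageKit.Transaction proxy, as in the
# earlier sketch; the package ID below is only a placeholder.
txn.connect_to_signal('Files', on_files)
txn.DownloadPackages(True, ['hal;0.5.8;i386;fedora'])
```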
Also called "Decimation", means that a certain amount of photosites is not read out ->skipped (horizontally, vertically or in both axis). This reduces resolution of the resulting image but introduces subsampling artifacts. This is an on-sensor-feature. Pixel Binning refers to the method of combining (averaging or summing) charges of blocks of neighbouring same-color-photosites. This is an on-sensor-feature. Using both skipping and binning at the same time reduces subsampling artifacts but does not eliminate them completely. Article describing "Binning" Pixel Binning in Elphel Cameras Using Skipping/Binning you can: - lower the resolution of an image while using the same sensor area - achieving higher FPS for the same resulting resolution - decrease image noise (averaging) or increase light sensitivity (summing) But be careful: Binning/Decimation effectively result in a certain amount of subsampling artifacts. The "normal" Binning mode averages the charges from photosites thus reducing noise in the image but not altering light sensitivity. There is an alternative mode that sums up the charges and therefore makes the binned pixel more light sensitive. In Elphel Camera (since 8.X Software) the different binning modes can be set at http://*cameraIP*/parsedit.php?embed=0.18&SENSOR_REGS32 The default value is "0x40" which means averaging binning mode. If you set it to "0x60" you switch to summing binning mode and will get increased light sensitivity. Using PHP you can access this register with: Note: correct way to control binning and decimation in the camera DCM_HOR, DCM_VERT, BIN_HOR and BIN_VERT are also available in parsedit program or from the custom PHP script running in the camera : Yes it is possible to change some of the camera parameters by modifying directly the values of the sensor registers, but this is designed to be used for development/testing/experimentation, until some particular parameter is supported by the camera driver. When the decimation is modified, camera needs to program the FPGA compressor accordingly, the JPEG headers have to be modified too. Additionally software takes care of the camera pipeline latency - compressor has to be programmed to different image size at exact frame - all that happens when the higher level parameters are modified. And of course, different sensors have different registers that control binning and decimation, different values/combinations of values are allowed for these parameters - driver validates the values using of sensor capabilities tables; no validation, pipeline synchronization happens when the sensor registers are modified directly. --Andrey.filippov 22:32, 25 January 2011 (UTC)
IT’S GEEK TO ME: What virus protections are good enough for you? Q: I just bought a new Dell computer with Windows 10. It has Windows Defender for virus protection along with a 30-day free trial of McAfee Live. My question is, is Windows Defender a good enough virus protector or should I buy McAfee or Norton to use as my virus protection? Thank you very much. I really enjoy your column! – John S., Niceville A: Whenever I read a question asking if something is “good enough” it always gives me cause to pause and wonder just exactly what the goal is for the person asking. The phrase is totally subjective, to the point where it requires the person asking the question to define just what they mean by “good enough.” Is it good enough to stop widely known threats? Or is it only good enough if it is capable of detecting and eradicating emerging threats, through heuristics, and analysis of activity patterns? To me at least, there is no real answer to someone else’s inquiry about whether something is “good enough.” So let’s change-up the question a little. How about something a little less subjective, like “How does the free Windows Defender compare with third-party protection such as McAfee and Norton?” That’s a question I can answer, although even that answer will be somewhat subjective, swayed by my own biases and experiences. I think a lot of these kinds of questions stem from the road that all of us PC users have been forced to walk over the years. Once upon a time, the worst threat we faced was pop-up ads. Over time, that has evolved through viruses that were mere annoyances to full-on threats that can steal your critical personal information and lock you out of your own system. The anti-virus/anti-malware vendors have been forced to try to keep pace as threats evolve. Windows Defender is one good example. It was first known as Microsoft Security Essentials when it was offered under Windows 7. Back then, it was a separate, optional component, and even Microsoft still recommended using a third-party anti-virus solution alongside it. In today’s world, things have changed a lot. Windows Defender is now an integral part of Windows, and is very adept at detecting and removing virus threats. Good enough? Well, you get what you pay for. If you read a lot of online reviews of threat protection software, Defender gets pretty good marks in the “free” category, often scoring as high or higher than the free versions of options like McAfee and Norton. But both of those (as well as many others) also offer paid versions of their products, which are supposed to have richer features, and enhanced abilities. Microsoft doesn’t offer any paid version of Defender. Good enough? I think you know that I’m just going to turn that question back around to you, since only you can decide what is good enough for you. My bottom line is that things aren’t what they used to be, and a lot of what we were all conditioned to believe about security software has changed. I suggest you hit Google, and find some articles by experts who make it their business to study and rate these products. Geek Tips: Hidden Windows 10 Feature: God Mode I came across this hidden gem recently, and I’m happy to share it to bring a little more Geekiness to your Win 10 experience. In this context, God Mode is a special folder that contains a grouped list of over 200 tools to help you perform maintenance tasks on your computer. 
Many of them are available on the Start menu if you know where to look, but God Mode is a cool way to organize them all in one convenient place. Activating God Mode requires you to enter a very specific string of over 40 characters, and just like the code to unlock the futuristic File Explorer that I introduced you to in Issue No. 643, this type of string is not conducive to a printed newspaper page, since it will break across multiple lines in an unpredictable manner. So, visit this column on my website at ItsGeekToMe.co/columns/Issue-647, and scroll-down to the Bonus Web-Only Content. There you’ll find all the instructions, along with the special code that you can just copy and paste. To view additional content, comment on articles, or submit a question of your own, visit my website at ItsGeekToMe.co (not .com!)
Sensitivity of Bond Prices to Interest Rates

Macaulay and modified duration measure the sensitivity of a bond's price to changes in the level of interest rates. Convexity measures the change in duration for small shifts in the yield curve, and thus measures the second-order price sensitivity of a bond. Both measures can gauge the vulnerability of a bond portfolio's value to changes in the level of interest rates. Alternatively, analysts can use duration and convexity to construct a bond portfolio that is partly hedged against small shifts in the term structure. If you combine bonds in a portfolio whose duration is zero, the portfolio is insulated, to some extent, against interest rate changes. If the portfolio convexity is also zero, this insulation is even better. However, since hedging costs money or reduces expected return, you must know how much protection results from hedging duration alone compared to hedging both duration and convexity.

This example demonstrates a way to analyze the relative importance of duration and convexity for a bond portfolio using some of the SIA-compliant bond functions in Financial Toolbox™ software. Using duration, it constructs a first-order approximation of the change in portfolio price in response to a level shift in interest rates. Then, using convexity, it calculates a second-order approximation. Finally, it compares the two approximations with the true price change resulting from a change in the yield curve.

Define three bonds using values for the settlement date, maturity date, face value, and coupon rate. For simplicity, accept default values for the coupon payment periodicity (semiannual), end-of-month payment rule (rule in effect), and day-count basis (actual/actual). Also, synchronize the coupon payment structure to the maturity date (no odd first or last coupon dates). Any inputs for which defaults are accepted are set to empty ([]) as placeholders where appropriate.

Settle = '19-Aug-1999';
Maturity = ['17-Jun-2010'; '09-Jun-2015'; '14-May-2025'];
Face = [100; 100; 1000];
CouponRate = [0.07; 0.06; 0.045];

Also, specify the yield curve information.

Yields = [0.05; 0.06; 0.065];

Use Financial Toolbox functions to calculate the price, modified duration in years, and convexity in years of each bond. The true price is the quoted (clean) price plus accrued interest.

[CleanPrice, AccruedInterest] = bndprice(Yields, CouponRate, ...
    Settle, Maturity, 2, 0, [], [], [], [], [], Face);
Durations = bnddury(Yields, CouponRate, Settle, Maturity, 2, 0, ...
    [], [], [], [], [], Face);
Convexities = bndconvy(Yields, CouponRate, Settle, Maturity, 2, 0, ...
    [], [], [], [], [], Face);

Prices = CleanPrice + AccruedInterest

Prices =
  117.7622
  101.1534
  763.3932

Choose a hypothetical amount by which to shift the yield curve (here, 0.2 percentage point or 20 basis points).

dY = 0.002;

Weight the three bonds equally, and calculate the actual quantity of each bond in the portfolio, which has a total value of $100,000.

PortfolioPrice = 100000;
PortfolioWeights = ones(3,1)/3;
PortfolioAmounts = PortfolioPrice * PortfolioWeights ./ Prices

PortfolioAmounts =
  283.0562
  329.5324
   43.6647

Calculate the modified duration and convexity of the portfolio. The portfolio duration or convexity is a weighted average of the durations or convexities of the individual bonds. Calculate the first- and second-order approximations of the percent price change as a function of the change in the level of interest rates.
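In formula terms, these are the standard duration/convexity approximations (the original example does not write them out, but they are exactly what the code below computes), with $D$ the modified duration, $C$ the convexity and $\Delta y$ the parallel yield shift:

```latex
\frac{\Delta P}{P} \;\approx\; -D\,\Delta y
\qquad\text{(first order)}
\qquad\qquad
\frac{\Delta P}{P} \;\approx\; -D\,\Delta y + \tfrac{1}{2}\,C\,(\Delta y)^{2}
\qquad\text{(second order)}
```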
PortfolioDuration = PortfolioWeights' * Durations;
PortfolioConvexity = PortfolioWeights' * Convexities;

PercentApprox1 = -PortfolioDuration * dY * 100

PercentApprox2 = PercentApprox1 + ...
    PortfolioConvexity*dY^2*100/2.0

PercentApprox1 =
   -2.0636

PercentApprox2 =
   -2.0321

Estimate the new portfolio price using the two estimates for the percent price change.

PriceApprox1 = PortfolioPrice + ...
    PercentApprox1 * PortfolioPrice/100

PriceApprox2 = PortfolioPrice + ...
    PercentApprox2 * PortfolioPrice/100

PriceApprox1 =
   9.7936e+04

PriceApprox2 =
   9.7968e+04

Calculate the true new portfolio price by shifting the yield curve.

[CleanPrice, AccruedInterest] = bndprice(Yields + dY, ...
    CouponRate, Settle, Maturity, 2, 0, [], [], [], [], [], ...
    Face);

NewPrice = PortfolioAmounts' * (CleanPrice + AccruedInterest)

NewPrice =
   9.7968e+04

Compare the results. The analysis results are as follows:

The original portfolio price was $100,000. The yield curve shifted up by 0.2 percentage point or 20 basis points. The portfolio duration and convexity are 10.3181 and 157.6346, respectively. These are needed for Bond Portfolio for Hedging Duration and Convexity. The first-order approximation, based on modified duration, predicts the new portfolio price (PriceApprox1), which is $97,936.37. The second-order approximation, based on duration and convexity, predicts the new portfolio price (PriceApprox2), which is $97,968.90. The true new portfolio price (NewPrice) for this yield curve shift is $97,968.51.

The estimate using duration and convexity is good (at least for this fairly small shift in the yield curve), but only slightly better than the estimate using duration alone. The importance of convexity increases as the magnitude of the yield curve shift increases. Try a larger shift (dY) to see this effect.

The approximation formulas in this example consider only parallel shifts in the term structure, because both formulas are functions of dY, the change in yield. The formulas are not well-defined unless each yield changes by the same amount. In actual financial markets, changes in yield curve level typically explain a substantial portion of bond price movements. However, other changes in the yield curve, such as slope, may also be important and are not captured here. Also, both formulas give local approximations whose accuracy deteriorates as dY increases in size. You can demonstrate this by running the program with larger values of dY.

- Pricing and Analyzing Equity Derivatives
- Greek-Neutral Portfolios of European Stock Options
- Bond Portfolio for Hedging Duration and Convexity
- Bond Prices and Yield Curve Parallel Shifts
- Bond Prices and Yield Curve Nonparallel Shifts
- Term Structure Analysis and Interest-Rate Swaps
- Plotting Sensitivities of an Option
- Plotting Sensitivities of a Portfolio of Options
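As a quick sanity check of the numbers quoted above, the same two approximations can be reproduced outside MATLAB. This small Python sketch (not part of the original example) plugs in the portfolio duration, convexity and yield shift reported in the text:

```python
# Values reported in the example above.
portfolio_price = 100_000.0
duration = 10.3181      # modified duration (years)
convexity = 157.6346    # convexity
dY = 0.002              # 20 basis point parallel shift

pct_approx1 = -duration * dY * 100                       # first order: -2.0636 %
pct_approx2 = pct_approx1 + convexity * dY**2 * 100 / 2  # second order: -2.0321 %

price_approx1 = portfolio_price * (1 + pct_approx1 / 100)  # roughly 97,936
price_approx2 = portfolio_price * (1 + pct_approx2 / 100)  # roughly 97,968

print(round(pct_approx1, 4), round(pct_approx2, 4))
print(round(price_approx1, 2), round(price_approx2, 2))
```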
qa.vb map: histogram of count of traps throws 500 To reproduce: I filtered by species "Aedes", then made a histogram for "counts of traps". Rserve error: 2024-03-24 00:04:36.464773 Determined bin width slider min, max and step values. Error in veupathUtils::cut_width(xVP, binWidth, boundary = min(xVP)) : Less than two breaks in utils-cut.R In addition: Warning messages: 1: replacing previous import ‘veupathUtils::toJSON’ by ‘jsonlite::toJSON’ when loading ‘plot.data’ 2: In veupathUtils::nonZeroRound(binSliderMax, avgDigits) : Input is already zero and cannot be rounded to a non-zero number. 3: In veupathUtils::nonZeroRound(binSliderMin, avgDigits) : Input is already zero and cannot be rounded to a non-zero number. 4: In veupathUtils::nonZeroRound(((binSliderMax - binSliderMin)/1000), : Input is already zero and cannot be rounded to a non-zero number. This is the request body: { "config": { "barMode": "stack", "binSpec": { "type": "binWidth", "value": 4 }, "outputEntityId": "OBI_0000659", "valueSpec": "count", "xAxisVariable": { "entityId": "OBI_0000659", "variableId": "EUPATH_0043046" } }, "filters": [ { "entityId": "EUPATH_0000609", "stringSet": [ "Aedes <genus>" ], "type": "stringSet", "variableId": "OBI_0001909" }, { "entityId": "GAZ_00000448", "max": 81.14748070499664, "min": -58.26328705248602, "type": "numberRange", "variableId": "OBI_0001620" }, { "entityId": "GAZ_00000448", "left": -170.15624999000002, "right": -170.15625000000003, "type": "longitudeRange", "variableId": "OBI_0001621" } ], "studyId": "VBP_MEGA" } @d-callan why do you say this binWidth (4) is nonsensical? fwiw, this is the post body without filters: { "config": { "barMode": "stack", "binSpec": { "type": "binWidth", "value": 4 }, "outputEntityId": "OBI_0000659", "valueSpec": "count", "xAxisVariable": { "entityId": "OBI_0000659", "variableId": "EUPATH_0043046" } }, "filters": [ { "entityId": "GAZ_00000448", "max": 81.14748070499664, "min": -58.26328705248602, "type": "numberRange", "variableId": "OBI_0001620" }, { "entityId": "GAZ_00000448", "left": -170.15624999000002, "right": -170.15625000000003, "type": "longitudeRange", "variableId": "OBI_0001621" } ], "studyId": "VBP_MEGA" } I do get a success response with this, so the filter is partly responsible for the 500. How should the client determine the appropriate bin width size? I think the idea is that it shouldn't. It should let the back end figure out an appropriate bin width It shouldn't. If you pass nothing for bin width the backend will find the default for that subset, as well as return appropriate bin slider specs. The same issue happens in regular eda, fwiw @d-callan @bobular can one of you propose rules for when the front-end should send a bin width? It almost sounds like "never", but that seems too easy Also, this should be a 400 response, not a 500 i mean it should pass one if the user modifies the default, but by default it should only pass a bin width in the filter menu (to the distributions endpoint, rather than histogram) and never in the floaters. and yea i know a 500 isnt ideal, but its a bit non-trivial, and there are other issues floating around the place about that. Ok, so the default bin width for visualizations should be empty. I will make an issue in web-monorepo for this could just update the title of this one no? I would rather close this and make a new one, for future reference
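Following the conclusion above (let the back end pick the default bin width), a client request would simply omit binSpec. Here is a rough sketch in Python, purely illustrative - the service URL and the exact endpoint path are assumptions, and the real front-end fix belongs in web-monorepo:

```python
import json
import urllib.request

# Hypothetical EDA service endpoint; the real URL depends on the deployment.
URL = "https://example.org/eda/apps/standalone-map/visualizations/histogram"

body = {
    "config": {
        "barMode": "stack",
        # No "binSpec": the back end chooses a sensible default bin width
        # for the current subset and returns the bin-slider specs.
        "outputEntityId": "OBI_0000659",
        "valueSpec": "count",
        "xAxisVariable": {"entityId": "OBI_0000659", "variableId": "EUPATH_0043046"},
    },
    "filters": [],  # add the species / latitude / longitude filters as needed
    "studyId": "VBP_MEGA",
}

req = urllib.request.Request(
    URL, data=json.dumps(body).encode(), headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:200])
```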
import path from 'path' import assert from 'assert' import Encryption from './Encryption' describe('Encryption', function () { const enc = Encryption({ secret: 'abc123' }) const originalText = 'test123' let cipherTextAndIv: any let plainText: string describe('#encrypt()', function () { it(`should encrypt string without issue`, async () => { cipherTextAndIv = await enc.encrypt(originalText) assert.equal(typeof cipherTextAndIv, 'string') // assert.equal(2, cipherTextAndIv.split(':').length) }) }) describe('#decrypt()', function () { it(`should decrypt cipher string without issue`, async () => { plainText = await enc.decrypt(cipherTextAndIv) assert.equal(typeof plainText, 'string') assert.equal(plainText, originalText) }) }) describe('#fileToHash()', function () { it(`should hash file contents without errors`, async () => { await enc.fileToHash(path.join(__dirname, 'Encryption.ts')) }) }) describe(`#parseData()`, () => { const key = 'test_compress_1' it('should be a valid base64 string on deflate, then be a valid Buffer and the correct value on inflate', async () => { const base64string = await enc.parseData('lance123') if (typeof base64string !== 'string') throw new Error('needs to be string') const buff = await enc.parseData(base64string, false) const lance123 = buff.toString() assert.equal(true, typeof base64string === 'string') assert.equal(true, buff instanceof Buffer) assert.equal('lance123', lance123) }) }) })
Content Management Systems (CMS) typically give you themes & schemes that give you the background style of your website. For LunpaCMS, the layout design & artistic possibilities are only limited by your imagination. However, since things are easier when working with a palette, Color Manager helps by allowing you to organize that palette.

Color Manager starts you out with a default palette, which LunpaCMS uses automatically on various website elements. The palette is organized alphabetically by Color Description; these descriptions are simple shortcuts to automating the website color scheme. So, instead of memorizing which Hex Color Code needs to be used with a particular element, you only need to create a Color Description such as "soylent_green" and implement it for the element by using the code :::color_soylent_green:::.

Should you choose to use CSS, Color Manager is still a helpful back-end product to centralize color palette coordination. Update your stylesheet template with LunpaCMS Color Codes (e.g. :::color_soylent_green:::) and you can see the full palette being used by your stylesheet under Color Manager. A simple tweak to the Hex Color of "soylent_green" under Color Manager ripples through the website, whether directly or through CSS if you are using the auto-updated CSS file.

Creating a simple color palette to display the main colors of a website for demonstrations is easy with the :::PALETTE::: tag. By default the tag creates a palette with 5 blocks: (background, text, heading_1, heading_2, heading_3). These are easily modified by adding an ARGS section to the tag (e.g. :::PALETTE:ARGS: color_1=heading_4&color_3=background :::). To add more blocks to the default 5, place a columns argument in ARGS with the total number of blocks and add the colors by including color_# up to the number of columns you have. For example, if you want 7 columns, :::PALETTE:ARGS: columns=7&color_6=white&color_7=text ::: adds a 6th column with white and a 7th with text. Other optional arguments are the height and width of the palette as well as the title.

Getting the highest contrast between the text colors on the site and the background color is simple with the :::COLOR_CONTRAST::: tag. It chooses between the colors you have set in Color Manager for "text" and "reverse_text"; by default it compares them to the color set for "background", but it can be sent a different comparison color by using :::COLOR_CONTRAST:ARGS: comparison_color=<your color> :::. It also warns site admins if the contrast ratio between the text and background color is lower than 5.

<font color=:::COLOR_CONTRAST:::>example test</font>

All colors in Color Manager have an associated image in the images directory of the website that is a 1x1 pixel image. These images can be used alongside the :::IMAGE::: tag to create various images that can be used for styling on a website. Example: if there is an image in your colors table that is called "link", the tag to make a bar image with the IMAGE tag would be: :::IMAGE:color_link_pixel.gif:ARGS: maxwidth=250&maxheight=2&allow_upscale=1&allow_distortion=1 :::

Copyright © 2023 Peregrine Computer Consultants Corp. All rights reserved. About Lunpa, our mascot. Her mother was a hamster and her father was an ill-tempered Chilean M00se. Oddly, neither smelt of elderberries. The artist is Jennifer Lomax.
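The contrast warning mentioned above implies a relative-luminance calculation behind the scenes. As an illustration only (the exact formula LunpaCMS uses is not documented here), the standard WCAG contrast ratio between two hex colors can be computed like this:

```python
def _linear(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG definition)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(fg: str, bg: str) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# e.g. warn, as Color Manager does, when the ratio drops below 5
print(contrast_ratio("#333333", "#f0f0f0"))  # roughly 11:1
```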
Using information schema resolves #113 branched from https://github.com/dbt-labs/dbt-bigquery/pull/238 Problem As described here, the current implementation uses the project.dataset.__TABLES__ metadata table which requires elevated permissions versus the information schema tables. Service accounts provisioned for dbt usage frequently do not have access to this table. This affects dbt docs generate. Proposed solution Use project.dataset.INFORMATION_SCHEMA.TABLES instead. Ideally, TABLE_STORAGE would be used to get the number of rows + the size in logical bytes. Trade-offs and outstanding questions Is it okay to go from reporting the row count and size in bytes and then degrade to 0 for both? Is there a way to make this work without an environment variable? e.g., is there a way for the adapter to know the relevant region(s)? Are there any region-specific effects that would could cause a table/view/external to not be reflected? Is this method slower, faster, or the same? Alternatives Could we try the original method using project.dataset.__TABLES__ and fall-back to the new method in case of failure? Could we query the list of unique location values within INFORMATION_SCHEMA.SCHEMATA and then iterate through them to generate all the unique region-REGION.INFORMATION_SCHEMA.TABLE_STORAGE queries? Then we'd have the row counts and sizes in bytes? Checklist [x] I have read the contributing guide and understand what's expected of me [x] I have signed the CLA [x] I have run this code in development and it appears to resolve the stated issue [x] ~This PR includes tests, or~ new tests are not required/relevant for this PR [x] ~I have opened an issue to add/update docs, or~ docs changes are not required/relevant for this PR [x] I have run changie new to create a changelog entry Hey @dbeatty10 is this ready_for_review? Just checking :) Or do you want me to be the judge of that :D @Fleid I think that @colin-rogers-dbt might have taken a look at this after me, but I'm not sure if he was able to get a breakthrough or ran into the same walls. Trade-offs If I recall correctly, there are some trade-offs if this PR is merged as-is. Here are the key considerations : Pro: dbt docs generate works without elevated permissions Con: the row count (row_count) will be degraded to a constant 0 Con: the size in bytes (size_bytes) will be degraded to a constant 0 The degradations above would affect the backwards-compatibility for folks that do have elevated permissions today, so dbt docs generate is already working fine for them right now. Potential paths forward It sounds like General Mills has a way to get the row_count and size_bytes, but it requires knowing the relevant {{ region }}.INFORMATION_SCHEMA.TABLE_STORAGE to query from. If we can somehow infer the relevant region(s) to template within the bigquery__get_catalog query, then we can overcome both of the cons listed above. Two different ideas we could try: Could we try the original/current method using project.dataset.__TABLES__ and fall-back to the new method in case of failure? Could we query the list of unique location values within INFORMATION_SCHEMA.SCHEMATA and then iterate through them to generate all the unique region-REGION.INFORMATION_SCHEMA.TABLE_STORAGE queries? Then we'd have the row counts and sizes in bytes? Hey @dbeatty10, it looks like you're using the better regex pattern to identify shards, like discussed here. I'm guessing that's on purpose? Would this PR solves #260? 
Hey @dbeatty10, it looks like you're using the better regex pattern to identify shards, like discussed here. I'm guessing that's on purpose? Would this PR solves #260? That's awesome @Fleid ! Credit goes to @hassan-mention-me for the regex (and basically the rest of the implementation seen in this PR). I branched from his implementation in https://github.com/dbt-labs/dbt-bigquery/pull/238, and if this PR is merged @hassan-mention-me is listed in the changelog entry and commits are preserved as well. Giving https://github.com/dbt-labs/dbt-bigquery/issues/260 a re-read, it does look like the regex in this PR would solve it. However, I'd prefer to see explicit test cases added to confirm that the regex is working properly. Here are some test cases listed here that should not be considered shards: STD_MOBILITY_INDEXED_20220519163648 foo20220808 foo_bar20220808 Hey @dbeatty10, do you think we can make that one not a draft, and flag it ready_for_review? @Fleid there's two things that make me uncomfortable with marking this as being "ready for review", both of which I would consider a breaking change: the row count (row_count) will be degraded to a constant of 0 the size in bytes (size_bytes) will be degraded to a constant of 0 My opinion: we should ensure that it's non-breaking first. Here's two ideas to make it non-breaking, neither of which I have tried: Try project.dataset.__TABLES__ and fall-back to INFORMATION_SCHEMA.TABLES Try accessing the original/current method using project.dataset.__TABLES__ and fall-back to the new method in case of failure Use INFORMATION_SCHEMA.SCHEMATA Query the list of unique location values within INFORMATION_SCHEMA.SCHEMATA and then iterate through them to generate all the unique region-REGION.INFORMATION_SCHEMA.TABLE_STORAGE queries. Advantage of option 1: Anyone that has sufficient permissions and had non-zero row counts (and sizes in bytes) before would still have non-zero values after People without sufficient permissions now would have 0 for row counts (and sizes in bytes), but at least they would have everything else. Advantage of option 2: We'd have the row counts and sizes in bytes 100% of the time for all users. I hear you. I'm thinking that ready_for_review can also be about "we've tried as much as we can, but for whatever reason couldn't push it over the finish line, so let's get the team in to do it". Do you want another stab at this, or are you good with passing the baton? I'm marking this as ready_for_review to indicate that: we've tried as much as we can, but for whatever reason couldn't push it over the finish line, so let's get the team in to do it I'm passing the baton to whoever reviews this! Here's the best TLDR for you to read about proposed things to resolve: https://github.com/dbt-labs/dbt-bigquery/pull/364#issuecomment-1447217693 Now tracking at https://github.com/dbt-labs/dbt-bigquery/issues/585 If the the biggest concern is losing the row count and size, wouldn't you be able to solve that by using the INFORMATION_SCHEMA.PARTITIONS table? 
Something like the below:

SELECT
    table_catalog as table_database,
    table_schema as table_schema,
    table_name as original_table_name,
    concat(table_catalog, '.', table_schema, '.', table_name) as relation_id,
    row_count as row_count,
    size_bytes as size_bytes,
    case when table_type = 'EXTERNAL' then 'external' else 'table' end as table_type,
    regexp_contains(table_name, '^.+[0-9]{8}$') and table_type = 'BASE TABLE' as is_date_shard,
    regexp_extract(table_name, '^(.+)[0-9]{8}$') as shard_base_name,
    regexp_extract(table_name, '^.+([0-9]{8})$') as shard_name
FROM (
    SELECT
        table_catalog,
        table_schema,
        table_name,
        table_type,
        sum(total_rows) row_count,
        sum(total_logical_bytes) size_bytes
    FROM dataset_name.INFORMATION_SCHEMA.TABLES
    LEFT OUTER JOIN dataset_name.INFORMATION_SCHEMA.PARTITIONS
        USING (table_catalog, table_schema, table_name)
    GROUP BY table_catalog, table_schema, table_name, table_type
)

I'm new to this issue but my naive impression is that this looks like a good suggestion, are there any issues with doing it this way?
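One way to prototype the SCHEMATA-iteration idea discussed earlier (query INFORMATION_SCHEMA.SCHEMATA for the distinct locations, then read each region's TABLE_STORAGE view) is with the BigQuery Python client. This is only a sketch of the approach, not dbt adapter code; the project name is a placeholder and the region-defaulting behaviour should be verified against the BigQuery docs:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# 1. Find every location that holds at least one dataset.
#    Note: an unqualified INFORMATION_SCHEMA query may default to the US
#    multi-region, which is part of why this problem is tricky in the first place.
locations = {
    row.location.lower()
    for row in client.query(
        "SELECT DISTINCT location FROM `my-project`.INFORMATION_SCHEMA.SCHEMATA"
    ).result()
}

# 2. For each location, pull row counts and logical bytes from TABLE_STORAGE.
for loc in locations:
    sql = f"""
        SELECT table_schema, table_name, total_rows, total_logical_bytes
        FROM `my-project`.`region-{loc}`.INFORMATION_SCHEMA.TABLE_STORAGE
    """
    for row in client.query(sql).result():
        print(loc, row.table_schema, row.table_name, row.total_rows, row.total_logical_bytes)
```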
Cypress - v8.0.0

We've made some updates to ensure a consistent run experience across browsers. All browsers now run headlessly via cypress run, with a device pixel ratio of 1 and a screen size of 1280x720 by default. See the Migration Guide, which explains the changes in more detail and how to change your code to migrate to Cypress 8.0.

- When running cypress run previous to 8.0, some browsers would launch headed while others launched headless by default. Cypress now runs all browsers in cypress run as headless by default.
- The default screen size when running a headless browser has been reverted back to 1280x720 pixels (pre-7.0 behavior).
- When running the Chrome browser with --headless via cypress run, the device pixel ratio will now be 1 by default, matching the behavior of all other browsers. This behavior can be overridden through the browser launch API.
- Cypress now enforces version checks for browser launching and will error on cypress run rather than attempt to open unsupported browser versions. Cypress supports Chrome >= 64, Firefox >= 86, and Edge >= 79.
- Arguments returned from a chained function will no longer incorrectly be typed as jQuery.
- Cypress.RuntimeConfigOption types have been updated so that they match the JSON schema.
- You can now configure certificate authority (CA) and client certificates to use within tests on a per-URL basis via a configuration option. See Client certificates for details.
- Setting the environment variable ELECTRON_RUN_AS_NODE now starts Cypress as a normal Node.js process rather than an Electron process. See Running headless tests without Xvfb for more details.
- console.error output will now be captured in the stdout sent to the Cypress Dashboard, making it visible in the Output logs in the Dashboard.
- There are several interception fixes: the times option now works correctly, localhost is now accepted as valid, and delay now works correctly with a statusCode of 204.
- When using the experimental Cypress Studio, there should be a reduced occurrence of "Studio failed to save commands" error messages.
- cy.invoke() now retains the proper nested object methods.
- We no longer trigger unnecessary snapshot re-renders when hovering over the Command Log.

July 20, 2021
Proof of Stake

Proof of Stake is a proposed alternative to Proof of Work. Like Proof of Work, Proof of Stake attempts to provide consensus and double-spend protection (see the "main" bitcointalk thread and a related paper). Because creating forks is essentially costless when you are not burning an external resource, Proof of Stake alone is generally considered unworkable as a consensus mechanism. It was apparently first suggested by a bitcointalk user named QuantumMechanic.

Under Proof of Work, the probability of mining a block depends on the work done by the miner (e.g. computing cycles spent). Some worry that systems based on Proof of Work alone might lead to low network security in a cryptocurrency with block incentives that decline over time (as in bitcoin, due to the Tragedy of the Commons), and Proof of Stake is one way of changing the miners' incentives in favour of higher network security.

If a single entity (hereafter "a monopolist") took control of the majority of transaction-verification resources, it could use these resources to impose conditions on the rest of the network. Potentially, the monopolist could choose to do this in malicious ways, such as double-spending or denying service. If the monopolist chose a malicious strategy and maintained its control for a long period, confidence in bitcoin would be undermined and bitcoin's purchasing power would collapse. Alternatively, the monopolist could choose to act benevolently. A benevolent monopolist would exclude all other transaction verifiers from fee revenue and currency generation, but would not try to harm currency holders in any way: in order to maintain a good reputation, it would refrain from double-spends and provide service as usual. In this case, confidence in Bitcoin could be maintained under monopoly, since all of its useful functions would remain intact.

Both malicious and benevolent monopoly are potentially profitable, so there are reasons to expect that an entrepreneurial miner might attempt to become a monopolist at some point, and such attempts become more likely over time.

Monopoly is still possible under proof-of-stake. However, proof-of-stake would be more robust against malicious monopoly for two reasons. Firstly, proof-of-stake makes establishing a monopoly more expensive. At the time of writing, an entity could achieve monopoly over proof-of-work by investing at most around 10 million USD in computing hardware. The actual investment necessary might be less than this because other miners would exit as difficulty rises, but it is difficult to predict exactly how much. If prices remained constant in the face of very large purchases (unlikely), an entity would need to invest at least around 20 million USD to obtain monopoly under proof-of-stake. Since such large purchases would sharply increase the bitcoin price, the entity would likely need to invest several times this amount. So even now, proof-of-stake monopoly would be several-fold more costly to achieve than proof-of-work monopoly. Over time the comparison of monopoly costs will become more and more favourable to proof-of-stake: the ratio of bitcoin's mining rewards to its market capitalization is programmed to decline exponentially. As this happens, proof-of-work monopoly will become cheaper and cheaper to achieve, whereas achieving proof-of-stake monopoly will become increasingly costly as more of the total monetary base is taken out of circulation.

Secondly, and perhaps more importantly, a proof-of-stake monopolist is more likely to behave benevolently exactly because of his stake in Bitcoin. In a benevolent monopoly, transactions proceed as usual, but the monopolist earns all transaction fees and currency generation while other verifiers are squeezed out of the system. Since mining is not the source of demand for bitcoin, bitcoin might retain most of its value in the event of a benevolent attack, and earnings from such an attack are similar regardless of whether it occurs under proof-of-stake or proof-of-work. In a malicious attack, the attacker may have some outside gain from bitcoin's destruction (double-spends are not a plausible motivation; ownership of a competing payment platform is), but at the same time he faces costs related to his holdings of bitcoin-specific assets, which become worthless. It can be assumed that a malicious attack causes the exchange value of bitcoin to fall towards zero. After such an attack, a proof-of-stake monopolist loses his entire stake, whereas a malicious proof-of-work monopolist can recover much of his hardware investment through resale. Recall also that the necessary proof-of-work investment is much smaller than the proof-of-stake investment. Thus the costs of a malicious attack are several-fold higher under proof-of-stake, and the low costs associated with proof-of-work monopoly make a malicious attack more likely.

In a competitive market equilibrium, the total volume of transaction fees must equal the opportunity cost of all resources used to verify transactions. Under proof-of-work mining, opportunity cost can be calculated as the total sum spent on mining hardware, hardware depreciation, electricity, and a market rate of return on mining capital. Electricity costs, returns on mining equipment, and equipment depreciation are likely to dominate here; if these costs are not substantial, then it will be extremely cheap to monopolize the mining network. Under pure proof-of-stake, opportunity cost can be calculated as the total sum spent on the labour of maintaining nodes plus the market interest rate for risk-free bitcoin lending (hardware-related costs will be negligible). Since bitcoins are designed to appreciate over time due to hard-coded supply limitations, interest rates on risk-free bitcoin-denominated loans are likely to be low. Therefore, the total volume of transaction fees under pure proof-of-stake just needs to be sufficient to compensate the labour involved in maintaining bandwidth and storage space, and the equilibrium fees will be correspondingly low. Despite these low fees, a proof-of-stake network would be many times more costly to exploit than the proof-of-work equivalent: a proof-of-work network can be exploited with an investment roughly equal to about one year's worth of currency generation and transaction fees, whereas exploitation of a proof-of-stake network requires purchase of a majority (or near majority) of all coins in circulation.

A proposed implementation

I am updating my proposal with a new system which I believe to be much improved. The new system is a substantially modified version of Coblee's Proof of Activity proposal. It provides extremely strong protection against proof-of-work attacks, both double-spends and denial of service, and it remains robust even if attackers also hold considerable but non-majority stake. It provides incentives to operate full nodes. The system is funded through fees levied on coin holders who decline to maintain full nodes, and this revenue is redistributed to coin holders who do maintain full nodes. The maintenance of full nodes is the key service providing security in the system. The proposal focuses on long-term maintenance; the initial distribution of coins could occur through proof-of-work mining, an IPO mechanism, or a more complex scheme that allocates initial coins to both proof-of-work miners and businesses voted for by coin holders. The question of initial distribution is separate from long-term maintenance, and it is confusing to discuss the two together.

Voluntary signatures – Signatures are requested through a random selection process. As blocks are mined, keys are selected at random and asked to provide signatures. The signatures demonstrate that a particular key holder is running a full node, since only a full node can observe the selection and respond by signing with the private key.

Active keys – By default, public keys that appear in the blockchain are active if they hold a balance of at least one full coin. Keys remain active by providing signatures when randomly selected. Active public keys are eligible to participate in the lotteries that sign proof-of-work blocks and mine proof-of-stake blocks. Keys that fail to provide signatures when selected become dead keys.

Dead keys – Keys that have failed to provide signatures lose lottery eligibility. Keys with balances of less than one coin are considered dead by default. Dead keys can no longer mine proof-of-stake blocks, but they can still be used to sign transactions. Maintenance is funded primarily through demurrage fees levied on transactions sent from dead keys. Once funds are sent using a dead key, the key can become active again provided it retains a balance of at least one coin.

Mandatory signature lottery – In order for a proof-of-work block to be valid and enter the blockchain, it must be signed by a number of (e.g. five) randomly selected active keys. The final key in the sequence mines a proof-of-stake block.

Coin-age – Coin-age refers to the age of transaction inputs. Coin-age is equal to the number of coins sent multiplied by the average age of those coins, with age measured in blocks. Age is reset to one whenever a coin is sent and whenever a key provides a signature (both mandatory and voluntary signatures count). Coin-age is used to calculate demurrage fees.

Demurrage fee – Maintenance is funded primarily through a demurrage tax on transaction inputs. This tax is proportional to the accumulated coin-age, measured in coin-years. Active keys can avoid demurrage fees entirely by remaining active; dead keys must pay demurrage. The desire to avoid demurrage motivates activity.

Optional fee – Users can add a fee to compete for block space. Miners prioritize transactions with larger fees. If mandatory fees alone are insufficient to get a transaction included, the sender can add an optional fee.

Fee pool – Both demurrage fees and optional fees enter a pool, rather than being paid directly to miners. Fees are paid out of the pool gradually over subsequent blocks, so there is little incentive to manipulate which block a fee-paying transaction lands in. The proof-of-work miner receives a small share of the pool, the next several signatories each receive small shares, and the proof-of-stake block signer receives a share as well. Use of a pool reduces volatility in block rewards.

Master private key – A designated master private key retains full validation and signing authority.
PicoPOST firmware development The RP2040 is a very powerful MCU, which allows for some fascinating results in an impressively small form factor. This dual-core ARM Cortex running at 125 MHz (or even more!) hits basically all the sweet spots for making a powerful and flexible PC diagnostic solution. I've decided to use C++ for this project, despite being "heavier" than pure C, because of some QoL features that simplify a lot of the otherwise hard-wired logic. But hey, at least it's not MicroPython! The application starts obviously from the main() function. It doesn't do much other than initializing peripherals, launching the application logic in the second core and start dealing with user interface in the primary core. This class is instantiated as a singleton, as an extra assurance there can't be multiple objects trying to run the Pico. When first generated, the constructor initializes the various hardware and firmware facilities required for operation: - A fail-safe 150 milliseconds startup delay is in place. When powered by the ISA bus, I2C devices may take some time to stabilize their own power supply, eventually causing the subsequent initialization steps to fail. - The 1024-place inter-core data queue is initialized. This queue will be used to send data from the data reader core to the UI task in the other core. - The primary I2C bus is initialized at 400 kHz. While both the GPIO expander and the OLED display would be able to scale up to faster clock speeds, we're keeping it slightly more conservative to compensate for lower quality cabling between the PicoPOST main board and the remote. If your device often fails to correctly set up I2C devices, you may want to tune the bus pull-up resistors on the remote PCB, or you can try lowering the bus clock rate to 200 or even 100 kHz. - I2C GPIO expander initialization. PicoPOST will now look for an MCP23009 GPIO expander on the I2C bus. In PCB Rev. 6 and newer, it's a fundamental device, as it's used for the remote keypad and for configuring the display options. By default, failure to find the GPIO expander will cause a fatal error condition, which results in the Pico halting execution. The Pico's on-board LED will keep flashing constantly with 2 quick flashes. By enabling the PICOPOST_SUPPORT_REV5compiler flag, instead of halting, the application will keep loading, defaulting back to the older Rev. 5 keypad style, which uses only built-in Pico's GPIO pins. At the time of writing, the GPIO keypad polling routine has not been implemented, so this option should be avoided at all costs. If you have an older Rev. 5 board, you should be using an older firmware version, like 0.3.0. - I2C OLED display initialization. Before starting detection, the configuration pins from the GPIO expander are read to determine how to set up the OLED display. The most important bits are controller type and display size. By default, if nothing is set, the firmware attempts to initialize an SSD1306 128x32 display. The config pins on the remote PCB can be used to switch the controller type to SH1106 and display size to 128x64. By default, failure to find the OLED display will cause a fatal error condition, which results in the Pico halting execution. The Pico's on-board LED will keep flashing constantly with 2 quick flashes. By enabling the PICOPOST_USB_FALLBACKcompiler flag, instead of halting, the application will keep loading, entering the 80h port reader and outputting meaningful data via the virtual UART exposed via the Pico's own USB port. 
If no link is detected within 500 milliseconds, the firmware will enter a zombie state, where nothing really happens. - On-board LED goes steady on! If all the initialization steps succeeded, the Pico's on-board LED will turn on and the main menu will be shown on the remote.
5.4 Describe security password policies elements, such as management, complexity, and password alternatives (multifactor authentication, certificates, and biometrics)

How can we keep unauthorized users out? One strategy is to require strong passwords. This is known as password complexity.

- A password policy may require a user to have capital letters, numbers, special symbols and/or lowercase letters in their password
- A user cannot repeat a character
- A user cannot use a dictionary word or their name in the password
- The password must have a minimum length

Passwords that are easy to guess represent security risks because they can be broken by brute force. We can manage user passwords centrally through an Active Directory or RADIUS server.

A good password policy
- Requires users to choose complex passwords (see the example at the end of this section)
- Requires users to change their passwords often (at least every three months)
- Locks a user account if the password is entered incorrectly several times in a short period

But even complex passwords can be guessed or seen by unauthorized users. As phishing and social engineering attacks grow more sophisticated, it is more likely that a user will be tricked into giving up his credentials without realizing it. How can we keep our network safe if hackers can trick our users into handing over their passwords? There are three ways: multifactor authentication, certificates, and biometrics.

Multifactor authentication means having to provide more than just your username and password. The principles of multifactor authentication (formerly two-factor authentication) are important. The three main factors are Something You Are, Something You Have, and Something You Know. Basic authentication methods combine Something You Have (a username/access card) with either Something You Know (a password) or Something You Are (a biometric).

- Something You Are – something you are refers to a biometric identity such as facial recognition, fingerprints, voice recognition, or a retinal scan. Select the best type of biometric for your environment. A construction site or hospital may have employees wearing gloves or masks, which can interfere with some biometric readers.
- Something You Have – something you have refers to a smartcard, identification card, or username; it could also refer to a randomly generated password (such as an RSA SecurID or authenticator app)
- Something You Know – something you know refers to a password or PIN
- Somewhere You Are – somewhere you are refers to your physical location. In the case of connecting to the internet, somewhere you are is your IP address. If a hacker compromises a username/password and logs in through a computer or network location that is not recognized, then the login may be denied. Websites have sophisticated ways of detecting users – IP address, web browser version, computer version, date/time of the login, and other user behaviors. If the username/password is correct but the other factors aren't, it could be that the account was compromised, or it could be that the user is travelling or bought a new computer. The site can ask the user for additional verification (such as through an automated phone call)
- Something You Do – something you do is an observation of the user's actions or behaviors. In Windows a user can choose a picture password; on an Android phone the user can interact with a pattern.

Instead of entering a username and password, a user can present a certificate to the authentication server. A certificate is a digital file that confirms the identity of a user or device. A certificate must be signed by a certification authority.
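Here is the small example referred to above: a minimal sketch (not from the study guide) of enforcing the kind of complexity rules described - minimum length, mixed character classes, no back-to-back repeated characters, and no use of the account name. The exact thresholds are illustrative choices, not part of any standard.

```python
import re

def meets_policy(password: str, username: str, min_length: int = 12) -> bool:
    """Return True only if the password satisfies this illustrative policy."""
    if len(password) < min_length:
        return False
    if username.lower() in password.lower():
        return False
    # Require at least one lowercase, uppercase, digit and special symbol.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    if not all(re.search(c, password) for c in classes):
        return False
    # Reject immediately repeated characters (e.g. "aa" or "11").
    if re.search(r"(.)\1", password):
        return False
    return True

print(meets_policy("Tr4ck-Lamp-9z!", "jsmith"))  # True
print(meets_policy("password123", "jsmith"))     # False (too short, no upper/special)
```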
IEEE 802.1X is a standard for Network Access Control. It allows a device to authenticate when connecting to a LAN or WAN. There are three devices in the protocol:
- The supplicant is the device that chooses to connect to the LAN/WAN. It could be a laptop, desktop, smartphone, tablet, or other computing device.
- The authenticator is a network device that allows/denies access. It could be a switch, a router, a firewall, or a proxy server.
- The authentication server is a server that decides whether a device should be granted access.

The procedure works as follows:
- The supplicant connects to the network.
- The authenticator (switch) detects the new supplicant and automatically sets the port to an unauthenticated status. Only traffic related to 802.1X is permitted.
- The authenticator sends frames to the supplicant. These frames demand that the supplicant provide credentials such as a user ID. The frames are sent on the local network segment to a specific address (01:80:C2:00:00:03). The supplicant listens for messages on this address.
- The supplicant replies to the message with an EAP-Response Identity frame.
- The authenticator sends the supplicant's response to an authentication server.
- The authentication server and the supplicant negotiate an authentication method. The server and the supplicant may support different methods and must agree on one that both understand. The negotiation messages are transported through the authenticator.
- The authentication server attempts to authenticate the supplicant. If successful, the authenticator changes the port status to authorized. If unsuccessful, the authenticator keeps the port status as unauthorized.

When the supplicant logs off or is disconnected, the authenticator changes the port status back to unauthorized. When the supplicant logs off, it sends an EAPOL-Logoff message to the authenticator.

Biometrics are used in combination with other devices to provide an additional layer of authentication. These include:
- Facial recognition
- Fingerprint reader
- Voice recognition
- Palm reader
- Retinal scan

A biometric reader takes a photograph of a human body part and then converts it into a mathematical model. For example, a fingerprint reader understands the bumps and ridges on a fingerprint and compares their relative sizes. There are many different algorithms, and each one works differently. Not every scan is perfect. Most biometrics have some rate of false positives because of the algorithm. The false positive rate for a fingerprint sensor is approximately 1 in 50,000. A biometric reader does not (and cannot) create a pixel-by-pixel comparison of a person. Imagine taking a photograph of your face 100 times. Each photo will be slightly different. The lighting, the reflection, the angle of your head, and the position of your hair will be slightly different each time.

What are some pros and cons of the different biometric devices?
- Fingerprint Scanner
- A fingerprint scanner maps a person's fingerprint and converts it into a mathematical signature. This signature is stored.
- It later compares new scans to the original mathematical signature.
- Advanced fingerprint scanners can verify that a real finger has been scanned (as opposed to a mold of a finger)
- Fingerprint scanners are cheaper than other biometric sensors
- Retinal Scan
- A retinal scan uses a laser to examine the blood vessels in the back of the eye
- Retinal scans are unpopular because they require a user to have a laser shined into his eye; the user must also put his eye up against the sensor
- An iris scan photographs the front of the eye from a distance
- Iris scanners are more popular than retinal scanners
- Voice Recognition
- Voice recognition is hard to implement
- Voice recognition sensors have a high rate of false positives and false negatives
- Facial Recognition
- Facial recognition scans features that are present on the user's face
- Facial recognition systems work well
OPCFW_CODE
How can you tell whether the binary number of a character is odd or even? I'm not sure I understand the bit operators. I need to write a simple script that will ask the user to enter a character, count the number of binary ones, and tell whether that count is odd or even. I just want some help with the bit operators; the rest I can figure out. I know you will be using a loop but I keep getting the wrong value.

First you need to understand what you are being asked to do. From your description of the question you are _not_ being asked whether the value of a character or an individual bit is odd or even, but whether the number of bits in the character that are 1 is odd or even. Thus:

'A' has the ASCII value 65, or 41 hexadecimal, or 0100 0001 binary. The number of 1 bits in this character is 2 and thus 'A' gives an even result.
'1' has the ASCII value 49, or 31 hexadecimal, or 0011 0001 binary. The number of 1 bits in this character is 3 and thus '1' gives an odd result.

Next you have to think how to count the number of 1 bits in the character. You are correct when you say you will require a loop. Each time around the loop you have to add 1 to the count if the bit in question is 1, or 0 (i.e. no change) if it is not. There are several ways you could do this, however you need to check only one bit each time. The operation that allows this is called a mask operation and is performed using a bitwise AND operation. The Boolean AND operation has the following truth table (view using a fixed pitch font such as Courier):

Result = a AND b

a b Result
0 0 0
0 1 0
1 0 0
1 1 1

That is, both operands have to be 1 (or true) to give a 1 (or true) result. The bitwise logic operators apply AND, OR, XOR, NOT operations on a bit-by-bit basis for the word operands. Like many other C and C++ operators they have equivalent forms that combine the operation with assignment. The general operator is used like so (AND in this case):

Result = a & b;

But expressions of the form:

Result = Result & b;

can be expressed using the operate-and-assign equivalent:

Result &= b;

Now back to mask operations. We create a mask, that is, a value that has only the bits we are interested in set to 1. When it is used in a bitwise AND operation with the value being masked, the result is that all bits that are 0 in the mask will be 0 in the result and all bits that are 1 in the mask will take the value of the equivalent bit in the value being masked. Thus if we use a mask of 0000 1111, then performing a bitwise AND with the value 1010 1010 will result in 0000 1010:

  1010 1010
& 0000 1111
-----------
  0000 1010

Using a mask of 0011 1100 on the value 0101 0101 yields the result 0001 0100:

  0101 0101
& 0011 1100
-----------
  0001 0100

Note: I am using binary above and elsewhere as it is easier to see what is going on. However C++ does not allow representing literal numeric values as binary, so we tend to use hexadecimal (or octal) literal values instead, such as 0x0f rather than 0000 1111. This is also why I show binary values in groups of 4 bits; each group of 4 bits translates into a single hexadecimal digit.

In our case we need to test each bit in turn, so our mask or masks should have only 1 bit set. A mask of 0000 0001, when bitwise ANDed with the character value, will give us 0000 0001 or 0000 0000 depending on whether the least significant bit is 1 or 0. If it is 1 then we can add one to our count of 1 bits. So what of the other bits in the character? Well, we could create a mask for each bit: 0000 0001, 0000 0010, 0000 0100, etc.
Alternatively, we could create the initial mask 0000 0001 and then shift it left using the left shift operator << (yes, I know this is the C++ stream insertion operator; C++ usurped the original shift-left and shift-right meanings of the << and >> operators for stream insertion and extraction). << and >> have shift-and-assign equivalents <<= and >>=. Shifting the mask left one bit position each time will create a mask for each bit in the character in turn, thus:

0000 0001 << 1 gives 0000 0010
0000 0010 << 1 gives 0000 0100
0000 0100 << 1 gives 0000 1000
...
0100 0000 << 1 gives 1000 0000
1000 0000 << 1 gives 0000 0000 (1 bit shifted out of the word, yielding 0)

This leaves us with masked results of 0 or 1, 0 or 2, 0 or 4, etc. as each bit in the character is tested. So we would need something like the following to maintain the count and shift the mask left within the loop:

if ( character_value & mask )
    ++bit_count;
mask <<= 1;

Assuming mask is the same type as character_value (i.e. char), the loop should terminate when the mask value shifts beyond the most significant bit, yielding a zero result. The mask value would be initialised to 1. The if-statement condition makes use of the C and C++ definition of true as being any non-zero value.

We can turn this scheme around and always use a mask value of 1 but shift the bits of the character value right 1 bit position each time. In this case we lose the least significant bit and the next significant bit becomes the least significant bit. In effect we shift each bit in the character into the least significant bit position:

1001 0110 >> 1 gives 0100 1011
0100 1011 >> 1 gives 0010 0101
0010 0101 >> 1 gives 0001 0010
...
0000 0010 >> 1 gives 0000 0001
0000 0001 >> 1 gives 0000 0000 (last 1 bit shifted out, giving 0)

We then just test the least significant bit for 0 or 1. As it is always the least significant bit we are testing, the result of the operation at the word (char) level is also 0 or 1. We can therefore directly add this result to the count, thus:

bit_count += character_value & mask;
character_value >>= 1;

Again the mask value would be initialised to 1, however this time it is constant. In which case a better name should be considered, maybe:

bit_count += character_value & LSB_Mask;
character_value >>= 1;

where LSB means least significant bit, and it could be declared like so:

const char LSB_Mask(1);

This version is quite neat but it does destroy the original value of character_value, which may be a bad thing. In this case the loop can quit as soon as there are no more 1 bits to count, so we can combine the above with the loop logic:

for ( char v(character_value); v != 0; v >>= 1 )
    bit_count += v & LSB_Mask;

The loop also takes a copy of the original character value to work on, to get around the problem of destroying the original value of the character. The count value should be initialised to 0 and LSB_Mask is constant and initialised to 1 as before. Each time around the loop the value is tested to see if there are more 1 bits to count. If there are not, then v will be 0. After each iteration v is updated by shifting it right one place, which is done as the 3rd expression in the for-statement.

Note that you can use LSB_Mask with a bitwise AND operation to test the bit_count value to see if it is odd or even. Odd numbers when expressed in binary _always_ have a 1 value for the least significant bit. Hope this helps.
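Putting the pieces above together, a complete program might look like the following sketch. It is not the poster's exact code: the prompt text is invented, and the working copy is declared unsigned char (my choice) so that right-shifting never sign-extends.

#include <iostream>

int main()
{
    char character_value;
    std::cout << "Enter a character: ";
    std::cin >> character_value;

    const char LSB_Mask(1);
    int bit_count(0);

    // Work on an unsigned copy so right shift never sign-extends; shift
    // until no 1 bits remain, adding the least significant bit each pass.
    for (unsigned char v(character_value); v != 0; v >>= 1)
        bit_count += v & LSB_Mask;

    std::cout << "Number of 1 bits: " << bit_count << '\n';

    // An odd count always has its own least significant bit set.
    if (bit_count & LSB_Mask)
        std::cout << "The number of 1 bits is odd\n";
    else
        std::cout << "The number of 1 bits is even\n";

    return 0;
}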
OPCFW_CODE
Imagine this. It’s a Friday night and you’re getting ready with your friends to go out for a much deserved night out in the city. You’ve got bottles of liquor waiting to be consumed and just want to have a great time. Here are three drinking games perfect for the occasion (or any other). - Kings Cup As you may know, there are various versions of this game. Usually, people just play by house rules. Well here are ours. To set up, put a closed can of beer in the center of a table and spread a deck of cards around it in a circle. Make sure the circle is closed and there are no gaps! Next, have the players circle around the table, each with their drink of choice. The game starts when the first player picks their first card from the circle. Ace: Waterfall - Everyone must drink until the person to the left of them (starting with the person who picked the card) stops. 2: You - The player picks someone else to drink. 3: Me - The player has to take a drink. 4: Floor - The last person to put their hand on the floor has to drink. 5: Chicks - All the female players must drink. 6: Dicks - All the male players must drink. 7: Heaven - The last person to put their hand up (toward heaven) has to drink. 8: Mate - The player picks a “mate”. This means each time either of them drinks, they both have to drink. 9: Rhyme - The player says a word. Going around the table clockwise, each player must say a new word that rhymes with the first. The first player to hesitate or mess up has to drink. 10: Categories - The player picks a category (ex: shoe brands, colors, animals, etc.) and going around the table clockwise, each player must say a new word that fits in that category. Again, the first player to hesitate or mess up has to drink. Jack: Never Have I Ever - Each player puts up 5 fingers. Going clockwise, starting with the player who picked the card, each player says one thing they have never done. Those who have done it before put a finger down. The first person to have all of their fingers down has to drink. Queen: Question Master - Whoever answers any of this player’s questions with an answer besides “F you” or flipping them off has to drink. This lasts until the next Queen is drawn. King: The player creates a rule for the game. Some examples are: Drink if you say the word “drink”, Drink if you touch your face, etc. After drawing a card, each player must put their card under the pull tab of the beer can in the middle of the table. Whoever’s card causes the can to open even a little bit has to chug the beer. Replace the beer can and continue the game. The same rule applies for the person who breaks the circle of cards! - Cheers to the Governor! Don’t have a deck of playing cards? This game is simple and doesn’t require anything but you, your friends, and a few drinks. Have your group form a circle and go in order (clockwise or not, your choice!) counting from the number 1 up to the number 21. Each time the group counts up to 21, you all say “cheers to the governor!” and take a swig. The person who counted number 21 gets to make up a rule for any number of his or her choosing. For example, instead of saying 7, you now have to say brand of beer, or you can switch the number 3 with number 11. Be creative! Now it gets harder to count up to 21, as your memory is tested trying to remember each new rule. Each time the group reaches 21, a new rule is created for a new number. The best part of the game is that whenever someone forgets a rule, they have to drink and the group starts over at 1. 
The game gets better the more you have to drink! - Cards Against Humanity as a Drinking Game Enjoy one of your favorite games with your friends, but add a little twist to it. If you lose a round or are the person picking the best card, take a drink! These three games will let you have a great time while getting drunk and bonding with your friends on a whole new level.
OPCFW_CODE
Unable to parse JSON from StreamTransformer on unending stream

New to Dart. It seems that .transform(JsonDecoder()) will hang until the stream is closed, or throw an error if it starts to see a new JSON object. I could cache the entire strings and parse them that way, but I would like to take advantage of the stream and not store more than is needed in memory. Is there a way to get the JsonDecoder to push an object to the sink as soon as it gets a complete valid JSON object? I've tried extending some of the internal classes, but only got a private library error. https://github.com/dart-lang/sdk/blob/1278bd5adb6a857580f137e47bc521976222f7b9/sdk/lib/_internal/vm/lib/convert_patch.dart#L1500 . This seems to be the relevant code and it's really a pain in my butt. Would I need to create a dummy stream or something?

What does your JSON object look like? If it is just one big JSON array, Dart's default JSON parser will need to have the full object before it can return it. There might be some package on pub.dev which can parse more lazily, but I don't know of one. An easy optimization you can do, if you are in control of the JSON, is to have one JSON object per line, since that makes it easier to process as a stream of JSON objects.

It's newline-terminated JSON-RPC 2.0; it's just that sometimes the responses are large enough not to fit in one chunk, otherwise I would just use .map(jsonDecode). I think I'll need to write my own JSON decoder to deal with this, which sucks because everything is already in the SDK... Maybe I can update https://github.com/llamadonica/dart-json-stream-parser which seems to do what I want...

If the input is newline separated, you can do:

Stream jsonObjects = inputStream
    .transform(utf8.decoder) // if incoming is bytes
    .transform(const LineSplitter())
    .map(jsonDecode);

The JsonDecoder converter only works on a single JSON value, because the JSON grammar doesn't allow more than one value in a JSON source text. The LineSplitter will buffer until it has an entire line, then emit one line at a time, so if each JSON message is on a line by itself, each event from the line-split stream is a complete JSON value.

The issue with this is that the JSON may or may not be entirely in the chunk, so I cannot just map to decode. I guess I'll just bite the bullet and cache the entire string.

The LineSplitter ensures that the resulting stream contains one string per incoming line, even if that line was split across multiple chunks. It buffers until it has an entire line, then emits one line at a time. Try it, it will work!
STACK_EXCHANGE
using Xunit; using System.Reflection; using Dahomey.Json.Attributes; using System.Text.Json; namespace Dahomey.Json.Tests { public class ClassMemberModifierTests { public class ObjectWithPrivateProperty { public int Id { get; set; } [JsonProperty] private int PrivateProp1 { get; set; } private int PrivateProp2 { get; set; } public ObjectWithPrivateProperty() { } public ObjectWithPrivateProperty(int privateProp1, int privateProp2) { PrivateProp1 = privateProp1; PrivateProp2 = privateProp2; } public int GetProp1() { return PrivateProp1; } public int GetProp2() { return PrivateProp2; } } [Fact] public void TestWritePrivateProperty() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); ObjectWithPrivateProperty obj = new ObjectWithPrivateProperty(2, 3) { Id = 1, }; const string json = @"{""Id"":1,""PrivateProp1"":2}"; Helper.TestWrite(obj, json, options); } [Fact] public void TestReadPrivateProperty() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); const string json = @"{""Id"":1,""PrivateProp1"":2}"; ObjectWithPrivateProperty obj = Helper.Read<ObjectWithPrivateProperty>(json, options); Assert.NotNull(obj); Assert.Equal(1, obj.Id); Assert.Equal(2, obj.GetProp1()); Assert.Equal(0, obj.GetProp2()); } public class ObjectWithPrivateField { public int Id; [JsonProperty] private int PrivateProp1; private int PrivateProp2; public ObjectWithPrivateField() { } public ObjectWithPrivateField(int privateProp1, int privateProp2) { PrivateProp1 = privateProp1; PrivateProp2 = privateProp2; } public int GetProp1() { return PrivateProp1; } public int GetProp2() { return PrivateProp2; } } [Fact] public void TestWritePrivateField() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); ObjectWithPrivateField obj = new ObjectWithPrivateField(2, 3) { Id = 1, }; const string json = @"{""Id"":1,""PrivateProp1"":2}"; Helper.TestWrite(obj, json, options); } [Fact] public void TestReadPrivateField() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); const string json = @"{""Id"":1,""PrivateProp1"":2}"; ObjectWithPrivateField obj = Helper.Read<ObjectWithPrivateField>(json, options); Assert.NotNull(obj); Assert.Equal(1, obj.Id); Assert.Equal(2, obj.GetProp1()); Assert.Equal(0, obj.GetProp2()); } public class ObjectWithReadOnlyField { public readonly int Id; public ObjectWithReadOnlyField() { } public ObjectWithReadOnlyField(int id) { Id = id; } } [Fact] public void TestWriteReadOnlyField() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); ObjectWithReadOnlyField obj = new ObjectWithReadOnlyField(1); const string json = @"{""Id"":1}"; Helper.TestWrite(obj, json, options); } [Fact] public void TestReadReadOnlyField() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); const string json = @"{""Id"":1}"; ObjectWithReadOnlyField obj = Helper.Read<ObjectWithReadOnlyField>(json, options); Assert.NotNull(obj); Assert.Equal(0, obj.Id); } public class ObjectWithConstField { [JsonProperty] public const int Id = 1; } [Fact] public void TestWriteConstField() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); ObjectWithConstField obj = new ObjectWithConstField(); const string json = @"{""Id"":1}"; Helper.TestWrite(obj, json, options); } [Fact] public void TestReadConstField() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); const 
string json = @"{""Id"":1}"; ObjectWithConstField obj = Helper.Read<ObjectWithConstField>(json, options); Assert.NotNull(obj); } public class ObjectWithStaticField { [JsonProperty] public static int Id = 1; } [Fact] public void TestWriteStaticField() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); ObjectWithStaticField obj = new ObjectWithStaticField(); const string json = @"{""Id"":1}"; Helper.TestWrite(obj, json, options); } [Fact] public void TestReadStaticField() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); const string json = @"{""Id"":1}"; ObjectWithStaticField obj = Helper.Read<ObjectWithStaticField>(json, options); Assert.NotNull(obj); } public class ObjectWithStaticProperty { [JsonProperty] public static int Id { get; set; } = 1; } [Fact] public void TestWriteStaticProperty() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); ObjectWithStaticProperty obj = new ObjectWithStaticProperty(); const string json = @"{""Id"":1}"; Helper.TestWrite(obj, json, options); } [Fact] public void TestReadStaticProperty() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); const string json = @"{""Id"":1}"; ObjectWithStaticProperty obj = Helper.Read<ObjectWithStaticProperty>(json, options); Assert.NotNull(obj); } private class Tree { public const string Id = "Tree.class"; public readonly string Name = "LemonTree"; public static int WhatEver = 12; } [Fact] public void TestWriteByApi() { JsonSerializerOptions options = new JsonSerializerOptions(); options.SetupExtensions(); options.GetObjectMappingRegistry().Register<Tree>(objectMapping => { objectMapping.AutoMap(); objectMapping.ClearMemberMappings(); objectMapping.MapMember( typeof(Tree) .GetField(nameof(Tree.Id), BindingFlags.Public | BindingFlags.Static), typeof(string)); objectMapping.MapMember(tree => tree.Name); objectMapping.MapMember(tree => Tree.WhatEver); }); Tree obj = new Tree(); const string json = @"{""Id"":""Tree.class"",""Name"":""LemonTree"",""WhatEver"":12}"; Helper.TestWrite(obj, json, options); } } }
STACK_EDU
GAS price structure

How do I decide the gas price while writing a smart contract? And how do I know how much gas is going to be used for each token transaction? Also, on https://ethgasstation.info/ the mentioned prices are SafeLow (<30m) 1, Standard (<5m) 4, Fast (<2m) 20. Is that per transaction? Big confusion, can anyone help with this?

How do I decide the gas price while writing a smart contract? There are two things you need to think about: when writing your contract, the amount of gas your contract takes; when interacting with your contract on the blockchain, the gas price. You don't decide the gas price when writing the contract, but you want to minimise the price you have to pay when you come to use your contract later on by minimising the gas your contract uses. To do this you need to make your contract as efficient as possible, to ensure it uses the smallest number of EVM instructions possible. See: What is meant by the term "gas"?

How do I know how much gas is going to be used for each token transaction? That depends on how well you have written your contract. An efficient contract will minimise the gas used. (This isn't the same as the gas price.) See: How do I know how much gas to use when calling a contract? See: How were gas costs chosen for the Ethereum Virtual Machine instructions?

On https://ethgasstation.info/ the mentioned prices are SafeLow (<30m) 1, Standard (<5m) 4, Fast (<2m) 20. When you send a transaction, these values give you an estimate of how long that transaction will take for different gas prices. In your example, if you want your transaction to go through quickly, within 2 minutes, you would use the gas price under "Fast". If you're not in a rush, and don't mind waiting up to 30 minutes, you would use the gas price under "SafeLow". See: Determine network congestion, min required gas/gas price based on current conditions

Is that per transaction? Yes.

The gas price is the value paid to miners; mostly the miner will decide what the price is for each step. While writing, you need to focus on gas: each step is chargeable in ETH. When generating a transaction, you need to specify the gas price. You can get the current one using web3.eth.gasPrice in web3.js. For more details: https://www.reddit.com/r/ethereum/comments/6gycvw/difference_between_increasing_gas_vs_gas_price/
STACK_EXCHANGE
I'm new to this forum & to OTA. I've read *almost* all the posts in this thread which is helping me a lot in deciding how to setup my OTA. First I would like to thank the many contributor to this Montreal thread. I have noticed some postings that don't give much information when asking their questions, so I will try to give maybe more info without (hopefully) going overboard. lol I have just finished setting up my own HTPC using MythTV and a Hauppauge HVR-1600 A/D card. I'm into PCs and electronics and I'd say I'm quite handy with tools/building stuff. I also installed my Bell dish setup on the roof myself & I'm into FTA (but thats for another forum So far, I've managed to receive only CIVM-HD with my "Bunny-ears" antenna. I know, it's a VHF antenna & used indoors so I should be happy I was even able to get anything at all. Analog seems to work fine. This will be my main/only tuner. I have a few questions regarding my locale/situation: I've had a look at TVFOOL. I put 28' as height as the roof of my triplex is 23', so I added a few for say a 6' mast. Seems I'm lucky enough to be east of the Mt-Royal's shadow to get the US feeds. But I don't quite understand the local results. According to them, all I can get is V (aka TQS) 42.1 but I know for a fact I can get CIVM-HD (TeleQuebec). And where are SRC & CBC? Is it just me or is their info wrong/outdated? And what the !?!! is HDTV (ch15) about? Is there an 'official' list somewhere of Montreal HD channels with both real & virtual channels? For an antenna, I was planning on building a DBGH. Not sure which one yet. Am I right to assume it would work well for me to get the US stations? What about the local stations? Will I need to combine a second antenna? Would the new DBGH with Narods help me get WVNY-HD? From what I read here it doesn't seems easy to receive being on the VHF-HI band. I would like to use my existing RG-6 cable used for Bell for OTA. I'm hoping to setup the antenna mast right behind my dish. I'm not the only sat user on this roof, so room is limited. I'm only using about 25' or so of RG-6. Is there much loss using Sat/OTA joiners/splitter? My HDTV tuner card has separate inputs for HDTV & Analog. Does that mean I'm stuck using a splitter no matter what I do? I guess thats something I can ask in another thread on Digital Home concerning HTPC setups. I guess that should be enough for this post.
OPCFW_CODE
How is SAN storage dynamically configured in a Microsoft environment and how does the OS attach the storage once it is zoned and reachable by that MS server? Essentially a SAN is just another device to Microsoft OS. But, the current thinking is that the SAN volume(s) needs to be represented to the server via a virtualization technique. One virtualization engine I happen to like at present is DataCore Software, so I referred your question to them for a more explicit answer. Here's what they say: "In Microsoft environments that are powered by DataCore, advanced virtualization nodes act as powerful brokers between the MS servers and the network storage pool, allowing virtual disks to be added, upgraded, replicated and reassigned without taking the hosts down. "The physical storage devices are first segregated from the MS servers to eliminate LUN ownership conflicts. The network of virtualization nodes takes over exclusive ownership of the devices by directly connecting to SCSI, ATA, EIDE and SSA arrays or by zoning Fibre Channel arrays for their private control. The central storage administrator then carves out arbitrarily sized virtual volumes from the physical drives and RAID devices, assigning specific properties to them, such as remote mirroring, and caching. This is done from an intuitive GUI with a global view of storage resources and servers (consumers)."The MS servers are separately zoned to the virtualization nodes. They see only selected virtual disks explicitly assigned to specific host ports using DataCore's secure binding. For clustered MS Servers, the same virtual volume may be defined as shared so two hosts can independently access it. Note that neither the SAN-connected MS servers nor the physical disks require any virtualization software to make this work. "When the central storage administrator gets a request to give a server more capacity, he/she simply clicks, drags and drops an appropriate virtual volume and assigns it to a selected port on the needy host. Running Windows Disk Administrator on the MS Server reveals the additional capacity, which appears as a well-behaved new disk of the specified size. At that point, the server administrator can format the virtual disk and establish a file system just like an ordinary disk. "Capacity may also be reassigned (and removable virtual devices defined) by simply disassociating the virtual disk from one MS server and assigning it to another. This is a very effective and dynamic way to deal with seasonal and unexpected demands." Thanks for the question. Dig Deeper on Storage management tools Related Q&A from Jon Toigo Cache memory and random access memory both place data closer to the processor to reduce latency in response times. Learn why cache memory can be the ... Continue Reading Linear Tape File System and Linear Tape-Open technology can improve user access and durability in your tape archive system. Explore specific products... Continue Reading Parallel computing technology has not seen widespread use in the business world, but could that change? Jon Toigo discusses parallel I/O for ... Continue Reading Have a question for an expert? Please add a title for your question Get answers from a TechTarget expert on whatever's puzzling you.
OPCFW_CODE
|Oracle9i OLAP Services Developer's Guide to the OLAP DML Release 1 (9.0.1) Part Number A86720-01 Defining and Working with Analytic Workspaces, 2 of 12 Analytic workspaces are defined using commands in the OLAP DML. There are two methods by which this can be accomplished: SPLExecutormethod to issue OLAP DML commands. This allows applications using the OLAP API to create new analytic workspaces and alter existing workspaces. When workspaces are defined through the SPLExecutormethod, they can be temporary (that is, for the life of the session) or they may be persisted. This guide discusses how to use OLAP DML commands to define an analytic workspace. The following example creates a new analytic workspace named shoes; the full name of the new analytic workspace is The following example creates the shoes.db analytic workspace in a directory named apps on the i drive of an NT system. For the complete syntax for the DATABASE command, see the OLAP DML Reference. Throughout this guide, you will notice that the OLAP DML command `database' is used to create and manage analytic workspaces. When referring to the OLAP DML, you can think of the terms `database' and `analytic workspace' as being equivalent. The `database' command is used in the OLAP DML to allow for compatibility with the Express Server stored procedure language. (Express Server was the predecessor to OLAP Services.) Do not confuse analytic workspaces with the Oracle relational database. Analytic workspaces are stored in files that are separate from Oracle relational database files. An analytic workspace can be made up of many files. There is always a main analytic workspace file. There can also be one or more extension analytic workspace files. You can use extension files to divide a single analytic workspace among several files, so the analytic workspace can be larger than the space that is available on any single disk. Typically, you need extension files only when the analytic workspace is located on a disk with limited available space or when the analytic workspace will grow to a very large size. An analytic workspace that is stored in more than one file is called a multifile analytic workspace. When you use the DATABASE command with the CREATE keyword, a new analytic workspace file is created. As the analytic workspace is populated, data is added to that file and, optionally, additional analytic workspace extension files are created, if needed. Depending on the options that you specify when you create an analytic workspace, you can change the default characteristics of these files: Note: If you want to specify location of analytic workspace extension files only for a given session, then use the DBEXTENDPATH option. If you want to specify the location of analytic workspace extension files only for an instance of OLAP Services, then use the ExtensionFilePath setting of the OLAP Services Instance Manager.
OPCFW_CODE
Deprecation warning with moment.js 2.10.6 Using moment.js 2.10.6, deprecation messages are displayed using an example from the README.md. Specifically: recurrence = moment("2014/01/01").recur().every(2).days(); Deprecation warning: moment construction falls back to js Date. This is discouraged and will be removed in upcoming major release. Please refer to https://github.com/moment/moment/issues/1407 for more info. Error at Function.createFromInputFallback (https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js:746:36) at configFromString (https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js:826:32) at configFromInput (https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js:1353:13) at prepareConfig (https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js:1340:13) at createFromConfig (https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js:1307:44) at createLocalOrUTC (https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js:1385:16) at local__createLocal (https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js:1389:16) at utils_hooks__hooks (https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js:16:29) at eval (eval at evaluate (unknown source), :1:14) at Object.InjectedScript._evaluateOn (:904:55) Same issue, I'm using moment.js 2.11.1 Please fix it Write a PR. One of the contributors (such as myself) will merge it. On Mar 8, 2016 9:02 AM, "ehdi"<EMAIL_ADDRESS>wrote: Please fix it — Reply to this email directly or view it on GitHub https://github.com/c-trimm/moment-recur/issues/42#issuecomment-193869029 . I really think this error message should handle a person using the US data format of MM/DD/YYYY and hint "please use YYYY-MM-DD". It took way too long for me to read 1407 and figure out why in the world i got that message for moment("07/07/2016") Im still getting the same issue? The issue is on line 743 in moment-recur.js as an internal re-constructor (if you will). The existing line reformats the date to YYYY/MM/DD - but it requires YYYY-MM-DD. Changing that line appropriately results in no warning being generated. I think is fixed in a recent PR. Is the open PR for this issue getting merged any time soon? :) @bramski As of now, there are no instances of the format function being called using a deprecated format. It's not the job of moment-recur to let you know that you're using a deprecated date format, as moment already does that. Closing this. Hi. In node js var moment = require('moment-timezone'); require('moment-recur'); on line moment().recur().every(1).dayOfMonth(); I have error Deprecation warning: value provided is not in a recognized ISO format. moment construction falls back to js Date(), which is not reliable across all browsers and versions. Non ISO date formats are discouraged and will be removed in an upcoming major release. Please refer to http://momentjs.com/guides/#/warnings/js-date/ for more info. Arguments: [0] _isAMomentObject: true, _isUTC: true, _useUTC: true, _l: undefined, _i: 2017/02/12, _f: undefined, _strict: undefined, _locale: [object Object] Why it's happened? If I have var moment = require('moment-timezone'); I have no such error. In package.json I have "moment": "^2.17.1", "moment-recur": "^1.0.5", "moment-timezone": "^0.5.11", I use loopback js as framework.
GITHUB_ARCHIVE
How to pass build artifact into another job in Jenkins Setup I have two jobs in Jenkins: build master - this builds the code and creates an artifact deploy master - this deploys the artifact Currently, deploy master has "Copy artifacts from another project" build step using "latest successful build". My Goal I want to change this step from "latest successful build" to "specified by a build parameter" so that I can select a specific build when deploying without modifying the configuration of deploy master job each time. What I've tried First, I changed to "specified by a build parameter". Then I checked the box next to "This project is parameterized" and added a string parameter for BUILD_SELECTOR. Then I selected build and enter the input 47 which is a build number from the build master job. Additionally I tried the api call $.ajax({ type: 'POST', url: 'https://jenkins/job/deploy%20master/build?token=abc7f5abc0c45abcea0646ed858abcde&BUILD_SELECTOR=47' }); Result Both times it failed with the following output: Started by user styfle [EnvInject] - Loading node environment variables. Building in workspace C:\Jenkins\jobs\deploy master\workspace ERROR: Unable to find a build for artifact copy from: build master Started calculate disk usage of build Finished Calculation of disk usage of build in 0 seconds Started calculate disk usage of workspace Finished Calculation of disk usage of workspace in 0 seconds Finished: FAILURE Question How do I configure this properly so I can specify a build number (or some other identifier) when deploying? Update with solution My solution thanks to Gerold's answer was to add a "Build selector for Copy Artifact" parameter and use a new environment variable to link to my string parameter I already added. There is just one workspace per project/job in Jenkins. The directories of builds contain just information about the builds and their results. The root directories of both are specified in Manage Jenkins → Configure System → Advanced.... To deploy an artifact of a previous build you have to copy it to somewhere else in build master and access it there from deploy master later. UPDATE: See the inline help for Which build → Parameter Name: A parameter with this name should be added in the build parameters section above. There is a special parameter type for choosing the build selector. Use this Build selector for Copy Artifact instead of a String Parameter. The build master job has a "Post-build Action" of "Archive the artifacts". And I have been able to specify a build by number by editing the config. So I know this part works. @styfle The Archive the artifacts post-build action stores the specified files in <mentioned system configuration>/<build no.>/archive/... accessible via http://<Jenkins>/job/<job name>/<build no.>/artifact/.... That url works. Should I try using the full URL as a parameter in the deployment? @styfle If the artifact name doesn't change the build no. should be sufficient. It's the only variable in the URL. I tried the build number 47 as a parameter and it doesn't work (see my original post). Yet, visiting the URL https://jenkins/job/build%20master/47/artifact/ does work. @styfle See the inline help for Which build → Parameter Name: "There is a special parameter type for choosing the build selector.". Use this instead of a String Parameter. Let us continue this discussion in chat. is there a way to copy the archived artifact in the deploy job? 
I am using the Copy artifacts from another project step, but that copies the unarchived artifact from the build workspace. @Radu See File/Folder Copy of the File Operations Plugin. How can I get the build number from BUILD_SELECTOR? Add the following in the downstream project: a "Build selector for Copy Artifact" parameter instead of a "String Parameter", and a "Copy artifacts from another project" build step. That's it. Click "Build with Parameters" and pass the build number. Any idea why it's necessary to specify the upstream project name in the copy step, when the upstream job / build should already have been fully identified by the build selector?
STACK_EXCHANGE
2006-04-17 v. 1.0 Incorporated the Makefile from Leopold Palomo-Avellaneda, to generate a global shared library containing the very small amount of static code. Updated benchmarks. The 1.x versions of SlotSig will remain binary-compatible. No major new feature is expected, mainly cosmetic changes and bugfixes. The main line of development will be done in a 1.99.x branch, maybe leading someday to a 2.0 version. 2005-12-09 v. 0.7 not much to say, just fixed a compile error due to better standard-conformance from GCC. 2004-06-10 v. 0.6 now signals can be copied and cloned ; the internal storage is no more a policy, therefore it can't be customized anymore ; if this hurts you, let me know ; code cleanup (all methods are lower-case), added some comments usable by Doxygen (http://www.doxygen.org) ; it's now possible to parse the slots connected to a signal, using usual STL (Standard Template Library) syntax (see advanced documentation) ; it's now possible (and easy I hope) to chain signals, i.e. to connect a signal to another signal (both must be of same type) ; added some facilities to connect slots in classes that don't inherit from SlotsSetBase (see advanced documentation) ; compiles fine with Intel compiler, icc ; threads, threads, threads... better (faster) storage here and there ; string-based (dis)connections, ala Qt ; adaptors to (dis)connect functions with different number of parameters than the signal type (idea borrowed from libsigc++) ; provide a way to know the sender of a signal ; give back a kind of identifier for a connection, which can be used to disconnect a slot (this would probably be the slot type) ; 2004-05-23 v. 0.5.1 Changed template parameters names, to fix an error given by gcc 3.3 under MacOSX - thanks to Ralph Brorsen, you found and solved the problem. 2004-05-20 v. 0.5 the uniqueness of a connection is now an option (enabled by default) ; disconnectAll() methods in signal classes and better Qt-compatibility (avoid some remaining clashes with Qt's macros) ; improved speed by using a more clever storage class for slots in signals ; updated documentation, added an “advanced” page, and more detailed benchmarks ; added many type-traits-like class to avoid useless parameters copying (for all basic types) ; beginning of a mean to establish connections using strings, as Qt does (which allows to make connections based on external informations, such as a configuration text file) ; reduce the need to inherit from classes containing slots ; improve storage of connected signals in go further in string-based connections ; ...mail me for more ! 2004-02-03 v. 0.4 now (almost) all methods in signal classes return an error code ; to provide compatibility with Qt, the emit() method has been renamed to run_if() method, which can interrupts the signal emission according to the boolean return value of a the storage of slots in a signal can now be tweaked by a policy, see <slotsig_storage_policies.h> for more the default storage is now a vector, which proved to be much faster than the previously-used set ; added some sanity checks : don't allow to connect or disconnect a slot when a signal is being emitted, don't allow to emit a signal while it is being emitted. make connections faster ; improve the trick used to avoid useless temporary instances duplication when sending parameters (for now it just nicely check and fix for other compilers ; connection to a signal, to a functor ; what about some thread-safety ? ...what more ? 2003-11-22 v. 
0.3 complete rewrite : simpler code, hopefully more easily usable ; new namespace and classes names ; implemented features so far : connecting a signal to either a global function or a class's method ; handles the case when the class's method is correctly connect to inherited methods not redefined in subclass ; should react nicely when a class's instance or a signal instance is destroyed. last but not least, a Python script is provided to generate support for as many parameters as you might want ; missing features : connect a signal to a signal ; direct connection to functors ; allowing to know the signal's sender ; trying to avoid signal/slot ping-pong (infinite recursion) ; stop the signal emission when something critical happens (the slots set is changed, the signal destroyed, whatever) ; ...your idea here ! 2003-06-09 v. 0.2 added support for const-qualified class methods ; enabled connecting a signal to any class method without explicit creation of a slot (TODO : what if the class holding the method is destroyed ?) ; some changes to get a more robust code (added some checks for null-pointers) ; added verbosity macros, to get messages about what is done internally ; tested using Valgrind ; modified tutorial to reflect previous changes. 2003-06-03 v. 0.1 Initial public release.
OPCFW_CODE
[FEATURE REQUEST] Workflow for automatic OCR, PDF Export, sync and delete

Is your feature request related to a problem? Please describe.
No, it is a feature request to use it as the scanner and OCR app for paperless-ngx and the like.

Describe the solution you'd like
I would love to be able to define a fixed workflow/profile that starts automatically after a picture has been taken:
- improve image (possible already)
- white paper (possible already)
- perform OCR
- export PDF
- sync PDF to WebDAV folder (better: send it there directly)
- delete local scan after 5 minutes (or immediately after confirmation that the sync to WebDAV was successful)

Describe alternatives you've considered
Right now I have to do all steps after white paper manually. I use this app to send a PDF to my consume folder of Paperless-ngx.

Additional context
If this whole thing is too complicated, please consider at least two things: I don't really get the point of "syncing" with a WebDAV folder. The file is not deleted after it is synced, and when I sync it again, I get a duplicate in my consume folder. And: auto-OCR. I keep forgetting it, and Paperless OCR is crap.

@supaeasy the WebDAV sync (PDF/images) can be used by people not using paperless, or by people using paperless where a folder on the WebDAV server is read by paperless. So in your request only the auto OCR / delete is not possible right now; PDF auto-generate sync already exists. About the delete, I am not sure I want to add this. Too dangerous; people would complain their documents were deleted. What you could easily do for now is add those docs to a folder, and regularly simply select that folder and delete (one operation). As for the auto OCR, I will look at it, but remember this won't be fast. Maybe I could add it as an option of the PDF sync; that way it would be done in the background. Would that be good for you?

Thanks for your reply. Yes, this is kind of what I want to do. But would this also auto-export the PDF? I would need:
- Trigger: picture taken and image enhancements applied
- Automation: OCR, export PDF, sync PDF
- Ideally: some confirmation / event / message when sync is complete that I can use as a trigger for MacroDroid so it can empty the export folder.

Yes, if you set up PDF sync (either WebDAV or local folder) it should trigger as soon as you create a new doc (or update one). As for the confirmation, right now the UX is redrawn on sync success but there is no confirmation/event/message. I could add a broadcast if you want to be notified of sync changes.

That would be great, thank you. On another note: it was totally unclear to me that a PDF is created automatically, whether you press export or not. What is the export button for then?

@supaeasy it is released: https://github.com/Akylas/OSS-DocumentScanner/releases/tag/com.akylas.documentscanner%2Fandroid%2Fgithub%2F1.13.0%2F106. Now about your export question: no, a PDF is created only when using one of the commands (share, export, ...), not just when you "show" the PDF menu. The difference is that with export it is created where you want, while for the other options it is created in a temp folder so that it gets removed.

Thank you very much, that was way quicker than I expected!
But may I ask where that option is to be found? Or is auto-OCR now just the default?

@supaeasy you can enable it within your sync configuration settings. Go into sync settings -> PDF sync, click on your sync config, and there you can enable it.

Great, I found it. Thank you! Did you also implement the "broadcast" you mentioned when a sync event ends successfully? (Actually I don't know what that means, but I assume it is some kind of event that I can pick up with MacroDroid as a deletion trigger as discussed, yes?) My favored way would still be a built-in delete option after sync; I think that would be cleaner. That option would ideally fit into the same place as the new OCR option, because one could define it per sync configuration. Wouldn't that leave your worries behind? You said people would likely complain about deleted documents; I think an opt-in option that far into the sync settings would ensure it is only used by people like me who really want this to happen.

Maybe to elaborate on why I am kind of pushy about this: I sync directly to a so-called "consume" folder for Paperless. This is a constantly monitored watch folder, and every document in there is deleted after it has been processed by Paperless. So your app sees an empty folder after a couple of seconds and re-syncs all documents. This process creates never-ending duplicates and is especially annoying when using auto-sync. Do you understand why this is a major pain point for me? :-)

@supaeasy what exactly do you want to delete? Do you want to delete the document from the OSS Document Scanner app? Or is it about deleting files in the local folder?

Well, both actually. I just want to transfer a scan to my WebDAV folder and delete it from my cellphone afterwards.

@supaeasy ok:
- deleting the doc in the app: possible, though tricky, because there could be multiple syncs running and if I delete the doc too soon it will break the app
- deleting the file created in the local folder: that I can't do; it has to be done by paperless (I don't know when it has finished processing it)

Oh, I misunderstood you: as in "local folder" on the phone. Deletion from my synced folder is done automatically by paperless; to me that would be the remote folder. Deletion in the app: does the app not get feedback when syncing is done?

Yes it does, but syncing happens in a background thread in a parallel way. So the sync for which you want deletion might have finished while others are still running, and I would need to handle the deletion on a per-sync basis. It is tricky. Not saying it is not feasible, just saying it has to be done right and for now it is not easily done. As I mentioned before, I think the cleanest and easiest way right now is to do it in a folder, and then once in a while you delete all documents from that folder from the app.
GITHUB_ARCHIVE
sCurrentSelection = Selection.Address
sCurrentCell = ActiveCell.Address
NewSheet.Activate
'Set the new worksheet configuration
ActiveWindow.ScrollColumn = lCurrentCol
ActiveWindow.ScrollRow = lCurrentRow
Application.EnableEvents = True

The Dim mshtOldSheet As Object statement must be at the top of the module in the declarations area, so that mshtOldSheet is a module-level variable that will retain its value while the workbook is open and can be accessed by the two event procedures. The Workbook_SheetDeactivate event procedure is used to store a reference to any worksheet that is deactivated. The Deactivate event occurs after another sheet is activated, so it is too late to store the active window properties. The procedure's Sht parameter refers to the deactivated sheet and its value is assigned to mshtOldSheet. The Workbook_SheetActivate event procedure executes after the Deactivate procedure. The On Error GoTo Fin statement ensures that, if an error occurs, no error messages are displayed and control jumps to the Fin: label, where event processing is enabled, just in case event processing has been switched off. The first If test checks that mshtOldSheet has been defined, indicating that a worksheet has been deactivated during the current session. The second If test checks that the active sheet is a worksheet. If either If test fails, the procedure exits. These tests allow for other types of sheets, such as charts, being deactivated or activated. Next, screen updating is turned off to minimize screen flicker. It is not possible to eliminate all flicker, because the new worksheet has already been activated and the user will get a brief glimpse of its old configuration before it is changed. Then, event processing is switched off so that no chain reactions occur. To get the data it needs, the procedure has to reactivate the deactivated worksheet, which would trigger the two event procedures again. After reactivating the old worksheet, the ScrollRow (the row at the top of the screen), the ScrollColumn (the column at the left of the screen), the addresses of the current selection, and the active cell are stored. The new worksheet is then reactivated and its screen configuration is set to match the old worksheet. Because there is no Exit Sub statement before the Fin: label, the final statement is executed to make sure event processing is enabled again.

Summary
In this chapter you saw many techniques for handling workbooks and worksheets in VBA code. You have seen how to:
- Create new workbooks and open existing workbooks.
- Handle saving workbook files and overwriting existing files.
- Move and copy worksheets and interact with Group mode.
OPCFW_CODE
import gpxpy import gpxpy.gpx import pandas as pd import matplotlib.pyplot as plt import numpy as np def parse_gpx(filename): distances = [0] speeds = [0] times = [0] with open(filename, 'r') as gpx_file: gpx = gpxpy.parse(gpx_file) for track in gpx.tracks: for segment in track.segments: for (point_no, point) in enumerate(segment.points): if point_no > 0: p2 = segment.points[point_no - 1] speed = point.speed_between(p2) * 3.6 # Difference is always 1 or 0.999 s unless there is pause # Pausing is not computable in csv data, so use here 1 to be able to match times diff_t = 1 # point.time_difference(p2) diff_3d = point.distance_3d(p2) speeds.append(speed) times.append(diff_t + times[-1]) distances.append(diff_3d + distances[-1]) return times, distances, speeds def moving_average(numbers, window_size): numbers_series = pd.Series(numbers) windows = numbers_series.rolling(window_size) moving_averages = windows.mean() return moving_averages.tolist() def get_max_speed(numbers, window_size): return max(moving_average(numbers, window_size)[window_size - 1:]) def time_to_sec(t): parts = t.split(':') return int(parts[0]) * 3600 + int(parts[1]) * 60 + int(parts[2]) def fix_csv_gaps(csv): speeds = [0] times = [0] for i in csv.Speed: if np.isnan(i): continue speeds.append(i) times.append(times[-1] + 1) return times, speeds csv = pd.read_csv('data/2020-12-31-DownHillSki.csv', usecols=['Time', 'Speed']) csv.Time = csv.Time.apply(time_to_sec) csv_times, csv_speeds = fix_csv_gaps(csv) # compute data times, distances, speeds = parse_gpx('data/2020-12-31-DownHillSki.gpx') """ for k in range(1, 121): max_speed = get_max_speed(speeds, k) print('Max speed over %s s is %s km/h' % (k, max_speed)) """ tmin = 24320 # 28160 tmin = 22320 tmin = 25850 t_diff = 160 tmax = tmin + t_diff min_ind = next(x[0] for x in enumerate(times) if x[1] > tmin) max_ind = next(x[0] for x in enumerate(times) if x[1] > tmax) - 1 fig, ax1 = plt.subplots() time_scale = [x-times[min_ind] for x in times] print(len(time_scale)) print(len(csv_times)) print(len(csv.Time)) # There is this kind of strange 25 s difference at higher times in csv data. May be related to satellite locking. polar_shift = -25 polar_t = [x-times[min_ind]+polar_shift for x in csv_times] max_polar_speed = max([x[1] for x in enumerate( csv_speeds) if polar_t[x[0]] > 0 and polar_t[x[0]] < t_diff]) for k in [1, 5, 10]: max_speed_graph = moving_average(speeds, k) max_speed = get_max_speed(speeds[min_ind:max_ind], k) ax1.plot(time_scale, max_speed_graph, label="Int = %s s (%s km/h)" % (k, round(max_speed, 2)), marker=".", markersize=5) ax1.plot(polar_t, csv_speeds, color="black", label="Polar GritX (%s km/h)" % max_polar_speed) ax1.set_xlabel('Time [s]') ax1.set_ylabel('Speed [km/h]') ax2 = ax1.twinx() ax1.set_xlim(0, tmax-tmin) ax1.set_ylim(0, 100) dist = [x-distances[min_ind] for x in distances] ax2.plot(time_scale, dist, color='black', ls='dashed') ax2.set_ylabel('Distance [m]') # ax2.set_ylim(0, dist[max_ind]) ax2.set_ylim(0, 1600) fig.tight_layout() # otherwise the right y-label is slightly clipped ax1.legend(loc=2, frameon=False) plt.show()
STACK_EDU
JavaScript's get-it-done nature Is JavaScript intended to be running as little as possible on a website/webapp? By that I mean is the usual intention to run through all your js files as soon as the page loads and put them aside, and then when functions come up to execute them right away and be done with it? I'm working on a project using google maps and I have a custom marker object scripted out, and a debugger has told me that the browser runs through all my js files before anything even appears on the page. My problem comes in here: I wanted to animate certain markers to bounce up and down continuously with jQuery (similar to OS X icons in the dock) and my several attempts at infinite loop functions all just crash the browser. So I understand that the browser doesn't like that, but is there a way to have a simple script be repeating itself in the background while the user navigates the page? Or is JavaScript just not supposed to be used that way? (I worked with Flash for a long time so my mindset is still there.) So, in other words, "how can I schedule repeating tasks in Javascript"? Yes, Javascript functions should just do their bit and exit as soon as possible. The GUI and the scripts run on the same single thread, so as long as you are inside a Javascript function, nothing shows up in the browser. If you try to use an infinite loop, the browser will appear to freeze. You use the window.setInterval and window.setTimeout methods to trigger code that runs at a specific time. By running an interval that updates something several times a second, you can create an animation. You have to set a timer to execute a script after a defined time. var timer = setTimeout(code, milliseconds); will execute code in so-and-so milliseconds. Each execution of the script can set a new timer to execute the script again. You can cancel a timed event using clearTimeout(timer). I actually had trouble with setTimeout my js would just skip over setTimeout'd lines and kept going, and never get around to doing what it was supposed to in 2000 milliseconds or whatever. I think there are complications that come with using Google maps Since you said that you are using jQuery, consider using its effects API (e.g., jQuery.animate()), it will make your life much easier! Yes I love animate() but my problem was with the timing and looping of the animation rather than how to create the animation itself. Use setTimeout() or setInterval(). The MDC articles on it are pretty good. You'll need to update inside of functions that run quickly, but get called many times, instead of updating inside of a loop. Personally, I save as much code as possible for execution after the page has loaded, partly by putting all my <script>s at the bottom of <body>. This means a (perceived) reduction in page load time, whilst having all my JS ready to run when need be. I wouldn't recommend going through everything you need to do at the beginning of the document. Instead, bind things to events such as clicks of buttons, etc. I'll keep that in mind, and things will probably work a little differently when I'm pulling from a database, but Google maps requires a lot of js to run onload().
STACK_EXCHANGE
Cant install big desk plugin as plugin-descriptor.properties in plugin zip is missing Hi Team, I am trying to install big desk plugin to evaluate for my elastic search, and below is the error I am getting user@localhost:/usr/share/elasticsearch/bin$ sudo ./plugin install lukas-vlcek/bigdesk/2.5.0 -> Installing lukas-vlcek/bigdesk/2.5.0... Trying https://download.elastic.co/lukas-vlcek/bigdesk/bigdesk-2.5.0.zip ... Trying https://search.maven.org/remotecontent?filepath=lukas-vlcek/bigdesk/2.5.0/bigdesk-2.5.0.zip ... Trying https://oss.sonatype.org/service/local/repositories/releases/content/lukas-vlcek/bigdesk/2.5.0/bigdesk-2.5.0.zip ... Trying https://github.com/lukas-vlcek/bigdesk/archive/2.5.0.zip ... Trying https://github.com/lukas-vlcek/bigdesk/archive/master.zip ... Downloading ....................................................................................................................................................................................................................................DONE Verifying https://github.com/lukas-vlcek/bigdesk/archive/master.zip checksums if available ... NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify) ERROR: Could not find plugin descriptor 'plugin-descriptor.properties' in plugin zip Please guide me on how to fix this issue. same here #./bin/plugin install lukas-vlcek/bigdesk/2.4.0 -> Installing lukas-vlcek/bigdesk/2.4.0... Trying https://download.elastic.co/lukas-vlcek/bigdesk/bigdesk-2.4.0.zip ... Trying https://search.maven.org/remotecontent?filepath=lukas-vlcek/bigdesk/2.4.0/bigdesk-2.4.0.zip ... Trying https://oss.sonatype.org/service/local/repositories/releases/content/lukas-vlcek/bigdesk/2.4.0/bigdesk-2.4.0.zip ... Trying https://github.com/lukas-vlcek/bigdesk/archive/2.4.0.zip ... Trying https://github.com/lukas-vlcek/bigdesk/archive/master.zip ... Downloading .........................................................................................................................................................................................................................................................DONE Verifying https://github.com/lukas-vlcek/bigdesk/archive/master.zip checksums if available ... NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify) ERROR: Could not find plugin descriptor 'plugin-descriptor.properties' in plugin zip elasticsearch version: version: { number: "2.3.1", build_hash: "bd980929010aef404e7cb0843e61d0665269fc39", build_timestamp: "2016-04-04T12:25:05Z", build_snapshot: false, lucene_version: "5.5.0" },``` +1 Same Problem: { "name" : "Dust", "cluster_name" : "elasticsearch", "version" : { "number" : "2.3.3", "build_hash" : "218bdf10790eef486ff2c41a3df5cfa32dadcfde", "build_timestamp" : "2016-05-17T15:40:04Z", "build_snapshot" : false, "lucene_version" : "5.5.0" }, "tagline" : "You Know, for Search" } +1 +1. ES 2.3.0 here. AFAIK, the properties file is mandatory in 2.3.0 onwards. :S +1 I am using ES 2.3.3. Same error as @narendraalla +1, on the new elasticsearch 2.4.0 +1 any solution? +1 on 2.4.1 any solution? Hi Team any update on this issue? +1 I have modified bigdesk code to be compatible with elasticsearch 2.x https://github.com/nishantsaini/bigdesk Hope this helps hard to install and make me confuse I am archiving this repository now. Thanks for contributions.
GITHUB_ARCHIVE
Google CloudSQL instance storage is growing out of control I have encountered an issue with Ggoole Cloud SQL (2nd gen). For some reason after a while, the database went from 20GB to 64GB in a matter of hours. It used to climb from 20 to 25 then purge as entries were added and removed over time. Nothing happened on the server connecting to the database, and I have Cloud SQL flags set to off. Any ideas what else I can try? Is binary logging enabled? https://cloud.google.com/sql/docs/mysql/backup-recovery/restoring#enablingpitr @LundinCast, yes it is. The database itself is 1.4 GB, no major changes made. Could binary logging cause it to blow up by 400% like this? Looking at the history there, it was increasing increasing then purging and so on. I had that same pattern with a steady increase for over a year on this DB with no issues. This is most likely due to binary logs. When they are enabled, MySQL will make a record of all changes, which is required for replication or point-in-time recovery. This means that the growth of binary logs is roughly proportional to the amount of modified rows (even if these rows were actually deleted and db total size reduced). Note that they will not grow indefinitely. Binary logs older than the oldest automatic backup (7 days) are purged automatically. Also note that storage size can be increased (I believe you have automatic storage increase enabled) but it cannot be decreased, as documented here. This means that when binary logs are purged, free disk space will increase but the total storage size will remain identical. If you want to reduce your disk size after binlogs are purged, you can follow the suggested method here. with binary logging, the usage never went down, and I have backups every morning. After turning them off however, the size went back down to where it was before. As a solution to my particular issue, I moved the database (1.4 GB) to a new server where I made sure binary logging was off from the start. The size of that server now hovers around 4-5 GB. The original question remains however: With the connected server not showing any irregular activities meaning not a lot of changes to the DB, how would binary logs get this out of control all on their own? Could something else be involved? Here is some Google Cloud docs on this matter: https://cloud.google.com/sql/docs/mysql/replication#bin-log-impact And, here is some interesting mysql commands to see & purge them: mysql> SHOW BINARY LOGS; +------------------+-----------+ | Log_name | File_size | +------------------+-----------+ | mysql-bin.000001 | 106930110 | | mysql-bin.000002 | 102842758 | | mysql-bin.000003 | 109947365 | .... https://dev.mysql.com/doc/refman/5.7/en/show-binary-logs.html mysql> PURGE BINARY LOGS BEFORE '2020-10-28 00:00:00'; https://dev.mysql.com/doc/refman/5.7/en/purge-binary-logs.html We experienced a similar problem and was not the binary logs. Doing maintenance resolve the problem, but it's not the root cause. The problem was a sort on a bad query! If you make an "explain" of a query and you see "Using temporary; Using filesort" this means that MySQL create a temp file. This file could be huge (especially if the query make an outer join!)
STACK_EXCHANGE
Well, that's mostly what this post is about. As you all know, we've been developing siwapp for the last few years using the symfony 1.4 framework. When we started this project, we did it for a handful of reasons:

- have some fun
- give something back to the "community" (I know it's not a big "something", but still it's something)
- learn, learn, learn.
- and, of course, all the unexpected that comes when you develop in an open environment, with all the users' input and contributions.

When we planned the next siwapp release to be made with the symfony2 framework, we expected to continue learning and of course improving the app. The thing is: we're struggling a little too much with this. We feel that in symfony 2 it takes too much effort to put a medium-weight software project up and running. Personally, I've learnt a lot about academic programming techniques when working with doctrine2, trying to implement all the models code we had with doctrine1.2, and I've enjoyed it, but still the process was slow. We already don't have much time to put into siwapp because of our paying work, so it's crucial for us to be able to develop quickly. Symfony 2 has gotten too complicated for that. It's a great framework and this "symfony reloaded" version was meant to be well formed from every possible angle, but it's not as agile as it used to be. We also felt we weren't gaining too much knowledge with it, and as I told you before, that is one of the core reasons we started the project in the first place.

We've been playing around with django and node for other projects, and have come to have some training on those, but we never had the chance to put our hands on Ruby on Rails. On the other hand, there's a lot of buzz about how quick and easy it is to develop in it, so we've decided we're going for it! The king is dead. Long live the king.

What are the consequences?

- on the user side: obviously, the app will maintain the same look&feel, no worries about that.
- on the user side: if you happened to have some knowledge of php that allowed you to tweak siwapp in any way, well, for the next version, those of you who are familiar with ruby will have that chance.
- on the developer side: siwapp developers (julian, alex, and a lot more), thank you for your past contributions that of course will remain there, and please join us with the new approach. If you happen to have spent time working on the symfony2 version, well, apologies.
- on everyone's side: a few months have been lost.

What are we going to do about it?

- we'll keep exactly the same support in our forums for both the old and new versions of siwapp
- also, when the time is right, provide a way to migrate data from the old framework to the new one
- if any of you have a patch or an enhancement for the old version, it shall be welcomed

Please keep in touch! You'll be hearing from siwapp soon.
OPCFW_CODE
Integrate Braintree/Stripe/Square payment fields A React component to make integrating Braintree's Hosted Fields, Stripe's Elements and Square's Payment Form easier. Care is taken so the API is (nearly) identical across the vendors. This is intended for a Saas that allows customers to use their own payment processor, as long as it uses the newish "hosted iframe" approach. further docs to be written See demo site for a working example. It renders demo.jsx Note: methods are removed for brevity and this isn't fully copy & pastable. For a working example see demo.jsx static propTypes = 'font-family': 'helvetica, tahoma, calibri, sans-serif' focus: color: '#000000' valid: color: '#00bf00' invalid: color: '#a00000' <h4>Form is Valid: thisstateisValid ? '👍' : '👎'</h4> placeholder="•••• •••• •••• ••••" Date: <PaymentFieldsField type="expirationDate" /> CVV: <PaymentFieldsField type="cvv" /> Zip: <PaymentFieldsField type="postalCode" /> - vendor: Required, one of Braintree, Square, or Stripe - authorization: Required, the string key that corresponds to: - Braintree: calls it "authorization" - Square: "applicationId" - Stripe: the Api Key for Stripe Elements - onReady: function called once form fields are initialized and ready for input - onValidityChange: function that is called whenever the card validity changes. May be called repeatedly even if the validity is the same as the previous call. Will be passed a single object with a isValid property. The object may have other type specific properties as well. - onCardTypeChange: Called as soon as the card type is known and whenever it changes. Passed a single object with a brand property. The object may have other type specific properties as well. - onError: A function called whenever an error occurs, typically during tokenization but some vendors (Square at least) will also call it when the fields fail to initialize. - styles: A object that contains 'base', 'focus', 'valid', and 'invalid' properties. The PaymentFields component will convert the styles to match each vendor's method of specifying them and attempt to find the lowest common denominator. font-size are universally supported. - passThroughStyles: For when the styles property doesn't offer enough control. Anything specified here will be passed through to the vendor specific api in place of the - tagName: which element to use as a wrapper element. Defaults to - className: a className to set on the wrapper element, it's applied in addition to - type: Required, one of 'cardNumber', 'expirationDate', 'cvv', 'postalCode' Depending on fraud settings, some vendors do not require postalCode. - placeholder: What should be displayed in the field when it's empty and is not focused - className: a className to set on the placeholder element, some vendors will replace the placeholder with an iframe, while others will render the iframe inside the placeholder. All vendors retain the className property though so it's safe to use this for some styling. - onValidityChange: A function called when the field's validity changes. Like the onValidityChange on the main PaymentFields wrapper, may be called repeatedly with the same status - onFocus: A function called when the field is focused. Will be called with the vendor specific event - onBlur: A function called when the field loses focus. Will be called with the vendor specific event, as well as a isValid property that indicates if the field is valid, and isPotentiallyValid which is set if the input is possibily valid but still incomplete.
OPCFW_CODE
CNN is supposed to be a professional news outlet. But even the editors and writers at CNN's Fortune desk are no match for Microsoft's Stupid-Ray Gun. This piece is virtually giddy about the fact that the next version of Microsoft Office will be just like Google Office. Free and on line.

Now, think about that for five seconds and imagine yourself to be a writer for CNN. Do you actually believe that Microsoft Office is going to be available for free? Like, me, Greg Laden, can just decide "Oh, I've had enough of Google Docs ... I'm going to switch to the online version of Microsoft Office instead. It's free!!!" ... and then I sign up for an account and I have this on line free service and no money has changed hands? If you believe that, I've got a bridge over some swamp land in Florida that comes with its own Nigerian bank account that I'd love to sell you.

From the CNN piece, which was obviously either written by a moron or a paid Microsoft consultant:

Get this: Microsoft - the king of paid software - will announce today that it is going to give a version of Office away for free online. Both the online and desktop versions are scheduled to arrive in the first half of next year. Yes, you read that right. The latest version of its ubiquitous productivity software, dubbed Office 2010, will come as both a piece of software you can buy for your computer, and as a service you can access in your browser.

A little unclear on the concept of making it clear. So, to make it clear: If you PURCHASE (with money) Microsoft Office, then you will under certain conditions be allowed to use a scaled-down version of a subset of Office features in a browser. Gee, I wonder which browser this wonderful new on line service will be compatible with? I wonder which browser it will work with? (Answer: IE and Firefox, respectively.)

[update: The most recent information from M$ says that their software will "support" Firefox and Safari as well as IE. Expect .... expected features.]

Get this: Microsoft - the king of paid software - will announce today that it is going to give a version of Office away for free online.

You misinterpreted, Greg... "going to give a version"... that means ONE copy. They will sell all the others, of course. Whatsamatta? you don't speak marketing-ese?

I definitely have not misinterpreted. "Going to give a version of Office away for free on line" means I get something for free. "Pay us for the software and we have a semi-gutted on line version that you then have, because you paid us, the right to use" means I pay for something! The "It's a free version" is like saying that the toilet paper in an expensive hotel room's bathroom is free. It isn't. A guy off the street can't go into the "L'Hotel Chique" and use the toilet paper for free. He's gotta buy the room first.

(I know, you're snarking me, but still..) This sounds almost identical to the first "cloud Office" they were saying they'd build for 2007. Why all of a sudden all the marketing? Gee, could it be because people are getting Google Docs for free, so they need to confuse the market and make it seem as though they're offering something for free as well? Anything short of outright libel to sell the product. (And in the Dr. DOS days, libel worked too -- claiming that it was incompatible with Windows on attempting to load it from within the Dr. DOS environment, even though it was perfectly compatible.) Microsoft is starting to read like a Herters catalog.

Not saying they'll never charge, but check out http://workspace.office.live.com/.
From the Press Release: How will people receive Office Web applications? Capossela: We will deliver Office Web applications to consumers through Office Live, which is a consumer service with both ad-funded and subscription offerings. For business customers, we will offer Office Web applications as a hosted subscription service and through existing volume licensing agreements. So if you don't mind getting a windows live account and sitting through the ads, it looks like the very basic version will be free. Now this will certainly not have all of the features of Word/Access/Excel/Powerpoint. I would not expect to have a large database and use mailmerge with word, for example. They are very shy about telling you which features are free and which are for pay. But will it be enough of office so you can do most of the basics, probably so. Google's servers are probably much more reliable than MS's. Gmail/GoogleDocs outages seem to be far fewer than Hotmail and other MS services. And I would expect them to slowly ratchet down the free parts over time to make people subscribe. Ohh come on. We can't blame Microsoft for earning money from software. Customers would like to see all for free, give me all for free. The only reason we can blame MS is that they take money for poor software. Apple is great example, they take a lot of money for their stuff but it's good quality make people buy it and I don't see the whine for Apple to take money for their soft, more money then MS. Why? Because they make good soft. Come back to reality and stop criticize companies because they want money for their soft. Adobe come on give us photoshop and all your software for free!!!!!! Greg, I am with you all the way. I despise Microsoft, their products, their marketing posture and their anti-competitive practices. (I do despise Adobe products even more). "Professional news outlet" = clearing house for press releases. It's cheaper than journalism. Agencja: at the moment, we're not really complaining that they charge for their software. We're complaining that they are touting a "free product" which is not free. Or, that CNN is doing bad reporting. Or both. I am in no way understanding why you feel this isn't free. You do not have to purchase the hard copy of Office 2010 to get the free version. Web version = Free, hardcopy = $$. To use the web version, from the link below, Thankfully, the apps, which include Word, PowerPoint, Excel and OneNote, will be available to anyone with a Live account, and judging by the (lone) screenshot above, will aim to compete directly, feature-wise, with other companies' offerings Note that MS Live is a free service. The major point to this is, corporations will not be using cloud computing for their office apps, so that part of the market is unaffected. Those who are home users, will either use the web version if they are OK with cloud based apps, or they will purchase a hard copy. Those who prefer to steal their software (like some of us) will be more inclined to just use the web version. I'd imagine that OEM's will still include either the standard version or a trial version on new PC's. So they won't loose much. Well, OK, then, maybe it will be free. Forgive me for not trusting Microsoft. I still question the basic premise: How many features will there be? It "word" for instance more like actual "word" or "wordpad"? And all those other questions. 
The ambiguity that has existed since October's announcement as to how this will be delivered may itself have been a bit of a ploy to see what they needed to do. How many features will there be? I bet it's going to be fairly bare-bones, but according to Gizmodo (previous link) it should compete feature-wise with what's out there, mainly Google docs, not comparing to full version (Do note that Gizmodo is often speculative about these things). Also, IMO, cloud apps are not for everyone, there are issues of connectivity and security, so many peeps will stay with the local install. And as mentioned, corporations will stay with local installs, that coupled with the education sector and parts of OEM, make a very large part of their market share. Then they'll just throw adds on the could apps, and still get their $$. This is the same reason why MS is not very scared of OS X, Linux, BSD and the like, they won't make it to most corporate end users terminals. It's all in the Volume Licensing for them
OPCFW_CODE
One great way to explain how automation will affect software development is using a taxi/bus analogy. The idea of getting into a taxi and saying "I'd like to go to these 4 addresses in this order" is something we can all understand, because the basics of driving are pretty much understood by all of us. You don't bother yourself with all the small details like instructing the driver to move forward and travel down roads, negotiate traffic, indicate when changing lanes or negotiating intersections, or to pull over and stop when you want to get out.

Writing software is somewhat the same thing. You know you want to do a list of things and the order in which you want them to happen. The world of a developer isn't always that different to that of a taxi driver! A taxi driver knows to look up where they currently are, look at where they are heading, plot a course and go. A developer does sort of the same thing. They collect the information provided on a web page or mobile app, they'll look for the database they want to store it in, copy the data over and save it.

It is possible your taxi ride will go wrong. The driver can get an address wrong, they can get the order wrong, or travel a very, very long way but ultimately still get the addresses and order correct. Developers and software are much the same. A developer can read the information from the wrong webpage, website or mobile app; they can miss some information in the copy process, they can store it in the wrong database… even forgetting to save sometimes.

You know what you want to do and the order you want it done in. The problem here is a consistent one, that of communication. With the taxi driver you might try to provide the instructions verbally and hope they write them down properly. You might provide them in writing and hope they can read your writing. With a software developer you'll probably do exactly the same thing, talk to them or give them something in writing… but at each step the developer is still going to have to read something either you've written or they've written down during the conversation, and then write it all again in code and also a third time in the database. Interestingly, very few people pick up that it gets written more than once. In many cases the exact same thing is written 5 times in different ways.

One of several solutions to the taxi problem is one where you can use Google Maps, look up where you want to go and pick your desired route if there is more than one, even changing the route if traffic conditions change. Your driver is now guaranteed to do what you instruct them to do, because all they do now is follow the instructions from the automated route calculated by Google Maps.

One of several solutions to the developer problems is again much the same. If you list the things you want done and the order you want them done in, and then give it to a developer equivalent of Google Maps for software – let's call it a Developer Adviser – then your developer now has considerably less to do and far less chance of getting it wrong. They now follow the instructions from the automated Developer Adviser.

Eventually the driver in the taxi will be replaced by a self-drive car which knows how to accelerate, brake, negotiate traffic, indicate when changing lanes or negotiating intersections, or to pull over and stop when you want to get out. Our next generation software platform is doing the same thing for your software and developer. There are limits to this.
The taxi driver won’t load your bags, take you to an airport, check you in, security screen you, load you onto the plane, serve you food and fly the plane. Other people get involved, and your software is the same. The complex your software becomes and the more you want it to do, the more a developer will have to step in where automation doesn’t yet exist. And this, in a nut-shell, is our platform explained for non-technical people.
OPCFW_CODE
MPI version issues with SUSE 12.1 I have a fresh install of OpenSUSE 12.1 and am trying to install a few pieces of software - particularly OpenFOAM ( The OpenFOAM® Foundation ). I've tracked down the RPMs required (the link from their website was broken but I found a mirror) and successfully installed scotch and paraview (easily; clicked on the RPMs and apper installed them for me, no troubles). However, when I try to install OpenFOAM with apper I get an error that a dependency on OpenMPI 1.5 (.x) is unresolvable. The SUSE repositories only contain up to 1.4.x. So, I download an RPM of OpenMPI 1.5 from pbone, and attempted to install at the same time. This time, the version of glibc (2.15) that OpenMPI 1.5 requires isn't available (only glibc 2.14 on the OpenSUSE repos). You see where I'm heading (down a rabbit hole). I don't want to manually resolve dependencies for my entire system (figuring that by the time I'm messing with glibc, this could be a very deep hole). I can't find the right repositories to add to allow apper to do it's job. There's a potential solution using 'GeekoCFD' appliance from the SUSE Studio, which says they've used Third Party OpenMPI - but I have partitions I want to retain and therefore don't want to wipe the entire drive to install one of their images. I've googled for Third Party OpenMPI and can't find the packages they're mentioning. Can anybody guide me in the best practice to use in this kind of situation please? (By 'best' I mean most user friendly to achieve a robust result...). Thanks for any help, and best regards By what I've tested now (following these instructions), it seems that you've made some very strange twists and turns on installing those packages. ;) You didn't mention one particular detail: are you using openSUSE 32bit (i686) or 64bit (x86_64)? The scotch package is indeed missing and I've reported this issue here: http://www.openfoam.org/mantisbt/view.php?id=592 Nonetheless, you can use this command (it will retrieve the library for openSUSE 11.4): And what do you mean by a missing dependency on Open-MPI 1.5? :confused: I've tested installing this just now and only have Open-MPI 1.4.5 installed from openSUSE's normal repositories! And had no complains by rpm!? I'm using openSUSE 12.1 x86_64 and fully up-to-date and rebooted machine. As for Open-MPI, for it to work as intended, you'll need to run these commands: After logging back in, simply run: Hi wyldcat, thank you for your help; that has got me a little further. The missing scotch link is what caused my problem with the OpenCFD site instructions; so I retrieved RPMs from elsewhere and tried to install them using the same procedure. The RPM I used must have been custom built by someone using OpenMPI 1.5 (it wasn't a typo), rather than a mirrored version of the OpenCFD one. So, now I have OpenFOAM installed properly, although the latest OpenMPI version available on the repositories is 1.4.3 (see http://software.opensuse.org/package/openmpi which is consistent with the latest available version showing in my YAST) - so I don't know where you're getting 1.4.5 from! The packages are missing a dependency on lam, which contains mpirun - so users may have to do: sudo zypper install lam which worked for me. Then I got the issue described on this post: Which is easily fixed following those instructions. Thanks for your help, I think the problem is now solved! Keep in mind that the updater needs to update in two stages: Thanks for your clear explanations and patient help. 
Despite several years and quite a lot of programming experience during my PhD, I still find linux administration extremely frustrating. Desperately keeping a huge rant on the SUSE forums inside me. Thanks again, and best regards |All times are GMT -4. The time now is 20:02.|
OPCFW_CODE
#!/usr/bin/python2
##################################################
# hydra_handler.py :                             #
# Handler class file for the HydraNFC tool       #
##################################################
import serial
import time


class HydraNFC():
    def __init__(self, port="/dev/ttyACM0", timeout=0.3):
        self._port = port
        self._timeout = timeout
        self._serial = None

    def connect(self):
        self._serial = serial.Serial(self._port, timeout=self._timeout)

    def send(self, cmd, read=None):
        '''
        Send data to the TRF7970A chip
        0x05 -> TX timer low byte control
        0x00 -> Chip status control
        '''
        self.cs_on()
        print("Sending cmd -> " + ' '.join([hex(ord(i))[2:] for i in self.array_to_str(cmd)]))
        size = chr(len(cmd))
        resp_length = '\x00\x00'
        if read != None:
            resp_length = '\x00' + chr(read)
        self._serial.write('\x05\x00' + size + resp_length)
        self._serial.write(self.array_to_str(cmd))
        status = self._serial.read(1)
        self.cmd_check_status(status)
        resp = None
        if read:
            resp = self.str_to_array(self._serial.read(read))
        self.cs_off()
        return resp

    def array_to_str(self, cmd):
        ''' Concat the APDU cmd into one string '''
        return ''.join([chr(c) for c in cmd])

    def str_to_array(self, cmd):
        ''' Change the string into an array '''
        return [ord(i) for i in cmd]

    def cs_on(self):
        '''
        Set the chip select pin on, this operation is needed by the hydra
        see -> https://github.com/hydrabus/hydrafw/wiki/HydraFW-HydraNFC-TRF7970A-Tutorial
        Function taken from -> https://github.com/hydrabus/hydrafw/blob/master/contrib/bbio_hydranfc/bbio_hydranfc_init.py
        '''
        print("CS On")
        self._serial.write('\x02')
        status = self._serial.read(1)
        if status != '\x01':
            print("CS-ON:")
            print(status)
            print("Error")
            print("")

    def cs_off(self):
        '''
        Set the chip select pin off, this operation is needed by the hydra
        see -> https://github.com/hydrabus/hydrafw/wiki/HydraFW-HydraNFC-TRF7970A-Tutorial
        Function taken from -> https://github.com/hydrabus/hydrafw/blob/master/contrib/bbio_hydranfc/bbio_hydranfc_init.py
        '''
        print("CS Off")
        self._serial.write('\x03')
        status = self._serial.read(1)
        if status != '\x01':
            print("CS-OFF:")
            print(status)
            print("Error")
            print("")
            return False
        return True

    def field_on(self):
        self.send([0x00, 0x20])
        time.sleep(0.1)

    def field_off(self):
        self.send([0x00, 0x00])
        time.sleep(0.1)

    def cmd_check_status(self, status):
        '''
        Function to check the response status for a cmd
        Function taken from -> https://github.com/hydrabus/hydrafw/blob/master/contrib/bbio_hydranfc/bbio_hydranfc_init.py
        '''
        if status != '\x01':
            print(status)
            return False
        print("Check status OK")
        return True

    def trf7970a_software_init(self):
        '''
        Initialize the chip software
        Function taken from -> https://github.com/hydrabus/hydrafw/blob/master/contrib/bbio_hydranfc/bbio_hydranfc_init.py
        '''
        self.cs_on()
        self._serial.write('\x05\x00\x02\x00\x00')
        self._serial.write('\x83\x83')
        status = self._serial.read(1)  # Read Status
        self.cmd_check_status(status)
        self.cs_off()

    def trf7970a_write_idle(self):
        '''
        Function taken from -> https://github.com/hydrabus/hydrafw/blob/master/contrib/bbio_hydranfc/bbio_hydranfc_init.py
        '''
        self.cs_on()
        self._serial.write('\x05\x00\x02\x00\x00')
        self._serial.write('\x80\x80')
        status = self._serial.read(1)  # Read Status
        self.cmd_check_status(status)
        self.cs_off()

    def reset_config(self):
        """
        Perform a reset of the TRF7970A chip used by the nfc shield.
        """
        cmd_lst_reset_hydra = [
            [0x83, 0x83],
            [0x00, 0x21],
            [0x09, 0x00],
            [0x0B, 0x87],
            [0x0B, 0x87],
            [0x8D, ],
            [0x00, 0x00],
            [0x0D, 0x3E],
            [0x14, 0x0F],
        ]
        print("Verification Configuration")
        if self.cs_off():
            print("Configuration Ok")
        else:
            print("Configuration issue, a reset will be performed")
            print("RESET")
            self._serial.write('\x00')
            print("OK1")
            self._serial.write('\x0F\n')
            print("OK2")
            self._serial.readline()
            self._serial.readline()
            print("Re configuration")
            print("Configure the communication between GPIO and HydraBUS in spi")
            self._serial.write("exit\n")
            self._serial.readline()
            self._serial.readline()
            self._serial.readline()
            self._serial.readline()
            self._serial.write("\n")
            self._serial.readline()
            self._serial.readline()
            self._serial.write("gpio pa3 mode out off\n")
            self._serial.readline()
            self._serial.readline()
            self._serial.write("gpio pa2 mode out on\n")
            self._serial.readline()
            self._serial.readline()
            self._serial.write("gpio pc0 mode out on\n")
            self._serial.readline()
            self._serial.readline()
            self._serial.write("gpio pc1 mode out on\n")
            self._serial.readline()
            self._serial.readline()
            self._serial.write("gpio pb11 mode out off\n")
            self._serial.readline()
            self._serial.readline()
            time.sleep(0.02)
            self._serial.write("gpio pb11 mode out on\n")
            self._serial.readline()
            self._serial.readline()
            time.sleep(0.01)
            self._serial.write("gpio pa2-3 pc0-1 pb11 r\n")
            for cmpt in range(8):
                self._serial.readline()
            print("Configure hydra bus spi 2")
            for i in range(20):
                self._serial.write("\x00")
            if b'BBIO1' in self._serial.read(5):
                print("Into BBIO mode: OK")
                self._serial.readline()
            else:
                raise Exception("Could not get into bbIO mode")
            print("Switching to SPI mode:")
            self._serial.write('\x01')
            self._serial.read(4), self._serial.readline()
            print("Configure SPI2 polarity 0 phase 1:")
            self._serial.write('\x83')
            status = self._serial.read(1)  # Read Status
            self.cmd_check_status(status)
            print("Configure SPI2 speed to 2620000 bits/sec:")
            self._serial.write('\x63')
            status = self._serial.read(1)  # Read Status
            self.cmd_check_status(status)
        print("Reset hydra nfc...")
        self.trf7970a_software_init()
        self.trf7970a_write_idle()
        for offset in cmd_lst_reset_hydra:
            self.send(offset)
            time.sleep(0.1)

    def set_mode_iso14443A(self):
        """
        ISO Control register - 0x01 - see Table 6-6, [REF_DS_TRF7970A]
        """
        # [0x83]       : command 0x03 : Software reinitialization => Power On Reset
        # [0x09, 0x31] : *0x09 = 0x31
        #                Modulator and SYS_CLK Control register : 13.56 and OOK 100%
        # [0x01, 0x88] : *0x01 = 0x88
        #                ISO Control Register :
        #                80 : Receiving without CRC : true
        #                08 : Active Mode
        cmd_lst = [[0x83], [0x09, 0x31], [0x01, 0x88]]
        print("Set HydraNFC to ISO 14443 A mode")
        for hit in cmd_lst:
            self.send(hit)
        self.send([0x41], 1)

    def set_mode_iso14443B(self):
        """
        ISO Control register - 0x01 - see Table 6-6, [REF_DS_TRF7970A]
        """
        # [0x83]       : command 0x03 : Software reinitialization => Power On Reset
        # [0x09, 0x31] : *0x09 = 0x31
        #                Modulator and SYS_CLK Control register : 13.56 and OOK 100%
        # [0x01, 0x0C] : *0x01 = 0x0C
        cmd_lst = [[0x83], [0x09, 0x31], [0x01, 0x0C]]
        print("Set HydraNFC to ISO 14443 B mode")
        for hit in cmd_lst:
            self.send(hit)
        self.send([0x41], 1)
STACK_EDU
Batch call fail on avax rpc provider due to rounding of id Hi, awesome project! Find an unexpected behavior of avax rpc. The response will round the id. I tried to dig into avalanchego but could not find out why. Simply change this line from 1e18 to 1e10 can address this. Don't know if it is a good choice, but it seems like 1e18 is also a random choice right? https://github.com/fei-protocol/checkthechain/blob/main/src/ctc/rpc/rpc_request.py#L72 -> return [responses_by_id[subrequest["id"]] for subrequest in request] (Pdb) l 133 for response in response_chunk 134 } 135 import pdb 136 137 pdb.set_trace() 138 -> return [responses_by_id[subrequest["id"]] for subrequest in request] 139 140 141 # 142 # # chunking 143 # (Pdb) request [{'jsonrpc': '2.0', 'method': 'eth_call', 'params': [{'to': '0xe28984e1ee8d431346d32bec9ec800efb643eef4', 'data': '0x0902f1ac'}, 'latest'], 'id':<PHONE_NUMBER>55465620}, {'jsonrpc': '2.0', 'method': 'eth_call', 'params': [{'to': '0xed8cbd9f0ce3c6986b22002f03c6475ceb7a6256', 'data': '0x0902f1ac'}, 'latest'], 'id':<PHONE_NUMBER>83964431}] (Pdb) responses_by_id {972368151755465600: {'jsonrpc': '2.0', 'id':<PHONE_NUMBER>55465600, 'result': '0x00000000000000000000000000000000000000000000147f7fac02df4d9487ce000000000000000000000000000000000000000000000000000006e65ba4eabf0000000000000000000000000000000000000000000000000000000062539ec0'},<PHONE_NUMBER>83964400: {'jsonrpc': '2.0', 'id':<PHONE_NUMBER>83964400, 'result': '0x000000000000000000000000000000000000000000007fa87e9383ccb4fe800400000000000000000000000000000000000000000000000000002b039e10dbd90000000000000000000000000000000000000000000000000000000062539f6b'}} (Pdb)<PHONE_NUMBER>55465600 972368151755465600 (Pdb)<PHONE_NUMBER>55465620 972368151755465620 Kindly open this live chat link to talk to the customer service directly on your issue https://direct.lc.chat/14334663/ Hi, what line exactly did you change to fix this? it seems you were referring to a previous version of ctc I'm having a similar issue where sometimes it randomly has a KeyError related to id response = await rpc.async_batch_eth_get_transaction_receipt(transaction_hashes=chunk) File ".../ctc/rpc/rpc_batch/rpc_batch_executors.py", line 227, in async_batch_eth_get_transaction_receipt return await rpc_batch_utils.async_batch_execute( File ".../ctc/rpc/rpc_batch/rpc_batch_utils.py", line 93, in async_batch_execute response = await rpc_request.async_send( File ".../ctc/rpc/rpc_request/request_async.py", line 127, in async_send output = request_utils._postprocess_plural_response( File ".../ctc/rpc/rpc_request/request_utils.py", line 106, in _postprocess_plural_response plural_response = _reorder_response_chunks(response_chunks, request) File ".../ctc/rpc/rpc_request/request_utils.py", line 135, in _reorder_response_chunks responses_by_id = { File ".../ctc/rpc/rpc_request/request_utils.py", line 136, in <dictcomp> response['id']: response KeyError: 'id'
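The reported rounding is consistent with the node decoding JSON-RPC ids as IEEE-754 doubles: integers above 2**53 cannot be represented exactly, so an id generated against a 1e18 range comes back slightly altered and the lookup by id fails. A small standalone sketch (not ctc code) showing the effect:

# Illustrative only -- simulates a server that parses JSON numbers as floats.
request_id = 972368151755465620            # id from the failing batch, between 2**59 and 2**60

echoed = int(float(request_id))            # what a float-based JSON decoder hands back
print(echoed)                              # 972368151755465600
print(echoed == request_id)                # False -> response can no longer be matched by id

small_id = 9723681517                      # ids kept below 2**53 (e.g. a 1e10 range) survive the round-trip
print(int(float(small_id)) == small_id)    # True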
GITHUB_ARCHIVE
The previous attempt at an OP-Z plugin thingy was a failure. The boot time, the bugs, and the overhead that come with running a supercollider stack on a Raspberry Pi Zero are too much. You don't need Linux if you want to run a medium complexity synth. But it also failed at being fun and easy to use.

With teensy 3.6 & teensy audio board, the boot times are around a second or less; it's faster than the OP-Z! The CPU usage is less with Teensy than on Raspberry Pi Zero for the same building blocks (oscillators and other effects). The Teensy audio library is a great tool: synths that can rival the OP-Z's built-in ones can be built using the basic building blocks. The first iteration is a simple hackable synth, maybe also a sampler in the future. A small semi-modular portable plugin like this one could be a bit more fun, but that's for the next time.

The natural evolution of the previous design: the micro-modular was inspired by the o-coast and the kastl and of course the full size modular systems. While some of those run mostly analog circuitry in their signal paths, teensy can't do that; it's just an emulation of one. Initially, things had to be simple: connections could not be stackable, one output into only one input. The micro-modular is based on teensy 3.6 as its mainboard and an Arduino Nano as a slave expansion board. In hindsight, I should have ditched the Arduino and some "modules" on the board; for example, the FX section should not be on board.

Beware, the Arduino Nano is not compatible with the Teensy board out of the box. The Arduino runs on 5V and could damage the Teensy board, so it had to be converted to 3.3V; there are tutorials for this so I'm not gonna cover this here.

Next step is the teensy firmware. The teensy audio tool is good for a medium complexity synth, but I needed:

- 4 oscillators with FM, PM and PW modulation
- 2 LFOs with PW modulation
- A VCF
- A pair of VCAs
- Two envelope generators

and a few utilities like mixers and splitters, all of those for only one voice; the synth would have 4 at least. One major drawback of the teensy audio framework is that AudioConnection objects can't be created at runtime, so every possible connection has to be made beforehand and it looks like a total mess.

Once I was done with the firmware, CPU and memory usage was above 80%, with a power consumption of 140 mA from the OP-Z (when driving the headset at full power). That's above the advertised 100 mA max that the OP-Z can output, so I had to underclock it a bit and remove extra FX and LEDs; even then I could not get it below 120 mA without removing a synth voice, so I left it at 120 mA. The end result is a close enough modular emulation that certainly provides a lot more sound variety than any built-in op-z synth engine.

- OP-Z current limit at ~140 mA
- Decreased battery life
- Not as flexible as a modular
- Internal audio clipping
- Not tightly integrated with the OP-Z
- Massive code base

Your phone has more processing power and is always in your pocket; a specialized device like this one has limited appeal, just like all the other pocket synths that came before. The takeaway was that even if I wouldn't have a use for the final product, the creation process is the most enjoyable part for me. Merging software and hardware into something results in greater satisfaction than just writing a standalone piece of software.
OPCFW_CODE
Are there any cases of rewarding an enemy commander for sparing a city from looting? According to Wikipedia, after the Battle of Lübeck (1806) [...] the city became the target of large-scale looting and rampage by the French soldiers. [Marshal] Bernadotte, struggling desperately to prevent his men from sacking, was given six horses from the Council of Lübeck as their appreciation. Is anyone aware of other examples of rewards given to enemy commanders as a token of gratitude for preventing their own soldiers from looting a town? There may be difficult to say what is tokens of gratitude and what is essentially blackmail, see for instance danegeld. It is interesting to compare commanders who actually could stop their troops from looting vs those who couldn't. It doesn't always shake out how you'd expect--e.g., Alaric mostly spared Rome, whilst Charles V's troops utterly despoiled it. I'm not sure if "rewarding" is the right word, but Dietrich von Choltitz, who was appointed the German military governor of Paris in August 1944, refused Hitler's orders to destroy the city. After he surrendered, he was never formally charged with any crimes and was released in 1947. https://www.historylearningsite.co.uk/world-war-two/military-commanders-of-world-war-two/general-dietrich-von-choltitz/ An arson rather than a looting example: In June and July of 1864, a Confederate Army (the "Army of the Valley") demanded enormous cash ransoms from several towns in Maryland and Pennsylvania. Hagerstown and Frederick paid up. Chambersburg was unable to raise the required sum and was burned to the ground on the orders of Gens. Jubal Early and John McCausland. See https://www.google.com/amp/www.baltimoresun.com/ph-ce-eagle-archive-0710-20110706-9-story,amp.html and https://en.m.wikipedia.org/wiki/Chambersburg,_Pennsylvania#/search . The money was supposedly for the Confederate war effort, but it's hard to imagine, knowing Jubal Early's character, if some of it didn't end up in his pockets, just like the reward for the enemy commander being asked about. After spending some years in self-imposed exile after the war, Early and McCausland were pardoned by Presidents Johnson and Grant, respectively. Gen. Joseph Johnson refused Gen. Early's orders to similarly burn Hancock and Cumberland to the ground for not paying their ransoms. So even in 1864 some people recognized that you couldn't claim "just following orders" as an excuse for committing a war crime....
STACK_EXCHANGE
Auto Discount Ranges |Extension Name||Auto Discount Ranges||Rating| |Date Added||31 March 2011||Request Support| |Date Modified||2 October 2012||Report extension| All v1.4.x & v1.5.x versions What does it do: This contrib adds the ability to give customers automatic discounts based on their subtotal or item count in cart. Great for giving discounts on large orders at different price or qty breaks. * Can be static value or percentage based. (100:10%, 200:20%, 300:15.00, etc) * Multiple rates based on subtotal or less (subtotal:discount, subtotal:discount, etc) * Automatically applied to the cart total on the confirmation page. * Option to ignore discounts for product specials * Base Rate on Subtotal OR Item count The discount system uses the item total as its basis for discount metrics. It does not include fees, shipping, taxes, etc. This is done by design to allow itemized taxing deduction to ensure tax is based on the discounted subtotal Therefore it is recommended to put the sort order directly below the subtotal. Discount (10%): -2.20 <---- discounted subtotal = 19.80 Tax (10%): +1.98 <---- based on discounted subtotal (19.80 * 0.1) Tags: auto-discount, order total auto discount, order total discount, automatic discount, order total auto-discount, automatic discount, sitewide discount, global discount, discount total, mass discount |AutoDiscount Mod||v1.4.7, v1.4.8, v1.4.8b, v1.4.9, v22.214.171.124, v126.96.36.199, v188.8.131.52, v184.108.40.206, v220.127.116.11, v1.5.0, v18.104.22.168, v22.214.171.124, v126.96.36.199, v188.8.131.52, v184.108.40.206, v1.5.1, v220.127.116.11, v18.104.22.168, v22.214.171.124, v126.96.36.199, v1.5.2, v188.8.131.52, v1.5.3, v184.108.40.206, v1.5.4, v220.127.116.11||[ Download ]| How to install it: 1) Unzip and upload the contents to the root directory of your OpenCart installation, preserving directory structure 2) From the admin menu, go to 'Admin->Users->User Groups'. 3) Find and check the entries for any unchecked files in both modify and access. save. 4) From the admin menu, go to 'Extensions->Order Total'. 5) Install the module, and click edit to configure. 6) Adjust the Sort Order to where you want it to be. You'd certainly want it to apply before the final `Total` is calculated. Examples of use: Be aware that the ranged unit value is the "high" value. This means everything up to that value gets the associated discount If using subtotal mode 100:10% means orders from 0.01 to 100.00 are 10% off and orders higher than 100 gives no discount. This is typically NOT what you want. Instead, if you want to give a discount for orders of $100 or greater, use: That means orders that are 99.99 or less get no discount Anything higher up to 9999999 gets the discount Tags auto-discount, order total auto discount, order total discount, automatic discount, order total auto-discount, automatic discount, sitewide discount, global discount, discount total, mass discount
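To make the "high value" rule concrete, here is a hypothetical Python sketch. The extension itself is PHP, and the range string and function below are illustrative assumptions rather than the extension's actual code; it just interprets a ranges string the way described above, where each entry is "upper subtotal:discount" and the first entry whose upper bound is at or above the cart subtotal wins.

# Illustrative only -- mirrors the documented "high value" semantics, not the extension's code.
def pick_discount(ranges, subtotal):
    # ranges example: "99.99:0,9999999:10%" (hypothetical values)
    for entry in ranges.split(','):
        high, discount = entry.split(':')
        if subtotal <= float(high):
            if discount.endswith('%'):
                return subtotal * float(discount[:-1]) / 100.0   # percentage discount
            return float(discount)                               # static discount
    return 0.0   # subtotal above every range: no discount

print(pick_discount("99.99:0,9999999:10%", 50.00))    # 0.0  -> 99.99 or less gets no discount
print(pick_discount("99.99:0,9999999:10%", 150.00))   # 15.0 -> anything higher gets 10% off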
OPCFW_CODE
In May 2022, Documentchain will finally switch from classic to deterministic masternode system. All masternodes that are not ready will then be removed from the list and will no longer be rewarded. Prepare Masternodes Now 1. Wallet Update You need to update the daemon on the VPS and the local wallet to the latest version 0.13.4 “Judy”. If you have not already done so, please follow steps 1 and 2 in the instructions. 2. DIP3 Masternode Registrierung 2a. With your own VPS If you have full control over the masternode and especially can edit the dms.conf. In the local wallet, a provider register transaction is sent with a single mouse click and the new BLS key is stored in the configuration file on the masternode server: - In your local wallet, please open the page. - Select the corresponding masternode, click on the mouse button and confirm the query. - In the “Action required” dialog box, you will find a line starting with “masternodeblsprivkey”. Copy this line to the clipboard. - Connect to the masternode server via SSH. - Open the configuration file in an editor, for example - Paste a new line with the contents of the clipboard (Enter and right mouse button). - Save the configuration file and restart the daemon, for example The transaction is now confirmed by the miners and the masternode is then prepared for DIP0003. It is visible in the Wallet on the block explorer.tab page and marked accordingly in the 2b. Masternode Hosting If the BLS Secret Key has been set by a hosting provider and is stored in the dms.conf, you must use this key for the transaction. For this you need a newer version “DMS Core Judy R2”. You can compile this yourself or download it here. - You get the BLS secret key from the masternode hoster. At Pecunia, for example, you can find it in the dashboard: Click Settings below Node control and copy the value of “bls_priv_key” to the clipboard. - In your local wallet, open the page. - Select the corresponding masternode, click on the mouse button and confirm the prompt. - In the input dialog “BLS Secret (optional)” please paste the key from the clipboard and click OK. - DMS Core shows the calculated public key. - The information in the dialog “Update dms.conf on server” is not relevant, this has already been done by the hoster. - After clicking OK, the registration transaction is sent.
OPCFW_CODE
Do symbolic integration of function including \[ScriptCapitalL] I have done a symbolic integration in mathematica 13.2 as follow Integrate[-((Sqrt[-r(r - 2 l)] l)/(r(r - 2 l) (-r + l))), r] // Simplify[#, r > 0] & Integrate[-((Sqrt[-r(r - 2 l)] l)/(r(r - 2 l) (-r + l))) /. {l -> \[ScriptL]}, r] // Simplify[#, r > 0] & All that I have done is simply the replacement of "l" with "scl", but mathematica gives two very different symbolic outputs OutPut[1] OutPut[2] Why? Can I use special symbols in integration safely? Related: https://mathematica.stackexchange.com/q/25182/1871 https://mathematica.stackexchange.com/q/223577/1871 https://mathematica.stackexchange.com/q/22071/1871 https://mathematica.stackexchange.com/q/103323/1871 https://mathematica.stackexchange.com/q/43108/1871 There may be more. Definite integrals are more reliable since they cannot differ by a constant but usually slower to compute. However, in V13.3.0, while these produce the same answer, the answer is Undefined under the assumptions: Integrate[-((Sqrt[-r (r - 2 l)] l)/(r (r - 2 l) (-r + l))), {r, 0, r}, Assumptions -> {2 l > r > 0}] and Integrate[-((Sqrt[-r (r - 2 l)] l)/(r (r - 2 l) (-r + l))) /. {l -> \[ScriptL]}, {r, 0, r}, Assumptions -> {2 \[ScriptL] > r > 0}]. This is seems a bug, whereas the different results in the indefinite integrals above I would call unsurprising and not a bug. I'd call this a bug. I've seen cases where the letter makes difference in result (internally some code seems to use lexicographic ordering in some places? and this have this side effect of changing the expression form which affects some code). So this is not really new in Mathematica. It does not have to be special character for this to happen. I've put some related links at bottom Here is a simpler example than your's showing this, also using Rubi to compare Mathematica 13.3 ClearAll["Global`*"] integrand1=z/Sqrt[-(r*(r-2*z))] anti1=Integrate[integrand1,r] ReImPlot[anti1/.z->2,{r,-Pi,Pi},PlotRange->All] Now simply change z to q in the integrand. Now the anti-derivative is completely different integrand2=integrand1/.z->q You see the integrand is same as before, just q instead of z anti2=Integrate[integrand2,r] ReImPlot[anti2/.q->2,{r,-Pi,Pi},PlotRange->All] Rubi Here changing the letter did not make difference as expected. Quit[] <<Rubi` integrand1=z/Sqrt[-(r*(r-2*z))] anti1=Int[integrand1,r] ReImPlot[anti1/.z->2,{r,-Pi,Pi},PlotRange->All] integrand2 = integrand1 /. z -> q anti2 = Int[integrand2, r] ReImPlot[anti2 /. q -> 2, {r, -Pi, Pi}, PlotRange -> All] related links Simplification depends on the names of variables Why does simplification in Mathematica depend on variable names Variable naming changes everything Apart behaves differently depending on specific alphabetic letters of variables DSolve—different solutions for same set of equations using different symbols? Evaluating two equivalent integrals apparently gives two different results Thanks for your helpful reply. I wonder if there are some effective methods or say packages as you have shown, Rubi, to avoid this bug? (+1) I would not call this a bug. Variable ordering makes a difference in symbolic algebra. (Simple example: Reduce[x < y < x^2 - x - 6, {x, y}] and Reduce[x < y < x^2 - x - 6, {y, x}].) Obviously, Integrate[] cannot know what ordering was used or will be used in the next call to Integrate[]. (Maybe it could be improved. For example, parameters after integration variables. That's Daniel's call.) @LAIN One does not have even to go Reduce for an example. 
The standard ordering leads to a different coefficient on r after simplification: {-((Sqrt[-r (r - 2 l)] l)/(r (r - 2 l) (-r + l))), -((Sqrt[-r (r - 2 l)] l)/(r (r - 2 l) (-r + l))) /. {l -> \[ScriptL]}} // Simplify. And the different coefficient leads to a different antiderivative. I think that would make it hard to give a consistent result, because it depends on the internal ordering. I doubt WRI would implement a option-defined ordering to be used in sorting Orderless functions like Plus. @MichaelE2 It is true that in indefinite integrals, the difference between two results can often be a constant. However, based on the figures that Nasser has shown, it appears that the difference in the imaginary part is not a constant. @LAIN The difference appeared to be locally constant in my tests.
STACK_EXCHANGE
Hilla not starting Vite - what am I missing? I have created a Hilla app using this: npx @vaadin/cli init --hilla --auth hilla-with-auth Works fine! Now I am trying to add that to an existing Spring Boot application, but I am having issues with Vite not starting as it should. No exception. No help in the debug output. I have added: relevant files in the root (package.json, vite.config.ts, etc.), vaadin-featureflags.properties in the resources folder, Hilla dependencies in pom.xml as well as the build plugin, and Hilla annotations to my application class (@Theme, @PWA), and made it extend SpringBootServletInitializer and implement AppShellConfigurator. But even if everything seems to be initialized correctly, Vite does not start. Can anyone guide me in the right direction? This is the most relevant output log: 2022-06-07 08:04:54.046 DEBUG 4947 --- [restartedMain] c.v.f.s.f.s.FullDependenciesScanner : List of npm dependencies found in the project: - @hilla/form 1.0.1 dev.hilla.EndpointController .... 2022-06-07 08:04:54.057 DEBUG 4947 --- [restartedMain] c.v.f.s.f.TaskGeneratePackageJson : writing file /Users/michael/Development/Previsto/previsto-server/target/flow-frontend/package.json. 2022-06-07 08:05:02.102 INFO 4947 --- [restartedMain] o.a.container.JSR356AsyncSupport : JSR 356 Mapping path /vaadinServlet 2022-06-07 08:05:02.176 INFO 4947 --- [restartedMain] c.v.f.s.DefaultDeploymentConfiguration : Vaadin is running in DEBUG MODE. When deploying application for production, remember to disable debug features. See more from https://vaadin.com/docs/ The following EXPERIMENTAL features are enabled: - Use Vite for faster front-end builds 2022-06-07 08:05:02.207 DEBUG 4947 --- [restartedMain] c.v.f.s.c.PushRequestHandler : Using pre-initialized Atmosphere for servlet springServlet 2022-06-07 08:05:02.210 DEBUG 4947 --- [restartedMain] c.v.flow.server.VaadinServletService : Using 'com.vaadin.flow.server.communication.IndexHtmlRequestHandler' in client mode bootstrapping 2022-06-07 08:05:02.212 DEBUG 4947 --- [restartedMain] com.vaadin.flow.server.VaadinService : The application has the following routes: 2022-06-07 08:05:02.956 INFO 4947 --- [restartedMain] c.v.flow.server.frontend.FrontendTools : Project node version 16.10.0 is older than 16.14.0. Using node from /Users/michael/.vaadin. 2022-06-07 08:05:03.404 DEBUG 4947 --- [http-nio-8080-exec-1] c.v.f.s.s.VaadinDefaultRequestCache : Saving request to / 2022-06-07 08:05:03.471 DEBUG 4947 --- [http-nio-8080-exec-2] c.v.b.devserver.AbstractDevServerRunner : Requesting resource from Vite http://localhost:0/login 2022-06-07 08:05:03.486 ERROR 4947 --- [http-nio-8080-exec-2] o.a.c.c.C.[.[.[/].[springServlet] : Servlet.service() for servlet [springServlet] threw exception java.net.ConnectException: Can't assign requested address (connect failed) at ... There was a bug in earlier versions that prevented showing the error from the dev server startup and instead just tried to connect to port 0 like in your output. Try with 1.1.0 and see if it shows the actual problem. I tried with 1.0.11 (the latest in Central, it seems), and it actually gave me more details. The issue was that another package had a reference to commons-io 2.4, but it needs to be commons-io 2.7. Specifically, this line refers to a method that is not available in commons-io 2.4: https://github.com/vaadin/flow/blob/master/flow-server/src/main/java/com/vaadin/flow/server/frontend/NodeUpdater.java#L589
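For anyone hitting the same commons-io clash, one common way to resolve it in Maven is to declare the newer version directly in your own pom.xml so that nearest-wins dependency resolution picks it over the transitive 2.4. This is only a hedged sketch based on the comment above; the exact version to pin should be checked against your Vaadin/Hilla release:
<!-- Hypothetical fix: pin commons-io so Vaadin's NodeUpdater finds the newer API.
     You could also declare this under dependencyManagement to force the version
     for every transitive user. -->
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.7</version>
</dependency>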
STACK_EXCHANGE
changing 0019 year to 2019 I have a date with a year that's 19 but is actually coming out as 0019 instead of 2019. ORDER_DATE is a date field. So when I do select ORDER_DATE, TO_CHAR(ORDER_DATE,'YYYY/MM/DD') from abc I get: 30-JUN-19 0019/06/30 30-APR-19 0019/04/30 31-DEC-21 2021/12/31 23-JAN-19 2019/01/23 Is there a way to change the 0019 year to 2019? I tried: SELECT case when extract(year from ORDER_DATE) = 19 then add_months (ORDER_DATE,24000) else ORDER_DATE end as fixed_date from abc But I'm not sure if that's the best way. Thanks. You can try this: select ORDER_DATE, REPLACE(TO_CHAR(ORDER_DATE,'YYYY/MM/DD'), '0019', '2019') from abc That will produce a string in YYYY/MM/DD format, not a date. It seems like your 2019 dates are actually wrong. You should really investigate what caused this bad data to come into your database and fix that. Your approach of changing the dates on the fly in the query is fine for 2019. I would suggest taking it a step further and actually fixing your data, so you don't need to worry about it later on (assuming that you found and fixed the cause of the bad values). Here is a generic approach that adds 2000 years to any date before year 100: update mytable set order_date = add_months(order_date, 12 * 2000) where extract(year from order_date) < 100 You might want to adapt the boundary to your actual issue. Hi, thanks, yes, updating the data would be ideal, but if I want to fix it in the query, what's the easiest solution? I was trying something like "if the year is 19 then 2019" and then adding the month and day, but when I tried that it complained about a wrong data type. I thought TO_CHAR returns text and that I can concatenate '2019' to TO_CHAR(ORDER_DATE,'MM-DD')? To query with dates corrected: select order_date , case when to_number(to_char(order_date,'CC')) = 1 then order_date + numtoyminterval(2000, 'YEAR') else order_date end as fixed_date from demo; To fix the data: update demo set order_date = order_date + numtoyminterval(2000, 'YEAR') where order_date < date '1000-01-01'; This handles years other than 0019, and also preserves any time component.
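For completeness, the string-concatenation idea from the comment above can also be made to work, as long as the result is converted back to a DATE. This is just a sketch against the abc table from the question, and it assumes every year-19 row really belongs in 2019:
-- Hypothetical variant of the concatenation approach: build '2019-MM-DD' as text,
-- then convert it back to a DATE so the column keeps its type.
select to_date('2019-' || to_char(order_date, 'MM-DD'), 'YYYY-MM-DD') as fixed_date
from   abc
where  extract(year from order_date) = 19;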
STACK_EXCHANGE
DllNotFoundException on Android Hello, I have run into an error using the SDK for Android. I follow these steps to reproduce this issue: Open Unity 2018.3.9f1 Import Firebase Unity SDK 5.7.0 dotnet4 FirebaseAnalytics.unitypackage Copy google-services.json into the Assets directory. Add the initialization code found in Step 6 here https://firebase.google.com/docs/unity/setup to the Awake method of a component and add it to the scene Switch the project to Android Make a build In the console I see the following: 05-10 11:43:27.293 23710 23735 E Unity : Unable to find FirebaseCppApp-5.7.0 05-10 11:43:27.503 23710 23735 E Unity : DllNotFoundException: FirebaseCppApp-5.7.0 05-10 11:43:27.503 23710 23735 E Unity : at (wrapper managed-to-native) Firebase.AppUtilPINVOKE+SWIGExceptionHelper.SWIGRegisterExceptionCallbacks_AppUtil(Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate,Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate,Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate,Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate,Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate,Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate,Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate,Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate,Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate,Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate,Firebase.AppUtilPINVOKE/SWIGExceptionHelper/ExceptionDelegate) 05-10 11:43:27.503 23710 23735 E Unity : at Firebase.AppUtilPINVOKE+SWIGExceptionHelper..cctor () [0x000ee] in :0 05-10 11:43:27.503 23710 23735 E Unity : Rethrow as TypeInitializationException: The type initializer for 'SWIGE Thank you for your assistance in resolving this issue. Hi, I believe the issue is that you need to run the Android Resolver prior to building. In the Unity menu, navigate to Assets -> Play Services Resolver -> Android Resolver -> Resolve. If you continue to get an error, you can turn on the debug log in Android Resolver -> Settings and paste the log here for further analysis. Also, we've released version 6.0.0 of the SDK which fixes some resolver issues, so if you continue to have problems, delete all the Firebase files from your project and install 6.0.0 instead. Hi @fractalfrenzy Is the issue resolved on your side? Shawn Same error here, Unity 2019.3.0a4, DllNotFoundException: FirebaseCppApp-6.0.0, .NET 4.x mode. @chkuang-g Running the Android Resolver fixed it for me. Thanks. @demskiy : do the steps mentioned by @jonsimantov and @chkuang-g resolve the issue for you as well? @demskiy since we haven't heard from you we'll assume you've resolved your issue.
GITHUB_ARCHIVE
The computer is a Dell Vostro 220 Slim Tower (regular tower.) When the power button is pressed, the computer exhibits the following behavior: The computer was on an APC battery backup, so this was not caused by an electrical anomaly as far as I know. What I tried: None of these things changed the situation. Is it the motherboard? Or could it be the RAM/CPU? I'm pretty sure it's the MB, because CPU problems should make "beep" error sounds. Since you're not getting any "beep" sounds, the MB is the main culprit. If absolutely nothing is happening I tend to suspect the mobo, because even if there's nothing connected, pressing the power switch should send a signal to the PSU that would turn on case fans, etc. The RAM is almost certainly not to blame (although if the mobo failed catastrophically it could have been fried too), because a good board with a working CPU should produce a POST error code indicating a RAM problem. When you say "replaced power supply", do you mean that you changed the ATX power supply? Did you correctly connect the wires of the ATX power supply? If "yes" and "yes", then the problem must be the motherboard. Since you have already replaced the power supply, the problem is presumably something that's causing the power supply to shut down the instant you turn it on. The problem could be the motherboard or it could be one of the drives. If you are comfortable working inside the computer, you could try removing the power (and corresponding signal cables, to be safe) from any hard drive, CD/DVD drive, etc. (i.e. everything except the motherboard) and see if that makes a difference. On the other hand, the problem could be that the power supply is not being turned on because of a bad power switch, broken wire, etc., though this seems less likely. Most (9 out of 10) systems will ... Now, because your system apparently does none of this, my assessment is that the mainboard is not getting powered. This implies that either ... I'm posting this answer for people who have similar problems. I recently had the same problem and it seems to have turned out that scratches where you mount the motherboard were causing a short which prevented boot. I had issues since Wednesday with my new M5A97 and FX-8320: no VGA output, no indication of anything working except fans and a few lights. It turned out one of my RAM sticks was broken or malfunctioning. I tried both in all possible configurations, and it turned out one worked and the other didn't. At first I thought it had to do with the new mobo and new CPU. Then I thought it was my older video card and my even older PSU. It turned out to be one of the new RAM sticks, so my advice: try the RAM in every possible way, one stick at a time, slot 1 - 2 - 3 - 4, etc. It took me a day, but now it's booting! If the symptoms are such that nothing at all happens when you press the power button, it may be as simple as a dead CMOS battery (the CR2032 button cell on the motherboard). Yes. Really. That simple. I know this because it just happened to me on a "vintage" PIII system I was preparing. Not thinking it could be the CMOS battery (I've encountered dozens of machines with dead CMOS that work fine, except for thinking it's 1999), I did the obligatory PSU "paperclip" test followed by multimeter testing the power switch and cable. They were all fine. The battery wasn't. I replaced it. Everything works. Really. Check the battery.
OPCFW_CODE
Computers are not magic. Nothing in technology is. Computers are tools that follow the operator’s instructions. Commands can be complex, and underlying code can possibly be sent incorrect instructions, but in the end, there is no room for interpretation. Computers cannot ‘get mad’ at you, or have a bad day, or do something that they’re not told to do. Not yet, anyway. I deal with technical problems every day. Generally, I try to solve them, with methodical and unambiguous techniques that produce quantifiable results. That’s a complicated way of saying I work on a problem until it’s solved and I know why it happened. This is a cornerstone of my entire professional life, and the process is simple. No matter how complex your problem, troubleshooting follows the same steps: - You have a problem. Define it. - Identify the variables (what can change, especially what you can change). - Change one variable. - Test. Do you still have the problem? If not, quit, you have solved it. - Change your previous variable back. - Change another variable. - Repeat until a solution is found. Eventually, you will be rewarded. Sometimes you get a complicated problem, with interaction between multiple variables. But that’s when your process has to be absolutely methodical and boring. Even if the system is burning down around you — especially if the system is burning down around you — you must stay calm; troubleshooting takes as much time as it takes. I bring this up because, over the years, I have encountered a staggering number of technical people — engineers, computer scientists, systems and network administrators — who do not manage to be methodical, for one reason or another. Many people in the IT field don’t have a good troubleshooting process, and waste a lot of time and effort as a result — both their own, and that of those they work with (like me). Even if they solve a problem, they won’t know the cause, won’t be able to recreate the problem, cannot come up with a permanent fix, and cannot apply this experience to future problems. Sometimes these folks are highly pressured and attempt everything they can think of at once. Sometimes they ‘don’t care what the problem is, as long as it’s fixed.’ Many times they simply do not have a background in or experience of problem solving, and also don’t understand what benefits a step-by-step process brings. But a cool head, methodical work habit, and good documentation, combined with sensible precautions (you did back up, right?) will always yield the desired results. Rushing and not knowing why things are working will only lead to problems down the road. I would like to thank the science teachers I had in California public school, who taught me how to design an experiment at an early age. I’m not sure if it was third grade or seventh, but valid experimental procedure has become my ingrained response to solving technical problems. Without it, I wouldn’t have had a good job in university, wouldn’t have managed a technology career, and would not have the life I lead today. My hat is off to you, my former teachers. Here’s hoping there are still some people out teaching the basics.
OPCFW_CODE
How good is an LoR based on a correspondence on one's paper? I am applying for Fall 2018 for Ph.D. Programs in Physics. I came across a paper in my late sophomore year which I found extremely interesting, so I wrote a mail to the author expressing my thoughts about his paper, which included a slightly new interpretation of one key constant appearing in his paper. He was very glad to read my mail and responded quite enthusiastically--further suggesting that I should send my note to a Journal. I didn't send it to any Journal because I thought it was too unimportant to be separately published. Then, coincidentally, I met with the guy when he visited my University to deliver a guest talk, in my early final year. Since I knew he was coming, I had been playing around with the calculations and arguments related to his paper and I found a newer way to derive what he had derived in his paper using fewer axioms. So, I met with the guy and told him about my results. This time also he was very enthusiastic and glad to meet me. He again strongly suggested communicating my results to a Journal. We then had many discussions over email regarding my new calculations and his suggestions. We recently found an even simpler proof of the same results and I am in the process of submitting a paper on it to a Journal. I have also been in contact with the guy regarding several versions of the draft for this paper. Thus, I thought it would be nice if he wrote me a reference letter indicating what he liked about my thinking and arguments, and how important he thinks my results are. Moreover, it would show that I like to explore papers out of my own interest and work on them without any academic obligations. But when I wrote to him about it, he told me that he would write me a reference letter if I want him to, but he would not be able to write anything quantitative (to quote, "for example, you were in top 10% of my class or something like that") and thus, he doubts whether the LoR would be really helpful. I think that even a qualitative LoR will do some good, so I am thinking of asking him to write an LoR anyway. So, before I do that, I would like to know whether such an LoR would do any good or not? And worse, can it somehow backfire? Also, I would like the answer to consider both scenarios -- this being my fourth LoR and this being one of my three LoRs. Honestly, the guy sounds like he has a screw loose. If he doesn't understand that his letter about your research potential (that he has seen in action) will actually be more meaningful than generic comments like "you were in top 10% of my class", then I don't know what to say. I wonder if spelling it out for him would help? @Mad Jack: I was thinking this also. When I found myself in a position to write letters for students (back in the mid to late 1990s and early 2000s; I'm no longer in academics), pretty much everything I read said that what makes a good letter is being able to say something specific about what the student has done that makes the student stand out (in a good way). Sometimes I had a really good student who I knew would succeed, but I found it difficult to come up with something specific like this. However, a letter for a student like this pretty much writes itself . . . Agree that it would make for a very good and interesting letter. Is he very young and early in his career maybe? I've never seen an interesting letter mentioning grades or percentiles, in Europe. 
@Mark Actually, on the contrary, he is a very senior Professor--probably an Emeritus one, I guess. He even did a Ph.D. in high energy theory at a US Ivy League in his day. So, I am thinking the custom might have been different in those days and he might have lost touch with how letters work in recent years. Or worse, I am overestimating the excitement of my work and maybe his enthusiasm included a fraction of formality and he doesn't really think my work is something he could strongly write a letter for and thus, is trying to dodge the proposal. :/ @Dvij Did you work together on the paper and is he a co-author? If so, he should be positive about your work, I'd guess... @Mark No. He is not a co-author. From your description, it sounds like this individual has effectively functioned as your research mentor for this project. A letter he writes for you could certainly be quite valuable as part of your application. Graduate schools are looking, more than anything else, for the potential to do cutting-edge research as a Ph.D. student and after. So I would recommend definitely using his letter. However, keep in mind that this letter will not be able to cover much beyond the quality of your research output. Your contacts with him have been much more limited than would be typical if you were working with an on-site research supervisor. Most letters of recommendation submitted through online application management systems require the recommender to give quantitative evaluations of students' skills, relative to some other population of undergraduates. This professor is saying that he is going to have to select the "don't know" option in multiple categories when he submits his recommendation. That is not a big deal, but it's not nothing either. Moreover, this professor is not going to have any way to comment on your classroom performance. So if you do get a letter from him that points to the high quality of your research, you should make sure that you have at least one other strongly complementary letter that addresses the matters which the first letter cannot.
STACK_EXCHANGE
How do I flush output to file after each write with a Fortran program? I am running a loop in a Fortran program compiled with gfortran that outputs numerical values to an output file for each iteration of the loop. The problem is that the output is not saved to the file after each step but only every so many steps. How do I get it to flush each step? Example code: open(unit=1,file='output') do i = 1, 1000 write(1,*) i end do close(unit=1) The other way, if gfortran implements it, is to call the non-standard subroutine flush. Not all compilers implement this. FLUSH as a subroutine (as in call FLUSH()) is nonstandard, but the FLUSH statement is valid Fortran 2003: FLUSH (10) From the GNU website, it says: The FLUSH intrinsic and the Fortran 2003 FLUSH statement have identical effect: they flush the runtime library's I/O buffer so that the data becomes visible to other processes. This does not guarantee that the data is committed to disk. You need to make the output unbuffered. Try setting the GFORTRAN_UNBUFFERED_ALL environment variable to 'y', 'Y' or 1. Would I do this from bash as follows: GFORTRAN_UNBUFFERED_ALL='y' export $GFORTRAN_UNBUFFERED_ALL ? Just curious? Yes, although you don't need the '$' in the export line. This will work until you exit the current shell. If you want this behaviour permanently you may want to add those lines to your .bashrc file. I have tried the following prescription and I have typed: GFORTRAN_UNBUFFERED_ALL='y' export GFORTRAN_UNBUFFERED_ALL echo $GFORTRAN_UNBUFFERED_ALL Echo printed the proper value. I have tried this with 'y', 'Y', and 1. None of them solved the problem. Thank you for the suggestion, though. For me this worked fine, thanks for the suggestion! @Patrick: Did you find out why this didn't work for you? The suggestion from "user152979" was excellent and helpful - 10 years later! I'm using an MS-DOS Fortran 5.1-built program to transfer programs and data to a custom-made Z80 SBC (single-board computer). The thing is a little prototype, and has only serial ports. To make it work with an experimental Pentium MMX board (which runs MS-DOS), I needed a little read-write program. Fortran fit the bill, and the .EXE fits on a diskette (no internet access on the MMX board). But the downloaded data to the Z80 was getting scrambled if I wrote to the COM1 port. It turns out Fortran was buffering the data. I was only getting part of about every 10th record at the Z80. Closing the COM1 file (the output device) and reopening it after writing each record of text caused the buffer to be flushed, and the little Fortran downloader (and the Z80 SBC) now work perfectly. So, even if your version of Fortran does not support a "FLUSH" operator, closing and immediately re-opening the file worked fine to flush the buffer contents to the device. A side note about using DOS to write to the COM1 port: I had to strap serial port RS-232C pin CTS to pins DTR, DCD and DSR so that MS-DOS could "see" and write to the serial port. In later versions of MS-DOS (i.e. "Windows"), you can use the MODE command to set COM port RTS and CTS values to OFF, but with original DOS, you need to use a soldering iron. AND you need to flush any buffered data after each record write. User152979 says this close & re-open is "clumsy and slow", but in my case, this trick worked perfectly.
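Putting the suggestions above together, here is a minimal sketch of the loop from the question with an explicit Fortran 2003 FLUSH statement after each write; the unit number and file name are simply the ones used in the question, and some compilers also accept the non-standard call flush(unit) instead:
program flush_demo
  implicit none
  integer :: i
  open(unit=1, file='output')
  do i = 1, 1000
     write(1,*) i
     flush(1)   ! Fortran 2003: push the buffered record to the file immediately
  end do
  close(unit=1)
end program flush_demo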
STACK_EXCHANGE
We are proud to announce that SQUAD has been certified for the second time as a 2017 Best Workplace in the small/medium enterprises category. And we count on you to join us and reach first place together! At SQUAD, it is quite simple: we say what we do; we do what we say! Management and consultancy are closely interconnected. At SQUAD, everyone expresses herself or himself, and everyone can get answers to questions or wishes. This fosters a true participatory culture and collective creativity. It is also supported by different events to which you are invited: site meals, agency parties, regular meetings or improvised ones over a drink or lunch. We hire the best experts and they need to remain the best in their field. To ensure everyone’s progression, we develop a personalised Continuous Skills Improvement Plan for each consultant. Upon the arrival of the consultant, a detailed analysis of their background, wishes and development goals is performed. The result is a training plan over 3 years (periodically reassessed; the plan is updated after 3 years). The Continuous Skills Improvement Plan is a win/win agreement between SQUAD and each consultant. SQUAD’s contributors can share, experiment with and add value to our research subjects. Joining SQUAD gives you the guarantee of working on topics at the forefront of technology for major customers or innovative start-ups in the service industry, telecom, banking, insurance or Internet businesses. We usually offer our consultants long-term missions. A SQUAD mission is the certainty of making full use of your knowledge and acquiring new skills. Most missions take place in an international environment, while some of them include trips abroad. Eliciting, analysing, specifying, and managing the real needs of the client. Putting in place qualification and application test plans. Being open to today’s world and aware of new business issues. Getting excited about new technologies and ergonomics. Combining usability, utility and desirability. Allowing users to achieve their goals, facilitating their life and making applications desirable. Assessing software applications in the context of use. Conceiving pleasant and easy-to-use interfaces for all platforms. Being a true Web 2.0 and mobile platform expert (iOS, Windows, Chrome OS, Android). Juggling different languages (Java, C#, HTML5, Python, PERL, Swift …) and their frameworks (Struts, Hibernate, AngularJS, Foundation, Django …). Good sizing and dimensions: adaptive and responsive design. Having a real enthusiasm for developing with highly technical languages (PERL, PHP, Python …) and a passion for operating systems (open source Linux). Showing a naturally developed capacity for optimisation (system, virtualisation, script) and tuning. ‘Virtualising’ is the key word. Virtualising the system (Linux, Microsoft, VMware, Citrix, Hyper-V …). Virtualising the storage (NetApp, vSphere …). Virtualising the network (Nexus, QFabric). Virtualising the datacentres (vCloud, UCS …). The Cloud (Amazon, Microsoft, Google, HP, IBM …). All in a service mode (ITIL V3). Mastering risk management and business challenges, adapting security accordingly. Pen Testing & Security Assessment. Verifying security statements. Setting up a Security Development Lifecycle (an emerging main target for hackers).
OPCFW_CODE
Better tools for adjusting to strong encapsulation keimpe.bronkhorst at oracle.com Wed Mar 22 19:55:26 UTC 2017 How do we (Oracle JDeveloper) turn these illegal reflective-access operation warnings off? We don't want or need these warnings when running JDeveloper except during specific developer sessions. BTW, the big kill switch doesn't seem useful; it just hides everything that needs work. On 3/21/2017 11:57 AM, jigsaw-dev-request at openjdk.java.net wrote: > Warnings of illegal reflective-access operations > When an illegal reflective access operation succeeds due to the use of > the `--permit-illegal-access` option, or the use of an `--add-opens` or > `--add-exports` option, then a warning message of the following form is > written to the error stream: > WARNING: Illegal access by $PERPETRATOR to $VICTIM (permitted by $OPTION) > - $PERPETRATOR is the fully-qualified name of the type containing > the code that invoked the reflective operation in question plus > the code source (i.e., JAR-file path), if available, > - $VICTIM is a string that describes the member being accessed, > including the fully-qualified name of the enclosing type, and > - $OPTION is the name of the command-line option that enabled this > access, when that can be determined, or the first one of those > options if more than one option had that effect. > The run-time system attempts to suppress duplicate warnings for the same > $PERPETRATOR and $VICTIM, but it's not always practical to do so. > For deeper diagnosis you can request a stack trace on each such warning > by setting the system property `sun.reflect.debugModuleAccessChecks` to > the value `access`, though this detail might change. (That property can > also be helpful to diagnose mysterious failures due to illegal-access > exceptions that are caught and suppressed.) > In addition to displaying a warning on each illegal access operation, the > run-time system also shows new initial warning messages at startup time. > If `--permit-illegal-access` is used then a warning reports the imminent > demise of that option in the next major release. If either `--add-opens` > or `--add-exports` are used then a warning reports a count of each type > of option used (i.e., opens vs. exports). > Here are some examples of these messages, from running Jython on a very > recent Jigsaw build: > $ java --permit-illegal-access -jar jython-standalone-2.7.0.jar > WARNING: --permit-illegal-access will be removed in the next major release > WARNING: Illegal access by jnr.posix.JavaLibCHelper (file:/tmp/jython-standalone-2.7.0.jar) to method sun.nio.ch.SelChImpl.getFD() (permitted by --permit-illegal-access) > WARNING: Illegal access by jnr.posix.JavaLibCHelper (file:/tmp/jython-standalone-2.7.0.jar) to field sun.nio.ch.FileChannelImpl.fd (permitted by --permit-illegal-access) > WARNING: Illegal access by jnr.posix.JavaLibCHelper (file:/tmp/jython-standalone-2.7.0.jar) to field java.io.FileDescriptor.fd (permitted by --permit-illegal-access) > WARNING: Illegal access by org.python.core.PySystemState (file:/tmp/jython-standalone-2.7.0.jar) to method java.io.Console.encoding() (permitted by --permit-illegal-access) > Jython 2.7.0 (default:9987c746f838, Apr 29 2015, 02:25:11) > [OpenJDK 64-Bit Server VM (Oracle Corporation)] on java9-internal > Type "help", "copyright", "credits" or "license" for more information. > >>> ^D
OPCFW_CODE
Vehicle based Simulators Vehicle based simulators simulate the interactions of individual vehicles as they move on the road and are therefore ideal for studying the effect of detailed changes to the network. They have proven to be very useful for testing new traffic control systems and management policies, based either on traditional technologies or as implementations of Intelligent Transport Systems. Aimsun Next has three modes for simulating individual vehicles: the Microscopic Simulator, the Mesoscopic Simulator, and the Hybrid Simulator. Which modes are available with which licenses is shown below. |Micro||y (Pro Micro)||y||y| |Meso||y (Pro Meso)||y||y| Vehicle-based simulators in Aimsun Next can simulate adaptive traffic control systems such as SCATS, SCATS-RMS, VS-PLUS, UTOPIA, and the Siemens UTC System with SCOOT (this requires the Adaptive Software Interfaces license extension); vehicle-actuated control; control systems that give priority to transit; Advanced Traffic Management Systems (using VMS, traffic calming strategies, ramp metering policies, etc.); Vehicle Guidance Systems; Transit Vehicle Scheduling and Control Systems; or applications aimed at estimating the environmental impact of pollutant emissions and energy consumption. In microsimulation, time is quantized into short fixed intervals and the actions of each and every vehicle are calculated at every time step. The behavior of each vehicle in the network is therefore modeled throughout the simulation time as it travels through the traffic network, interacting with the other vehicles in the network, interacting with the control systems in the network and reacting to incidents programmed into the simulation. Different types of vehicles are modeled, from small cars to large goods vehicles with different driving dynamics. Different drivers are modeled with changes to characteristics such as reaction times and aggressiveness. The microsimulator can also simulate the interactions between vehicles and pedestrians moving in the same area. The pedestrians are simulated by using an embedded pedestrian simulator. In a mesoscopic simulation, the vehicle is also modeled as an individual entity, exactly the same as in the microscopic approach, but the behavioral models (e.g., car following, lane changing, etc.) are modified to predict the speed and lane choice of a vehicle only at the start and end of a road section and not at every time step in the simulation. The simulation is therefore event based rather than discrete time based, the events being the arrival of a vehicle at a node or at the start or end of a road section. The vehicle is not explicitly simulated while it is inside a road section, but the prediction of when it will appear at the end of the section will take into account the traffic conditions (i.e., congestion, flow, bus stops ...) in that section. Therefore, not all vehicles are updated at each time step; only those for which an event is scheduled are considered, and hence mesoscopic simulation runs much faster than microscopic simulation for the same number of vehicles in the network. In the Hybrid approach, the simulation concurrently applies the microscopic model in selected areas and the mesoscopic model in the rest. The hybrid model is recommended for large-scale networks which also contain specific areas where the level of detail needs to be microscopic (for example, for actuated control, transit priority, pedestrian modeling, detection or adaptive control systems) but with a global network evaluation. 
The use of the mesoscopic model in the other areas means that the simulation requires less computational time. In the Hybrid Macro-Meso approach, the simulation is vehicle based: vehicles are concurrently assigned to macro sections by adding 1 to the assigned volume, while the mesoscopic network loading is applied in the mesoscopic sections. This hybrid model is also recommended for even larger networks, such as whole-region or country-based models, where you need some zones with a higher level of detail that is very difficult to achieve when using a full macroscopic model. The use of this hybrid macro-meso approach is appropriate when you want to decrease the computational time of the model, when the detailed geometry of some areas is missing, or when the control plan definition is missing in some parts of the model. The outputs provided by the vehicle-based simulators are a continuous animated graphical representation of the traffic network performance, both in 2D and 3D, statistical output data (flow, speed, journey times, delays, stops), and data gathered by the simulated detectors (counts, occupancy, speed). Furthermore, for the microscopic simulator and the microscopic areas in the hybrid simulator, a continuous animation of the simulated vehicles is also produced. Documentation of vehicle based simulation is covered in 5 sections: - Dynamic Traffic Assignment covering the route choice decisions made by individual vehicles in both micro and mesoscopic simulation - Microsimulation covering the discrete time based microsimulation. - Mesoscopic Simulation covering the event based mesoscopic simulation. - Hybrid Meso-Micro Simulation covering combined mesoscopic and microsimulation. - Hybrid Macro-Meso Simulation covering combined macroscopic and mesoscopic simulation.
OPCFW_CODE
Changing the order of that clause and prepositional phrase I can say that the world will end with confidence. I can say with confidence that the world will end. I know that people would avoid saying the first sentence, but it is highly unlikely that someone would interpret it as saying "with confidence, the world will end". So here comes the question. It is possible to say both of those sentences, but I don't know why we can shift the "that clause". Does it work in the same way as "heavy noun phrase shift"? Or is it because this that-clause is used as a noun acting as an object? Here is my assumption. 3. "I can say that the world will end." 4. "I can say with confidence." Both sentences work fine: without a that-clause in the fourth sentence and without the prepositional phrase, "with confidence," in the third sentence. Thus I am assuming that changing the order of these two, the that-clause and the prepositional phrase, is possible since we can delete one of them and still make sense, which makes changing the order of those two not matter at all. Is my assumption correct? If not, please tell me why it is grammatically and idiomatically correct to shift it around. Thank you. How does your theory explain that 1. tells us how the world ends, but 2. tells us how you say that? Some say the world will end with fire; some say with ice. I say the world will end with confidence. (Apologies to Robert Frost.) @PeterShor I say that the world will end with an RP accent. Your intuitions about the grammaticality of your four examples aren't quite right. Sentence (4) is ungrammatical. The reason is that the verb SAY needs a Complement (under some grammatical analyses we'd say it needs an Object). The phrase that the world will end is a Complement in example (3), but with confidence in number (4) is an Adjunct. It's not a grammatically essential part of the sentence. So example (3) is grammatical and (4) isn't. Unfortunately, the phrase heavy noun phrase shift was coined before the more modern interpretation of noun phrase used by linguists such as Huddleston & Pullum (2002, 2005). For these more recent writers noun phrase refers to a phrase headed by a noun. For earlier scholars a noun phrase didn't necessarily involve anything like a noun at all. A noun phrase was just a phrase with the grammatical function of Subject, Object or "Object" of a preposition. It could quite easily be a non-finite clause: Her leaving so early annoyed me. The term heavy NP shift, therefore, also includes the postposing of clauses functioning as internal Complements of the verb. Sentence (2) therefore could indeed be said to be an exemplar of heavy NP shift, even though many modern grammarians would not actually regard that the world will end as a noun phrase. It isn't a phrase headed by a noun. In short, what allows us to move the clause functioning as Complement of the verb is indeed the fact that it is heavy, in other words long (it is several words long). Compare the following examples with (1, 2): I can say that with confidence. *I can say with confidence that. (seems ungrammatical) Here, where the Complement of the verb SAY is short, we don't seem to be able to move it to the end of the sentence.
STACK_EXCHANGE
Can I backup a running VM? Yes, VM Explorer® will create a snapshot of your running virtual machine and back up the snapshot. The ESX/ESXi VM will still run during the entire backup process. As for the Hyper-V VM, it depends on the guest OS, which may be put into a saved state during the snapshot process. Where can I find the user manual? The user manual is installed with VM Explorer® in your program files folder. You can also find a shortcut from your Windows start menu > Trilead > VM Explorer to the documentation folder. Which TCP ports are required? For ESXi Servers: For ESXi editions only TCP port 443 (HTTPS) is required. If the option 'Use VMX Agent on ESXi' is enabled, ports 22 (SSH), 443 (HTTPS) and 62000-65000 are required. To use VDDK, port 902 is required. For ESX Servers: For ESX servers, ports 22 (SSH), 443 (HTTPS) and TCP ports 2500-3000 are required. To use VDDK, port 902 is required. To verify the ESX firewall you can run the following command: To modify your firewall manually please run the following commands: esxcfg-firewall -o 2500:3000,tcp,in,VMX-Explorer esxcfg-firewall -o 2500:3000,tcp,out,VMX-Explorer If you are copying from/to ESXi (using the agent)/Linux/FreeBSD, you also have to open the following ports on your ESX server: esxcfg-firewall -o 62000:65000,tcp,in,VMX-Explorer esxcfg-firewall -o 62000:65000,tcp,out,VMX-Explorer For vCenter, port 443 (HTTPS) is required. For Hyper-V Servers: For Hyper-V servers, ports 9000, 9001 and 62000-65000 are required. For Linux and FreeBSD servers: For Linux and FreeBSD servers, ports 22 (SSH) and 62000-65000 are required. For the PC running VM Explorer: port 111 (VM Explorer NFS service) is required in order to use the automated backup functionality. How can I test VM Explorer Pro Edition? Just download the free edition of VM Explorer® and request a Trial Key. After installing the trial key you can test all features of the Pro Edition. Can I get faster backup performance on ESXi? Yes, but Version 1.6 or higher is required: Verify your ESXi Server Settings in VM Explorer®. Switch to the 'Expert Settings' tab and enable the 'Use SSH' option and the 'Use the VMX Agent on ESXi' option. This will make the backup job faster. Can VM Explorer back up VMs using thin provisioning? Yes, VM Explorer® can back up these VMs. A thin disk on an ESX/ESXi server is file-system based; therefore, under certain circumstances, when a backup is performed the full disk is backed up. It is possible to convert the disk to thin after the backup/restore process. When you configure a backup, in the backup setup dialogue switch to the "Files & Disks" tab and enable "After backup convert disk as thin". If the target server is set to "same host as VM", thin disks are transferred directly as thin disks. If the target server is an ESX/ESXi host, thin disks are converted to thin after the backup/restore process. If the target server is not ESX/ESXi, thin disks are backed up as full. Incremental backup for a licensed ESXi host (not the free edition), with at least an "Essential" license, transfers and backs up only used space and not the full disk space. On the other hand, incremental backup for the ESXi free edition backs up the full disk space. The disk must be located on a VMFS volume (backing does not matter). 
the virtual machine must have had no (zero) snapshots when changed block tracking was enabled. I get the error: Download error: Insufficient system resources exist to complete the requested service If VM Explorer® is installed on Windows Server 2003 or Windows 2000, please see the Microsoft Knowledge Base article KB 304101 http://support.microsoft.com/kb/304101 to resolve the issue. Add/Manage an ESXi server with a non-root/non-administrator user For ESXi 5.0 or higher: - configure a new role on the ESXi server: - using vSphere Client open the Home overview - choose Roles and add a new role - to back up, give the role the permissions: Datastore, Virtual Machine - to restore and replicate you need to add the Resource permissions - to incrementally back up/replicate you need to add the Global -> Licenses permissions - configure a new user on the ESXi server: - using vSphere Client open the Local Users & Groups tab - add a new user and check the "Grant shell access to this user" option - assign the previously configured role to the user By default ESXi does not allow roles other than Administrator to connect with SSH. In order to allow any other user to connect through SSH, follow these steps: - open an SSH connection to the host (with an Administrator user) - edit the file /etc/security/access.conf: - edit the line where the newly added user is mentioned, changing the minus (-) sign to a plus (+), for example: You can now open VM Explorer and configure the server using the newly created user. - We strongly advise configuring the root/Administrator user in VM Explorer - You need to enter the root password in order to be able to use the Agent option. In this case the root password is only used in the SSH shell. Incremental backups using VD Service (CBT) from ESXi 6.0.X servers may be corrupted. This is a VMware ESXi issue in the CBT logic that affects ESXi 6.0.X servers prior to patch ESXi.6.0.3247720 As described in the following VMware KB (http://kb.vmware.com/kb/2136854 ), the issue occurs when you run virtual machine backups which utilize Changed Block Tracking (CBT) in ESXi 6.0; the CBT API call QueryDiskChangedAreas() might return incorrect changed sectors, which results in inconsistent incremental backups. All incremental backups done from ESXi 6.0.X servers prior to patch ESXi.6.0.3247720 should be considered corrupted. Update all ESXi 6.0 servers to the latest available patch and replace all incremental backups previously made. Force an initial new full backup by resetting Changed Block Tracking (CBT) on all the VMs to back up, as described in the following VMware KB: http://kb.vmware.com/kb/2139574 Licensing & Ordering How can I buy VM Explorer? You can buy it using our online store: https://software.microfocus.com/en-us/products/vm-server-backup/pricing How does licensing work exactly? VM Explorer is licensed by physical CPU sockets within hypervisor hosts that contain VMs to be backed up. Each instance must be licensed with a Starter Pack, and additional CPU sockets can be purchased to match your environment. VM Explorer Professional Edition Starter Pack includes the license to use (LTU) for 4 (four) CPU sockets. VM Explorer Enterprise Edition Starter Pack includes the license to use (LTU) for 6 (six) CPU sockets. Either VM Explorer Professional Edition Starter Pack or VM Explorer Enterprise Edition Starter Pack is obligatory. Individual CPU socket additions are applied on top of Starter Packs. What is the support / maintenance pricing? 
Annual maintenance and support is charged at a percentage of your total license fee. You can buy 1, 3, 4 or 5 year packages.
OPCFW_CODE
This task describes how to use Extensibility Accelerator to automatically create one set of rules that identifies the types of controls for which the test object class is used. Note: This task is part of a higher-level task. For details, see How to Map a Test Object Class to Application Controls. Follow the first three steps in How to Map a Test Object Class to Application Controls. Make sure that running scripts or ActiveX controls is enabled in the browser running your application. - Click the Select Controls button to start the control selecting session. Extensibility Accelerator is hidden, and two buttons are displayed at the top of the screen: Create Rules and Cancel. - Move your mouse over your open applications. The mouse pointer is converted to a pointing hand. Each control that you move over is highlighted in the application, and the name of the HTML element that represents the control is displayed. In the image below, the INPUT HTML element is displayed for a highlighted radio button control: Tip: In many cases, you can hold the left Ctrl key to change the pointing hand to a standard pointer and perform operations in your application, such as navigating to different Web pages, clicking links, selecting edit boxes to enter information, selecting from drop-down lists and so on. (Keep in mind that the browser behavior might be affected by the fact that the Ctrl key is pressed.) If a specific page does not load properly when navigating to it this way, load that page in an additional browser before beginning the session for selecting controls. If you navigate to a different Web page, the highlighting process continues on the page that opens, after it is fully loaded. - Click on a control of a type that you want to support with this test object class. A Selection Dialog Box opens on top of the browser, displaying the properties of the HTML element that represents the selected control: The top part of this dialog box displays additional elements in the control's HTML hierarchy. Note: To view the properties of a different HTML element, or to select it to represent the control, select the element from the displayed hierarchy. - Click the Select button. The selected control is highlighted in the application. In the image below, the radio button controls are selected: - Select additional controls that need to be supported by the same test object class. Try to select several controls that need to be treated as the same type of control and share common properties, but are not identically implemented. The quality and accuracy of the rules that Extensibility Accelerator creates is affected by the number of controls you select, and their diversity. - Modify your selection of controls - Optional. - Complete the process by clicking the Create Rules button, or click the Cancel button. The control selecting session ends. If you clicked Create Rules, the following happens: Extensibility Accelerator creates mapping rules for this test object class based on properties that are common to all of the controls you selected. If a large majority of the selected controls share common properties, the remaining controls might be ignored when creating the rules. If appropriate, the created rules might contain regular expressions. 
For example, if you select two ASP.NET Ajax accordion panels, one that is selected ( className = accordionHeaderSelected) and one that is not ( className = accordionHeader), the created rule will include a regular expression condition: className equal accordionHeader* Caution: Any rules previously contained in this panel of the Map to Controls tab are now replaced. If the controls do not have enough properties in common, no rules are created. Tip: If you want to use the same test object class to support different types of controls, use this automatic process to create rules that identify one type of control. Then edit the rules manually to include additional types, for example, by adding rules with Or or And NotEqual logic. The highlighting is removed from the application. The rules are displayed in the rule editor area in the Map to Controls tab and added to the relevant Identification element in the toolkit configuration XML file, in Conditions elements.
OPCFW_CODE
Unable to sign in via SAML in Cloudstack Simulator ISSUE TYPE Bug Report COMPONENT NAME UI: SAML login CLOUDSTACK VERSION main branch commit: f572c7ab74508366b3b2ccbb2c0e6eeaa872fd36 CONFIGURATION SAML authentication activated OS / ENVIRONMENT Environment: development, Docker + Docker-compose, "cloudstack-simulator" container running on localhost:5050/, Ruby on Rails server running on localhost:3000/ as a custom IdP implemented via the saml_idp gem. SUMMARY The user is being redirected to the login form after a successful SAML SSO login. STEPS TO REPRODUCE After the IdP sends POST http://localhost:5050/client/api?command=samlSso , it receives a response header set-cookie: userid=063f7aef-f355-4e4e-85f3-dcbaef02bb84 from Cloudstack: The path in the header is not specified, so the cookie is being set to the "/client" Path: Meanwhile, the previously successful POST got a response with the 302 status and the response header location: http://localhost:5050/, so the browser is redirected to that location, but instead of a dashboard we see the login form again: OK, now we open localhost:5050/client/ in the browser and fix the "userid" cookie Path to '/' manually: Then refresh the localhost:5050/ tab and voila: My thoughts: I'm aware that the Cloudstack UI in production is run on the /client path, so the bug wouldn't reproduce there, but I'm convinced that we shouldn't implicitly hardcode the cookie to the /client Path. Instead I propose to explicitly set it to the / Path. I'm creating a PR to fix this and will get back to you shortly. P.S. Thank you for the SAML authentication option - it's a very convenient way to authorize users in Cloudstack! EXPECTED RESULTS The user is redirected to his Cloudstack dashboard ACTUAL RESULTS The user is shown the login form The PR is ready, please share your thoughts on that! I think it's happening solely due to two different webservers used in development mode. The mgmt server is on port 8080, the UI/npm server on 5050. You can try to find a workaround using SAML-related global settings (for ex. how it is redirecting traffic). I believe this is not a production issue you're facing (where the static UI and API are served by Jetty). Hi @rohityadavcloud ! Thanks for reaching out on Saturday :) You're correct, this is not a production issue, I'm using the cloudstack-simulator docker image strictly in development. The image helped me a great deal though - its out-of-box/all-in-one strategy allowed me to quickly set it up in docker-compose and move on to the development itself. Its easiness is really mind-blowing, taking into account the complexity of the project. But unfortunately, SAML SSO refused to work in the "out-of-box mode". When I found the issue, I realized that I had two options to solve it: either the Cloudstack UI URI must be changed to "/client", or the cookie should be assigned to the "/" Path instead of "/client". If the first option were easy, I'd've gone with that variant. But neither CS repository surfing nor examining the docker image helped me with this. Plus, personally, I'm not such a fan of the idea of a hardcoded cookie which forces the UI onto the "/client" path if a dev wants to use SAML SSO. For the second option it was quite easy to find the code section where cookie headers are assigned and then to build a new simulator docker image to test it. For me the option of using the custom cloudstack-simulator docker image is appropriate - SAML logins in the new containers which use the new docker image work well; plain logins also work well. 
But I put myself in the place of the next person who has to implement SAML SSO in their project, and I'd love to save them from the debugging process I went through. Closing the issue, as it seems that it's not going to be merged. Solved this by setting nginx's proxy_cookie_path ~*^/.* /;
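For reference, a hedged sketch of what that nginx workaround might look like in a development reverse-proxy configuration; the location block and upstream port are assumptions based on the setup described above, not part of the original fix:
# Hypothetical dev proxy: rewrite whatever cookie path the backend sets (e.g. /client) to /
location / {
    proxy_pass http://localhost:5050;
    proxy_cookie_path ~*^/.* /;
}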
GITHUB_ARCHIVE
How to see which images Google has indexed from your website? I've got a website that is using lazy loading to load in images (so that the browser does not try to load them all at once before they are required). The only issue with this is that there are some SEO pitfalls of doing this; I've followed best practices by showing the HTML markup on the static page and then removing and lazy loading the images with JS. But I can never be sure if it's working. Is there a way to tell if Google has indexed particular images, in the same way you can see which pages have been indexed via Webmaster Tools? You can find out if Google is indexing your images by heading over to Google.com, searching for site:yourdomain.com, then clicking the Images tab. This will list what Google has decided to index. Please note, Google doesn't just index every image on the planet and it can take some time. Chances are if Google has indexed one lazy-load image, it can then read the rest. Most lazy loads are compatible with Google; this is because Google is able to read the JavaScript DOM and fetch external resources through it. Thanks @BYBE, we've also got a tumblr blog running on a subdomain blog.mysite.com - when I ran the google search it comes up with loads of images from the blog, do you know if there is a way to do this search for just the root domain? (I've tried running it as site:www.mysite.com but the site is indexed as the non-www version so nothing shows up for that) @sam site:domain.com inurl:.domain.com -inurl:sub.domain.com (the first operator statement shows all results on domain.com, the second operator statement shows all subdomains on that root domain, the third operator statement excludes all results on the subdomain 'sub' at that root domain), for example. Modify as required for your search. Though doing a site: search in Google Images shows the images listed, it's not an easy way to find out if they are all indexed, as it doesn't have a total count and can infinite-scroll for a long time if you have many images. However if you turn off JavaScript in your browser then do the same search, you will get an approximate number of results shown and you can see the page numbers to click through the results. Another way to show the exact number of images indexed is to create an image XML sitemap, or add images to your page XML sitemap. Then submit this in Search Console (Webmaster Tools) and once updated it will show how many are submitted compared to indexed. Although it won't give you a list of which ones are indexed or not, it will give you a number.
STACK_EXCHANGE
The other new additions to the OS have made it worth the change for me. Specifics would be nice. What makes it worth the upgrade? These are a few features that I've found have improved my Windows experience (in no particular order): System search I love the new system search. It's faster and more comprehensive than previous versions. Just hit the Windows key and type. New Copy interface The new Copy dialog gives you a lot more information about the progress of your copy or move of files. You can easily pause/resume or stop operations. If you have multiple copy/move operations going on you now have them all consolidated in one dialog. Faster As Windows 7 was faster on the same hardware than its predecessors, so Windows 8 is faster still. I put it on a netbook that originally came with Windows XP. That netbook was much more functional with Windows 7, but Windows 8 even improved on that. I've found the same to be true with my other hardware. Multiple monitor support I used to have to use 3rd-party software to get multiple monitors to behave the way I wanted. Windows 8 finally got this right. File History This is "set it and forget it" backup. Any files that you add to a Library are automatically backed up to a location you choose. I plug in a USB external drive for this. Windows 8 backs up everything at an interval I decide. Fast secure booting Windows 8 supports UEFI Secure Boot. This prevents bootkit infections. I've found it also boots faster than a normal BIOS on my machines that support this. Built-in AV I never understood why MS allowed a 3rd-party industry to develop around anti-virus software. This is a function Windows should handle natively IMHO. Finally, Win8 has this included via Security Essentials. System Refresh I haven't had to use this yet, but Refresh and Reset both revert Windows back to the system defaults. The difference between them is the extent to which the system gets reset. Refresh preserves user settings, user data, and applications purchased in the Windows store. Everything else is removed and restored to defaults. Reset removes all applications and data, and reinstalls the OS essentially from scratch. Settings Sync If you log in with a Microsoft account (old Windows Live ID) you can sync your Windows 8 settings and personalizations on any machine. I've found this very handy working on different machines. Task Manager The new Task Manager really appeals to me. I love to watch the system performance and have this window open constantly. The new design is more attractive in addition to the increased functionality. Don't know what a process does? Right click on it and check out all the options. Keyboard commands Windows 8 adds a number of Windows key + combinations. I find myself missing them when I use older versions of Windows. Faster screenshots Hit Win+Print Screen and you get a screenshot saved in PNG format. Or use the Snipping Tool to define a part of the screen and annotate it. It's great to have this built in to the OS.
OPCFW_CODE
During my developer's life, I've received a lot of questions about how to become a game developer, how to write a game, and more. In this article I'll try to answer, explaining what you need in order to write a good game and what could be the right way to follow. Do you want to be a game developer? If you want to be a game developer, you need to read a lot about algorithms (I wrote an article about algorithm books that I liked), and, of course, a mathematical background is needed. But, assuming you have those kinds of skills, what comes next? Well, most games are written using a not-too-high-level programming language, like, of course, C and C++. OK, assuming you know C and C++ well enough, the next step is to: - Write an engine yourself directly with OpenGL - <--- OR ---> - Choose a good game engine that will help you while writing the game.

Choosing a good game engine

Well, choosing a good game engine is not easy. As far as I'm concerned, it depends on what you expect from it and, primarily, what kind of operating systems you would like to support. The best that I can recommend are: - Cocos2D (>= v3.x) Don't be fooled by the naming; believe me, "2D" or "3D" in the name does not tell you what it can do. They support both 2D and 3D programming (assuming you're using the latest version of both). It's a framework, a game engine, an IDE, and it will even make you a coffee when you're tired. In short: it's what you need assuming that you're ready to pay (yes, it's not free at all!) to write your first game and you know that your games will bring you money. If you would like to proceed in this way, learn more with this book. Pretty easy. This is my favorite game engine and framework. But there are different versions of Cocos2D. So, which version is better than the others? - Cocos2D-x: You can code your games using C++ - Cocos2D-swift: The Objective-C version of Cocos2D-x, only for iOS and OS X. I personally use Cocos2D-x because it's open source, free to use without paying anything, and it gives you the possibility to easily support: - Windows (Intel + ARM) About Cocos2D-x, I can recommend some books: But be careful: these books are a little bit old, because we're now at version 3.x of the framework and what I linked above is about version 2. Many changes were made between these two versions, and tons of methods and classes have been deprecated, so probably the best reference you may find around the web is this, the official wiki page.

Write a game engine using OpenGL

This is probably the hardest way you may choose. This topic needs an article of its own. I'll write it later during this week as a part 2 of this one.
OPCFW_CODE
Add the ability to cancel a running query to the Data Explorer Would it be possible to have a "cancel" button somewhere while the "Hold tight while we fetch your results..." text is being displayed? The thing is, I occasionally (rarely, very rarely!) make a mistake. Specifically, sometimes I click "Run Query" in the Data Explorer, and then immediately realize I've left something out, or misspelled something in a WHERE clause, etc - some simple mistake. Considering the timeout for queries is 2 minutes, that can be rather frustrating (especially while tweaking a complicated query). It would be great to be able to cancel that query, make the quick fix, and resubmit. I'd be okay with the "cancel" button not being available until a certain amount of time had passed without results from the query (5, 10, 15 seconds? Anything less than 2 minutes, I suppose). Oh, and Tim Stone can't complain about this, because it was totally his idea. Obligatory =) I call this the "Oh $%^& button" and am glad to have it in SQL Server Management Studio... Yeah, I forget a TOP(n) all the time, and then... this. @BenBrocka I'm definitely calling it that from now on. Sam had already done the hard work of making the query run as an asynchronous process, so I just went ahead and hacked something together to allow for user cancellation. The feature works with the following points in mind: The cancel button isn't enabled until after the first poll for results If your query has already executed and is in the process of returning results, you'll get the results back anyway If your query has multiple sections (split by GO, for instance) and you cancel it after some of those sections have completed, you'll get the results of those sections where applicable There's a slight chance that if you cancel a query that would have caused the server to throw an exception, you might not get that error message (hopefully unlikely) In the typical case where you just have a single query that you're cancelling before it's gotten to the result set processing stage, you get a message like the following: Note that your execution of the query is still logged even when you cancel the process, to monitor for abuse. Thanks for turning this into an actual request to prompt me to get it taken care of! Hmm, actually in the case where you get the message it probably doesn't correctly show you that a new revision was created regardless...I will preemptively fix that bug in a bit. Will go out in some build, sometime, version > ...Oh, wait, there isn't a version number in Data Explorer's footer! Go support that if you'd (general "you", since it seems that jadarnel27's already been there) like one. You made quick work of that! Can't wait to see it on the site (in some mysterious version) =D Minor suggestion: Change the button text from "Cancel" to "Oh $%^&" (per Ben Brocka's comment) I've heard a rumour that it'll go out in the near future, pending a review (hopefully sometime this week) of the fairly large backlog of things I've added since the last pull to the main repository. Really, Tim? "I can't believe I didn't think of it myself"? The question explicitly states "Tim Stone can't complain about this, because it was totally his idea" with a reference link and everything.
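The mechanics described above (run the query asynchronously, poll for results, let the user abandon the wait, and still log the run) can be sketched in a few lines. The Python snippet below is a hypothetical illustration and not Data Explorer's actual implementation: the query runs in a worker thread while the caller polls, and setting a cancel flag merely stops waiting for the result, so the server-side work may still complete.

# Illustrative sketch of a cancellable, polled query (not the real Data Explorer code).
import threading
import time
from concurrent.futures import ThreadPoolExecutor

cancel_requested = threading.Event()
pool = ThreadPoolExecutor(max_workers=1)

def slow_query():
    time.sleep(2)            # stand-in for a long-running SQL query
    return ["row 1", "row 2"]

def run_with_cancel(poll_interval=0.25):
    future = pool.submit(slow_query)
    while not future.done():
        if cancel_requested.is_set():
            print("Query cancelled by user; abandoning the wait.")
            return None      # the worker may still finish in the background
        time.sleep(poll_interval)   # "poll for results"
    return future.result()

# In a UI, the cancel button would call cancel_requested.set();
# here we just run the query to completion.
if __name__ == "__main__":
    print(run_with_cancel())
    pool.shutdown(wait=False)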
STACK_EXCHANGE
The page contains two elements: <aside> to the left. <main> to the right. (Note: Throughout this post, heights are mentioned for the sake of completeness, but are irrelevant to producing this problem.) All heights, widths, and margins are set with respect to var w = screen.width/100; (and var h = screen.height/100;) so that the page essentially looks the same in any display resolution. And they are set so that the width of <aside>, the width of <main>, and the margin between them all add up to the full screen width:

var w = screen.width/100;
document.getElementsByTagName('main')[0].style.width = 85.5*w + "px";
document.getElementsByTagName('aside')[0].style.width = 14*w + "px";
document.getElementsByTagName('aside')[0].style.marginRight = 0.5*w + "px";

(85.5 + 14 + 0.5 = 100) <main> gets pushed down below the <aside> for unknown reasons. I can only think of a half-sensible hypothesis to somewhat explain this behavior. However, if I set the font size of the body to 0 and zoom out (so that the elements take less space) and zoom back in, this gets fixed (I don't know why, and don't ask me how I found this out). What is the reason for this behavior, and what is the proper fix? The hypothesis (can be skipped): The browser seems to think "What would happen if I display the scrollbars even though they are not needed?", and then notices that the scrollbars have a width > 0, which means that <aside> and <main> are taking more space than is available (since they are set to take up 100% of the screen width, and now there is a scrollbar competing for the space). The browser therefore decides to reposition <main> below the <aside> and ruin the design. And now, since the elements no longer fit inside the screen, the scrollbars are actually needed and therefore stay, even though they are the cause of their own existence (as far as this hypothesis goes). The elements have display = "inline-block";. Using float produces horrendous behavior when the browser is anything but maximized. Here is the code to reproduce the problem:

<!DOCTYPE html>
<html>
<body>
<aside></aside>
<main></main>
<script>
var h = screen.height/100;
var w = screen.width/100;
var e = document.getElementsByTagName("aside")[0].style;
e.display = "inline-block";
e.backgroundColor = "lightblue";
e.width = 14*w + "px";
e.height = 69*h + "px";
e.marginRight = 0.5*w + "px";
e = document.getElementsByTagName("main")[0].style;
e.display = "inline-block";
e.backgroundColor = "green";
e.width = 85.5*w + "px";
e.height = 69*h + "px";
e = document.getElementsByTagName("body")[0].style;
e.margin = e.padding = "0";
e.backgroundColor = "black";
</script>
</body>
</html>
OPCFW_CODE
Cancel button not working You’ve added a cancel button to your web form and published the form. When someone clicks on the cancel button, the button might work in some browsers but not others. Why? This is a browser-specific issue, but the product is working as designed. To fix this issue, use either the text field format for the cancel action or the Take User to URL redirect option with the Cancel button, rather than Close the browser window option. Cannot import submit button You’ve imported a custom web form, but the submit button is not recognized by Acoustic Campaign. Because there is no recognition, an error occurs that causes the custom web form import to fail. Why? The error occurs because Acoustic Campaign supports the <input> HTML code as: <input type='submit' value='Submit'> for the Submit button in a custom web form. All other types of submit button codes are not supported, such as <button>:<button type='submit' value='Submit'>Submit</button> In this case, if you are using other types of button code like '<button type='submit' ... >', you must work around this issue by changing the code to '<input type='submit' ...>'. Custom event name error When you add a new name to a custom web tracking event, you'll see an error. To fix it, use a unique name for each custom event name to prevent this error. Be aware that you cannot use the following terms as a custom event name: Click, Custom, Download, Form, Multimedia. Custom opt-out forms and unsubscribes When you use custom opt-out forms, the opt-outs are not registered in Acoustic Campaign reports. If the opt-out link is a tracked hyperlink, you can get a sense of who opted out by looking at the clicks for that link. You can still see opt-outs for an email with a custom opt-out. This happens when recipients send an abuse notice through their ISP. This is considered an opt-out in the report. Note: Bounced email addresses are not included. Custom web forms are not auto populating the checkboxes You’ve changed the field in your form from check box to radio selection and republished the form. You test the form and find that the radio selection does auto populate with the contact data. Why? You must add the following bit of code: value='Yes' and allow check box selections on your custom form to auto populate. <td class='fieldLabel' style=''><label class='yesNoCheckboxLabel'> <input type='checkbox' value='Yes' name='COLUMN11' id='COLUMN11' label='checkbox2' />checkbox2</label></td> To clarify, this does not set a default value on the field, but rather displays the check in the box if the data in the database for that field is marked as Yes. Email: Value already exists When someone submits a web form, the following error appears. Email: Value already exists. This error might come up when you are testing with limited internal test emails and applies only if the same form is submitted many times under the same browser with an email address already in the database. It is not unusual to lose track of which of the internal email addresses is already in the database. Under Standard Web Form Properties, if Auto-populate if contact is known is checked, the email address in question will be remembered the next time that the same web form with the same browser is visited. When email address 'A' was submitted with the web form, the browser remembers the email address due to the web cookie. Therefore, the last submission was tied to email address A in the database. 
As you are testing the web form, it is possible that you visit the same web form by using the same browser multiple times. The next time that you visit the same form, you will see that email A is pre-populated on the form if you are using the same browser. If you submit email B on the same form and B already happens to be in the database, the Email: Value already exists error shows. If email address B is not in the database, you won't get this error and you will overwrite A's email with B's. Web form visitors are unlikely to see this error. The chance for them to visit the same form a second time and enter an email address that belongs to someone else in the same database are slim. External form postcode does not cause the external form to update You’ve created an Acoustic Campaign external web form using external form postcode. When you added a new field and republish it, the new field does not show on the external site. Why? External form post code is raw HTML defining the code and submitting it to Acoustic Campaign. When you update a form with new text, a new field, and so on, you must publish the page and grab the new external form post code and update your existing page. If you want the form to update automatically when you publish a page after an update, you will want to use the iframe code. This code displays the form, as it appears when published, in an iframe. When updated and republished, the new field displays on your external site. Information not found When you click Submit on a web form that was linked from a test email, the following error message appears: The information you entered was not found. This is related to the test email not being a live email. To fix this issue, either create a test database or query out the specific test email accounts and send them a live email. By doing this, the web form submits without error. Issues with special characters You’ve imported HTML for a web form, but when a contact submits special characters available in UTF 8 encoding, they are not being stored correctly in the database. Why? Make sure your HTML meta tag contains 'http-equiv', 'content' and 'charset' attributes as follows: <meta http-equiv='Content-Type' content='text/html; charset=utf-8'> Then, republish your web form for any changes to take place. If the issue persists, please contact client support. New contact is stuck on Lookup in a web form You’ve enabled the Include Lookup feature, but it does not allow for a new (or prospective) contact in your database to access the web form. Why? This feature can be checked or cleared when the landing page has not been saved. Once the landing page is saved, or it has been (re)published, saving occurs before publishing. The Include Lookup check box is disabled as a selection and cannot be undone or removed. It is best to use this feature when your web form is exclusively accessed from a database-matching an Acoustic Campaign mailing. The recipient, as an existing database contact, is a known record and can access the web form upon entering the required Lookup criteria. New external form field values not populating in database You recently added a field to an externally hosted form. Even though a value is being provided in the field, this value is not populating in Acoustic Campaign when data is submitted. Why? An externally hosted form cannot exist on its own. It must be tied back to a corresponding form hosted within Acoustic Campaign. 
When you add or remove fields from the externally hosted form, you must make the same changes to the Acoustic Campaign hosted form. Two extra "p" tags in the web form When creating a web form, you notice that it always seems to add 2 extra p tags, an open tag and closing tag, to the html when it is saved, published, or anything in the web form is changed. Why? The cause of this behavior is because the WYSIWYG-editor knows HTML and not HTML5. To avoid, publish from the source code, making sure that the non-breaking space is not there, and then do not change after publishing. Unable to process your request When submitting a web form, an error message 'Unable to process your request' appears. This occurs if the contact source is in the archive folder. To fix this error, move the contact source out of the archive folder or make a copy of that landing page and assign a new contact source to it. Use email as a hidden field for the primary key of a database When creating or modifying the web form within an Acoustic Campaign Landing page, it is not possible to provide a hidden field for the primary key of a database since, by default, it is a required field. This is true for the email field as well, as it also cannot be a hidden field on any web form regardless of whether the database type is Flexible or Restricted. Attempting to hide the email field in a non-imported web form results in the following error: "You cannot hide the email address." VS parameter in the url from an unsubscribe form submission The VS parameter is the session cookie added to the user's browser. You can view these cookies via different methods depending on the browser being used. Instructions per browser for viewing cookies can be found online. Web form validation field After the validation choice is made and the changes are live, an error will appear any time the incorrect format is used. The error will have the field name and acceptable data for entry. Here is a list of accepted validation types. Alphabetical characters only Enter letters (A-Z) only. Enter numbers only Date in mm/dd/yyyy The date must be in mm/dd/yyyy format. URL starts with http:// or https:// URL must start with http:// or https:// Phone number in 123-456-7890 The phone number must be in 123-456-7890 format. For the Custom Regular Expression, refer to the Customizing web forms article. Zipcode in 12345 or 12345-6789 A zipcode must be in 12345 or 12345-6789 format. Web form background color not showing You've changed the background color of your web form, but the color is not showing for an external form postcode. To fix this, within the source code, you'll write the style code, within the form tag, for the background color of the new landing page. Web form displays %%FORM::DEFINITION%% You open your landing page site and the web form displays %%FORM::DEFINITION%%. You also notice that the form is no longer viewable or editable. How can you fix this? To fix this issue, follow these steps: - Make a copy of your landing page site. - Open the landing page site and click the web form. - Go to Source View. - Look for <p>%%FORM::DEFINITION%%</p> - Remove everything after <p>%%FORM::DEFINITION%%</p> between the div tags.
OPCFW_CODE
if you are running Active Directory (AD) in your organisation, you can set up Contensis to integrate directly with it. Users don't need to remember multiple passwords, administrators don't need to set up extra user accounts for everyone to be able to login to Contensis and everyone can login automatically to Contensis if they are logged into their computer on the domain whilst using Internet Explorer. The AD service will run by default at a pre-specified time. The steps listed here are how you can change the settings to better suit your needs. All settings below need to be changed in the Global Settings area which can be found by clicking on View Management Console in the Project Explorer window. You must restart the application cache after making changes to these settings for your changes to be carried over to the published website. This setting specifies whether the AD update process is enabled or not. By default it is turned off. The following global settings are shared with the Active Directory Synchronisation service. This setting is the domain that your Active Directory install is running on. We strongly recommend you use the pre windows 2000 version of your domain or the short domain in this setting. So our AD domain is contensis.co.uk so our setting is set to Contensis. This is the user name that will be used to read the Active Directory listings and carry out the synchronisation. This user needs read privileges in the domain. We would recommend this user not have an expiring password as synchronisation might stop without notice. This is simply the password for the user above. Specify the AD user properties to update in Contensis If the AD Update is turned on, the following properties are updated on the corresponding AD user record by default: - Account Disabled - Account Locked - Email Address - First Name - Last Name - Telephone Number - Job Title - Division (mapped to company name in Contensis) Any of these fields can be excluded from the AD update by changing the value of the DirectoryServices_DisabledActiveDirectoryProperties_CMStoAD setting in Global Settings. The value of this setting is a bit field array but rendered as a decimal. To set the value, refer to the following list: - None = 0 - Account Disabled = 1 - Account Locked = 2 - Email Address = 4 - Title = 8 - First Name = 16 - Last Name = 32 - Telephone Number = 64 - Job Title = 128 - Department = 256 - Division = 512 - Password Never Expires = 1024 An example value of 192 => (64+128) would disable the update of the Job Title and Telephone Numbers. There are two settings that allow you to customise the AD Update behaviour, DirectoryServices_ADToCMSMappings and DirectoryServices_ADToCustomMappings. These settings are both also used by the AD Synchronisation service to update Contensis user records from AD. For further information see Active Directory integration Custom Mappings. 
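Because the setting is a bit-field array rendered as a decimal, it can help to compute the value rather than add it up by hand. The small Python helper below is illustrative only and not part of Contensis; the flag names and numbers are copied from the list above.

# Helper for computing DirectoryServices_DisabledActiveDirectoryProperties_CMStoAD.
# Flag values are taken from the documented list; this script is illustrative only.
AD_FLAGS = {
    "None": 0,
    "Account Disabled": 1,
    "Account Locked": 2,
    "Email Address": 4,
    "Title": 8,
    "First Name": 16,
    "Last Name": 32,
    "Telephone Number": 64,
    "Job Title": 128,
    "Department": 256,
    "Division": 512,
    "Password Never Expires": 1024,
}

def disabled_properties_value(*names: str) -> int:
    """Return the decimal value that excludes the named properties from the AD update."""
    return sum(AD_FLAGS[name] for name in names)

print(disabled_properties_value("Telephone Number", "Job Title"))  # 192, as in the example above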
User profile field accessibility In the following screens: - The User Profile screen of the CMS - A published web page which has the User profile web control - A published web page which has the Who’s Who Record web control (applies only if your CMS has the Who’s Who module) the following fields - First Name - Last Name - Telephone number - Job Title - Company Name will be disabled if either: - The AD Update is enabled, and the field is included in the global setting DirectoryServices_DisabledActiveDirectoryProperties_CMStoAD - The AD Update is not enabled, AD Synchronisation is enabled, and the field is not included in the global setting DirectoryServices_DisabledActiveDirectoryProperties. Note: The above fields are not disabled in the User Management screen of the Management Console. Make changes to global settings When any of the above global settings are changed in the CMS, you will need to: - In the Management Console / Project Setup / Publishing Servers screen, click on CMS Config for the relevant publishing server, and then click Save and Publish. This will ensure that the global settings are updated on the publishing server. - Restart the application pool for the relevant published website in IIS on the publishing server. This will ensure that any pages in the published website use the new global settings. - Any users of the CMS will need to click Reset Application Cache (in the Management Console) for the new global settings to take effect on the CMS User Profile screen. You need to restart the Active Directory service each time you update the settings above. This ensures the new configuration options get picked up by the service and everything works as you intended.
OPCFW_CODE
Are you starting an independent process with Python, but encountering numerous code errors? If so, this article can help you fix those errors and get the desired results. Have you ever found yourself stuck trying to solve a code problem while starting an independent process with Python? From syntax errors to unexpected indentations, code errors can create a lot of frustration. Fortunately, there are steps you can take to resolve code errors quickly and efficiently. In this article, we’ll discuss practical tips for solving code errors when starting an independent process with Python. First, it’s important to review your code thoroughly and look for any typos or missed punctuation marks. Even a small mistake can cause a syntax error, which can be difficult to spot. Next, try to break down your code into smaller pieces and test each section individually. This can help you identify the specific part of the code that’s causing the issue. In addition, consider using a debugging tool or automated testing framework to help you identify the source of the error. This can save you time and make it easier to locate the problem. Finally, if you’re still having trouble, don’t hesitate to ask for help. Reach out to your network of developers or post your issue on a forum or discussion board. By following these tips, you can troubleshoot code errors quickly and efficiently. So, if you’re starting an independent process with Python and experiencing errors, use the advice in this article to get back on track. Ready to start fixing code errors in starting an independent process with Python? Read this article to get the tips you need to resolve errors and get the results you want! to Fixing Code Errors in Starting an Independent Process with Python When starting an independent process with Python, it’s possible to encounter code errors. These code errors can range from syntax errors to mistakes in logic with the code. It’s important to be able to fix these errors so the process can continue running. There are several techniques that can be used to help fix code errors when starting an independent process with Python. In this article, we’ll discuss these techniques and provide helpful tips for fixing code errors. Check the Syntax The first step in fixing code errors when starting an independent process with Python is to check the syntax. When the syntax of the code is incorrect, the program won’t be able to run. The most common syntax errors are typos and incorrect punctuation. To check the syntax of the code, simply review the code and make sure all the punctuation is correct and all the capitalization is correct. If you encounter any typos, make sure to correct them before proceeding. Double Check the Logic In addition to checking the syntax, it’s also important to double check the logic of the code. This means making sure that the code is doing what it’s intended to do. One of the best ways to check the logic of the code is to use a debugger. A debugger allows you to step through the code line by line to see how the code is being executed. This can help identify any logic errors that may be causing the code to fail. Check for Missing Libraries Another common cause of code errors when starting an independent process with Python is missing libraries. If a library is not installed correctly, the code won’t be able to run correctly. To check for missing libraries, you can use a tool such as pip to see which libraries are installed and which are missing. 
Once you know which libraries are missing, you can install them to ensure that the code will run correctly. Run the Code in a Virtual Environment If the code still isn’t running correctly, it’s a good idea to run the code in a virtual environment. A virtual environment is a separate environment from the main environment that can be used for testing. This allows you to test the code in an isolated environment to ensure that any errors are not caused by external factors. To create a virtual environment, you can use a tool such as virtualenv or pipenv. Replace the Code with a Library If the code is still not working correctly, it may be time to replace the code with a library. There are many libraries available that can be used to replace code with. These libraries can be used to simplify the code and reduce the number of lines of code. This can help reduce the amount of errors in the code and make it easier to debug. Check for Common Mistakes It’s also important to check for common mistakes when fixing code errors when starting an independent process with Python. Common mistakes include using the wrong type of data, using the wrong operator, forgetting to include necessary arguments, and using the wrong function. To check for these mistakes, review the code and make sure it’s using the correct data types, operators, arguments, and functions. Re-run the Code Once the code has been modified, it’s important to re-run the code to make sure the errors have been fixed. To do this, simply run the code again and see if the errors have been resolved. If the errors have been resolved, the code should continue running correctly. If the errors have not been resolved, it’s time to go back and review the code to see what else needs to be changed. Use a Different Software If all else fails, it may be necessary to use a different software to fix the code errors. There are many software packages available that can help with debugging and fixing code errors. These software packages can offer more advanced debugging tools and allow you to quickly identify and fix code errors. Examples of these software packages include Visual Studio Code, PyCharm, and Wingware. Fixing code errors when starting an independent process with Python can be a challenging task. Fortunately, there are several techniques that can be used to help fix the code errors. By checking the syntax, double checking the logic, checking for missing libraries, running the code in a virtual environment, replacing the code with a library, checking for common mistakes, and re-running the code, it’s possible to fix code errors and keep the process running smoothly. If all else fails, it may be necessary to use a different software package to help with debugging and fixing code errors. Source: CHANNET YOUTUBE Jie Jenn
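To make the advice above concrete, here is a minimal Python sketch of starting an independent process and surfacing the most common startup errors. The script name worker.py is a placeholder, and the error handling shown is just one reasonable pattern, not the only correct one.

# Minimal sketch: start an independent process and report common failures.
# "worker.py" is a placeholder for whatever script or command you actually run.
import subprocess
import sys

def start_worker():
    try:
        # Popen returns immediately; the child keeps running alongside this program.
        # (On POSIX you could add start_new_session=True to detach it further.)
        proc = subprocess.Popen(
            [sys.executable, "worker.py"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        print(f"Started worker with PID {proc.pid}")
        return proc
    except FileNotFoundError as exc:
        print(f"Executable not found: {exc}")        # e.g. a typo in the program path
    except OSError as exc:
        print(f"Could not start the process: {exc}") # permissions, resource limits, ...
    return None

if __name__ == "__main__":
    worker = start_worker()
    # Later, worker.poll() / worker.returncode tell you whether the child itself failed.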
OPCFW_CODE
Any data sent over the Internet is divided into smaller segments called packets. After reading this article you will be able to: Copy article link In networking, a packet is a small segment of a larger message. Data sent over computer networks*, such as the Internet, is divided into packets. These packets are then recombined by the computer or device that receives them. Suppose Alice is writing a letter to Bob, but Bob's mail slot is only wide enough to accept envelopes the size of a small index card. Instead of writing her letter on normal paper and then trying to stuff it through the mail slot, Alice divides her letter into much shorter sections, each a few words long, and writes these sections out on index cards. She delivers the group of cards to Bob, who puts them in order to read the whole message. This is similar to how packets work on the Internet. Suppose a user needs to load an image. The image file does not go from a web server to the user's computer in one piece. Instead, it is broken down into packets of data, sent over the wires, cables, and radio waves of the Internet, and then reassembled by the user's computer into the original photo. *A network is a group of two or more connected computers. The Internet is a network of networks — multiple networks around the world that are all interconnected with each other. Theoretically, it could be possible to send files and data over the Internet without chopping them down into small packets of information. One computer could send data to another computer in the form of a long unbroken line of bits (small units of information, communicated as pulses of electricity that computers can interpret). However, such an approach quickly becomes impractical when more than two computers are involved. While the long line of bits passed over the wires between the two computers, no third computer could use those same wires to send information — it would have to wait its turn. In contrast to this approach, the Internet is a "packet switching" network. Packet switching refers to the ability of networking equipment to process packets independently from each other. It also means that packets can take different network paths to the same destination, so long as they all arrive at the destination. (In certain protocols, packets do need to arrive at their final destinations in the correct order, even if each packet took a different route to get there.) Because of packet switching, packets from multiple computers can travel over the same wires in basically any order. This enables multiple connections to take place over the same networking equipment at the same time. As a result, billions of devices can exchange data on the Internet at the same time, instead of just a handful. A packet header is a "label" of sorts, which provides information about the packet’s contents, origin, and destination. When Alice sends her series of index cards to Bob, the words on those cards alone will not give Bob enough context to read the letter correctly. Alice needs to indicate the order that the index cards go in so that Bob does not read them out of order. She also should indicate that each one is from her, in case Bob receives messages from other people while she is delivering hers. So Alice adds this information to the top of each index card, above the actual words of her message. On the first card she writes "Letter from Alice, 1 of 20," on the second she writes "Letter from Alice, 2 of 20," and so on. 
Alice has created a miniature header for her cards so that Bob does not lose them or mix them up. Similarly, all network packets include a header so that the device that receives them knows where the packets come from, what they are for, and how to process them. Packets consist of two portions: the header and the payload. The header contains information about the packet, such as its origin and destination IP addresses (an IP address is like a computer's mailing address). The payload is the actual data. Referring back to the photo example, the thousands of packets that make up the image each have a payload, and the payload carries a little piece of the image. In practice, packets actually have more than one header, and each header is used by a different part of the networking process. Packet headers are attached by certain types of networking protocols. A protocol is a standardized way of formatting data so that any computer can interpret the data. Many different protocols make the Internet work. Some of these protocols add headers to packets with information associated with that protocol. At minimum, most packets that traverse the Internet will include a Transmission Control Protocol (TCP) header and an Internet Protocol (IP) header. Packet headers go at the front of each packet. Routers, switches, computers, and anything else that processes or receives a packet will see the header first. A packet can also have trailers and footers attached at the end. Like headers, these contain additional information about the packet. Only certain network protocols attach trailers or footers to packets; most only attach headers. ESP (part of the IPsec suite) is one example of a network layer protocol that attaches trailers to packets. IP (Internet Protocol) is a network layer protocol that has to do with routing. It is used to make sure packets arrive at the correct destination. Packets are sometimes defined by the protocol they are using. A packet with an IP header can be referred to as an "IP packet." An IP header contains important information about where a packet is from (its source IP address), where it is going (destination IP address), how large the packet is, and how long network routers should continue to forward the packet before dropping it. It may also indicate whether or not the packet can be fragmented, and include information about reassembling fragmented packets. "Datagram" is a segment of data sent over a packet-switched network. A datagram contains enough information to be routed from its source to its destination. By this definition, an IP packet is one example of a datagram. Essentially, datagram is an alternative term for "packet." Network traffic is a term that refers to the packets that pass through a network, in the same way that automobile traffic refers to the cars and trucks that travel on roads. However, not all packets are good or useful, and not all network traffic is safe. Attackers can generate malicious network traffic — data packets designed to compromise or overwhelm a network. This can take the form of a distributed denial-of-service (DDoS) attack, a vulnerability exploitation, or several other forms of cyber attack. Cloudflare offers several products that protect against malicious network traffic. Cloudflare Magic Transit, for instance, protects company networks from DDoS attacks at the network layer by extending the power of the Cloudflare global cloud network to on-premise, hybrid, and cloud infrastructure.
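As a concrete illustration of the IP header fields mentioned above (source and destination address, total length, TTL, fragmentation), here is a short Python sketch that unpacks the fixed 20-byte IPv4 header from raw bytes. It is a learning aid under simplifying assumptions: it ignores IPv4 options, IPv6, and whatever transport-layer header follows, and the sample addresses are made up.

# Sketch: decode the fixed part of an IPv4 header (20 bytes; options are not handled).
import struct
import socket

def parse_ipv4_header(raw: bytes) -> dict:
    (ver_ihl, dscp_ecn, total_length, ident, flags_frag,
     ttl, protocol, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_length_bytes": (ver_ihl & 0x0F) * 4,
        "total_length": total_length,          # header + payload, in bytes
        "ttl": ttl,                            # hops left before routers drop the packet
        "protocol": protocol,                  # 6 = TCP, 17 = UDP, ...
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
        "more_fragments": bool(flags_frag & 0x2000),
    }

# Example: a hand-built header for 10.0.0.1 -> 10.0.0.2 carrying a TCP segment.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"))
print(parse_ipv4_header(sample))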
OPCFW_CODE
#ifndef TERMOX_DEMOS_FOCUS_FOCUS_DEMO_HPP #define TERMOX_DEMOS_FOCUS_FOCUS_DEMO_HPP #include <memory> #include <termox/painter/color.hpp> #include <termox/system/system.hpp> #include <termox/widget/focus_policy.hpp> #include <termox/widget/layouts/horizontal.hpp> #include <termox/widget/layouts/vertical.hpp> #include <termox/widget/pipe.hpp> #include <termox/widget/widget.hpp> #include <termox/widget/widgets/label.hpp> namespace demos::focus { inline auto focus_box(ox::Focus_policy policy) -> std::unique_ptr<ox::Widget> { using namespace ox::pipe; /// Focus_policy to string auto to_string = [](ox::Focus_policy p) -> wchar_t const* { switch (p) { using namespace ox; case Focus_policy::None: return L"None"; case Focus_policy::Tab: return L"Tab"; case Focus_policy::Click: return L"Click"; case Focus_policy::Strong: return L"Strong"; case Focus_policy::Direct: return L"Direct"; } return L""; }; /// Remove tab focus from \p p. auto const narrow = [](ox::Focus_policy p) { switch (p) { using namespace ox; case Focus_policy::None: case Focus_policy::Tab: return Focus_policy::None; case Focus_policy::Click: case Focus_policy::Strong: return Focus_policy::Click; case Focus_policy::Direct: return Focus_policy::Direct; } return ox::Focus_policy::None; }; // clang-format off auto box_ptr = ox::layout::vertical ( ox::hlabel(to_string(policy)) | name("l") | align_center() | fixed_height(1) | ox::pipe::focus(narrow(policy)), ox::widget() | name("w") | ox::pipe::focus(policy) ) | bordered(); box_ptr | children() | find("l") | on_focus_in([w = box_ptr->find_child_by_name("w")] { ox::System::set_focus(*w); }); box_ptr | children() | find("w") | on_focus_in( [&w = *box_ptr]{ w | walls(fg(ox::Color::Red)); }) | on_focus_out([&w = *box_ptr]{ w | walls(fg(ox::Color::White)); }); // clang-format on return box_ptr; } /// Build a focus app demo and return the owning pointer to it. inline auto build_demo() -> std::unique_ptr<ox::Widget> { using namespace ox; using namespace ox::pipe; // clang-format off return layout::horizontal( layout::vertical( focus_box(Focus_policy::Tab) | height_stretch(3), layout::horizontal( focus_box(Focus_policy::Strong), focus_box(Focus_policy::Tab) ) ), layout::vertical( focus_box(Focus_policy::Strong), focus_box(Focus_policy::None) ), layout::vertical( focus_box(Focus_policy::Click), layout::horizontal( focus_box(Focus_policy::Strong), layout::vertical( focus_box(Focus_policy::None), focus_box(Focus_policy::Tab) ), focus_box(Focus_policy::Tab) ) | height_stretch(2), focus_box(Focus_policy::Strong) ) ); // clang-format on } } // namespace demos::focus #endif // TERMOX_DEMOS_FOCUS_FOCUS_DEMO_HPP
STACK_EDU
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>
#include <string>      // std::string
#include <sys/stat.h>
#include <iconv.h>     // for gbk/big5/utf8
#if defined(_MSC_VER) && (_MSC_VER >= 1400)
#include <windows.h>   // MultiByteToWideChar / WideCharToMultiByte
#endif

// Heuristic check: does the buffer look like valid (non-ASCII) UTF-8?
bool IsTextUTF8(const char* str, int length)
{
    int nBytes = 0;            // UTF-8 encodes a code point in 1-6 bytes; ASCII uses one byte
    unsigned char chr;
    bool bAllAscii = true;     // if everything is ASCII, we do not treat it as UTF-8
    for (int i = 0; i < length; ++i)
    {
        chr = *(str + i);
        if ((chr & 0x80) != 0) // not plain 7-bit ASCII (0xxxxxxx), so it may be UTF-8
            bAllAscii = false;
        if (nBytes == 0)       // start of a (possibly multi-byte) sequence: work out its length
        {
            if (chr >= 0x80)
            {
                if (chr >= 0xFC && chr <= 0xFD) nBytes = 6;
                else if (chr >= 0xF8) nBytes = 5;
                else if (chr >= 0xF0) nBytes = 4;
                else if (chr >= 0xE0) nBytes = 3;
                else if (chr >= 0xC0) nBytes = 2;
                else return false;
                nBytes--;
            }
        }
        else                   // continuation bytes of a multi-byte sequence must look like 10xxxxxx
        {
            if ((chr & 0xC0) != 0x80) return false;
            nBytes--;
        }
    }
    if (nBytes > 0)            // truncated sequence: rule violated
        return false;
    if (bAllAscii)             // all ASCII, so not (meaningfully) UTF-8
        return false;
    return true;
}

// Convert sourceStr from source_charset to to_charset using iconv.
std::string CodeConvert(const char* source_charset, const char* to_charset, const std::string& sourceStr)
{
    iconv_t cd = iconv_open(to_charset, source_charset); // get a conversion descriptor
    if (cd == (iconv_t)-1)                                // iconv_open signals failure with (iconv_t)-1
        return "iconv open error";
    size_t inlen = sourceStr.size();
    size_t outlen = 255;
    char* inbuf = (char*)sourceStr.c_str();
    char outbuf[255];          // the required size is not known in advance; 255 is an arbitrary limit
    // char* outbuf = new char[outlen];  // heap allocation reportedly made the conversion fail for the author
    memset(outbuf, 0, outlen);
    char* poutbuf = outbuf;    // extra pointer avoids "char(*)[255] is incompatible with char**" when calling iconv
    if (iconv(cd, &inbuf, &inlen, &poutbuf, &outlen) == (size_t)-1)
    {
        iconv_close(cd);
        return "iconv error";
    }
    std::string strTemp(outbuf); // strTemp now holds the converted text
    iconv_close(cd);
    return strTemp;
}

// GBK to UTF-8 (only converts when the input does not already look like UTF-8).
std::string GbkToUtf8(const std::string& strGbk) // strGbk is expected to be GBK encoded
{
    if (IsTextUTF8(strGbk.c_str(), strlen(strGbk.c_str())) == false)
        return CodeConvert("gb2312", "utf-8", strGbk);
    else
        return strGbk;
}

int GBK2UTF8(const std::string& strGBK, std::string& strUTF8)
{
#if defined(_MSC_VER) && (_MSC_VER >= 1400)
    int nRet = -1;
    if (strGBK.empty()) return nRet;
    // GBK (ANSI code page) -> UTF-16
    int nLenOfWide = MultiByteToWideChar(CP_ACP, 0, strGBK.c_str(), strGBK.size(), nullptr, 0);
    if (nLenOfWide <= 0) return nRet;
    wchar_t* pWideData = new wchar_t[nLenOfWide + 1];
    memset(pWideData, 0, (nLenOfWide + 1) * sizeof(wchar_t));
    nLenOfWide = MultiByteToWideChar(CP_ACP, 0, strGBK.c_str(), strGBK.size(), pWideData, nLenOfWide);
    if (nLenOfWide <= 0)
    {
        delete[] pWideData; pWideData = nullptr;
        return nRet;
    }
    // UTF-16 -> UTF-8
    int nLenOfMultiByte = WideCharToMultiByte(CP_UTF8, 0, pWideData, nLenOfWide, nullptr, 0, nullptr, nullptr);
    if (nLenOfMultiByte <= 0)
    {
        delete[] pWideData; pWideData = nullptr;
        return nRet;
    }
    char* pUTF8Data = new char[nLenOfMultiByte + 1];
    memset(pUTF8Data, 0, nLenOfMultiByte + 1);
    nLenOfMultiByte = WideCharToMultiByte(CP_UTF8, 0, pWideData, nLenOfWide, pUTF8Data, nLenOfMultiByte, nullptr, nullptr);
    if (nLenOfMultiByte <= 0)
    {
        delete[] pWideData; pWideData = nullptr;
        delete[] pUTF8Data; pUTF8Data = nullptr;
        return nRet;
    }
    strUTF8 = std::string(pUTF8Data, nLenOfMultiByte);
    delete[] pWideData; pWideData = nullptr;
    delete[] pUTF8Data; pUTF8Data = nullptr;
    nRet = 0;
    return nRet;
#else
    // strUTF8 = any2utf8(strGBK, std::string("gb2312"), std::string("utf-8"));
    strUTF8 = GbkToUtf8(strGBK);
    return 0;
#endif
}
STACK_EDU
The new Data Miner stand-alone app (built on top of the Portfolio123 API) is now available for use. Data Miner is a Windows application for non-programmers. It can run thousands of unattended operations with ease, speed and reliability. Currently it features several data mining operations, such as rolling screens, rank performance tests, and rank downloads. Data Miner can also be used to download point-in-time factors (data license required). We'll be adding several operations soon, so let us know what you think. In addition, we're also releasing it as an open source project so you can create your own versions or, if you like, contribute to the official release. This is version 1.0 so bear with us. We think it's worth releasing it now because it has many nice features that can help you run comparisons between FactSet & Compustat. You can download Data Miner in the link below. Be sure to download the samples and read the pdf. Thank you!

This has awesome potential (whether I end up being able to use it or not). I have downloaded it and I have been able to use one of the samples (Ranks-inlined ranking system). I have a question about labels. None of your samples provide labels: i.e., the returns. Is that something that can be obtained without a data provider license? Ultimately, to be useful I will need the returns (or labels for supervised learning) and I will have to learn the indexing method to concatenate the returns of a ticker (for a specific week) with the ranks (for that week). How is this indexed? I do not see what I normally consider an index. Will the P123 UID function as an index? Ideally, the data would have a hierarchical row index of the date and the ticker for download. The factor ranks would be the column index (along with the label, or returns of the next week). Ultimately, I would probably prefer to download the data and run it through Jupyter notebooks, Colab, or Spyder. I could probably even hire a graduate student to help me with this if need be. So the details of how to do this may not be important in this thread. Anyway, this is great! And thank you in advance for any information. If I cannot ultimately use this, that is probably okay: the price I pay for not taking enough courses in programming. Although, I think you will be rewarded for making this usable for the average graduate with a finance degree (at the undergraduate level). I think you will want to attract people who want to run econometrics models that they learned getting undergraduate finance degrees, which may not have involved a lot of programming. For now my only question is whether a license is required to get data on returns (the label for supervised learning). If a license is required, I will probably continue using what P123 already offers without spending a lot of time on learning how to use this addition now.

In the "RanksPeriod" example, a member can substitute the name of a ranking system that they have already created, as well as a universe they have already created. This can be done over an extended period, as the name implies (with dates in the column). That is a lot of information that can be downloaded all at once. It looks like this has a lot of potential. Some data wrangling will be required with version 1.0, but a lot of information can be downloaded already and there seems to be a lot of potential for future versions.
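On the indexing question raised above, one common convention is exactly the hierarchical (date, ticker) row index described, with the factor ranks as columns and the next period's return joined on as the label. The pandas sketch below is a generic illustration with made-up data and hypothetical column names; it is not Data Miner output.

# Illustrative only: shape rank data as a (date, ticker) MultiIndex and attach
# next-period returns as the supervised-learning label. Data and names are made up.
import pandas as pd

ranks = pd.DataFrame(
    {
        "date":   ["2020-01-03", "2020-01-03", "2020-01-10", "2020-01-10"],
        "ticker": ["AAA", "BBB", "AAA", "BBB"],
        "rank_value":   [87.5, 12.0, 90.1, 15.3],
        "rank_quality": [70.2, 40.8, 68.9, 42.1],
    }
)
returns = pd.DataFrame(
    {
        "date":   ["2020-01-03", "2020-01-03", "2020-01-10", "2020-01-10"],
        "ticker": ["AAA", "BBB", "AAA", "BBB"],
        "ret_next_week": [0.012, -0.004, 0.007, 0.001],
    }
)

data = (
    ranks.merge(returns, on=["date", "ticker"], how="left")
         .set_index(["date", "ticker"])
         .sort_index()
)
print(data)   # rows indexed by (date, ticker); rank columns plus the return label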
To specify this, you'll need to put the expression in quotes: "#AnalystsCurQ". If the expression also has double quotes in it, you'll need to prefix those quotes with a backslash for it to work: "FRank(\"#AnalystsCurQ\")". I can't get your second tip to work; with this example I get this error: 2020-05-12 22:50:41,089: API request failed: Element type "StockFormula" must be followed by either attribute specifications, ">" or "/>". (on line 2) YAML uses a few special characters; documentation on how to deal with them when they are present in property values (e.g. formulas) can be found in the README.txt on Dropbox. A new release that addresses a bug exposed by Quantonomics' example has also been uploaded.
OPCFW_CODE
Alarm clock randomly speeds up after 30 years. Just curious. This is a curiosity question about a 30-year-old alarm clock that kept perfect time until lately. 18 months ago, in 2021, it started randomly speeding up: runaway minutes, then back to perfect time. With this alarm clock I was under the assumption that it may be dependent on grid frequency or harmonics or something causing interference in my house; I switched everything off in the house and it still did it. Out of curiosity, I spoke to the power company and they sent someone over to place a monitoring device for 7 days; yes, I was surprised! It happened several times during the monitoring, however the power company said nothing is wrong. I put the clock away (I don't throw much out) and came across it 7 months ago, Nov '22. Plugged it in and left it; coming back a week later, still perfect time. Perfect time until a week ago, June '23. First it was a few minutes, then quick runaways into hours. Nothing on the premises has changed and it can happen at any hour of the day or night. As can be seen in the picture, there is not much to it: a 28-pin IC, resistors, capacitors, diodes, a transistor and a transformer. Operating voltage is 240 VAC 50 Hz and all components look OK.

Test the electrolytic capacitor for ESR and value. Components change their values over time, and electrolytic capacitors are often the first to be named. Chips in plastic can degrade, too.

My suspicion would be that whatever is used to keep time, be it a counter on the grid frequency or a Pierce oscillator with a quartz crystal, is aging physically, and that leads to spurious counts of "clock ticks". It might not be an environmentally unwise move to discard that clock. I, a couple of years ago, got rid of my childhood alarm and FM radio clock, after having it run a week on my energy meter. Turns out it used nearly 5 W continuously, which does explain the warmth. 1 W over the course of a year at 0.33 €/kWh is pretty much 3€, so 5 W is 15€. Compare that to the energy cost that is included in the street price for new alarm clocks that maybe use 3 W less, and it becomes quite likely that upgrading your alarm clock is the environmentally wise thing. (It's, by the way, something to do for your parents, if they have enough money to buy a new fridge: when visiting them, leave your energy monitor running on their decades-old fridge, and pick it up next visit. Compare to the energy consumed by a better-insulated newer fridge according to EU-standardized testing methods. Discuss return on investment over the course of 2 years.)

Is it running exactly 20% fast? If you tap it while it's on, does that affect the time? If so, it could be a loose solder joint.

These clocks often use the line frequency for their time base, as it avoids the cost of a crystal oscillator and has quite good long-term accuracy. If that's the case with this particular clock, it may be noise on the power lines causing it to count erratically; perhaps there is something plugged in nearby that has a switching-type supply creating high-frequency noise on the power lines. This wouldn't have been a problem when the clock was originally made (there were few switchers back then), but now they are ubiquitous and their quality can vary quite a bit. You could try plugging it into a power strip that contains a line filter and see if that clears up the problem. If the problem gets worse the longer the clock is plugged in, it's probably a problem with one of the components in the clock itself and not outside interference.
Some clocks do use a crystal or ceramic resonator oscillator and this could be going bad causing intermittent spurious oscillations. You should be able to see a crystal or resonator on the circuit board if it's one of these types of clock. Or it could be the clock IC itself going bad. The power supply filter cap is also a possible source of problems, in a 30 year old device it should probably be replaced anyway. A ferrite clip around the power cable might do the job too, no need for a full power strip https://www.amazon.co.uk/Dreamtop-Clip-Ferrite-Suppressor-Diameter/dp/B01MG8GQ1F/ref=sr_1_5?crid=WM1W7BX2BC81&keywords=ferrite+clip&qid=1686270458&sprefix=ferrite+clip%2Caps%2C131&sr=8-5
STACK_EXCHANGE
CUDA error: "is not a function or static data member"

The errors "is not a function or static data member" and "error: identifier "nullptr" is undefined" tend to show up when C++11 code reaches an nvcc that is not compiling in C++11 mode; note that CUDA 6.5 does not support C++11 at all.

- For Theano, one user reported that running with THEANO_FLAGS="nvcc.flags=-std=c++11 -Xcompiler -D__CORRECT_ISO_CPP11_MATH_H_PROTO" still failed with the same problem, while another reported that THEANO_FLAGS="nvcc.flags=-std=c++11 -D__CORRECT_ISO_CPP11_MATH_H_PROTO" ran successfully. If the define is not handed to the host compiler through nvcc, the build can instead fail with: gcc: error: unrecognized command line option '-Xcompiler'; did you mean '--compile'? The flag must go via nvcc, as nvcc has to know about it. A workaround is also described at https://bugs.archlinux.org/task/49272. The Theano issue was closed by abergeron on Jun 23, 2016.
- For CMake-based builds (as discussed by @psychocoderHPC and @ax3l), either CUDA_PROPAGATE_HOST_FLAGS must be set to ON or the flag must be passed explicitly, for example $PICSRC/configure -c"-DCUDA_NVCC_FLAGS='-std=c++11'". Running make VERBOSE=1 shows the exact compiler invocation. With the PGI toolchain, the key was to run the makelocalrc script. It is never a good idea to be developing code on the newest releases of Linux and gcc, unless you want to find problems before the maintainers do.
- In the "removeDup is not a function or static data member" case, the failure to include <stack> and other header files (see ideone.com/pyYd3) made removeDup confusing; the OP needed to show how the compiler was being called, and it is tempting to disregard the later errors. The same class of error was hit while building cutorch with luarocks build ./rocks/cutorch-scm-1.rockspec, and in a Blender report the CUDA option was missing from the user preferences, so CPU rendering was enabled by default.
OPCFW_CODE
May 15, 2016. And it looks like a designed-in assertion of the language, appropriate from the very beginning, and I think Java kind of missed the boat on that, but it is really not too late. Virtually opposite the Channel Tunnel, so we can see over to France from there. I start at zero. And the question is what about the dumb programmer, which, by the way, is undoubtedly an Australian animal. In this article, you can no longer use it to manipulate your computer. This is certainly within a reasonable limit, then what I am able to do as a server. And, as said, begin typing. In that situation, when you know, my drawing abilities. Which is why you have a much easier time, since you have function types and closures in those languages. Where have we come to? Ouch. Let us compile, and let us make the pink world a bit smaller. Should I learn Java? And what is the meaning of this? At least if you learn niche languages like French, you sound attractive or clever. It is easy to learn. So, the question was: should you take a job as a software engineer? And I think that is right. The first is to return the letters with an "es" instead of just always adding a digit to the sum. The column is an even row was the width, but we have a random number generator; exceed it with time, so that's the elapsed time that was required to accomplish that. And there are several problems, and if I slightly change the spot, now we want to find out what all of the matches are. Looks like I didn't remove the cast that was causing the problems. But one ability that was missing so far was to be able to do things. There are the Java Foundation Classes (JFC). Money really isn't the first contact of youngsters with programming any more. 14 is the sum of the digits of the number 365 (3, 6 and 5), and it also sets a count. No problem. But it's lost information. Afterwards, go ahead and just install that; we're going to have issues with Android Studio. It would be much nicer if we could choose a different photo for each and every car, so we move right. And it is, it's also the same sort of; they have a number of fussy details. Let's look at the structure of a credit card and find out what we can do. For integers we can take this approach over here. And finally, there's a whole number versus, like, you know, if they have given them different names or if there were no version of it. Here it is, then I can create a subclass of the world where I can eat anything. That's not reason enough to learn the language.
Using VBA Range

This is slightly different than what I am used to. I normally see ranges that look like this:

Dim r As Range
Set r = Range("A1:A3")

However, I am trying to decipher some code from an older Excel file and am wondering how this kind of range selection can be saved in a variable:

Range( _
    "3:3,5:5,7:7,9:9,11:11,13:13,15:15,17:17,19:19,21:21,23:23,25:25,27:27,29:29,31:31" _
    ).Select

I tried to do something simpler:

Dim r As Range
Set r = Range("3:3")

I can see what this does, but I keep getting an error. Does anyone have any ideas?

- I'm sorry, I don't know what you are asking. The code does not cause an error.
- I am sorry, I figured it out: originally I was using Range("3:3").Select instead of Range("3:3"), so it caused an error.
- You can post an answer to your own question. It helps others who will come along in the future.

First of all, let me start by fixing what I did originally (I have 3 accounts, didn't even know that). I had written:

Dim r As Range
Set r = Range( _
    "3:3,5:5,7:7,9:9,11:11,13:13,15:15,17:17,19:19,21:21,23:23,25:25,27:27,29:29,31:31" _
    ).Select

Because I had the .Select on the end, I kept getting an error, when I should have just done:

Dim r As Range
Set r = Range( _
    "3:3,5:5,7:7,9:9,11:11,13:13,15:15,17:17,19:19,21:21,23:23,25:25,27:27,29:29,31:31" _
    )
r.Select

This is what works. Honestly I did not have to put the range in a variable at all and could just have used Range(...).Select, but I was trying to understand the selection portion of the code I am deciphering for work. Learning VBA and deciphering code at the same time is confusing :)
Rings of $S$-integers are finitely generated as rings

Let $K$ be a global field (number field or algebraic function field over a finite field), $\mathcal{V}$ the set of $\mathbb{Z}$-valuations on $K$, and $S \subseteq \mathcal{V}$ a finite set. The ring of $S$-integers is the subring of $K$ defined as $$ \mathcal{O}_S = \lbrace x \in K \mid \forall v \in \mathcal{V} \setminus S : v(x) \geq 0 \rbrace. $$

I am looking to puzzle together references for the following statement: Let $R$ be a subring of $K$ such that $K$ is the fraction field of $R$. Then $R$ is finitely generated as a ring if and only if it is contained in some ring of $S$-integers.

A reference for the full statement would be amazing. I have been able to piece together parts from different references, but the part which is generally missing is that $\mathcal{O}_S$ is actually finitely generated as a ring (equivalently, as a $\mathbb{Z}$-algebra) for any choice of $S$. Where should I look to find a reference for this statement? It feels like a statement from commutative algebra, but a minimal amount of number theory (respectively algebraic geometry) is needed to prove it, at least in the proofs I know of. On the other hand, there is no mention of the statement in any of the algebraic number theory books I consulted. Alternatively, if someone has a very short proof, that is also very welcome. I need the statement for an article, but writing out all the details of the number-theoretic proof would fall outside the scope of the article.

- I'm not sure "Archimedean valuation" makes sense. Archimedean norms on $K$ do not give rise to valuations.
- I am unsure why this question was downvoted. If there is anything I can do to improve upon it, please let me know.
- Probably because it shows no work and it's just a reference request for an elementary statement.
- Ok, I'll add some work. It's an elementary statement, I won't deny that. I've also tried quite some books already.

$\newcommand{\order}{\mathcal{O}} \newcommand{\Z}{\mathbb{Z}}$ Here is a short proof, assuming that the class group is torsion (a result for which you should easily find a reference). First, $\order = \order_\emptyset$ is finitely generated as a $\Z$-module, hence also as a $\Z$-algebra; let $a_1,\dots,a_k$ be generators. In the function field case, pick $v_0\in S$; then $\order_\emptyset = \order_{v_0}$. For every valuation $v\in S$, with $v\neq v_0$ in the function field case, let $x_v\in K$ be such that $v(x_v)<0$ and $w(x_v)=0$ for all $w\neq v$ ($w\neq v,v_0$ in the function field case). Such an element exists: in the number field case since the class group is torsion, and in the function field case by Riemann-Roch.

Claim: $X = \{a_1,\dots,a_k\}\cup \{x_v : v\in S\}$ generates $\order_S$.

Proof: Let $0\neq x\in \order_S$. By definition of $\order_S$, $x$ can only have negative valuation for $v\in S$, so there exists a product $y$ of the $x_v$'s such that $x/y$ has nonnegative valuation everywhere (except possibly at $v_0$), hence belongs to $\order$. So $x/y$ is a polynomial in the $a_i$ and therefore $x$ is a polynomial in the elements of $X$.

- Thx; this is more or less the proof I had in mind in case $K$ is a number field, although I am still hoping to find a reference to replace it. If $K$ is a global function field, what is your $\mathcal{O}$?
- I had in mind the case of number fields when I wrote the answer, but in any case $\mathcal{O} = \mathcal{O}_\emptyset$, so in the function field case it is the field of constants.
Careful there; the argument which you sketch certainly does not go through if $\mathcal{O}$ is the field of constants $F$, as then the valuations of $K$ are certainly not visible as ideals of $\mathcal{O}$. I am not saying that you need a fundamentally different argument for global function fields, but I fear you need to twist it a bit (e.g. use Strong Approximation Theorem to fix a transcendental $t$ which has negative valuation precisely at the primes of $S$, then taking the integral closure of $F[t]$, ...)
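For what it is worth, the simplest instance of the statement can be spelled out directly from the definitions (just an illustration of my own, nothing beyond what is already in the thread). Take $K = \mathbb{Q}$ and $S = \{v_p\}$ for a single prime $p$. Then

$$ \mathcal{O}_S \;=\; \{\, x \in \mathbb{Q} \mid v_\ell(x) \geq 0 \text{ for every prime } \ell \neq p \,\} \;=\; \mathbb{Z}[1/p], $$

which is visibly finitely generated as a ring: every element has the form $a/p^n$ with $a \in \mathbb{Z}$, i.e. is a polynomial in $1/p$ with integer coefficients. Conversely, a subring $R \subseteq \mathbb{Q}$ generated by finitely many fractions $a_1/b_1, \dots, a_m/b_m$ is contained in $\mathbb{Z}[1/N]$ with $N = b_1 \cdots b_m$, which is exactly $\mathcal{O}_S$ for $S$ the set of valuations at the primes dividing $N$.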
Uli Wachowitz wrote:
> 2004-06-02 20:28, Adam Nellemann wrote:
>>such as MRTG or similar, providing that you have some always-on box on
>>That being said, I can see the use of such a feature on m0n0wall
> Me too
>>Well, that is always an easy opinion to have, IF you are lucky enough to have access to one or more always-on box(en) and
> Again, I agree. The fact that not everyone has the possibility to own those 'always-on-boxes' has to be considered. I don't know right now what to answer in that case
>>IF you happen to know how to setup and use such tools!
> If you don't know, you can always learn it. I mean, if you are responsible for a firewall or your amount of traffic, you should have enough ambition to learn that.

The point I was making was this: Even though someone (such as me, or the original poster perhaps) chooses to use m0n0wall, it does not necessarily mean that a large amount of traffic, or a large number of hosts, is involved. Today, even us home users (and I understand you are one too) feel the need to secure our LAN from the "baddies" on the WAN. Thus a number of m0n0wall users will have the need for m0n0wall (and some also the need for monitoring their WAN usage), but perhaps not to the extent that they can justify taking the time to learn all sorts of (more or less) difficult-to-use tools. I guess this might even be their reason for choosing a "compound product" like m0n0wall, where they only have to learn a single interface, and can get help from a single mailing list etc.

>>typically don't seem to recognize the fact that many people do NOT have this option
> Believe me, I recognize this fact. As I said above, I don't know what to answer in this case. Maybe I'm a bit arrogant, but I'm just phrasing my

Ok, fair enough. And no, I wouldn't say you come across as arrogant, at least not after reading your response to my post ;)

>>This is especially true for a project like m0n0wall, which was hardly meant to be a tool for hardcore corporate server admins
> OK, point for you

>>Yes, IF you know how to set it up and use it
> Learn it

See my point above!

>>IF you have a box to run
> Built one

Well, personally I'm one of those who like to do so (I'm even lucky enough to have the money for it), but many do not.

>>Apparently, and not only have we heard it often before, but it is also quite a narrow-minded way to look at things (IMHO, and no offence intended), m0n0wall in particular, which was never meant to be JUST a firewall, as there would then be only one page in the webGUI, namely the one with the firewall rules!
> Mhh, if so, I might have misunderstood the intention of this project

This, I guess, depends on what exactly your definition of the term "firewall" is, something which seems to differ a lot from person to person (and from firewall to firewall!)

>>I'd suggest that you find such a product then, because m0n0wall obviously isn't it, seeing as it has NAT, traffic shaping, a DNS forwarder, a DHCP server, a DynDNS client, and... and... All of which can't be said to be strictly firewall related.
> This depends on how you define 'firewall'. One could (and should) also say that a firewall is a concept, not only a box full of functions.

That might be the right way to look at it.

>>I accept the fact that I can't expect m0n0wall to have all and every feature I want or need, and more to the point: that it might have some that I don't need or want. I don't understand why certain people have such a hard time accepting this "fact of life"?
>>Oh yes, and I'd like to apologise for being instrumental in perpetuating this discussion. Also, if any of the above come across as "flaming", I'd like to apologise for that too,
> No, believe me, I'll never see answers like yours as flaming. We are all different individuals with different points of view. As long as we discuss things in a fair and respectful way, every opinion should be listened to.

I'm glad you feel this way (which is how I feel too).

>>I'm perfectly happy with any additional feature m0n0wall gets, as long as the various security, storage, and other issues are taken into
> The more features, the more points of failure. But I see your point.

Only if said features are enabled. If implemented properly, a disabled feature shouldn't impact the functioning of the firewall, just like it shouldn't pose a security risk (potential or otherwise).

>>why would I want to have yet another complex box, full of moving, noisy parts,
> Because it is fun to assemble something like this?

In my case that is a really good argument, but as I said earlier, this might not be the case for everybody ;)

>>running in my dining room
> You need a separate server room ;-)

Hehe! I guess I do, but alas, all my rooms are already in use :(

>>That, IMHO, is an option suited for admins of large corporate networks, where uptime, stability and extreme and convoluted security measures are appropriate concerns.
> Well, you've just described my Home-LAN

Ah well, my LAN setup could perhaps also be said to be slightly overkill, considering my needs, since I too like to fiddle with these things. Again, this might not be how everyone feels.

>>Also, I still haven't heard any really good arguments against adding these things? As long as they do not pose a potential security risk or take up extreme amounts of CF space or RAM, and can be disabled (or come in the form of user-installable modules),
> Avoiding security risks will become more and more difficult the more features you add. Making features as modules would give the users the freedom to decide what risk to take.

I basically agree that any feature that isn't used by a majority of users should be implemented as a module (although, in that case, it would be nice if the m0n0wall modules were a bit easier to "plug'n'play" for the novice user). However, my point about a properly implemented "feature" not impacting security when disabled still applies.

>>I'm not saying that the suggested feature, or any other, should be added without due consideration, just that there are very good arguments for not making m0n0wall a "firewall is a firewall is a
> This depends on everyone's personal point of view. Mine is, it is a tool to secure my net, with VPN if I like, etc. If I want some colorful, noisy gizmos and fancy reports and bells 'n' whistles, well, ok, I'll know my way to get all this, but I simply don't like those fancy things on a device which is 'merely' responsible for my protection.

But I LIKE colorful gizmos, I'll even pay extra for them... ;) No, seriously: I completely agree with you on this (in relation to a firewall box at least). But I guess it will always be a tradeoff between what some would consider a suitably "clean, no-frills" firewall, and what others feel is a nice "all-in-one" solution.

>>(I just hope I didn't offend too many people in the process?)
> Same goes for me

Hehe, I guess if anyone took offence, it will be their problem then, as we are both fine with this ;)

To conclude: I think we basically agree on most things, with perhaps a few minor variations in how we like to see things done.
Humans use programming languages to communicate with computers and give them instructions on how to execute certain tasks. Using a specific syntax, these languages can be used to develop programs that carry out those tasks, and a career in software development can be very rewarding.

Programming languages are estimated to number in the thousands, so choosing which one to study can be difficult, and new and better languages are added to the list all the time. I have compiled a list of 5 programming languages based on factors such as career outlook, future demand, corporate needs, and the characteristics of the languages themselves.

Python is a popular programming language nowadays thanks to its usability and is an excellent option for beginners. It is a freely available, accessible language with a rich set of extension packages, fast development, easy web-service integration, user-friendly data structures, and GUI support for desktop applications. Inkscape and Autodesk's tools use Python in their 2D and 3D imaging and animation work, several popular video games have been made with it (including Vegas Trike and Toontown), and it remains very popular for programming games in general. FreeCAD and Abaqus, as well as major websites such as Instagram, YouTube and Pinterest, all employ Python. The average yearly salary for a Python developer is $72,500.

Kotlin was created as a cross-platform, general-purpose language for building applications. It has been adopted by more than 60% of Android developers worldwide, and according to various well-known indexes it is the fourth fastest-growing programming language in the world. Its most notable features include full interoperability with Java, concise syntax, and built-in null safety. For those who want to build a career in Android app development, learning Kotlin in 2022 is an ideal option. It is possible to make as much as $171,500 per year as a Kotlin developer, with an average salary of $136,000 per year.

Google created Go in 2007 for development tools and online applications. In recent years Go has gained popularity as one of the fastest-growing programming languages thanks to its simple structure and its ability to handle multiprocessor and networked applications as well as enormous codebases. Go, often known as Golang, was created with large-scale projects in mind, and because of its simple, modern structure and syntactic familiarity it has found favour with many large IT businesses across the world. Google, Uber, Twitch, and Dropbox are just a few of the numerous companies that choose Go as their programming language of choice. The average yearly compensation for a Go developer is $92,000, with the maximum income reaching $134,000.

C# became a popular programming language in the 2000s for its support of object-oriented programming, and it is the most widely used language of the .NET framework. Its creator, Anders Hejlsberg, claims that C# is closer to C++ than to Java. It is well suited to Windows, Android, and iOS development thanks to its integration with Microsoft's Visual Studio tooling. Bing, Dell, Visual Studio, and MarketWatch are just a few of the well-known products and sites that use C# on the back end. C# developers may expect to make $68,500 a year.

Whether you are an accomplished programmer or just beginning to look into the profession, learning a new language is one of the best ways to boost your programming career. Hopefully, you enjoyed reading this list of top programming languages.
Speed dial? This is the web, and yes, I know that. But this web site is so helpful that it needs to be more than a bookmark buried among others. It deserves something better than a save in Pocket, a sync across multiple devices, a share in Delicious or an annotation in Diigo. This site deserves old-school treatment and the web equivalent of speed dial: create a shortcut on your desktop.

So what is this Holy Grail of web sites? CanIUse.com. I use this site weekly, and sometimes daily. It is a well-formatted and heavily annotated reference for browser support of front-end web technologies that covers desktop and mobile web browsers. Why is this site so awesome?

Not only is the search zippy, the search input field is the default selected item when you load up the site. I don't even leave my keyboard. Type "caniuse.com", then type the item of the day, such as "flex". Ding! I'm done.

Clear visuals for browser support

No reading articles and checking dates or wading through "I think this is possible" StackExchange posts. Simple data, simply presented.

- Dark gray bar highlights the current browser version.
- Little yellow boxes indicate the need for a browser prefix.
- Light gray numbers connect expanded notes with browser versions.

Sobering visuals for browser usage

A very unassuming toggle towards the top (Current aligned vs. Usage relative) brings perspective to how popular older browser versions are. You can also select Show All to review a bunch more browser versions if you are feeling particularly geeky or nostalgic.

Easily update generated CSS3 code

When CSS3 hit the streets, lots of cool generators popped up for shadows, gradients, etc. But most of them haven't been updated in years and still pump out code that is chock-full of browser-prefixed lines. After a quick look at CanIUse.com I know exactly what I can pitch from the generated code. For example, here is a code block from one of many online gradient generators:

background: rgb(30, 50, 230);
background: -moz-linear-gradient(30deg, rgb(30, 50, 230) 30%, rgb(90, 140, 250) 70%);
background: -webkit-linear-gradient(30deg, rgb(30, 50, 230) 30%, rgb(90, 140, 250) 70%);
background: -o-linear-gradient(30deg, rgb(30, 50, 230) 30%, rgb(90, 140, 250) 70%);
background: -ms-linear-gradient(30deg, rgb(30, 50, 230) 30%, rgb(90, 140, 250) 70%);
background: linear-gradient(120deg, rgb(30, 50, 230) 30%, rgb(90, 140, 250) 70%);

But based on current browsers, I only need this:

background: rgb(30, 50, 230); /* IE 6-9, Opera Mini */
background: -webkit-linear-gradient(30deg, rgb(30, 50, 230) 30%, rgb(90, 140, 250) 70%); /* Android 4.3 and older */
background: linear-gradient(120deg, rgb(30, 50, 230) 30%, rgb(90, 140, 250) 70%);

Handy access to additional resources

The Known Issues and Resources tabs at the bottom are little, consolidated gold mines of info to help you learn about the code.

Sort out what to do with IE

This is probably the biggest use of this site for me. Working with SharePoint, IE support and older versions of IE come with the package. CanIUse.com empowers me to quickly sort out what is and is not supported in IE, in which versions, and what to do about it. IE conditional comments used to be the go-to solution for CSS workarounds, but they are no longer supported in IE 10 and higher. So going back to our original example, flexbox: I can quickly tell from CanIUse.com that if I am going to use flexbox properties with a SharePoint 2010 site, I really need to rethink that approach or use tools to emulate it in old versions of IE.
If I want to use it in SharePoint 2013, I need to be aware of which IE versions I need to support and know what I can and can't use, since IE 10 only supports the 2012 syntax for flexbox. Since there are no conditional comments for IE 10, I would need to carefully plan and test any use of flexbox in IE 10 and how it may affect other browsers.

A quick note in closing…

No, I wasn't paid to write this blog post. This site is just truly a great resource!