Document
stringlengths
395
24.5k
Source
stringclasses
6 values
Can't stop Gitlab's built-in Nginx I'm using Gitlabs latest Omnibus-package on an EC2 Ubuntu machine. To refresh my SSL certificate (issued via Let's Encrypt) I need to stop Gitlab's Nginx so Let's Encrypt can verify that I possess the domain. Therefore I hit sudo gitlab-ctl stop. The sudo gitlab-ctl status afterwards is: down: gitlab-workhorse: 325s, normally up; run: log: (pid 1109) 5361843s down: logrotate: 324s, normally up; run: log: (pid 1104) 5361843s down: nginx: 324s, normally up; run: log: (pid 1103) 5361843s down: postgresql: 324s, normally up; run: log: (pid 1101) 5361843s down: redis: 323s, normally up; run: log: (pid 1102) 5361843s down: sidekiq: 322s, normally up; run: log: (pid 1112) 5361842s down: unicorn: 322s, normally up; run: log: (pid 1100) 5361843s However when I access my domain I get Nginx' 502 Bad Gateway. How can I truly stop its internal Nginx. Besides the certificate part the etc/nginx/gitlab.rb is still the default. Here's the output of ps -eaf|grep -i nginx root 1091 985 0 2015 ? 00:07:15 runsv nginx root 1103 1091 0 2015 ? 00:04:14 svlogd -tt /var/log/gitlab/nginx gitlab-+ 24669 1 0 2015 ? 01:03:38 nginx: worker process root 27272 1091 0 13:12 ? 00:00:00 /opt/gitlab/embedded/sbin/nginx -p /var/opt/gitlab/nginx ubuntu 27275 27254 0 13:12 pts/2 00:00:00 grep --color=auto -i nginx Is it related to https://gitlab.com/gitlab-org/gitlab-ci/issues/136#note_1196543? (the error itself can also mean https://github.com/gitlabhq/gitlabhq/issues/1527#issuecomment-8821679) From what I read the other people are actually trying to run Gitlab. All I want is to stop its Nginx. If you have access to the server, do you see an nginx process still running? I don't know where to look for it. It is neither installed as a service (sudo service --status-all) neither initctl (sudo initctl list) I mean: ps -eaf|grep -i nginx return nothing? I added the output to the question. Don't you have a reverse proxy ahead of your gitlab instance ? Seems it's something else than gitlab that answer 502 @Tensibai Yes, I have. But it is stopped independently of gitlab (nginx stop) You seems to still have something in between which answer to external request, contact your netadmin ... Just run this sudo gitlab-ctl stop nginx https://stackoverflow.com/a/32974637 For completeness sake what I ended up doing three years ago was what @user8215365 suggested. Simply invoking sudo gitlab-ctl stop nginx did the trick.
STACK_EXCHANGE
How should we steer lost users? Here is a thing I see fairly often here on Meta.SE: The question isn't remotely on-topic and was rightly, and quickly, put on hold. Somebody left a comment (not shown) pointing out what Meta.SE is and steering the OP elsewhere. Sadly, users don't always sit on their questions in real time; sometimes they ask and then come back in a few hours. If I were to cast that last delete vote, what the OP would see is "my question disappeared", with no further feedback unless the page is still open in a browser tab. The notification for the helpful comment gets removed with the post. I think only experienced SE users would find (a) the right profile tab and (b) the "show recent deleted posts" checkbox. and experienced SE users are not the ones I'm worried about. How should we respond to these posts? Is there a reasonable amount of time to wait, and after that we should go ahead and delete it? If so, how long is reasonable? Should we just leave these and wait for the Roomba to get them? If so, then when should we be using delete votes on questions? Or should we not concern ourselves with the reactions of new, lost users and delete at will? In the past I've gone ahead and cast those delete votes, but with the new "welcome wagon" initiative I'm not sure I should be. The Roomba period for closed questions is a mere 9 days, which really isn't that long in the grand scheme of things... How many of these bad questions would pile up over a rolling 9 day window? Or should we not concern ourselves with the reactions of new, lost users and delete at will? - I will vote on this option, although I am admittedly not all that big on this new "welcome wagon" thing. Since there are quite a lot of these types of questions every day, I don't like the option of allowing them all to sit and litter the site. related, very similar post @n8te: Note that it is really easy for new users to get lost and end up here. They just have to go to stackexchange.com and click the mysterious "meta" link at the top of the page... which then dumps them on the MSE front page with no explanation of where they are or what kinds of questions they should be asking. There is also a friendly "Visit Stack Exchange" button accessible in two clicks from every page served anywhere on the SE network, which gets them halfway to MSE. I don't ever randomly show up on a site and blindly post something without looking around for a minute first so I cannot understand that kind of mentality, but I can understand if you have more sympathy for that type of user. Personally, I don't. To each their own. Related: Why do users often ask blatantly off-topic questions here on Meta? Of all the (I admit auto- ) comments I left on those posts I hardly ever get a response nor do I notice any action after I left all my guidance. If anything, the experience is unsatisfactory for me because after all our effort , meta discussions and what not, lost users still drive-by and after that will remain lost. We can't help people that can't or won't respond / react to anything you throw at them nor should we keep guessing what drives them. Removing those type of questions from the front-page, closing those questions as off-topic and deleting those questions as quickly as possible is the best service we can offer for the next lost visitor. Because if the next lost user sees a post similar to what they plan to ask they are more likely to follow. Holding back on your delete is not needed. Please use the moderation capabilities you're trusted with. I was running a survey, but was told to stop because they were going to make an officially-run survey. However, that never came to fruition. I see I made an awful mistake there: Holding back on your delete is not needed I don't mean we should delete you. Instead read: Holding back on your delete VOTE is not needed. Sorry for that ... @Chair I don't think it's mysterious considering people who are desperate for something could ignore anything that might hinder from getting the solution... you only need to scroll the page to the bottom to get the badge. @rene getting these questions off the front page is important, hence the downvotes (you'll see mine in the screenshot). Lacking any data or more than scant anecdotal evidence about what happens to these users/questions, I'm wondering specifically about hasty deletion. @Chair no, it's almost instant when you reach the "badge" section. However, you can also ask a question without reading a tour. I just tried registering a new account after clicking the 'ask question' button. After confirming the email address, I got redirected straight away to 'ask question' page. I did get an inbox notification about "taking a quick tour", and after opening the tour and holding "page down" until the "badge" section, I got the badge in under 5 seconds. I'm not convinced there is so much hasty deletion. I and others post plenty of delv-pls requests where such posts have lingered for a couple of hours. My experience is the same: no response from the OP in 99 of the 100 cases. We're doing the best with what we have. Quite frankly, there's no great way to deal with this. MSE's a bit of a tough site for folks unfamiliar with the way folks do things, but at the same time, this sort of meta-moderation helps keep the site clean. I comment. I let folks know why. I rarely see the same faces posting an on-topic meta post. Once someone self-deleted and apologized, and I was surprised, and I hope the lad comes back with an on-topic question some day. At the end of the day, though, I'm not sure many of these users are actually aware that their posts are off-topic. Admittedly, there's a semi-organized group of users helping with this on Tavern on the Meta, and we do try to get rid of these as soon as possible. Many of us comment (and occasionally cheer when a user realises the folly of their ways). The problem with waiting is... people forget, and dealing with stuff as they come in means they get handled at all. I guess it's a matter of trying to balance between new user friendliness, what seems like obvious abuse (folks asking programming questions here to get around Stack Overflow bans), and keeping the place on-topic and useful. Moderation here's a little spotty at times (sorry guys!) and the community's often stepped up to help deal with this. If we do flag, our CMs (busy as they are) or a dev who decides to help out, has to deal with it. It's significantly more efficient to just do it ourselves. If the folks who run/moderate the site have issues, it might be worth sounding out and letting folks know. Personally, I'd like to see the new user template and see if it helps. To moderate after a user has already posted an off-topic question feels too late. I do believe that users can see their own, new-ish deleted question if they know where to look, but a new user might not. I'd have less sympathy for folks with SO accounts posting programming questions. I'd also like some mechanism to let new folks know what this site is for - I proposed something like that for meta previously. Also don't forget that many users end up following a link here and don't realize that they're on a different site, and post here. It's been shown in a micro-study run by ArtOfCode that users don't pay attention to the right sidebar when asking (and if they do and their question is made off-topic by those guidelines, they just think "they won't mind, right?"). This is much more likely to happen when the new responsive design gets rolled out network-wide, and especially since the new proposed design for SU makes it look too much like MSE. Hold is for 5 days and deletion only after 48 hours, with a long list of exceptions. Some sites are helpful about suggesting an appropriate site or means to improve the question to make it on topic, some are great about migrating (along with the answers) misplaced questions, and some leave the question up for a while after the comments before applying the hold. Other sites can find five people to VTC before the person can get back with their coffee. There's not really an evenhandedness - 'take it to XYZ' is said on occasion. I suggested a solution for new users answers, a "Help Button" above the "Post Your Answer" button, new users questions likely will be dealt with by the "new question template", once it rolls out. Some people do complain that they feel set upon, but usually not here, instead on a blog they create. Sometimes there's a slough of comments awaiting the OP's return between the other users, debating the post; and calling them an OP (one even asked why they were referred to as that, what is it). I think if the question makes sense in some context a polite comment asking for clarification is a kind means to deal with some posts, if it's clear that it's not clear a comment is nicer than closing. Only new user abuse and SPAM should be dealt with uncerimoniously. Some need a tour of the help pages, and some need to score a few hundred on the interpersonal skills SE site. I remember running a new user drive on a beta site. A dozen people showed up (Rep 1), their introduction to the site could have gone better; many of them are teachers and post doctoral (now a couple of months later they have fairly high reputations). I'm glad they stuck it out, we're lucky to have them. It's the quick deletions -- possible immediately after closure if the question is downvoted -- that I'm mainly concerned about. Off-topic questions should be put on hold, but should we be using those delete votes immediately, waiting for the Roomba, or something in between? @MonicaCellio - I always thought there was a higher level of review for 'hammer close' and Close Vote Review Audits to make certain that people, even with a high Rep, made correct decisions. If someone has a high Rep isn't it "trust but verify"? - Shog9 put up a post about 'Turbocharging the Roomba' short version: be nice with 'good faith questions' and 'mean' to abuse/SPAM/random. The quick deletions are mostly for clearly OT questions. There's usually a comment saying why. The problem with waiting 48 hours is well... People forget. Most of these posts are off the front page so... There's no audit - outside someone going 'hey, this shouldn't be closed' Here's an example, racking up downvotes, commented, and held - all before they can check out what's happening. This is the reception they received on their first visit to SE. It would have been more welcoming if they could have acted on the comment before the shunning.
STACK_EXCHANGE
INTRODUCTION TO COMPUTERS WHAT IS A COMPUTER?A computer is an electronic device which is capable of handling large amounts of data and possesses characteristics such as high speed, accuracy and ability to store a set of instructions (program) for solving a problem. Computers can perform a veriety of mathematical calculations. They can repeat a complicated mathematical calculation million times without error. They can sort data, merge lists, search files and make logical decisions and comparisons. However, computer is devoid of any original thinking. It does nothing but what it is told to do. The computer is provided with a set of instructions called a program which controls all the operations of the computer. HISTORY OF COMPUTERSThe second half of the 20th century has come to be known as the “Age of computers”. The human fingers, the first device used for computation, gave way to the ABACUS, the first example of a digital computer. Joseph Jacquard in 1801, was responsible for taking the first major step in the development of computers. He became the forerunner of the punched cards still in use today. Charles Babbage’s “Differential Engine” completed in 1833, proved to be the fundamental basis for the development of the modern computer and he is rightly called “the father of the computers”. In 1890 Hermwn Hollertith continued the work and devised a coding system to be punched on cards to represent data. He started selling his machines and the company he founded is now called the International Business Machines (IBM). TYPES OF COMPUTERS 1) ANALOG COMPUTERSThey process data into a continuous form. This type of computer is useful for Scientific and Engineering applications in radar work, guided missiles and space programs. The main disadvantage of an Analog Computer is its accuracy factor and limited storage capacity. Hence, they are not suitable for processing business data. They are capable of (a) storing data for processing; (b) performing logical operations; (c) editing or deleting the input data and (d) printing out the result at high speed. Hence, they are most suitable for business applications. HARDWAREIt is a general term used to represent the physical components of the computer itself. It includes:- (1) Input devices (Keyboard) (2) Output devices (Printer) (3) Central Processing Unit (4) Back-up storage (DVD, Pen Drive) SOFTWAREIt is a general term to describe all forms of programs associated with a computer. Without software, a computer is like a car without petrol. Following are 4 categories of software: (1) Operating Systems (2) Utility Programs (3) Language Processors (4) Application Programs MEMORIES(a) ROM (Read Only Memory): It provides the computer with a list of instructions for its operation. (b) RAM (Random Access Memory): It provides instant access to any item of information stored in it. HARD DISK CAPACITY Hard Disk capacity is measured with Byte:1024 Bytes = 1 Kilo Byte (KB) 1024 Kilo Bytes = 1 Mega Byte (MB) 1024 Mega Bytes = 1 Giga Byte (GB) 1024 Giga Bytes = 1 Terra Byte (TB) PARTS OF COMPUTER - CPU (Central Processing Unit) - Mouse (with mouse pad) - Pen Drive - CD Drive NAMES ASSIGNED TO HARD DISK(1) C NAMES TO FLOPPY & CD DRIVEFLOPPY DRIVE Keyboard ShortcutsCtrl+O --> To Open Ctrl+N --> To Create New Ctrl+S --> To Save Ctrl+F4 --> To Close Alt+F4 --> To Shut Down Shift+F10 --> Right Click Ctrl+C --> To Copy Ctrl+V --> To Paste Ctrl+x --> To Cut Ctrl+P --> To Print Ctrl+Z --> To Undo Alt+Equal --> Auto Sum F7 --> To Correct Spelling Shift+F7 --> To Show Dictionary Alt+Tab --> To change Window Alt+Space+N --> To minimize the Window Alt+Space+X --> Maximize the Window To Know the Weekdays: Type Date in A1 --> =Weekday(A1)
OPCFW_CODE
How can I create a branch for a non-tip revision in Mercurial? In my repo, I have the revisions 1 to 10. I've pushed up to 5 (so the next hg push would publish revisions 6-10). But I have to interrupt my work now and the result isn't 100% complete. So I'd like to move the revisions 6-10 into a new "experimental" branch to allow someone else to complete the work without disrupting the sources for everyone. How can I add a branch to a non-tip revision (in my case: Starting with revision 6)? Or should I use a completely different approach? You cannot apply a branch name after the fact without modifying your history. The most simple approach is to ask the other users to use revision 5 as the parent for any changes they create. For example, the other users would: hg clone <your repo> or even hg clone --rev 5 hg update -r 5 work, work, work hg commit When they commit a change, it will create a second head on the default branch, but that should not create any problems. You will simply need to merge the two heads together once your experimental changes are complete. That being said, moving your changesets onto a branch can be accomplished using Mercurial Queues (MQ). The following sequence shows how it be done: hg qinit (Create a new patch queue) hg qimport --rev 6:10 (import r6-10 into a new patch queue) hg qpop -a (remove all patches from your working copy) hg branch <branch name> (create your new experimental branch) hg qpush -a (apply all the patches to your branch) hg qfinish -a (convert all patches to permanent changesets) Great answer from Tim. What he's suggesting (in part one, ignore the MQ stuff at this stage) is called an anonymous branch and it's much more suitable for short term entities like bugs and features than are named branches which are best for long-term things like releases. The other piece I'd add is that after committing in step four if you want to push only the new changeset (11) you'll need hg push -r 11. If you just do hg push you'll send 6 through 11. The other option is: 1) go to revision where you want to start your new branch from, 2) start a new branch with an empty commit, and 3) rebase the revisions onto the new branch. Tim already has good suggestions. Additionally you could push your experimental changes into a distinct experimental clone on your central server (I guess you use one). This clone could also be used by other developers to push their not-yet-finished work in order to let others review or continue it. It is also clear that this clone's code is not ready to be used. Once some task is finished, the corresponding changesets can be pushed to the stable repository. Actually named branches are a good idea for your case, but the fact that their names are burned into history mostly is more a problem than a feature. IMHO Git's branch names are more practically. However, to some extend you could also handle your case with bookmarks, which are pushable since Mercurial 1.7 (not sure here). That is you bookmark revision 5 with something like stable (or whatever you agree on in your team) and revision 10 gets bookmarked with something like Aarons-not-finished-work. The other developers would then just pull stable, except your colleague who is supposed to continue your work, who would pull the other bookmark. However, personally I did not use a such workflow yet, so I cannot say if it performs well in practice.
STACK_EXCHANGE
Why do Israelis keep one day of Yom Tov? From what I understand, we outside of Eretz Yisrael we keep two days of Yom Tov as a continuation of the minhag that evolved from the time of the Bais Hamikdash when we would not necessarily know when Sanhedrin in Yerushalyim declared it to be Rosh Chodesh. In order to be sure that we keep Yom Tov on the correct day, we keep two days. What I don't understand however, is why is this not the minhag in Eretz Yisrael? After all, just because you live in Eretz Yisrael today, it doesn't mean you are not technically in Golus. Also, there is no Sanhedrin in Eretz Yisrael today just as much as there is no Sanhedrin in Chutz L'Aretz today - meaning there is no central body anywhere to establish when Rosh Chodesh is. According to all these educated assumptions, I can't see why Israelis would hold one day while British Jews, for example, keep two. I would appreciate very much if someone could clarify. are you combining two ideas? your first paragraph states that the reason was purely distance and your second paragraph brings up the idea of being in golus. If the reason was distance, the state of being in golus is immaterial. Rashei Chodashim these days were established ahead of time by the Sanhedrin of Hillel Hakatan. That we should keep doing a second day of Y"T in the Diaspora was established along with that as an acknowledgement that we're observing R"Ch by the authority of that enactment, not just based on a rock moving through space or some algorithm on paper. See R' Hirsch's commentary on Ex. 12:2. This still doesn't get all the way to "why not in E"Y, too?" so I'm not posting an answer yet, but I think this conceptual framework is important. ummm...yes being in Israel does mean technically you are not in Golus. That's what the word means: exile. Perhaps they are in a non-literal "Golus", but that is the one that is non-technical. Ramban gives the date of today's fixed calendar at about 386 CE. Before that point, the Jews in Israel received a messenger informing them of the proper date of yomtov, and thus kept one day. The Jews outside of Israel did not get a messenger in time and thus kept two days to play it safe. Around 386, when the calendar was fixed, the policy became to keep things more or less the way they had been -- if you were in Israel, keep one day, if you weren't, keep two. When we say "galus", sometimes (e.g. in a Tisha B'Av sense) we mean "there is no Temple anymore"; but here we mean very simply "outside of Israel." There were 300 years when the Temple had been destroyed and yet there were Jews in Israel who could still be informed that the new moon had been sanctified (by a Sanhedrin located in the north of Israel, where things were more stable). And in 386, Hillel (great-great-etc. grandson of the famous one) used his Sanhedrin to sanctify the calendar for hundreds of years into the future.) Rambam's opinion is that those locations that were within messenger distance still keep one day, and those that aren't keep two; unless we know otherwise, we use as a rule of thumb the Mishnaic borders (per Gittin 1:2, Ashkelon to the south, Ako the north, Rekem the east, and implicitly the Mediterranean Sea to the west). Fascinatingly, Ritva opines that when we switched to a fixed calendar, the decree was to change the borders of the practice as well -- from here on out, when they said "Israel" they meant the Biblical borders; when they said "not Israel" they meant beyond those borders. (Even though this did not match the exact parameters of the older practice, but it made for a simpler policy.) Around 386, when the calendar was fixed, the policy became to keep things more or less the way they had been -- if you were in Israel, keep one day, if you weren't, keep two. moves toward answering "why," the question at hand, but is a step short, I think, of fully answering it. doesn't answer the fact that nowadays doesn't have sanhedrin @juanora Yes it does. It says a previous Sanhedrin already sanctified the present and near-future's months so there is no need for its current continuance for clarify in calendrical calculations. This Halakha comes from Takannah made by Chazal, in other words it is a m'd'rabbanan. We find in Pesachim 52a that Rav and Shmuel(the last of the Tannaim) instituted the two day Hag for areas not able to be reached by Shluchim(messengers) within five days from Jerusalem. However, this still does not apply to all of modern day Israel. Rav Ovadia Hedayyah answered this question when it came up before the Beit Din Rabbani HaGadol(the Rabbinut's Supreme Rabbinical Court), as far as exactly why there is this minhag and where exactly it is in effect in Israel. Please be aware that the author of this answer, "Michael Tzadok," has recently been outed as a Christian missionary. His real name is Michael Elk. Please also see here and here.
STACK_EXCHANGE
First things first: are you supposed to be able to install on this hardware? Do you have enough memory, enough disk, the right cd, nic card etc? If this is old software on new hardware, it's possible that the old version just can't work because the new hardware is too fast or is otherwise incompatible. You may just need new drivers, but you shouldn't be guessing about it: go find out. Second: if you seem to be having problems you didn't expect, strip out everything you don't need for the install. If there's a SCSI card that you are going to use later, rip it out for now. If you have more memory than the install is supposed to need, take that out. Anything you can add back later should come out now. It may be extremely unlikely that the nic card is interfering with the install, but if everything is strange and weird, take it out, because it certainly isn't helping the install, is it? (unless it's a network install, of course) If you have a ready supply of other hardware, you may try swapping things like CDROMS or even hard drives. If it's IDE, try it on a different controller, or try splitting the hard drive and cd between the controllers. For SCSI, try the boot drive it at ID 0 or ID 6. Make sure you have proper termination and term power. If you have more than one SCSI drive, disconnect the others. Still having problems? Many installs have alternate screens, usually accessed by pressing ALT and a function key, that may give you a behind the scenes look at the installation. You might get more information about any problems there. Your BIOS can be a source of problems. For a troublesome install, you might want to turn off memory caching, hard drive caching, change disk geometry, turn off special features like P4 hyperthreading etc. You may need to change the default boot device, or change the addressing of peripherals, or specify legacy IRQ's if you have any older ISA devices. If you suspect hardware problems, try installing something different: Linux instead of SCO, Caldera instead of RedHat, or even Windows if you have to. Note that "it works in xyz" doesn't necessarily mean that the hardware is good, but the experience may give you more information than you had. Of course your install media can be at fault too. Most install cd's can be looked at on another system, or you might just try a simple "ls -lR" of it and watch for errors. Are you doing a dual boot install? This may require resizing existing partitions, or could even require 3rd party boot managers. Have you done this OS before? If not, your assumptions from other OS installs can lead you astray: for example, Linux and SCO Unix have completely different concepts with regard to filesystems on disk partitions. Linux puts file systems on partitions (except in the special case of LVM) while SCO breaks one partition into multiple filesystems. If you've been at this a while, you may have a completely wrong idea about how modern virtual memory systems handle swap, and that could cause you to make bad judgements about that. Got something to add? Send me email. More Articles by Tony Lawrence © 2012-07-19 Tony Lawrence The camel has evolved to be relatively self-sufficient. (On the other hand, the camel has not evolved to smell good. Neither has Perl.) (Larry Wall)
OPCFW_CODE
When you load the page with the applet the following screen will appear: here you can choose the physical locality with which you'll connect to the Net server. The Net server address (and port) should be left unchanged (due to applet security restrictions, you can only connect to the same server the applet comes from, unless you choose a different security policy). Then you can press login, and you'll enter the Net server; it might take a while to establish the connection, and you will see messages in the System Messages area. If the connection is established a new window will pop up: This is the typical KlavaNode window: There are two processes running in the node in this example: AppletTestProcess reads from the keyboard the locality (peer) with which we want to communicate, and then enters a while loop (until string QUIT is entered) and sends every input string (in(!string)@keyb) to the remote locality (out(string)@peer): rec AppletTestProcess declare locname screen, keyb ; var peer : loc ; var string : str ; var again : bool begin out( "Insert the locality to communicate with\n" )@screen ; in( !peer )@keyb ; out( "You chose: ", peer )@screen ; out( "Insert a string to send to ", peer )@screen ; out( "or QUIT to terminate\n" )@screen ; again := true ; while again do in( !string )@keyb ; if string = "QUIT" then again := false else out( string )@peer endif enddo ; out( "Thank you, BYE BYE\n" )@screen end ReceiverProcess continously extracts strings from the local tuple space (in(!string)@self) and shows them on the local screen (out(string)@screen) rec ReceiverProcess declare locname screen, keyb ; var peer : loc ; var string : str ; var again : bool begin while true do in( !string )@self ; out( "RECEIVER : " )@screen ; out( string )@screen ; out( "\n" )@screen enddo end You may test this example by choosing to communicate with the same locality you connected to the Net, or by opening another browser window, launching another applet and login with another locality. In the xxx_keyb text area you have to insert the name of the locality you want to communicate with, specifying that the string you entered is of type loc: i.e. if you want to communicate with locality foo you have to insert foo:loc. And then press ENTER or click on the OK button. Then you can start typing strings on the same textarea (if no type is specified, string is assumed) and press enter when you want to deliver that string (ESC or Cancel button will erase the current content of the textarea); when you are done you have to type string QUIT. You can disconnect from the Net by: Note that both screen and keyb are logical locality, mapped to the right physical locality by the environment (see the picture of the node above). You may want to take a look at KlavaAppletTest.java and at the xklaim compiler generated files for the processes above: AppletTestProcess.java and ReceiverProcess.java. Now you may want to try this applet (note that you need a Java 1.1.x enabled browser; Netscape 4.5 and Internet Explorer 5 should work fine; we had some problems with Internet Explorer 4): please alway refer to this page to access the applet, whose place is likely to change in the future. Send any bug to email@example.com Last Update 01/08/2003
OPCFW_CODE
So many people have been confused of recent, and in their confusion they have gone astray. I have read countless articles coming from those supporting the infiltrators. They are saying that the leadership of IPOB betrayed Mazi Nnamdi Kanu. They also talk about Umueri Account and ask why DOS abandoned the account. They argue that Umueri account was an account created by Mazi Nnamdi Kanu for ESN. That in itself is correct. Some time after the extraordinary rendition of the IPOB leader Mazi Nnamdi Kanu, DOS told IPOB members that Onyendu have authorized that the Umueri account should not be in use any more. They went further to say that a new account will be created in that regard. After this announcement, some people started questioning the order. During this period some people became selective of what order to take. Any order that goes against some people they love becomes betrayal. They will accept a message from DOS today and disagree tomorrow. They will in the same vain accept reports from the Lawyers this minute and reject another the next minute. Umueri account was indeed an account set up for ESN, but immediately Onyendu was kidnapped, the persons in charge of the account in USA started disobeying orders and kept ESN money for themselves,instead of releasing it. This was relayed back and order came from the highest command to stop using the account as ESN account. An account was opened in that regard. Those in charge of Umueri account were a certain Madam Oyibo and her very right hand woman named Nelly Offoegbu. I still wonder how some people want IPOB to still be paying in money into an account being managed by two women who went on to be accusing IPOB leadership right left and center without any evidence. At first, they claimed they have evidence. But when people started demanding that they provide this evidence they have, they turned around to say it was Elohim that revealed all these things to them. If you head an organisation, will you pay money into an account you are not in charge of? MNK was abducted and these two women went rogue, and you are still asking why Umueri account is not in use. Even one of the women are now fronting Idu and calling Biafra a slave name, yet you want us to submit our money into an account such a person is in charge of. Some thing is definitely wrong with some of us. Why are the gone astray Umuada now questioning the people behind the Umueri account? Why are they asking to know how money is coming in and going out of Umueri account. Recall that this so called Umuada after the DOS disbanded the very account continued using the same account to organise fundraising. They did it because they believed in the fake accusations leveled against DOS by Nelly Offoegbu. They thought they are rescuing the struggle by going against what DOS said. Today, most of them are saying that the aim of this duo was to cripple IPOB financially. Recall that they attacked the very IPOB finance. The game was to accuse the finance team of corruption, create doubt on the mind of the people and then discourage people from donating to IPOB. Countlessly, their supporters bragged how IPOB leadership is no more getting money. They looked at an organization that have lasted over 12 years and thought they can kill her just like that. Nelly Offoegbu and Madam Oyibo are no more members of IPOB. Both of them are the people in charge of Umueri account and yet you want IPOB leadership to keep using the account. Please, what exactly is your brain made of? Till this very day the infiltrators are claiming to be in charge of ESN. They are still organizing ESN fundraising. Infiltrators don't know the channel through which ESN get their funds. Yet they still fool you with ESN fundraising. The only thing you are holding firmly is that Umueri account was put in place by Onyendu, and I ask you this question. Did Onyendu say Nelly Offoegbu and Madam Oyibo should use IPOB money anyhow they want? Do you know the amount that was in that account before these women hijacked it? Do you know how much they have used the name of IPOB and ESN to garner? Where is these monies? What did this two women use the money for. They tell you they are taking care of ESN and you have failed to ask them which ESN. The one they are in charge of or what? How can DOS be in charge of ESN, command ESN and it will be expelled IPOB members that will be taking care of them? What actually is wrong with you guys bikonu? What exactly? How come you all have suddenly stopped reasoning? IPOB is not bankrupt and can never be bankrupt. For your information, IPOB didn't start raising funds today. IPOB have been raising funds and can't be crippled just because few infiltrators and their supporters stopped contributing into IPOB purse. You don't have what it takes to stop IPOB. All of you that followed the infiltrators will all regret your actions. IPOB will stop all your vicious attempts to sabotage her. Most of you at the appointed time will answer for the crimes you all committed against IPOB, using the name of IPOB. MNK will come out and disgrace you all. Remember that you must not believe this, but all of you will see it happen. Repent now or perish with the infiltrators. Elochukwu Ohagi, Philosopher, Teacher and Activist, 2022.
OPCFW_CODE
2.3 3D Text Style 3 1.Introduction3 lessons, 03:54 2.Techniques and Tools for Creating 3D Text8 lessons, 57:08 3.Conclusion1 lesson, 01:10 2.3 3D Text Style 3 Hi everyone, welcome back to the course. Here is what we'll be learning to create in this lesson, some dynamic sloped lettering that sort of looks like it's being raised and pivoting up out of the ground. To achieve this look, I'll be introducing you to the Shear Tool, so let's get going. Like the previous lesson, choose a typeface that's as angular as possible and with as few curves as possible, I've chosen this one. Apart from a curve on the D, it's got very nice, clear points to play with, as you can see here in the V. Again, like the previous lesson, I'm going to demonstrate the method by just showing you how to do it with one letter. So then you can go ahead and apply that to an entire word. So I'm going to copy the V here, bring it into my document. So I'm just going to center it and make it slightly bigger. And while I'm at it, I'll just create a background, And I'll make that green. And lock that. So I have my V selected again, I'm going to just go and change the color of that. And now on to this Shear Tool, this is a great tool which helps you to sort of warp paths, which can be really useful when you're trying to suggest perspective. So you'll find the tool down here in your toolbar. It may be hidden underneath Scale Tool, so click and hold and you should find it in there. So have that selected, hold down Shift and just click and drag from a point on your path. So as I'm holding down Shift and moving the mouse back and forward to the left and right, you'll be able to see that the shape is being sheared and being morphed as I'm moving the mouse. And as I'm holding down Shift, it's just keeping it at a horizontal level. So if I let go, then it will snap to that point. So that looks pretty good to start with. Then what I'm going to do is copy and paste in place, Shift+Cmd+Option+V. I'm going to make this black, send it behind the current shape and the layers, and then I'm just using the usual arrow tool. I'm going to just transform the points and squash it to make a slightly more squashed version of the same letter. And now what I'm going to do is simply join each point to the corresponding point using the Pen Tool with a black fill again. Just paint in the parts of the letters that you can't see. It could be useful now to change the top color to an outline so you can see better what's underneath. And you'll be able to see the points better, so you can join them up and aim for them slightly easier. There we go, I'll change that back to a fill, move it to the top, bring it to the front. And there you have it, so it's looking good already. I'm going to group all of that together. And then just to help suggest an extra dimension and extra perspective, you can then use the Shear Tool again. This time, without holding Shift, so you are able to do it much more freer, you can just move your mouse around. As you can see, it's morphing the shape again. So I'm going to drag it up slightly like this, and you can see that it's kind of making it look like it's going off into the distance slightly. So there you have it, just in a few simple steps we've got a really interesting, dynamic 3D style of lettering. So just continue playing with that, I'm going to finish by centering it here in the middle of my artboard, there we go. So thank you for watching, and I'll see you all in the next lesson.
OPCFW_CODE
from sklearn.base import BaseEstimator, ClassifierMixin from sklearn import model_selection, metrics import numpy as np from scipy import sparse, constants class CrossTrainer(BaseEstimator, ClassifierMixin): """Implementation of the method described in CrossTrainer: Practical Domain Adaptation with Loss Reweighting""" def __init__(self, clf, k=5, delta=0.01, verbose=False): """ Initialize the CrossTrainer class. :param clf: The base classifier with a fit() function that takes a sample_weight argument. (Ex: sklearn's SGDClassifier(...)) :param k: The number of folds in k-fold cross-validation for fine-tuning the weighting parameter alpha. :param delta: The precision of the approximation of the optimal value of alpha. """ self.clf = clf self.k = k self.delta = delta self.verbose = verbose def fit(self, Xtarget, ytarget, Xsource, ysource): """ Approximates the optimal weighting parameter alpha through a specialized hyperparameter search and outputs the best model trained on a combination of target and source data. :param Xtarget: Input data corresponding to the target domain. :param ytarget: Labels corresponding to the target domain. :param Xsource: Input data corresponding to the supplemental source domain. :param ysource: Labels corresponding to the source domain. :return: Trained classifier and best alpha value. """ self.Xtarget = Xtarget self.ytarget = ytarget self.Xsource = Xsource self.ysource= ysource self.datatype = _get_type(self.Xtarget) # Bracketing results = [] acc_zero = self._cv_train_with_alpha(0, results) acc_one = self._cv_train_with_alpha(1, results) search_state = {"left": 0, "right": 1, "acc_left": acc_zero, "acc_right": acc_one} self.bracket(search_state=search_state, results=results) if search_state["acc_right"] > search_state["acc_middle"]: alpha_opt = search_state["right"] elif search_state["acc_left"] > search_state["acc_middle"]: alpha_opt = search_state["left"] else: alpha_opt = self.gss(search_state=search_state, results=results) if self.verbose: print("Optimal alpha: {:.4f}".format(alpha_opt)) self._train_with_alpha(alpha=alpha_opt, Xtarget=self.Xtarget, ytarget=self.ytarget, Xsource=self.Xsource, ysource=self.ysource) return self.clf, alpha_opt def bracket(self, search_state, results): """ Recursively tests values of alpha until either reaching the required precision or bracketing a local maximum. :param search_state: State of current search for optimal alpha. :param results: List of weight, accuracy tuples. """ is_right_better = search_state["acc_right"] > search_state["acc_left"] middle = search_state["left"] + (search_state["right"] - search_state["left"]) / 2.0 acc_middle = self._cv_train_with_alpha(middle, results) # End if we found a bracketing or reached the precision threshold if (search_state["right"] - middle < self.delta) \ or (is_right_better and acc_middle >= search_state["acc_right"]) \ or (not is_right_better and acc_middle >= search_state["acc_left"]): search_state["middle"] = middle search_state["acc_middle"] = acc_middle return if is_right_better: search_state["left"] = middle search_state["acc_left"] = acc_middle return self.bracket(search_state=search_state, results=results) else: search_state["right"] = middle search_state["acc_right"] = acc_middle return self.bracket(search_state=search_state, results=results) def gss(self, search_state, results): """ Performs golden section search to optimize the hyperparameter alpha. :param search_state: State of current search over alpha. :param results: List of weight, accuracy tuples. :return: Best value of alpha found as measured by validation accuracy. """ ratio = 1 - (1 / constants.golden_ratio) longer_diff = max(search_state["middle"] - search_state["left"], search_state["right"] - search_state["middle"]) while longer_diff > self.delta: is_right_longer = (search_state["middle"] - search_state["left"]) < \ (search_state["right"] - search_state["middle"]) if is_right_longer: alpha = search_state["middle"] + ratio * longer_diff else: alpha = search_state["middle"] - ratio * longer_diff acc = self._cv_train_with_alpha(alpha, results) # New middle if acc > search_state["acc_middle"]: if is_right_longer: search_state["left"] = search_state["middle"] search_state["acc_left"] = search_state["acc_middle"] search_state["middle"] = alpha search_state["acc_middle"] = acc else: search_state["right"] = search_state["middle"] search_state["acc_right"] = search_state["acc_middle"] search_state["middle"] = alpha search_state["acc_middle"] = acc # New edge else: if is_right_longer: search_state["right"] = alpha search_state["acc_right"] = acc else: search_state["left"] = alpha search_state["acc_left"] = acc longer_diff = max(search_state["middle"] - search_state["left"], search_state["right"] - search_state["middle"]) return search_state["middle"] def _cv_train_with_alpha(self, alpha, results): """ Uses k-fold cross-validation to estimate the accuracy of a model trained with the given weight alpha. :param alpha: The value used to reweight the loss function for target and source data. :param results: List of weight, accuracy tuples. :return: Average accuracy over the k-fold cross-validation. """ skf = model_selection.StratifiedKFold(n_splits=self.k) acc_sum = 0.0 for train_index, val_index in skf.split(self.Xtarget, self.ytarget): Xtarget_train, Xtarget_val = self.Xtarget[train_index], self.Xtarget[val_index] ytarget_train, ytarget_val = self.ytarget[train_index], self.ytarget[val_index] self._train_with_alpha(alpha=alpha, Xtarget=Xtarget_train, ytarget=ytarget_train, Xsource=self.Xsource, ysource=self.ysource) acc_sum += 100 * metrics.accuracy_score(y_true=ytarget_val, y_pred=self.clf.predict(Xtarget_val)) acc_mean = acc_sum / self.k results.append((alpha, acc_mean)) if self.verbose: print("Weight: {:.4f}".format(alpha)) print("Validation Accuracy: {:.3f}\n".format(acc_mean)) return acc_mean def _train_with_alpha(self, alpha, Xtarget, ytarget, Xsource, ysource): """ Trains a model with a given weight alpha. :param alpha: The value used to reweight the loss function for target and source data. :param Xtarget: Target inputs. :param ytarget: Target labels. :param Xsource: Source inputs. :param ysource: Source labels. """ if self.datatype == 'numpy': Xtrain = np.vstack((Xtarget, Xsource)) elif self.datatype == 'sparse': Xtrain = sparse.vstack((Xtarget, Xsource)) ytrain = np.concatenate((ytarget, ysource)) ntarget, nsource = len(ytarget), len(ysource) wtarget = np.ones_like(ytarget) * alpha wsource = np.ones_like(ysource) * (1 - alpha) * ntarget / nsource wtrain = np.concatenate((wtarget, wsource)) self.clf.fit(Xtrain, ytrain, sample_weight=wtrain) def _get_type(data): """ Returns the type of the inputs to the model. """ if type(data) is np.ndarray: return 'numpy' elif type(data) is sparse.csr_matrix: return 'sparse' else: raise Exception('Unknown data type: ' + str(type(data)))
STACK_EDU
Holmusk is a digital healthcare startup based in Singapore with a focus on mental health and chronic conditions. Approximately a year ago, we chose to begin the process of migrating our backend into Haskell. As of March 2019, Holmusk is now powered fully by Haskell and this post is a summary of our experiences so far. These are all anecdotal, based just on our experience in this process. Generalise it with a pinch of salt. This post assumes that you have some idea of what Haskell is and why someone might use it for their personal projects. This is a summation of our experiences when trying to use it as a startup. You can put together a stronger technical team faster with Haskell Hiring Haskell developers can be a breeze because of its status as an outsider language. Anyone who knows Haskell can already be assumed to be technically curious and potentially the type of people you want in your company. There is also a stronger culture of remote work in the Haskell community so if you are open to remote work, the number of options available to you are large. The signal-to-noise ratio when hiring for Haskell developers is very high because not a lot of developers will apply to begin with, and most that do are people who you would be happy to have on your team. The quality of Haskell libraries, especially in the web domain, is amazing The library quality of Haskell is excellent. It has some battle-tested libraries which have well documented behaviours. Because of the flexibility of the type system, the libraries in Haskell tend to be much more modular. For example, the database pooling library that is used for your postgresql connections can also be used for your redis connections, or as a way to limit the number of concurrent API calls that your worker makes. This degree of flexibility means that you can safely modify the behaviour of existing libraries and have predictable results. Ability to pivot A modest test suite + the compiler gives you the ability to refactor pretty much any part of your application without fear Start-ups are primarily economic experiments and don’t necessarily place the concerns of its developers first. We all want to write great software that is elegant, has high test coverage and goes through many rounds of code reviews before it gets committed. In early stages of a product, this may not always be possible as there will be time pressure from external clients, and an understanding that what we build today might be thrown out tomorrow due to scope changes or a pivot. Haskell helps us maintain software quality in such scenarios, by forcing the implicit assumptions that we make in our head to be explicitly spelled out. The presence of property testing libraries means that we can codify our assumptions about the program and have it do the hard work of verifying if what we wrote aligns to our assumptions. This also means that code for a feature that we wrote many months ago and haven’t touched since can be expected to continue to work and be worked on at some point in the future. If you need to scale, Haskell’s runtime efficiency can save you a substantial amount of money, especially at the beginning when you are the most resource constrained. The Haskell backend that replaced our old backend was significantly more efficient. It allowed us to run fewer servers to support the same workload. The cost savings from a smaller AWS bill can make a difference if your startup is in the phase where every dollar counts. As an example, we cut our AWS expenditure by ~50% on a higher workload compared to our previous stack. Lack of conventions There are very few well-trodden paths in Haskell, expect to make lots of decisions about your stack. Apart from things like Yesod, there aren’t really many framework style Haskell projects that come with best practices on how to structure your application or guides on how to do common tasks like handling file uploads, user authentication, database management etc. Working on Haskell projects feels very much like working with react projects in that you have to bring in external libraries piecemeal for most of the features you want and the libraries aren’t necessarily designed with the assumption that they will be used together in that particular combination. Conventions also codify some hard-learned lessons, in their absence most of those lessons have to be re-learned by your team. Lack of libraries While the quality of libraries is excellent, the quantity can be rather limiting. Don’t expect the equivalent of passportjs or vanity. The best Haskell libraries tend to be rather low-level, providing the building blocks rather than solving one cohesive problem by themselves. For instance, you won’t find any ‘batteries included’ libraries for user authentication which support OAuth integration, work with most major providers, have a password reset functionality, etc. If you are coming from other technologies like Rails, where creating admin interfaces is a simple with activeadmin, you would be surprised at how cumbersome and repetitive some of the common tasks can get. First-class developer experience Expect Haskell to get only community support or delayed support in most places. When AWS releases interesting new tools like Lambda, expect to not be able to play around with them through Haskell right away. You will be a second-class citizen in most developer products. This translates to either missed opportunities or just extra time spent trying to setup common tools like CI systems which are designed to work well with popular languages but have trouble adjusting to Haskell projects. Breaking changes in libraries You can expect libraries to have breaking changes that have ripple effects throughout your project. Most Haskell projects don’t really have a concept of backwards compatibility. They regularly release breaking changes because they tend to be either hobby projects or projects that simply have ambitious technical goals which don’t necessarily align with your company’s goals of having stable interfaces. Upgrading to a new compiler version almost always ends up becoming a blocking task that requires many tangential changes or reviewing changelogs carefully of all the libraries that you depend on to make sure that there aren’t any behaviour changes. The bait of type safety Type-safety can be an alluring thing that you eventually end up spending too much time and effort to achieve. Typing everything has diminishing returns. Having experienced the bliss of GHC being your programming assistant, it can be extremely tempting to rely on evermore fancy type system features to make more illegal states unrepresentable. Soon, you find yourself encoding entire chunks of your business logic into the type system while a simple run-time check might have sufficed. Just because Haskell allows you to express something in the type system doesn’t mean that it is always a good idea to do so. These endeavours increase the on-boarding difficulty of new developers into your project, can drastically increase your compile times and just make code plain unreadable sometimes. If you have people in your team who already know Haskell and are itching to put it into production use, starting an incremental inclusion of it in your stack might be well worth the try. This gives you a chance to evaluate the pains of the language vs the rewards. Some of the positive outcomes of type-safety do require a certain critical mass of your product to be written in haskell/haskell-like languages to kick in so this is something to be aware of. If you have a greenfield project, some very specific requirements around performance, correctness and future malleability of the code in the face of massive code changes, Haskell provides a best in-class experience that can be worth the downsides. If you already have people know Haskell or are very interested to learn Haskell, give it a chance and at least 3-4 months. I think that the effort will be well worth the payoff, it certainly was for us.
OPCFW_CODE
In ClearCase, need CLI invocation to list all revisions I'm attempting to add support for reading ClearCase repositories to reposurgeon. I've been able to puzzle out most of what I think I need, but the documentation is a massive pile of confusing details that leaves one basic question obscure. How do I list all revisions in a CC repository? The minimum thing I need would be a time-ordered sequence of lines each containing a revision ID (path, branch, revision level) and its parent revision ID. Revisions for directories should be included because I think I'll need that to deduce deletions. If there's some way to force a listing of file deletion events, directory revisions can be omitted. It would be more convenient if I could get a four-column listing: revision-ID, parent ID, committer name, and timestamp. Given this, massaging the report into a git fast-import stream would be almost trivial. I'm still a little unclear on how VOBs relate to single-project repositories in other systems, so an invocation for "specified VOB" and another for "all VOBs" would be appreciated. The consequence of a useful answer to this question is that I will jailbreak ClearCase, solving the problem of how to migrate complete histories out of it to Git. First, check my old answer "What are the basic clearcase concepts every developer should know". See also "ClearCase advantages/disadvantages" TLDR: Clearcase is file-based, not repository-wide revision-based. A cleartool lsvtree would list revisions for a single file. But only (full) UCM baselines would give something resembling to a revision, and that only for an UCM component within a Vob. It would be more convenient if I could get a four-column listing: revision-ID, parent ID, committer name, and timestamp. Given this, massaging the report into a git fast-import stream would be almost trivial. That would be a cleartool lsbl -fmt, using a fmt_ccase syntax cleartool lsbl -fmt "%n %[owner]Fp %d" Note: getting the "parent" baseline is more complex: see "How to obtain previous baseline from stream". See more on how to migrate ClearCase to Git. Without UCM, that would be a mess. VonC: What is it about UCM that is important? @ESR UCM is the only mode in ClearCase where a collection of file versions is grouped inside a common label (baseline) within a group of file (component, akin to a Git repo). Without UCM, all you have left is a collection of separate files, each one with their own individual history. I can't find the F modifier for the owner property query in the ClearCase docs. What is that supposed to do? Are you on Windows? There would not be a F on Windows For base clearcase (on windows) listing the revisions can be done in two ways: clearexport_ccase The tool clearexport_ccase writes a textfile with most of the relevant informations of one VOB (one repository). The format is not described by IBM, as far as I know. You see all but not file delete or file rename. You can see merge operations. cleartool lshistory The subcommand lshistory of cleartool writes text file (format can be given as parameter) which seems to contain all meta information of the versions of the versioned elements. Unclear is whether it contains the merge informations of elements. With both outputs you do not get the versioned elements itself but only the database path. To retrieve the file from the database you have to use cleartool get. Because base clearcase is file oriented each file is versioned by itself. Building changesets from the file, versions must be done by putting all versioned files together by looking at branch, tag, author, checkin msg, checkin time ... Merging is done file wise and can be done by real merge or drawing a merge arrow, which means to show a merge in the revision tree of a file. Merging can be changed by drawing merge arrows in the version tree of a file. Additionally you have a config spec which defines which versioned elements you see in your view of a repository; e.g. they define which branch you see in your view. (Perhaps they can be ignored when converting a VOB.) Reading the information from the clearcase database is time consuming, so it should be possible to get the meta information first, convert them to check the conversion and merge them afterwards with the downloaded file versions. It exists a ClearCase to SVN converter which is working for base clearcase which converts file revisions to change sets. So it seems to be possible.
STACK_EXCHANGE
Boot Device Not Found And Clicking Noise When, where and how often do you find polynomials of higher degrees than two in research? If you have another hard drive use a USB to hard drive adapter to connect it to your pc with the broken hard drive. Unfortunately, that means the hard drive has failed (See the video -- Clicking Hard Drive- clicking starts at around 4:20). ScretAgentDan Virus/Trojan/Spyware Help 44 04-22-2013 05:13 AM Unable to open programs Good Morning, A friend called me the other night. http://evendirectory.com/boot-device/boot-device-not-found-and-noise-when-turned-on.html Hard Drive Makes Clicking Noise And Wont Boot Memtest UBCD Remember dust buildup in your PC can be a killer!! « 31.3GB found on a Western Digital 3TB drive | Create a restore point automatically » Thread Tools Replacing a Sony Vaio Laptop Hard Drive- Illustrated how to replace a notebook internal hard drive __________________ MemTest | IMGBurn | Seatools Drive Fitness | DataLifeguard |SeaFlash Rufus | Virus/Malware Help When I restarted it say boot device not found. If you're in another part of the world you'll need to visitHP Support Worldwideto get your region's contact information. Go to the BIOS settings once the laptop/computer starts and this would then fix your issue Hope that helps Thanks in my laptop i dont have any legacy bios settings acer I tried all the available testes. Putting the hard drive in the refrigerator for a while. All Rights Reserved. Boot Device Not Found 3f0 Additional batteries are affected. The third time the hard disk suffered physical damage and had to be replaced. weblink Memtest UBCD Remember dust buildup in your PC can be a killer!! 06-10-2013, 07:33 PM #8 Leafmanx3 Registered Member Join Date: Jun 2013 Posts: 4 OS: Win7 Quote: If you find this helpful make sure to click the white star under my name to give Kudos! Is there anything I can do to retrieve my files that were on the hardrive? Physical hard drive failure. If your problem is solved please click the "Accept as Solution" button ````````````````````````````````````````````````````````````````````````````````````````````````````` Reply 0 0 NitinKhandelwal Honor Student Posts: 2 Member Since: 06-29-2012 Message 3 of 9 (5,841 Views) Report Boot Device Not Found 3f0 First Time Here? Get More Info As for all that clean room crap, I've done it. Hard Drive Makes Clicking Noise And Wont Boot share|improve this answer answered Jan 30 '11 at 1:48 ocodo 1,3821417 I might try the freezer method out of curiostiy. All very interesting and informative. The hardrive makes a weird sound... his comment is here I realise I'm probably SOL but it's worth asking. hard-drive recovery share|improve this question edited Jan 30 '11 at 3:33 asked Jan 30 '11 at 1:19 LRE 5662716 1 Get a new one and restore your backup, may take The one that came with the notebook will be either a 5400 or 7200 RPMs. Use Ubuntu Live CD Try Ubuntu to see if can backup all important files, documents, music etc... Discussion Boards Open Menu Discussion Boards Open Menu Welcome to the Forum! Ask ! this contact form When connected to my PC, I only see the drive when I open computer management but its not able to read what's on it. Please help!! 8 answers Last reply Aug 16, 2012 More about bootable device price_thAug 16, 2012, 2:56 PM When you say weird sound, clicking? You could try testing the battery contacts on the computer for continuity with a multi-meter. Please let me know if you have questions and what the results are if you get a chance Reply 0 0 Rogerscomputer Student Posts: 1 Member Since: 08-20-2013 Message 8 of But the battery is sealed and I could not get to it! Will helium in the tires of bike make it lighter? Download Ubuntu Live CD ISO image and burn the ISO image to a cd using Imgburn (how to burn an ISO image using Imgburn). i have also reinstaled windows 3 times now with the same error every time.I have also spent three hours and $135 with tech support on the phone, and they cant figure The trick normally is to reset the computer by removing the battery. The following steps were taken from here.Windows 8 : From the Start screen, type msinfo32 . You may not get another chance. Score 0 bixahawk June 18, 2016 2:56:40 PM I disconnected the hard and reinserted it and it works like normal.....very easy! navigate here VeroGAug 16, 2012, 4:59 PM Oh well.. This may sound harsh but it works. Use Ubuntu Live CD to Backup Files from Your Dead Windows Computer __________________ Virus Help Ubuntu Live CD PC Running Slow? I have alottt of files though...almost 40gigs of music and pictures...i dknt think a dvd will be enuff 06-10-2013, 07:31 PM #7 JackBauer_24 TSF Team, Emeritus Join I have Microsoft XP Home edition Service Pack 3, i do have a... How can I tell if I have permission to run a particular command? 12 Birds in the petshop Oddly Even, Positively Negative more hot questions question feed about us tour help Complete Steps 1-3 to Uninstall both drivers (A & B as listed in the image below)Also make sure you BIOS is up-to-date.BIOS stands for Basic Input/Output System/Service. Score 0 mbarnes86 a b D Laptop June 19, 2016 1:51:58 AM Hi the remove & insert the hard drive may well work. There is also the freezer method, put the drive in the freezer for about 30 mins, take it out, leave it in a cool dry area for about 5 mins, and Browse other questions tagged hard-drive recovery or ask your own question. I had to hardboot it. If your problem is solved please click the "Accept as Solution" button ````````````````````````````````````````````````````````````````````````````````````````````````````` Reply 0 0 Captbrax Student Posts: 1 Member Since: 07-03-2012 Message 5 of 9 (5,712 Views) Report Inappropriate Operating System Not Found + clicking noise This is a discussion on Operating System Not Found + clicking noise within the Windows 7 , Windows Vista Support forums, part of the In the future backup all important data to an external hard drive. __________________ Virus Help Ubuntu Live CD PC Running Slow? Do you have another PC that you can hook the HD to as slave (not a boot device)?
OPCFW_CODE
"Please enter a value for amount." when creating invoice with discount with custom price level outside the US I'm attempting to create invoices via the SOAP web services. The add action is failing with the following message: ERROR|USER_ERROR|Please enter a value for amount. There is no amount property on the body of the invoice, just its line items. I checked every line item for the presence of an amount property, and they all had one. Eventually, I isolated the issue to the discount line items, as invoices without them were successfully being created. The discount line items are of the form: <ns9:item xsi:type="ns9:InvoiceItem"> <ns9:item xsi:type="ns1670:RecordRef" internalId="80608" xmlns:ns1670="urn:core_2019_1.platform.webservices.netsuite.com"/> <ns9:amount xsi:type="xsd:double">-6.3</ns9:amount> <ns9:quantity xsi:type="xsd:double">1.0</ns9:quantity> <ns9:price xsi:type="ns1671:RecordRef" internalId="-1" xmlns:ns1671="urn:core_2019_1.platform.webservices.netsuite.com"/> <ns9:location xsi:type="ns1672:RecordRef" internalId="118" xmlns:ns1672="urn:core_2019_1.platform.webservices.netsuite.com"/> <ns9:taxCode xsi:type="ns1673:RecordRef" internalId="63807" xmlns:ns1673="urn:core_2019_1.platform.webservices.netsuite.com"> <ns1673:name xsi:type="xsd:string">AVALARA-VAT - (63807)</ns1673:name> </ns9:taxCode> <ns9:customFieldList xsi:type="ns1674:CustomFieldList" xmlns:ns1674="urn:core_2019_1.platform.webservices.netsuite.com"> <ns1674:customField xsi:type="ns1674:StringCustomFieldRef" scriptId="custcol_sq_referenceid"> <ns1674:value xsi:type="xsd:string">51962_discount</ns1674:value> </ns1674:customField> </ns9:customFieldList> </ns9:item> Some details about the discount line item (not sure if these are relevant): The discount line item itself has a rate, but I've been creating invoices with an amount because these invoices stem from data transformed as part of an integration, and I only have access to the external system's discount amount. The discount line item has the Non-Taxable tax schedule, with explicitly defined sales and purchase tax codes for the UK of UNDEF-GB. The discount line item is assigned an account which includes all subsidiaries. 2 things I've found are: This exact same discount line item has been in other successfully synced invoices, the only difference I've found being that the customer, location, and subsidiary associated with this invoice is in Great Britain, not the US. I'm can create this invoice by not specifying the price level as -1, and so have it be undefined. I have 2 questions: Why does NetSuite not think I'm providing a value for the amount of this discount line item only on invoices created in the UK (or not in the US)? Is it good practice to have discount line items have the custom price level (-1) or an undefined price level (or does it matter)? Thanks for taking the time to think about this problem and let me know if you need any more details. What I can suggest you is to try doing all the things in the same order using UI which now you are doing through SOAP Services. There maybe an issue with the order in which you are setting the data so just give it a try. I agree with @Finnick, the order in which you make selections matters sometimes, this will be evident while working in the UI. Also while in the UI, take note of all required fields (main and line) and make sure they are all set in your code. Thanks for the suggestions! I attempted to create an invoice in the UI identical to what I'm creating via the API, and I encountered the same error after pressing Save. I also attempted to create an invoice in the UK in a separate NetSuite sandbox and did not run into this issue. These behaviors makes me think that there may be a script that may be performing some action on Save that's actually triggering the error. What do you think? This error is a bit of a red herring. While it's not clear what's causing it, price levels are intended to be used with items that are sold. Discount items are not sold items, and so it doesn't make NetSuite since to assign them a price level. Removing the price level from discount line items so it is undefined is valid and prevents the error from being thrown.
STACK_EXCHANGE
How can take a better picture of this creek under bright sun light and dense vegetation on banks? Here is the first picture with 0 ev, f/2.8 and exposure time of 1/80. Here is second pic with -1.33 ev, f/3.2 and exposure time of 1/150. I think there are several problems with them: 1) It is nothing like what I saw with my eyes: The water is like mirror in the photo and hence it is very hard to make out where the river boundary ends and the river bank starts. 2) The sun came down from the top and create a very high contrast. If I lowered the ev (as in 2nd picture), then the area around the river became too dark. I don't think bracketing will help in this situation because trees and water flow always change slightly between frames. I think the first problem is caused by the polarised filter on my len. In general, what are the good techniques to improve these photo? I reckon that the biggest issue with this scene is the stark contrast between the highlights (upper centre extremities) and the shadows (mainly the river banks). A sensor or film with a better DR would certainly help. Another option would be to use a speedlight or strobe to the dark areas, with softboxes to soften the shadows. I have only limited knowledge on the latter, hence why I am not putting this in an answer No problem. Like I said, the high dynamic range of the scene is overpowering the capabilities of your sensor. Also, a polarising filter would, if positioned correctly in relation to the reflection, decrease the reflection of the water instead of strengthen it. I'd have bracketed it even wider than you did, 0EV & +/- 2EV, then HDR afterwards - see https://photo.stackexchange.com/a/107929/57929 Gradual ND will help but it is not practical. As above point out, simply use exposure bracketing and merge the image. I bet future camera will incorporate this features when most phone camera make use of the HDR merge and hurting the camera sales : https://www.theverge.com/2018/10/25/18021944/google-night-sight-pixel-3-camera-samples Hi Tim. Your comment is a decent answer. If you make it answer, it can be voted on, and more importantly, edited in the future if you need to add information, etc. Please see: Please put your answers in the answers section, even if they're short and please, consider posting the next image here too, so we can see where you went with it :) There is only so much you can do in the middle of the day when the sun is so bright and the shadows so deep. Go back just before sunrise and watch the place as the sun comes up, return just before twilight and watch as the lighting changes as the sun sets. You may need to visit it several times as atmospheric conditions can be vastly different form day to day. Also you may need visit at different times of the year as the direction of the sun may be better in different seasons. Try to learn how cameras see and record light, they are not nearly as good as your brain so you need to understand that and compensate with Manual adjustments to the settings of the camera. I can imagine this place with rays of golden sun light piercing the canopy of the trees and tuning it into a magical looking forest glen. Thanks. They are very good advices! Check these contrast filters, personally never used them, so don't have any first hand experience: From Tiffen's website: "Controlling contrast is difficult in bright sunlit exteriors. Exposing for either highlights or shadows will leave the other severely under-or over-exposed. Tiffen was recognized with a Technical Achievement Award from the Academy of Motion Picture Arts & Sciences for the innovative design of this popular Motion Picture and TV filter. It uses the surrounding ambient light, not just light in the image area, to evenly lighten shadows throughout. Use it where contrast control is needed without any other effect on sharpness or highlight flare being apparent." https://www.bhphotovideo.com/c/buy/Contrast/ci/159/N/4026728339 While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - From Review Thanks for the negative votes, I'm sure you've found my link useful ;-)
STACK_EXCHANGE
Improvement to "Installation" instructions So some background. I have only started with Rust at the beginning of this month and I got interested in blockchains last Sunday. Therefore that might explain why I had difficulty following the instructions on the installation steps page. Now the problems I had are more related to Rustup but regardless of this fact, the documentation might need some reworking in my opinion. There is a section related to installing Rustup where we are told to run: source ~/.cargo/env I didn't install Rustup via the shell script but instead used the Arch community package and as result, I didn't have the env file. However, it didn't seem to matter really as I was eventually able to go through the creation of Substrate chain tutorial without any issues. I agree that yes it's my fault for not following the instructions to the letter but maybe this issue can be highlighted for people who installed Rustup the way I did. While the previous issue was my fault, the section about the toolchain really slowed me down as I wasn't sure what to do because, Polkadot was mentioned while I was interested in Substrate. I simply ploughed ahead and installed the latest nightly toolchain and I had no problem going through the above referenced tutorial. This might well be due to my lack of understanding about the whole Rust ecosystem but I recall reading that knowledge of Rust would not be required for that tutorial. Now, I would be happy to help rewrite that part of the documentation as long as it is deemed to be necessary/desirable and also if I can get some degree of support either from here or in Matrix. Just my 2 cents as they say... Thanks for the feedback @DavidSSL - I would be happy to review a PR on this to improve the docs. IIUC the env file is just to get the $PATH variable updated: #!/bin/sh # rustup shell setup # affix colons on either side of $PATH to simplify matching case ":${PATH}:" in *:"$HOME/.cargo/bin":*) ;; *) # Prepending path in case a system-installed rustc needs to be overridden export PATH="$HOME/.cargo/bin:$PATH" ;; esac So including a note about that for other distros could be good. RE: Polkadot mentioned, it is built with substrate, and as a best practice for people actually using the template for a production product like it, it is very important to set a known good toolchain. What would you suggest (in your PR) to make that more clear? Hey @NukeManDan, the "env" part is clear for non ArchLinux other environments. I will need to spend some time on another machine in order to be able to update the documentation properly on this one. As for Polkadot, I think I have to elaborate on the workflow. I wanted to simply run the tutorial mentioned above as a result of which I am redirected to the installation page we're currently discussing. Then you go through the process of: Installing Rustup Installing a given nightly toolchain At this point I am introduced to 2 things: Rustup's nightly build Polkadot. Now from the point of view from someone who doesn't know much about Rust and its ecosystem Parity and its products I get totally puzzled because all I want to do is to be able to run the tutorial which doesn't involve Polkadot. So what goes through my mind is the fact that ok, I now need to find some date for the nightly version of Rustup when it comes to Substrate. I failed to make the connection between Polkadot and Substrate which you've mentioned above. However, looking at the latest version of Polkadot, I see three different dates and I don't really know what to do. Now my assumption is that it is possible that from some Polkadot tutorials, we get redirected to this installation page and hence why this information about nightly Rustup is important in this case. Let me create a PR which I'll send and you can guide me better.
GITHUB_ARCHIVE
Increment an iterator standard map ALL, std::map<int, std::string> addressee; std::map<int, std::string>::iterator it1, it2; for( it1 = addressee.begin(); it1 != addressee().end(); it1++ ) { bool found = false; for( it2 = it1 + 1; it2 != addressee.end() && !found; it2++ ) { if( it1->second == it1->second ) { printf( "Multiple occurences of addressees found" ); found = true; } } } gcc spits out an error: no match for operator+. This code is a simplified version of what I'm trying to do right now. I guess I can use std::advance(), but it seems it just going to be a waste of the function call. Is there a better fix for that? "a waste of a function call". A function call is not a "waste". it2 = it1; ++it2; before the loop and then for(; it2 != ...) @MooingDuck, no it is not a waste. But it is required some additional operations which I'd rather avoid. And it requires some time to do. @Rado, simplest solution. Thank you. I guess I had to look at it from different angle. @Igor: What "additional operations" are you imagining? How much "time" do you think it "requires"? Did you measure it? If you did, you'd be surprised. Right now you're just guessing, coming to the wrong conclusions from those guesses, then using those wrong conclusions to arrive at the wrong solution (or, rather, to skip over the right one). @Igor: It sounds like you need to unlearn habits from other languages. It is very easy for a C/C++ compiler to do function inlining which means that C/C++ does not have any of the performance problems you may be used to regarding short, simple operations, unless you do things to defeat function inlining (e.g. dynamic polymorphism with virtual functions). Also, ++it2 is just as much of a function call as std::advance(it2, 1). std::map does not have random access iterators, only bidirectional iterators, so there's no + n operation. Instead, use std::next: #include <iterator> #include <map> // ... for (auto it1 = addressee.begin(), e = addressee.end(); it1 != e; ++it1) { for (auto it2 = std::next(it1); it2 != e; ++it2) { if (it1->second == it2->second) { // ... break; } } } In fact, you should always use std::next, since it knows which iterator category its argument has and what the most efficient way to compute the next iterator is. That way, you don't have to care about the specific container you happen to be using. @Kerrek has already pointed out how to handle the problem you're having at the syntactic level. I'm going to consider the problem at a more algorithmic level--what you're really trying to accomplish overall, rather than just looking at how to repair that particular line of the code. Unless the collection involved is dependably tiny so the efficiency of this operation doesn't matter at all, I'd make a copy of the mapped values from the collection, then use sort and unique on it to see if there are any duplicates: std::vector<std::string> temp; std::transform(addressee.begin(), addressee.end(), std::back_inserter(temp), [](std::pair<int, std::string> const &in) { return in.second; }); std::sort(temp.begin(), temp.end()); if (std::unique(temp.begin(), temp.end()) != temp.end()) { std::cout << "Multiple occurrences of addressees found"; found = true; } This reduces the complexity from O(N2) to O(N log N), which will typically be quite substantial if the collection is large at all. I would have dropped everything into a set or an unordered_set myself, inserting things that aren't yet in the set and reporting duplicates for things that already are. @Hurkyl: if you expect to see quite a few duplicates, that's a reasonable possibility too (but if you're going to insert most items, a vector is generally faster than an an unordered_set, and quite a lot faster than a set).
STACK_EXCHANGE
Smart card export private key bitcoin What is the Coinkite Coldcard. It's a Bitcoin homage wallet, so it does transactions and can be able offline. BIP39 became, which were you can only the secret words with paper, and have nodes of sub-accounts and worrying independent payment methods. Now with BIP39 passphrase makeunlocking up to 5. NO staked software required. NO cooler 'app' on your ability, works with the juicy wallets already Being, and more to let. Simple lib, bridge design, no fancy websites, no redundant acres. Real duck security chip. Our technology key is smart card export private key bitcoin in a dedicated crypto chip, not the kazakhstan micro's flash. MicroSD engage slot for consumer and gain momentum. If you are already an Opendime reseller or NEW and worrying in creating, reach out to have for has smart card export private key bitcoin 50 security. We propose an optional "performing PIN whale". If you continue that PIN code, towards of the "upcoming" PIN hand, nothing else is shown on the market and everything seems as much However, the bitcoin key written is not the official key. It is not a completely trustless wallet. To take smart card export private key bitcoin advantage of this site, you should put some Bitcoin into the traditional accounts. How much you are dedicated to implement or what you trade to make it designed, we don't know. The "interrogation" contest will still be frozen from the only BIP39 mans, so you don't think to back it up tall, but there will be no way to get from that kind back to the other wallet with the payment funds in it. We now dispatch BIP39 passphrases so you can smart card export private key bitcoin create an advisory segment of distraction cobblers. We find it a plethora smart card export private key bitcoin that existing Bitcoin garments smart card export private key bitcoin the oregon director with your valuable conclusions. This little wonder is very accommodating: Combining those features, and with substantial protocol design, we can catch cryptographically, that the community must know the PIN to make the volatility. An attacker cannot go-force all 10, husbands of a four-digit attainable shea: This remains true even if they made the date from the right or not-replaced the user in the main income. Generated details are available in this location paper and the huge source original is available as well. To lower Evil Maids, and smart card export private key bitcoin financial investment with processor access to your Coldcard, we will drive our app with a normal key. Unblocking that day's status is not available by dedicated wallet, so a day bit of advertising cannot recognize it. The process copper for this is expanding on the top fifty of the world, so any suspicious tampering by those parameters will be happy as well. This product is offering upgradable in the secretary. Responded motor must be cast by the regulation, but we can solve third malicious software to run as well. We have so much needed protection for the only secret, that we find it's safe to return potentially enormous success onto this platform. If you don't do think doing that, then it's a superb you can make. We're sheer that altcoin news will be able to take our system and learn it to make her specialty pharmaceutical companies. It should consider that all of the party is concerned in MicroPython. One means you can trade python commands directly into the downside. You might use this to provide new features, assay conversely transactions, or do not competing requests. As a city, you can also lent the commercial and prosper it yourself to unravel your Coldcard. Buy Now Whatever is the Coinkite Coldcard. It lulls how to siphon transactions, so you can see what you are creating. Numerous trading advice design stages Micropython and you can trade it. Hypercalcemia-sized numeric keypad makes using PIN strong and communication. Larger x64 OLED real. Estate school messages to agree control over private key. It's an experienced trader, and we are affecting on ads of new concepts, like: Flock in multisig wallets, with smart card export private key bitcoin signers. Rind orders If you are already an Opendime reseller or NEW and virtual in handing, reach out to make for orders over 50 reduction. Key Storage We find it a primarily designed that depending Bitcoin wallets trust the variance microprocessor with their additional secrets. Attar Lights To mediocre Period Maids, and other developed people with compelling access to your Coldcard, we will make our daily with a factory key. Church Upgradable and Expandable That exposure is purchasing upgradable in the related. Online Parmesan Agriculture is online here. Buy Now Trace Sources.. Les informants VoIP, vendues partout kays le pays, sont une methode detournee rite haze outre cette interdiction. Le accord en masse a Man et aux Emirats Netflix smart card export private key bitcoin pas bloque aux Emirats arabes unis. Les autorites sont davantage preoccupees par les utilisateurs utilisant des VPN checker commettre des situations graves, et historical particulierement ceux qui touchent directement aux lois dethique precedemment mentionnees reserves cet preliminary.protocole est hautement securise et toujours tres repandu.. Techies and data are among the most active users of bitcoins, he does. Now makes the country united is that it can be smart card export private key bitcoin to move money across the globe more and anonymously, and that it is critical of south from any financial bank or short. Anywhere, its association with the security drug related. Kale last year in India, two other areas from Mumbai were orchestrated for possession of LSD, a growing passion.smart it online, fashionable for it in the cryptocurrency..
OPCFW_CODE
Error when running scan Recently while following the following instructions https://appdefensealliance.dev/casa/tier-2/ast-guide/static-scan I'm getting the following error: ─────────────────────────────────── Running ──────────────────────────────────── [WARNING] Function: __main__.cli_scan_wrapped, type: Some keys were not recognized: path Traceback (most recent call last): File "/nix/store/jn52wllfm0v0kyx948n4aj8n0jgbqby4-skims/utils/function.py", line 193, in wrapper return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/jn52wllfm0v0kyx948n4aj8n0jgbqby4-skims/cli/__init__.py", line 118, in cli_scan_wrapped success: bool = run( ^^^^ File "/nix/store/mw62irkbxfqm04iaqi9kdy83vb7i8i53-skims-runtime/lib/python3.11/site-packages/aioextensions/__init__.py", line 292, in run return asyncio.run(coroutine, debug=debug) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/frj6ibpd5478z92x39vb1hz3g2gryl79-python3-3.11.2/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/nix/store/frj6ibpd5478z92x39vb1hz3g2gryl79-python3-3.11.2/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/frj6ibpd5478z92x39vb1hz3g2gryl79-python3-3.11.2/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/nix/store/jn52wllfm0v0kyx948n4aj8n0jgbqby4-skims/core/scan.py", line 227, in main load(group, config) # NOSONAR ^^^^^^^^^^^^^^^^^^^ File "/nix/store/jn52wllfm0v0kyx948n4aj8n0jgbqby4-skims/config/__init__.py", line 140, in load raise confuse.ConfigError( confuse.exceptions.ConfigError: Some keys were not recognized: path [INFO] Success: False This worked for me a bit over a week ago, but now I'm getting the error. Would appreciate some help! you could try to add the following keys in your configuration sast: include: [] this is the documentation https://docs.fluidattacks.com/machine/scanner/standalone/configuration/ Yes using sast instead of path is the solution, CASA needs to update their docs sast: include: - .
GITHUB_ARCHIVE
Slippery when wet Sometimes people are warned of slippery surfaces with signs saying "slippery when wet". I would like to know how to phrase such a sign in Latin. Translating a full sentence is easier: This road is slippery when it is wet. Haec via lubrica est quando madida est. This might not be perfect, but I think it works. (Feedback is welcome.) The thing I have trouble with is squeezing this into a more concise form suitable for a sign. In English one would write "slippery when wet", and in Finnish (directly translated to English) "slippery as wet", and I guess other languages have other constructions. Therefore I see no obvious choice of structure in Latin. Like many Latin adjectives, madidus has a corresponding verb: madere. Using that, I would write lubricus madens. Is this a good way to phrase it, or is there something better? I would also like to know how to do this for adjectives without a verb. How can I translate short expressions of the form "<adjective> when <adjective>" if there is no corresponding verb? I don't know a suitable structure. Well, I think it is important to keep in mind that "slippery when wet" is merely a shortening of the full sentence "This surface is slippery when it is wet," which you kind of alluded to in your question. When one thinks about other Latin phrases and mottoes in the modern world, they too are often shortened forms of more elaborate sentences. So, I think it is perfectly apt to say something along the lines of lubrica cum madida, as compared to the implied, full sentence (Haec cutis est lubrica cum madida sit.). At least, that's the way I look at it. (Sorry for the long comment...) @SamK That's a good way of looking at it. Can you write that as an answer? Seeing the short expression as a shortening of a full sentence and forming it accordingly makes a good answer. Oftentimes on warning signs, or signs of any type really, a short phrase will be used. These phrases, such as "slippery when wet," are incomplete sentences, so translating them can be a bit tricky. In actuality, these phrases are actually parts of longer, complete sentences, so if one can translate that sentence, and then isolate the part that is actually used, a decent translation can be found. For the example Slippery when wet one can assume the context of the phrase, and thus the complete sentence is This surface is slippery when it is wet. This is possible to translate with relative ease. Haec cutis est lubrica cum madida sit. The bolded part of this phrase is the translation of the original warning, and can thus be isolated and used in its stead. lubrica cum madida I used cum here because the "cum madida sit" explains why the surface is slippery, making this a causal cum clause. There may be other methods of translation (you suggest quando) that can also work. All one has to do is isolate those words that correlate to the English, as long as the meaning is reasonably preserved. Now, I did not pull this method out of thin air. Oftentimes, the reverse process is used for translating Latin phrases into English. Take the Latin motto of the U.S. state of West Virginia, Montani semper liberi. In English, we say the translation is "Mountaineers are always free." The Latin leaves out the form of esse, which is quite common in Latin poetry as well. So leaving out extraneous words that are easily implied is not at all unprecedented, and it makes sense to use that logic here. I tend to disagree that "lubrica cum madida" works in this context. Although ellipsis is certainly common in Latin, I have never seen cum used alone with an adjective or noun. @brianpck Well, this isn't exactly a true Latin idiom. And there are a few different ways to translate it, I am just most familiar with the using cum clauses. Do you have a suggestion to improve it so that it would be more in line with traditional Latin form?
STACK_EXCHANGE
Invisible Objects Patch v1.4 This patch extends the incomplete compile time option implementation of invisible objects in Nethack version 3.4.2. Save and bones files are compatible with vanilla Nethack versions 3.4.x. Thanks to all the denizens of rec.games.roguelike.nethack who gave suggestions and advice. I left all of the invisible object code inside #ifdefs so if you want to turn the patch off you can comment out the #define INVISIBLE_OBJECTS in config.h and recompile. Here's what was already coded by the Dev Team: If you don't have see invisible, you can't see an invisible object. You can feel it though, just as when blind ("You feel here an invisible dusty 1 of 1250 randomly generated objects will be invisible. Invisible monsters leave invisible corpses. Monsters without see invisible won't pick up invisible objects. Dipping an object in a potion of invisible makes it invisible. Dipping an invisible object in a potion of see invisible makes it Monsters without see invisible won't steal invisible objects. Invisible weapons against monsters without see invisible get +3 to-hit Cancellation will turn objects visible. Zapping a wand of make invisible will turn objects invisible. Here's what I added and changed: If you don't have see invisible: You can't tell what an invisible object is when a monster picks it up. "Spot picks something up." as opposed to "Spot picks up an invisible long sword". Likewise when a monster drops, quaffs, zaps, wears, wields, etc. invisible objects. Invisible Mjollnir will hit you or fall at your feet rather than be caught on its return. Returning invisible boomerangs will hit you rather than be caught. A kicked or thrown invisible object won't been seen in flight. In order to side-step the whole moving-a-boulder-you-didn't-know-was-there issue, boulders just can't be invisible. You can't read an invisible object (unless it's a scroll and you know the words already). An invisible object that you haven't seen appears as when you are blind. ("an invisible ring" not "an invisible emerald ring") (Changes not affected by see invisible:) An invisible touchstone doesn't work and a touchstone doesn't work on invisible objects. (No streaks are created.) Invisible tinning kits and horns of plenty make invisible objects. Invisible mirrors don't work. Invisible figurines yield invisible monsters. Mummy wrappings are immune to being turned invisible. Dying invisible creatures leave invisible body parts (unicorn horns, dragon scales, teeth). A monster revived from a corpse which was made invisible post mortem is invisible upon resurrection. Similarly, a monster revived from a corspe made visible post mortem is visible upon resurrection. Monsters made flesh from invisible statues are invisible and invisible monsters make invisible statues when stoned. Invisible fortune cookies have invisible fortunes. Polymorphed objects retain invisible status. Breaking a wand of make invisible affects objects. Magicbane resists being made invisible. Invisible towels/blindfolds don't blind you (despite Tina's objections). Cursed invisible towels/blindfolds aren't considered trouble for praying purposes. Creatures born from invisible eggs are invisible. Invisible lenses don't improve searching and reading chances (but still work for blocking blinding attacks). Eggs you lay have your invisibility status. Gold pieces are never invisible. Level builder can make invisible objects. (Thanks to Pasi Kallinen for helping me figure this out.) Chance of randomly generated object being invisble is now 1 in 500. Some changes I considered but decided against: Traps on invisible chests are NOT harder to detect or disarm. Reason: It's currently not harder when you're blind. Invisible scrolls of scare monster still work the same. Reason: blind monsters are still affected. Zapping yourself with a wand of make invisible does NOT affect your inventory. Reason: Zapping yourself with a wand of polymorph doesn't affect your inventory either. An invisible cornuthaum still affects your charisma. So sayeth Pat Rankin: "Well obviously wearing a cornuthaum boosts (for wizards) or lowers (for non-wizards) the character's self-esteem. The resulting resulting change in demeanor is what the other creatures notice, not the hat itself. :-}" (part of) The Dev Team hath spoken. Changes in version 1.1 I added some "monster swings something!" type messages for artifacts. I also put in a few little things in the writing code and other places that I missed in v1.0. Nothing big. Changes in version 1.2 Updated to 3.4.1 code base. I moved the oinvis bit in the object struct to be after all the other bitfields. This allows save/bones file compatability with vanilla 3.4.1. Took out kicking unseen object penalty, since there is none for being blind. Invisible pies will now blind you again. I was swayed by the argument that invisible glop still gets in your eyes. Changes in version 1.2.1 Fixed bug with Frost Brand message. Changes in version 1.2.2 Fixed a few dknown inconsistencies. Changes in version 1.3 Updated to 3.4.2 code base. Changes in version 1.4 Updated to 3.4.3 code base.
OPCFW_CODE
// Package analysis contains methods for building coverage statistics. package analysis import ( "log" "strings" "github.com/eltorocorp/drygopher/drygopher/coverage/analysis/analysistypes" "github.com/eltorocorp/drygopher/drygopher/coverage/analysis/interfaces" "github.com/eltorocorp/drygopher/drygopher/coverage/pckg" ) // API contains methods gathering coverage statistics. type API struct { raw interfaces.RawAPI } // New returns a reference to an API func New(rawAPI interfaces.RawAPI) *API { return &API{ raw: rawAPI, } } // GetCoverageStatistics gathers and returns coverage statistics for the specified packages. func (a *API) GetCoverageStatistics(packages []string) (result analysistypes.GetCoverageStatisticsOutput, err error) { log.Println("Aggregating packages stats...") var testedPackageStats pckg.Group var untestedPackageStats pckg.Group testFailuresEncountered := false for _, pkg := range packages { if len(strings.TrimSpace(pkg)) == 0 { continue } failedTest := false var rawPkgCoverageData []string rawPkgCoverageData, failedTest, err = a.raw.GetRawCoverageAnalysisForPackage(pkg) if err != nil { return } if failedTest == true { testFailuresEncountered = true } if len(rawPkgCoverageData) == 0 { untestedPackageStats = append(untestedPackageStats, &pckg.Stats{ Package: pkg, Estimated: true, }) continue } var packageStats *pckg.Stats packageStats, err = a.raw.AggregateRawPackageAnalysisData(pkg, rawPkgCoverageData) if err != nil { return } testedPackageStats = append(testedPackageStats, packageStats) } result.TestedPackageStats = testedPackageStats result.UntestedPackageStats = untestedPackageStats result.TestFailuresEncountered = testFailuresEncountered return }
STACK_EDU
How to Access from Off-Campus or Using Campus Wi-Fi You can access a Windows desktop session using your PC, Laptop, Tablet, or Smartphone from off-campus (Home, work, etc.) anywhere you have Internet connectivity. - Watch the video GNTC VDI Access for Students for more information. Additionally, you can access a Windows desktop session from anywhere on campus using the GNTC Student wireless network. This wireless network connection currently exists at the Floyd County, Walker County, Gordon County, and Catoosa County campus locations. It will be available soon at our Polk County and Whitfield/Murray County Campus locations. The steps to connect through VMWare Horizon to access the Windows desktop are the same for both of these scenarios once you have established an Internet connection. No special software is required on your device. However, in certain scenarios, your experience may be improved by installing a small client software piece on your device. See the Install Horizon Client Software section below for details. How to Access VMWare Horizon Desktop - From your laptop, tablet, or smartphone, simply open a web browser and navigate to the URL: https://LabPC.gntc.edu - You will see the VMWare Horizon Screen and the options of installing the VMWare Horizon Client or simply using HTML to access it. In most access scenarios, VMWare Horizon HTML Access is recommended and provides excellent functionality without the need to bother with installing the Horizon Client. - You will be prompted for your GNTC student login/email credentials. This will be your username (not full email address) and password. - You will then be connected to a standard Windows operating system desktop. Saving Your Work Remember that all desktop sessions are non-persistent. This means that the Windows environment is refreshed at each login with no data retention. All work to be saved should be saved to your GNTC OneDrive location. A link is provided on the VDI session Windows desktop to this location. You will be required to log in with your student credentials to access this storage location. Syncing for OneDrive is NOT supported due to the nature of the environment. For more information about using OneDrive, see Using OneDrive at GNTC. Installing Horizon Client Software (Optional) If you choose to install the Horizon Client from the selection page shown above, then you will need to know the device's Operating System so as to choose the correct client from the selection chart. This is a VMWare page, but at the time of this document, the choices looked like this: The selections for common devices would be as follows: - Apple iPhone or iPad – VMWare Horizon Client for iOS - Any Android-based phone or tablet – VMWare Horizon Client for Android - Windows-based (newer) – VMWare Horizon Client for Windows – 64-bit - Windows-based (older) - VMWare Horizon Client for Windows – 32-bit - MacBook – VMWare Horizon Client for Mac - After identifying the best choice, proceed to the associated "Go to Downloads" page - Download the file and run. - Follow installation instructions. - From the Horizon client, click "Add Server", then enter LabPC.gntc.edu to connect. You will then be prompted for your username and password to connect to a desktop session.
OPCFW_CODE
import {Api} from './Api' import {Router} from 'frontful-router' import {TodoItem} from './TodoItem' import {action, computed, untracked} from 'mobx' import {model, formatter} from 'frontful-model' @model.define(({models}) => ({ router: models.global(Router.Model), api: models.global(Api), })) @model({ todoId: null, items: formatter.array(formatter.model(TodoItem)), }) class Todo { initialize() { return untracked(() => { if (this.todoId !== this.api.todoId) { this.todoId = this.api.todoId return this.api.getItems().then((items) => { this.items = items }) } }) } add = (text) => { if (text) { const item = new TodoItem({text}, this.context) return this.api.addItem(item.serialize()).then(() => { this.items.push(item) }) } } removeItem = (item) => { this.items.splice(this.items.indexOf(item), 1) } clearCompleted = () => { const ids = this.items.reduce((ids, item) => { if (item.completed) ids.push(item.id) return ids }, []) return this.api.removeItemsById(ids).then(action(() => { for (let i = 0; i < this.items.length;) { if (this.items[i].completed) this.items.splice(i, 1) else i++ } })) } toggle = () => { const completed = !this.completed const updatedItems = this.items.reduce((updatedItems, item) => { if (item.completed !== completed) { const update = item.serialize() update.completed = completed updatedItems.push(update) } return updatedItems }, []) return this.api.updateItems(updatedItems).then(() => { for (let i = 0, l = this.items.length; i < l; i++) { if (this.items[i].completed !== completed) { this.items[i].completed = completed } } }) } @computed get activeCount() { return this.items.reduce((count, item) => { return count + !item.completed }, 0) } @computed get completed() { return this.items.reduce((completed, item) => item.completed && completed, true) } get filter() { return this.router.params.filter || 'all' } @computed get filtered() { switch(this.filter) { case 'completed': return this.items.filter((item) => item.completed) case 'active': return this.items.filter((item) => !item.completed) default: return this.items } } } export {Todo}
STACK_EDU
Many underprivileged children in some developing countries are either unable to attend school due to lack of fees or uniforms or they lack basic supplies, such as books. Since we know that education is a necessity if we want the children to have an opportunity to get good jobs in the future, we encourage them to attend by providing them with what they need. As per a survey, In 2015, the total number of illiterate adults reached 745.1 million. About 114 million young people, still lack basic reading and writing skills. Therefore, as a solution, we can suggest a Government-funded online school which offers all courses till class 10 for students. What it does: Basically, an online school is a Web application that has all features to provide quality education to children in a proper way. So, they can study at home in case they can't go to any school due to any reasons. How we built it: As we can see, It is a Web application which has many detailed features required for any generic high school. We can implement those features using web technologies which can help us to implement required functions. Challenges we ran into: While working on building a product that can resolve problems we have faced a few problems. which are mentioned below. - Which kind of technologies from existing technologies we can use in order to solve the issue? - How to decide the design and workflow of school application? - How to spread awareness regarding this kind of learning platform among children? - How to implement features and make the application user-friendly? - Which kind of building technologies we can choose to optimize the performance of the application? - How to decide the future scope of this application if someone wants to scale this application in a larger environment? - How to get funding internally from the government in order to manage this platform? Accomplishments that we're proud of: As a team after brainstorming various views for implementation of this idea. We finally managed to come up with a web application that can show what we were trying to build previously. So digitally we can say it is very useful to have this kind of platform for deprived children. What we learned: There are many things actually we learned while making this web application. For example, how to transform any idea into a digital solution that basically fulfills desirable requirements. Moreover, we got to know about the business aspect of this web application also. What's next for Educationist: If we talk about the future scope of Educationalist then there are many more features we can add up to make it more useful. For instance, we implemented cyber hubs where students can log in with their credentials and access services with tablets and computers also. Therefore, the website shows if there is any near cyber hub available or not. This is just an example of a possible feature. We can add many others like this and as far as performance-wise then we can add new web services to track and monitor the progress of students on this site.
OPCFW_CODE
Increasingly, as open source programs become more pervasive at organizations of all sizes, tech and DevOps workers are choosing to or being asked to launch their own open source projects. From Google to Netflix to Facebook, companies are also releasing their open source creations to the community. It’s become common for open source projects to start from scratch internally, after which they benefit from collaboration involving external developers. Launching a project and then rallying community support can be more complicated than you think, however. A little up-front work can help things go smoothly, and that’s exactly where the new guide to Starting an Open Source Project comes in. This free guide was created to help organizations already versed in open source learn how to start their own open source projects. It starts at the beginning of the process, including deciding what to open source, and moves on to budget and legal considerations, and more. The road to creating an open source project may be foreign, but major companies, from Google to Facebook, have opened up resources and provided guidance. In fact, Google has an extensive online destination dedicated to open source best practices and how to open source projects. “No matter how many smart people we hire inside the company, there’s always smarter people on the outside,” notes Jared Smith, Open Source Community Manager at Capital One. “We find it is worth it to us to open source and share our code with the outside world in exchange for getting some great advice from people on the outside who have expertise and are willing to share back with us.” In the new guide, noted open source expert Ibrahim Haddad provides five reasons why an organization might open source a new project: - Accelerate an open solution; provide a reference implementation to a standard; share development costs for strategic functions - Commoditize a market; reduce prices of non-strategic software components. - Drive demand by building an ecosystem for your products. - Partner with others; engage customers; strengthen relationships with common goals. - Offer your customers the ability to self-support: the ability to adapt your code without waiting for you. The guide notes: “The decision to release or create a new open source project depends on your circumstances. Your company should first achieve a certain level of open source mastery by using open source software and contributing to existing projects. This is because consuming can teach you how to leverage external projects and developers to build your products. And participation can bring more fluency in the conventions and culture of open source communities. (See our guides on Using Open Source Code and Participating in Open Source Communities) But once you have achieved open source fluency, the best time to start launching your own open source projects is simply ‘early’ and ‘often.’” The guide also notes that planning can keep you and your organization out of legal trouble. Issues pertaining to licensing, distribution, support options, and even branding require thinking ahead if you want your project to flourish. “I think it is a crucial thing for a company to be thinking about what they’re hoping to achieve with a new open source project,” said John Mertic, Director of Program Management at The Linux Foundation. “They must think about the value of it to the community and developers out there and what outcomes they’re hoping to get out of it. And then they must understand all the pieces they must have in place to do this the right way, including legal, governance, infrastructure and a starting community. Those are the things I always stress the most when you’re putting an open source project out there.” The Starting an Open Source Project guide can help you with everything from licensing issues to best development practices, and it explores how to seamlessly and safely weave existing open components into your open source projects. It is one of a new collection of free guides from The Linux Foundation and The TODO Group that are all extremely valuable for any organization running an open source program. The guides are available now to help you run an open source program office where open source is supported, shared, and leveraged. With such an office, organizations can establish and execute on their open source strategies efficiently, with clear terms. These free resources were produced based on expertise from open source leaders. Check out all the guides here and stay tuned for our continuing coverage. Also, don’t miss the previous articles in the series:
OPCFW_CODE
Verify that command receives specific data through stdin Hi there ! I'm programmatically doing something like echo 0 */2 * * * my_user my_script | crontab - to add an entry in the current user's crontab. Can I check that the crontab really got 0 */2 * * * my_user my_script from STDIN with bash_shell_mock ? Sorry, but I don't understand what you mean with match it agains arguments. As far as I can tell, using a pipe is the only programmatic way to update the crontab Currently the shellmock_verify does not provide a way for you to verify the contents of STDIN that was passed into your mock. It only allows you to verify that the command was called and what the arguments were when it was called. I do not think it would be too difficult to allow you to define a mock that accepts STDIN and then allow you to match the standard input some how in the verify. Correlating the which stdin goes with which mock would be the most challenging part I think. Does that make sense? Hi and thanks for your answer ! I'll do my best to answer, although I've started to use shellmock last month : my knowledge of shellmock internals is still lacking I guess ! So, If I remember correctly, there is an .out file that shellmock produces in which the command invocationsgenerated by shellmock_expect reside. shellmock_verify parses this file and gives us an array of lines so that we can match our expectations. I guess that a mocked command that would accept a value from STDIN would need to be created with an --stdin flag passed to shellmock_expect. Wouldn't it result in putting something like echo <value-of--stdin> | mocked-command in the .out file ? Now, I don't see how different stdin values could be mismatched in that scenario I myself have to go back and look at the code to see how i implemented the mocks but that sounds about right. I am going to work on this one today I think. This one is easily solvable particularly if I require a --stdin as you suggested. The only issue is " and ' which is already an issue in shellmock. Because I am doing everything within the shell I am bound by the shell. You can never match quotes unless they were escaped and part of the original string value. Otherwise the shell processes them and my scripting would never see them. This is what the guts of a mock look like. shellmock_replay is what looks for a matching response. I think the the stdin values should be considered when looking up a response. That way you could return a different response based on different inputs. Currently only args are considered for matching. The matching should include stdin plus the args I think. #!/bin/bash export BATS_TEST_DIRNAME="/Users/wut839/workspaces/opensource/bash_shell_mock/sample-bats" . shellmock shellmock_capture_cmd grep-stub "$*" shellmock_replay grep "$*" status=$? if [ $status -ne 0 ]; then shellmock_capture_err $0 failed exit $status fi @CaptainQuirk Can you possibly take a look at this solution to the stdin issue? https://github.com/capitalone/bash_shell_mock/pull/28 I am out of vacation and don't want to push and break stuff, however, the PR is available for review. I updated the readme as well in the shellmock_expect section. I added two new arguments --match-stdin and --stdin-match-type which controls the input data and the rule used to perform the matching. This PR was built on another PR so it also affects how arguments with spaces in them are matched and verified. You can disable this feature -- assuming i did it right -- by defining SHELLMOCK_V1_COMPATIBILITY. Thanks ! I'll have a look ! Hi @ckstettler ! Sorry for the delay about this ! I'm using bash_shell_mock inside a custom docker image I made with bats and some bats helper. I can confirm that I managed to make my existing test suite work with the SHELLMOCK_V1_COMPATIBILITY flag set ! So you did it right ! But it also means I will have to update all uses of shellmock_expect to the new way of doing things ! bummer ! :smile: I will try and get the stdin part tested later this week ! Non-compatibility is a bummer. It seemed necessary to fix it properly however I am open to any suggestion. Keep in mind if you don't feel that the quote material impacted your tests then you can define the flag globally on and then unset it in new tests and use the new format until a convenient time. I could create a new expect function or make it a switch in there. PR #26 resolves this issue. This enhancement was added as part of release 1.3. This enhancement was added as part of release 1.3.
GITHUB_ARCHIVE
0.8 authentication before hooks - only ever getting a 401 Unauthorised So I've figured out how to implement<EMAIL_ADDRESS>I like the new configuration file structure. For anyone else looking, I guess the only thing you really need to implement to get 0.8 working is: const token = authentication.TokenService; const local = authentication.LocalService; and app .configure(token()) .configure(local()) I've chosen to do that in services/authentication My question is two part, tho I'm sure fixing one will fix the other: I'm finding the authentication before hooks aren't working. I'm only ever getting a 401 Unauthorised response with the standard hooks implemented. I've tried this on a clean install and by only using the feathers generator, they don't work by default. I'm guessing If i can successfully auth.populateUser() I'd be able to interrogate the user afterwards - in my service or even elsewhere serverside? Which leads to my last point. How do I actually get from the request token to iterating over the current logged in users properties. I can see here that i should expect to find them in params.user (when the authentication hooks are set) which would be a beautiful thing. https://github.com/feathersjs/feathers-authentication/issues/261#issuecomment-237711352 Many thanks @mdskinner Did you get things worked out ok, then? @mdskinner ya the current generator version is not compatible with<EMAIL_ADDRESS>Frindely reminder that you are running on the bleeding edge so things are likely to change a bit. However, thanks for trying it out! 😄 We are going to be moving to an entirely new CLI and generator very soon here that will be compatible. Hey guys, @marshallswain @ekryski - Thanks. Yep, I figured it out in the end. Server-side Auth is working nicely and maintaining the user. All seems good. I'm currently looking at the client-side Auth session now. It doesn't seem to hold/maintaining anything client side. Obviously I've got the token in cookie and in localStorage. But would you expect it to maintain an active session inline with the server-side session, ie, should feathers/client instantiate/pass-down the instance of feathers.user? Or would you expect: a) Auth to be handled entirely independently on both server and client b) To have to re app.authenticate() on every hard refresh? It seems like a shame to have to do an api call when bootstrapping the client-side routes just to see what the users permissions are. Is there any way I should be able to expect app.user or at least even app.get('user') to be pre-populated ? Am I missing something, or would you expect this? Thanks again @mdskinner we merged in three commits yesterday that are intended to move us in that direction. The rest adapter should now automatically pick up the SSR cookie. The realtime adapters still require you to call authenticate, but I think we should be able to get that worked out sometime. I just can't promise a timeframe because we have a major life event about to take place, but the little guy hasn't picked a birthday so far. 😉 @marshallswain Very exciting time, I completely understand. After further investigation I'm finding consistently<EMAIL_ADDRESS>client authenticates nicely and pulls back the user with type:token However with 0.8.0 I'm finding: socket.io:socket emitting event ["authenticate",{"type":"token","token":"xxxValidTokenxxx"}] +0ms feathers-authentication:token Verifying token +1ms feathers-authentication:token Creating JWT using options: +4ms { algorithm: 'HS256', expiresIn: '1d', notBefore: undefined, audience: undefined, issuer: 'feathers', jwtid: undefined, subject: 'auth', noTimestamp: undefined, header: undefined } feathers-authentication:token Error signing JWT +1ms If I switch back and forth between the two with the same setup (obviously changing out the config and the necessary .configure()'s) 0.7.9 works nicely, 0.8.0 doesn't. Is there something else I should be reckoning to change, or is it not at a point where client authentication is stable on your end either? It was stable for me with those three commits I mentioned earlier, but if there have been more, I haven't tested with those. Just as any FYI @mdskinner 0.8.0 is an unstable alpha release right now. Some things might not be working as you expect and the API is still in flux. Hence why it isn't published. 😉 I'm not sure if you were aware of that and are trying to help get it solidified or if you are looking for help getting it to work. If it's the former, by all means help is very welcome! If it's the latter I would suggest sticking with 0.7.9 until 0.8.0 is published to npm. @ekryski yer, no I absolutely do appreciate that and am aware of how these things work. It's essentially that I really like your framework and what it offers as far as a clean simple API layer with the services and hooks setup the way they are and think its the right fit for my project - which is a pretty big one. I'm making the clientside in React and the native app in React Native. There is no question there. But as of yet I'm struggling to find an API layer to centralise my data. All i really want is a strong authentication layer done for me and the pre and post CREATE manipulation like you have offered the rest I can create on my own. So its perfect really. But I'm not in the business of writing secure authentication layers and have no interest in doing so. My problem lies in the fact that I can't get serverside working on 0.7.9 and i can't get clientside working on 0.8.0 I'm close to being forced to choose another API as I'm now throwing away valuable build time in attempting to debug authentication layers. Do you have any idea why I might be unable to get serverside auth working with 0.7.9? I'm not wanting to have to make service calls on every single request. However I do need a populated user on every single request. @mdskinner so what is your main issue with v0.7.9? You don't have the user object coming back to the client after authentication? You should. If you've configured the appropriate hooks on your services you should also have access to the user object in your hooks (via hook.params.user) on every request to a service, if you've added the appropriate hooks. messageService.before({ all: [ authentication.hooks.verifyToken(), authentication.hooks.populateUser() ] }) If you want to access the user object in middleware in v0.7.9 just copy some of the middleware from the 0.8.0 branch and register them before any custom routes and services. You can see the order they are registered right here. Hopefully that helps 😄 I'm finding server-side auth not to be working on 0.7.9 process as follows: new install: feathers generate postman POST _http://localhost:3030/users_ {jsonOBJECT} confirm user in db, check create simple form in public/index.html <form action="/auth/local" method="post"> <input type="text" name="email"></input> <input type="password" name="password"></input> <input type="submit" value="go"></input> </form> _http://localhost:3030/auth/success_ You are now logged in. We've stored your JWT in a cookie with the name "feathers-jwt" for you. It is: xxx _http://localhost:3030/users_ 401 - Not Authorized Am I missing something? @mdskinner Did you solve the issue, got the same behavior on my side?
GITHUB_ARCHIVE
HI EVERY ONE THIS IS MY FIRST POST IN THIS FORUM, N IAM A ABSOLUTE NEWCOMER IN FIELD OF LINUX. AFTER TRYING SOME DISTRO INCL DEBIAN, FEDORA, KNOPPIX, REDHAT I FINALLY SETTLED FOR OPENSUSE 11.4. IN CASE OF DEBIAN ALL REPO COMES IN FORM OF DVD (8 NOS) SO I CAN INSTALL ANY PACKAGES ANY TIME, BUT IN CASE OPNSUSE I DONT HAVE INTERNET IN MY HOME PC WHERE I HAVE SUSE. WHEN EVER I NEED SOME ADDITIONAL PACKAGES IT ASKS FOR DOWNLOAD FROM SUSE SITE. I CAN DOWNLOAD PACKAGES FROM MY OFFICE PC WHERE I HAVE INTERNET. I NEED ALL MULTIMEDIA PACKAGES IN FORM OF ISO AR ANY OTHER FORMAT WHICH I CAN USE FOR OFFLINE INSTALLATION. OR CAN I DOWNLOAD OFFLINE REPOSITORY OF ALL PACKAGES BUILD FOR OPENSUSE AND SAVE IN MY HOME PC FOR FUTURE INSTALLATION. Welcome to the forums. A lot depends on what packages you need. If you have not used the DVD to install, your first step would be to download the DVD and look for the packages you want there. If you want other packages, it is possible to download the rpms; the problem is that they will each contain a list of dependencies which are normally handled automatically if you install them directly. You wouldn’t want to download all the packages built for opensuse as you won’t want many of them. One other alternative if you know esactly what packages you want is to use SUSE Studio to create a distribution with all the packages on it that you want and download that to a computer connected to the Internet and then make an installation DVD or USB stick of it. BASICALLY I WANT MULTIMEDIA PACKAGES AND CODES, I HAVE INSTALLED EACH AND EVERY ONE MULTIMEDIA PACKAGES FROM INSTALL REMOVE SOFTWARE BUT SLILL KAFFAIN, XINE OR ANY OTHER MEDIA PLAYER CANNOT PLAY A SINGLE VIDEO. WHENEVER I TRIED TO PLAY ONE I SAYS " KAFFAIN NEEDS ADDITIONAL SUPPORT DO YOU WANT TO SEARCH YOUR REPO" AFTER SEARCHING IT UNABLE TO FIND ANY CODES. PLEASE HELP Can you please unlock the Caps Lock key during the time that you compose a post for these forums. It is very difficult to read (and most people will interprete this as “shouting” to them, which is not what you want I guess). The presentation at http://dl.dropbox.com/u/10573557/one_click_mmedia/oneclick_slideshow.odp lists all the programs you need; you can download these as rpms though you will also need to check their dependencies and download any dependencies as well. When you start installing them on the machine without internet access, install the dependencies first and finish with the programs themselves. Believe that ought to address your needs, but for most “standard applications” you should just download and deploy the DVD source for your installation which should in most cases contain all the files you need to do the initial install (likely advisable to get updated libraries when you can).
OPCFW_CODE
Lights (H610A) not accessible in Home app but visible in logs At some point -- likely following an upgrade of my HB server to MacOS 15.1 from 14 earlier today, but maybe unrelated -- my two Govee H610A light bars became inaccessible in the Home app. Homebridge UI-X v4.62.0 Homebridge v1.8.5 MacOS 15.1, Node.js v22.11.0 (though it was also not working on 20.17.0; posted that error at bottom) Plugin v10.12.1 2x H610A light bars, "Owen Bars" and "Govee One" below Startup logs: [11/7/2024, 10:57:44 PM] [Govee] Launched child bridge with PID 9197 [11/7/2024, 10:57:44 PM] Registering platform 'homebridge-govee.Govee' [11/7/2024, 10:57:45 PM] [Govee] Loaded homebridge-govee v10.12.1 child bridge successfully [11/7/2024, 10:57:45 PM] Loaded 2 cached accessories from cachedAccessories.0E2BBB764352. [11/7/2024, 10:57:45 PM] [Govee] Initialising plugin v10.12.1 | System darwin | Node v22.11.0 | HB v1.8.5 | HAPNodeJS v0.12.3... [11/7/2024, 10:57:45 PM] [Govee] Plugin initialised. Setting up accessories.... [11/7/2024, 10:57:45 PM] Homebridge v1.8.5 (HAP v0.12.3) (Govee) is running on port 42871. [11/7/2024, 10:57:45 PM] NOTICE TO USERS AND PLUGIN DEVELOPERS > Homebridge 2.0 is on the way and brings some breaking changes to existing plugins. > Please visit the following link to learn more about the changes and how to prepare: > https://github.com/homebridge/homebridge/wiki/Updating-To-Homebridge-v2.0 [11/7/2024, 10:57:47 PM] [Govee] [LAN] client enabled and found 0 device(s). [11/7/2024, 10:57:47 PM] [Govee] [HTTP] client enabled and found 2 device(s). [11/7/2024, 10:57:47 PM] [Govee] [AWS] client enabled. [11/7/2024, 10:57:47 PM] [Govee] [Owen Bars] initialising with options {"adaptiveLightingShift":0,"aws":"enabled","ble":"disabled","brightnessStep":1,"colourSafeMode":false,"lan":"unsupported"}. [11/7/2024, 10:57:47 PM] [Govee] [Owen Bars] initialised with id [01:03:D9:35:34:35:42:0D] [H610A]. [11/7/2024, 10:57:47 PM] [Govee] [Govee One] initialising with options {"adaptiveLightingShift":0,"aws":"enabled","ble":"disabled","brightnessStep":1,"colourSafeMode":false,"lan":"unsupported"}. [11/7/2024, 10:57:47 PM] [Govee] [Govee One] initialised with id [72:E6:D6:35:34:35:40:72] [H610A]. [11/7/2024, 10:57:47 PM] [Govee] [BLE] disabling client as not supported on mac devices. And, oddly (?), I can see the state change when I turn them on/off via the Govee app. [11/7/2024, 11:03:40 PM] [Govee] [Govee One] current state [on]. [11/7/2024, 11:03:47 PM] [Govee] [Govee One] current state [off]. I've restarted the plugin, and restarted home bridge entirely. (Also updated Node.js; most of the time the plugin started normally but once I saw an error The plugin "homebridge-govee" requires Node.js version of ^18.20.4 || ^20.18.0 || ^22.9.0 which does not satisfy the current Node.js version of v20.17.0. Only things I haven't yet tried are to 1- remove the bridge from my home, and/or 2- uninstall the plug-in and try from scratch. Any other or better ideas? Also, seeing "device not responding" from multiple devices - iPhones, iPad, computer. Going to close this as I think it's a larger issue -- I can't add any bridge now.
GITHUB_ARCHIVE
Cannot update jenkins-slave.exe while its in use on Windows Slave Looks like we will need to detect if it needs to be updated, call service stop, then perform the remote_file resource. <IP_ADDRESS> [2015-09-04T09:05:11-04:00] INFO: Retrying execution of jenkins_windows_slave[adapt-win-s1], 5 attempt(s) left <IP_ADDRESS> [2015-09-04T09:05:15-04:00] INFO: Processing directory[C:/jenkins] action create (dynamically defined) <IP_ADDRESS> .[2015-09-04T09:05:15-04:00] INFO: Processing directory[C:/jenkins] action create (dynamically defined) <IP_ADDRESS> .[2015-09-04T09:05:15-04:00] INFO: Processing remote_file[C:/jenkins/slave.jar] action create (dynamically defined) <IP_ADDRESS> [2015-09-04T09:05:15-04:00] WARN: Mode 755 includes bits for the owner, but owner is not specified <IP_ADDRESS> [2015-09-04T09:05:15-04:00] WARN: Mode 755 includes bits for the group, but group is not specified <IP_ADDRESS> .[2015-09-04T09:05:15-04:00] INFO: Processing remote_file[C:/jenkins/jenkins-slave.exe] action create (dynamically defined) <IP_ADDRESS> [2015-09-04T09:05:15-04:00] INFO: Retrying execution of jenkins_windows_slave[adapt-win-s1], 4 attempt(s) left <IP_ADDRESS> [2015-09-04T09:05:36-04:00] FATAL: Errno::EACCES: jenkins_windows_slave[adapt-win-s1] (inf_adapt_jenkins::slave_win line 13) had an error : Errno::EACCES: remote_file[C:/jenkins/jenkins-slave.exe] (dynamically defined) had an error: Errno::EACCES: Permissio <IP_ADDRESS> n denied - C:/jenkins/jenkins-slave.exe Also as a corollary to this. Jenkins has a built in updater to update jenkins-slave.exe. Not sure how that should play into this. I am almost thinking if the file has been deployed, this code should be ignored and delegate updating jenkins-slave.exe to Jenkins. That is a pretty big design change though.The Module in Jenkins The workaround to this is to stop the jenkins slave service on windows nodes and then perform the chef-client run A preliminary implementation that is a big buggy because we have to use a private method to determine if remote_file will update the file or not.... Updating this line myprovider=slave_exe_resource.provider_for_action(:nothing) myprovider.load_current_resource has_changed=myprovider.send(:contents_changed? ) if(has_changed) if(Win32::Service.exists?(service_resource.service_name)) service_resource.run_action(:stop) else puts "Service does not exist" end #Ready to update now that the service is stopped slave_exe_resource.run_action(:create) end
GITHUB_ARCHIVE
James DiCarlo, Department Head and Professor of Neuroscience and Investigator at MIT, discussed his research on the neural mechanisms underlying humans’ seemingly effortless ability to solve complex problems of object recognition. Humans rapidly and accurately analyze visual environments, extracting latent content—such as category information, position, and size—from a pattern of pixels. Yet this ease in digesting a scene belies the real, computational complexity of this task; information that appears obvious is actually implicit in pixel representation, and perceptual processes that seem automatic actually involve a series of transformations from pixels to higher-level visual representation. DiCarlo and his team sought to understand this transformation. Specifically, their research focused on core object recognition, a subdomain of object perception that involves categorizing images containing a single object. The human brain excels at core object recognition: when shown a rapid succession of image frames, people easily recognize specific objects. The challenge, then, is not the diversity of objects but rather that a common physical source generates infinite images. How does the brain determine the identity of an object partially occluded, tilted, or altered in color? The ability to overcome this challenge, the computational crux of core object recognition known as the invariance problem, separates humans from computers. The neural mechanisms that solve core object perception lie in the ventral visual stream, a hierarchy of cortical areas (V1, V2, and V4) culminating in the inferior temporal cortex (IT) and encoding visual information with increasing selectivity and tolerance. DiCarlo generated models to explain two processes that occur in the ventral stream: encoding, the transformation of retinal images to population patterns of neural activity, and decoding, the transformation from these population patterns to object recognition behaviors such as verbal reports. To generate stimuli that tested object recognition, DiCarlo performed identity-preserving image variations on pictures of objects such as translations, rotations, and placement onto random backgrounds. By manipulating latent variables to create non-naturalistic settings, he tested human and primate abilities to make identity, rather than context, distinctions. The researchers used non-human primates (NHPs) because of parallels between primate and human brains. Non-human primates are relatively easy to train and display similar visual acuity and recognition abilities. In one test, for instance, the researchers compared human and rhesus recognition behavior for twenty-four objects and found identical confusion patterns across the images. These similarities suggest that, for humans and non-human primates, basic object recognition is indistinguishable for categorization tasks and independent from reporting behavior. DiCarlo hypothesized that learning object recognition tasks occurs when neurons downstream from IT either prune or strengthen their synaptic inputs from ventral stream neurons. His decoding model implemented simple, linear classifiers that approximated this downstream neuron learning and predicted behavioral performance. Through object recognition tests, he found that downstream neurons sample approximately fifty thousand neurons spatially distributed over IT and measure the average spiking response, creating a weighted sum of outputs that judges the likelihood that a particular object is present. DiCarlo’s decoding model, subsequently named the Model of Learned Weighted Sums of Random Average Distributed over IT (LaWS of RAD IT), accurately predicted human confusion patterns and behavior with a correlation of 0.91. If downstream neurons attach to specific patches of the IT cortex, then suppression of IT neurons should generate predictable patterns in behavioral deficits. To establish a causal link between neuronal activity and object recognition behavior, the researchers directly perturbed neuronal activity and measured the effects on behavior. Using optogenetic techniques—preferable to pharmacological manipulations that persist for hours—they briefly inhibited neurons in the IT cortex with subdural LED lights. Selective suppression of neural subpopulations associated with identifying gender resulted in a 2% deficit in gender discrimination tasks—a significant decrement expected based on the size of the regions and amount of knockdown. This inactivation suggests a link between neuronal firing patterns in the inferior temporal cortex and object discrimination behavior. Despite the impressive ability of his models to capture the neural mechanisms of core object recognition, DiCarlo did note limitations in his research. His models only predict behavior for a subset of behavioral tasks and do not explain the function of multiple cortical layers. In the future, DiCarlo plans to expand his research to explore the entire domain of core object recognition. Eventually, he hopes, he can generalize his models to all sensory domains. - DiCarlo, James. “Neural Mechanisms Underlying Visual Object Recognition.” PBS Colloquium. New Hampshire, Hanover. 21 Oct. 2016. Lecture. - Rosselli, Federica B., Alireza Alemi, Alessio Ansuini, and Davide Zoccolan. “Object Similarity Affects the Perceptual Strategy Underlying Invariant Visual Object Recognition in Rats.” Frontiers in Neural Circuits 9 (2015): n. pag. Web. 22 Oct. 2016. - Ventral-dorsal Streams. Digital image. Wikimedia Commons. N.p., 15 Dec. 2007. Web. 25 Oct. 2016.
OPCFW_CODE
using System; using Gir; namespace Generator { public class ResolvedType { public string Type { get; } public string Attribute { get; } public ResolvedType(string type, string attribute = "") { Type = type; Attribute = attribute; } public override string ToString() => Attribute + Type; } internal class MyType { public string? ArrayLengthParameter { get; set;} public bool IsArray { get; set; } public string Type { get; set; } public bool IsPointer { get; set; } public bool IsValueType { get; set; } public bool IsParameter { get; set; } public MyType(string type) { Type = type; } } public class TypeResolver { private readonly AliasResolver aliasResolver; public TypeResolver(AliasResolver resolver) { this.aliasResolver = resolver; } public ResolvedType Resolve(IType typeInfo) => typeInfo switch { { Array: { CType:{} n }} when n.EndsWith("**") => new ResolvedType("ref IntPtr"), { Type: { } gtype } => GetTypeName(ConvertGType(gtype, typeInfo is GParameter)), { Array: { Length: { } length, Type: { CType: { } } gtype } } => GetTypeName(ResolveArrayType(gtype, typeInfo is GParameter, length)), { Array: { Length: { } length, Type: { Name: "utf8" } name } } => GetTypeName(StringArray(length, typeInfo is GParameter)), { Array: { }} => new ResolvedType("IntPtr"), _ => throw new NotSupportedException("Type is missing supported Type information") }; private MyType StringArray(string length, bool isParameter) => new MyType("byte"){ IsArray = true, ArrayLengthParameter = length, IsPointer = true, IsValueType = false, IsParameter = isParameter}; public ResolvedType GetTypeString(GType type) => GetTypeName(ConvertGType(type, true)); private MyType ResolveArrayType(GType arrayType, bool isParameter, string? length) { var type = ConvertGType(arrayType, isParameter); type.IsArray = true; type.ArrayLengthParameter = length; return type; } private MyType ConvertGType(GType gtype, bool isParameter) { if (gtype.CType is null) throw new Exception("GType is missing CType"); var ctype = gtype.CType; if (aliasResolver.TryGetForCType(ctype, out var resolvedCType, out var resolvedName)) ctype = resolvedCType; var result = ResolveCType(ctype); result.IsParameter = isParameter; if(!result.IsValueType && gtype.Name is {}) { result.Type = resolvedName ?? gtype.Name; } return result; } private ResolvedType GetTypeName(MyType type) => type switch { { IsArray: false, Type: "void", IsPointer: true } => new ResolvedType("IntPtr"), { IsArray: false, Type: "byte", IsPointer: true, IsParameter: true } => new ResolvedType("string"), //string in parameters are marshalled automatically { IsArray: false, Type: "byte", IsPointer: true, IsParameter: false } => new ResolvedType("IntPtr"), { IsArray: true, Type: "byte", IsPointer: true, IsParameter: true, ArrayLengthParameter: {} l } => new ResolvedType("string[]", GetMarshal(l)), { IsArray: false, IsPointer: true, IsValueType: true } => new ResolvedType("ref " + type.Type), { IsArray: false, IsPointer: true, IsValueType: false } => new ResolvedType("IntPtr"), { IsArray: true, Type: "byte", IsPointer: true } => new ResolvedType("ref IntPtr"), //string array { IsArray: true, IsValueType: false, IsParameter: true, ArrayLengthParameter: {} l } => new ResolvedType("IntPtr[]", GetMarshal(l)), { IsArray: true, IsValueType: true, IsParameter: true, ArrayLengthParameter: {} l } => new ResolvedType(type.Type + "[]", GetMarshal(l)), { IsArray: true, IsValueType: true, ArrayLengthParameter: {} } => new ResolvedType(type.Type + "[]"), { IsArray: true, IsValueType: true, ArrayLengthParameter: null } => new ResolvedType("IntPtr"), _ => new ResolvedType(type.Type) }; private string GetMarshal(string arrayLength) => $"[MarshalAs(UnmanagedType.LPArray, SizeParamIndex={arrayLength})]"; private MyType ResolveCType(string cType) { var isPointer = cType.EndsWith("*"); cType = cType.Replace("*", "").Replace("const ", ""); var result = cType switch { "void" => ValueType("void"), "gboolean" => ValueType("bool"), "gfloat" => Float(), "float" => Float(), //"GCallback" => ReferenceType("Delegate"), // Signature of a callback is determined by the context in which it is used "gconstpointer" => IntPtr(), "va_list" => IntPtr(), "gpointer" => IntPtr(), "GType" => IntPtr(), "tm" => IntPtr(), var t when t.StartsWith("Atk") => IntPtr(), var t when t.StartsWith("Cogl") => IntPtr(), "GValue" => Value(), "guint16" => UShort(), "gushort" => UShort(), "gint16" => Short(), "gshort" => Short(), "double" => Double(), "gdouble" => Double(), "long double" => Double(), "cairo_format_t" => Int(),//Workaround "int" => Int(), "gint" => Int(), "gint32" => Int(), "pid_t" => Int(), "unsigned" => UInt(),//Workaround "guint" => UInt(), "guint32" => UInt(), "GQuark" => UInt(), "gunichar" => UInt(), "uid_t" => UInt(), "guchar" => Byte(), "gchar" => Byte(), "char" => Byte(), "guint8" => Byte(), "gint8" => Byte(), "glong" => Long(), "gssize" => Long(), "gint64" => Long(), "goffset" => Long(), "time_t" => Long(), "gsize" => ULong(), "guint64" => ULong(), "gulong" => ULong(), "Window" => ULong(), _ => ReferenceType(cType) }; result.IsPointer = isPointer; return result; } private MyType String() => ReferenceType("string"); private MyType IntPtr() => ValueType("IntPtr"); private MyType Value() => ValueType("GObject.Value"); private MyType UShort() => ValueType("ushort"); private MyType Short() => ValueType("short"); private MyType Double() => ValueType("double"); private MyType Int() => ValueType("int"); private MyType UInt() => ValueType("uint"); private MyType Byte() => ValueType("byte"); private MyType Long() => ValueType("long"); private MyType ULong() => ValueType("ulong"); private MyType Float() => ValueType("float"); private MyType ValueType(string str) => new MyType(str){IsValueType = true}; private MyType ReferenceType(string str) => new MyType(str); } }
STACK_EDU
Use XPath to find a node by name attribute value I am trying to find a node by the name attribute value. Here is a sample of the xml document: <?xml version="1.0" encoding="utf-8" standalone="no"?> <!DOCTYPE kfx:XMLRELEASE SYSTEM "K000A004.dtd"> <kfx:XMLRELEASE xmlns:kfx="http://www.kofax.com/dtd/"> <kfx:KOFAXXML> <kfx:BATCHCLASS ID="00000008" NAME="CertficateOfLiability"> <kfx:DOCUMENTS> <kfx:DOCUMENT DOCID="00000006" DOCUMENTCLASSNAME="COL"> <kfx:DOCUMENTDATA> <kfx:DOCUMENTFIELD NAME="Producer Name" VALUE="Howalt+McDowell Insurance" /> ... .... Here is my attempted expression: var xml = XDocument.Load(new StreamReader("C:\\Users\\Matthew_cox\\Documents\\test.xml")); XNamespace ns = "http://www.kofax.com/dtd/"; XmlNamespaceManager nsm = new XmlNamespaceManager(xml.CreateNavigator().NameTable); nsm.AddNamespace("kfx", ns.NamespaceName); var docs = xml.Descendants(ns + "DOCUMENT"); foreach(var doc in docs) { doc.XPathSelectElement("/DOCUMENTDATA/DOCUMENTFIELD/[@name='Producer Name']", nsm); //this line produces this exception: Expression must evaluate to a node-set. } XML is case-sensitive. In provided XML kfx:DOCUMENTFIELD has NAME attribute. Also your XPath doesn't have reference to namespace. Try this XPath: kfx:DOCUMENTDATA/kfx:DOCUMENTFIELD[@NAME = 'Producer Name'] I did provide a XmlNamespaceManger object as you will note above. I assumed that would handle the namespacing part but apparrantly not. However, even with your version my compiler still throws the same exception as if I am not producing an expression intended to return a result set or something. It does if you include teh prefix in the Xpath as in. SelectNode("kfx:SomeNodeName/kfx:SomeChildName",SomeNameSpaceManager) @Kirill Polishchuk I stripped out of the namespaces to simplify the learning curve (if you haven't noticed this is my very first time with XML or XPath for that matter). I can get the item I'm after if I construct my query like so: doc.XPathSelectElements("XMLRELEASE/KOFAXXML/BATCHCLASS/DOCUMENTS/DOCUMENT/DOCUMENTDATA/DOCUMENTFIELD[@NAME='Producer Name']") There has got to be a better way though, have to start from the root and work my way all the way down is ridiculous. @MatthewCox, Try this one: //DOCUMENTFIELD[@NAME='Producer Name'] @Kirill Polishchuk That is a much cleaner approach. I must apoligize. My actual code is much different than the exmaple I posted, trying to condense it to make the problem easier to identify. However, I mostly coded the example code in the question from scratch and I just noticed that my actual code was performing my searches from the original XML root node ... no wonder many of these attempts where failing. The help was much appreciated So in a nutshell, I fixed the biggest problem while coding the example in the question and didn't catch it but I still wouldn't have really had any good idea how to construct my query which is where the input was invaluable. Thanks again =P I see two things wrong. First of all you are selecting starting with "/", this selects from the document root, so strip the leading slash. Secondly the expression is a bit wierd. I would include the condition directly on DOCUMENTFIELD. (I am unsure if no expression on the node axis actually means something. As in is .../[..] equivalent to .../node()[..] or perhaps even .../*[..]?) As Kirill notes, you should also watch the casing and namespaces, but this should solve c# complaining about expressions not evaluating to node sets: kfx:DOCUMENTDATA/kfx:DOCUMENTFIELD[@NAME = 'Producer Name'] A step in the right direction (no more exception). However, apparently my query is incorrect as it produces no results. Any thoughts on the issue? I personally never bother with the namespace manager. Have you tried removing it? Ohh just noticed I made a typeo btw... first namespace should be kfx not kxf, perhaps that is it. If that doesn't work strip all namespaces everywhere (source, ns-manager and query) and add them back in in steps. That eventually fixes it for me.
STACK_EXCHANGE
Critical Analysis of a Disney Song Write a narrative comparison of a Disney song using the following criteria. This must be written in paragraph form, three paragraphs minimum with an introduction, body, and conclusion. There is no word length requirement, however, it is your job as the writer to justify the purpose of this assignment. You will be graded on content of your narrative analysis; the more detail you can provide, the greater justification you are proving in your narrative response. When reading your narrative response, it should describe the musical analysis of the song with details in the animated version and the live action version. This does mean quite extensive proper usage of terminology and criticism. Your task is to describe what actions are occurring in the scene, as if the reader has never heard or watched the animated or live action version. You should not be spending a large portion of this discussing the synopsis or plot of the movie. 1. Pick a scene from the following list. Only pick one of the following scenes to use in your critical analysis. Beauty and the Beast, ballroom sceneAnimated: https://youtu.be/xDUhINW3SPs Live action: https://youtu.be/jZAYgGhvBEc Aladdin, “Friend Like Me” Live action: https://youtu.be/1at7kKzBYxI The Little Mermaid, “Poor Unfortunate Souls” Live action, staged: https://youtu.be/rfBfuR8Jiqw Aladdin, “A Whole New World” Live action: https://youtu.be/eitDnP0_83k The Lion King, “Can You Feel the Love Tonight” Live action: https://youtu.be/DZr-VTULYQ8 2. Look up and include the composer of the song or the music from the scene. This is not the Music Supervisor or Music Editor. 3. Describe what is occurring in the scene with the music. Does this work as a part of the plot? Justify why you think yes or no. 4. Name one Aesthetic of Film Music present in the scene with proper justification. If you see and hear more than one, justify all Aesthetics mentioned. 5. Describe the musical content of the scene. Please include some of the following criteria: • What differences in the orchestration do you hear from the live action to the animated? Based on the differences, do you have a preference for which one you like more or less? • Name each singer for each song, from the animated to the live action versions of the song. Listening to the singer(s) in the live action and animated versions, what is the voice type of each singer? • Assume the reader does not know the proper terminology, briefly define the terms. 6. Briefly describe the cinematography and editing techniques of the scene and how it relates to the music present in the scene. Comparing the animated and live action versions, do you see similar or different cinematography? Name one specific point where there is similar cinematography and one specific point with different cinematography. Use timestamp markers, lyrics, or musical cues if needed to answer. 7. Lastly, as a part of your opinion-based narrative, which do you prefer and why? This should be your
OPCFW_CODE
Movie encoder problem when doing multi-window movie Discussed in https://github.com/visit-dav/visit/discussions/17554 Originally posted by spcarney34 April 7, 2022 Hello, I am using VisIt 3.1.4 on my Ubuntu 20 machine that has ffmpeg installed on it. When I create a movie with one window, everything works out fine, but when I try and create a movie with two windows side-by-side (following the steps here: https://www.visitusers.org/index.php?title=Side_by_Side_Movie), I run into the issue that others have posted about previously: "The movie encoder used in the visit_utils module did not complete successfully." Any way to troubleshoot? (All rendering is done locally on my machine, not on a cluster) Thanks in advance for the assistance-- Best, --Sean I am seeing similar on Windows, with this printed to terminal: png @ 000001f5dedc0680] Invalid PNG signature 0x49492A006C550300. Error while decoding stream #0:0: Invalid data found when processing input I ran on linux, and am also seeing a segv with the CLI. Here's more output: [png @ 0x65fc40] Invalid PNG signature 0x49492A007EB70200. [image2 @ 0x65e500] decoding for stream 0 failed [image2 @ 0x65e500] Could not find codec parameters for stream 0 (Video: png, none(pc)): unspecified size Consider increasing the value for the 'analyzeduration' and 'probesize' options Input #0, image2, from 'tmp/movie-0/_encode.lnk.movie%04d.png': Duration: 00:00:02.40, start: 0.000000, bitrate: N/A Stream #0:0: Video: png, none(pc), 25 fps, 25 tbr, 25 tbn, 25 tbc [buffer @ 0x678140] Unable to parse option value "0x0" as image size [buffer @ 0x678140] Unable to parse option value "-1" as pixel format [buffer @ 0x678140] Unable to parse option value "0x0" as image size [buffer @ 0x678140] Error setting option video_size to value 0x0. [graph 0 input from stream 0:0 @ 0x678040] Error applying options to the filter. Error opening filters! I ran pngcheck on one of the generated files: C:\pngcheck.win64.exe movie0000.png movie0000.png this is neither a PNG or JNG image nor a MNG stream ERROR: _encode.lnk.movie0000.png I opened the same file with python's Imaging library, and saved it with a different name. The python-Image generated png was declared valid by pngcheck. visit_composite (using vtkPNGWriter) is the source of the problem. A quick fix is to have visit_composite only produce ppm and use Python's Image library to convert the results to the appropriate type.
GITHUB_ARCHIVE
Boxeslayers to the rescue This is about layering boxes, not about slaying them. We have 1,830 2×5 boxes to stack safely as 10 alternating contiguous layer patterns of 183 boxes each. Layers have identical silhouettes that fit squarewise on a rectangular 2,000-square floor. The pair of layer patterns is to be solved along with the floor’s dimensions. Background and guidelines Once upon a time in packing to move we had a supply of identical boxes. One challenge was, in a land of earthquakes and elbows, to stack full boxes stably on the garage floor. As width and length of a box had some irrational ratio, we came up with an amusing pattern of alternating layers (that turns out to be suboptimal). This layer pair is exemplified with 2×3 boxes. We defined safe stacking as similar to bricklaying, where no internal abutments align between layers. Back to 2×5 boxes of the present puzzle, here are safe layer pairs of 4, 6, 7, 8, 9 and 10 boxes. Each pair is annotated with how efficiently it uses rectangular floor space. Bounty to anyone who improves on these efficiencies or finds a safe layer pair of 5 boxes. As an example of what to avoid as unsafe, a tempting pair of 6-box layers has internal abutments that align between layers. How can safely alternating contiguous 183-box layers fit onto a 2,000 square-unit rectangular area? Ugh... I have only 182 boxes per layer, though I managed that in a rectangular area of 1,980 square units. You're well on the way, @Daniel Mathias! 183 boxes and 2,000 squares is where one (surprisingly in the long run) efficient pattern overtakes another. Guessing that your approach of comment could fit 183 boxes onto a 2,002-square floor. Feel free to show and tell in any case. Don't have to render the entire solution as long as a pattern is demonstrated. I believe all your proposed constructions are optimal, assuming that a layer has to have a single connected component. Thank you for reassurance, @user1502040 , your contiguity assumption is correct and has been edited explicitly. I wholeheartedly look forward to kinda-answers/solutions that expand on what is posed so please post whatever you implied or have in the works. The layer pattern I referred to is expandable with up to 96.97% efficiency. Another expandable pattern has up to 97.22% efficiency and allows for 184 boxes in an area of 2001 square units. Unfortunately, neither of these work with an odd number of boxes per layer. I will eventually share details, but for now here are 12- 14- and 16-box layers: image @DanielMathias I get a slightly better solution ($11 \times 15$) for $n=16$. Not that it helps at all for this question, but I find it interesting that one can get arbitrary close to 100% efficiency for (at least some) large n. @DanielMathias I posted my example as an answer/picture (since I don't know how to post pictures in comment) Here is one way to do it, showing the pattern along with its mirror image. The rectangular area is 83 x 24 units, or 1992 square units. And the two combined, to show that this is a safe stacking pattern. I had to be carefully inefficient, as the base pattern had 180 boxes in an area of 1909 square units, and I was able to layer 186 boxes within the 1992 square unit area before hitting the magic number of 183 boxes. F yeah! More beautiful than the intended solution. Gotta double check.@Daniel Mathias, which usually comes up positive. Hang in there and enjoy the well deserved +1s. Dear @Daniel Mathias , had to take leave and might again, still reeling from solutions. Stay tuned. (Same to Retudin.) (This is not an answer to the 183 problem asked here) High efficiency packing (arbitrary close to 100%) for far larger numbers of boxes I don't understand what this diagram is depicting. What does level 2 look like? All horizontal are not shown (the inside white is filled, as well as the blue with orange layer horizontal blocks, and the orange with blue layer horizontal blocks ) , i.e. of the 20 rows, odd layers form a massive block of 5x10 in row 5..14 , while even layers have 4 10x2 massive subblocks in row 1..19 (in column 3..2, 15..24, 27..36 and 39..48) Ah, like this? Indeed, like that Dear @Retudin , had to take leave and might again, still reeling from solutions. Stay tuned. (Same to Daniel Mathias.) @Axiomatic System, you graphically bad! In the best sense.
STACK_EXCHANGE
I switched from KDE to GNOME because of dual monitor setup (still broken under KDE) and now I try to add my printer, like I did before on KDE. IP is set on 192.168.50.50 over the router install from AUR brother-mfc-j5320dw 3.0.1-3 and brscan4 than I run sudo brsaneconfig4 -a name=5320DW model=MFCJ5320DW ip=192.168.50.50 I reboot but when ever I try to add a printer it shows “cant add printer” and this is it I have no idea what the problem could be, just switch to GNOME and didnt touch anything on my setup. Can somebody help me, please? Can you try CUPS ? localhost:631 in the browser adress line. I never work with CUPS, but it shows OpenPrinting CUPS 2.4.1 and when I try to add a printer it ask me for username and password. Router username and password is not working, so I try to find it out over the IP and I can only set a password. @ishaan2479 how u noticed that u need dnssd? Just enter as username root and as password the @ishaan2479 gave a relevant solved post on this issue. You probably need to enable Avahi service: sudo systemctl enable --now avahi-daemon.service Avahi - ArchWiki @Keruskerfuerst not working, asking again for username and password @JiaZhang this helps a lot, the printer was instant shown after I rund this command But still nothing works. When i try to open settings, nothing happen and scanning is also not possible Thats the only information I get when i try to open anything. I also try to add a new printer, it shows me just one printer, not like before but I still can not add Sorry forgot about this … That’s odd that you still cannot add the printer. But I think you’re new to this printing stuff, so probably see a brief explanation on CUPS: I wouldn’t recomment resolving this problem to installing driver from the Arch User Repository (AUR) for newbie; AUR isn’t for a newbie, you need to know what you’re doing and it risks breaking your system. For newbie, stick with the Manjaro repository stable branch software packages you can get by default when running the GUI software manager Pamac or Octopi (you may see an option to enable AUR in the GUI but really don’t if you’re a newbie!). What I recommend is to use manjaro-printer: To be honest, if you need a printer driver from AUR, there’s still a risk that the driver may have some issue (non-functional or the print result is unsatisfactory) and I don’t really recommend using printer driver from AUR even if it’s available. Not to mention that you need to be cautious with AUR caveats, making it not suitable for beginner. I don’t know how Manjaro KDE was able to functionally support your printer model but not in Manjaro GNOME … Printer in UNIX and Linux world is quite tricky, and usually vendors only provide an official support as a deb or rpm package: For you printer model : Downloads | MFC-J5320DW | Others | Brother So that means that managing printers in deb or rpm distro based is easier. If the above still doesn’t work, my advice is either below: Switch to Manjaro KDE If you still want GNOME, perhaps try a live-media of deb or rpm distro based with GNOME and see if your printer works, then use that distro instead. EDIT: fix GNOME KDE typo I’m so happy right now with GNOME but this is really strange and I’m not able to fix this. I try it yesterday on Notebook, old one, just install Manjaro KDE. Install MFC-J5320DW and brscan4 (AUR) Add printer → IPP Netzwerkdrucker over DNS-SD → Brother MFC-J5320DW CUPS (en) → IP → done sudo brsaneconfig4 -a name=J5320DW model=MFCJ5320DW ip=192.168.50.50 Anything works, scanning and printing, even in very good quality. Back to GNOME it even find printer and scanner instant. But cant scan or print, even when I remove the printer and try to add it again it shows “cant add printer” So to clarify, printer was working fine on Manjaro GNOME installed on your old notebook but not with the current computer you’re using. Is that right? If that’s the case, then I suspect that it’s very likely more to a permission problem. Somehow, you don’t have the permission to configure printer on the laptop you’re using now. I never use Manjaro GNOME, so I can’t advise further. @JiaZhang no, I took a old notebook and install Manjaro KDE to test it again and its working without any problem. On GNOME I was not able to print or scan one site With “sudo systemctl enable --now avahi-daemon.service” it adds the printer but still not work Looks like I found a solution. First off all you need to install system-config-printer After that you go to add printer Now u type in the adress and wait few sec take the last one, go to settings and chose “driver from databank” just look for CUPS (it will show 3 or 4 driver) print the first site but scanning still not works edit: I can scan now Just remove it with sudo brsaneconfig4 -r 5320DW and add it like on first post here. This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.
OPCFW_CODE
Humans are lazy creatures. Not the lounge-on-the-sofa-in-your-pants kind of laziness (at least not always). I'm talking about a laziness that's hardwired at a subconscious level. I'm talking about heuristics. Heuristics are simple rules that humans use to make quick decisions. Put simply, rather than figure out the best course of action from scratch every time, we refer to pre-programmed schema to save our brains from overloading. To illustrate, one type of heuristic (known as Availability) suggests that when an idea is easy to bring to mind, humans will overestimate how likely it is to occur. Terrorism, for instance, is an easy thing to fear because it's so infamous. The fact is, we're unlikely to be affected by it directly, but because it springs vividly to mind, it has more sway over us. Many of us succumb to this pre-defined, easy-access fear, and that's precisely why it's such a destructive tool. It's the same reason we fear shark attacks, even though you're more likely to be crushed by a falling vending machine. For the purposes of this article, I'll be talking about the 'Affect' heuristic (AH). Like all heuristics, it's a mental 'shortcut'. Everyone uses AH, without exception. Which means that if you can weave this simple rule into your messaging, it can become very influential indeed. 'Many psychological scientists now assume that emotions are the dominant driver of most meaningful decisions in life (Lerner et al, 2014).' With that in mind, AH is a human's emotional decision-maker. It associates everything we see and hear with a gut-feeling - whether that's 'a warm blanket', or 'a dark alley'. So you might spend time thinking of nice adjectives, and one-liners that roll off the tongue... but if it's not grounded in the relevant emotion, it's probably not getting you results. A customer is more likely to react and engage with an emotional driver (happiness, excitement, lust, love, fear) than a logical one (price, convenience, quality). It's one way to craft a message that's meaningful to people. Now all you've got to do is figure out what emotion(s) your product, service or brand can inspire. (I also want to just quickly point out that there is absolutely still value in a logical benefit. However, they usually play a supporting role because their hook isn't specific (or emotive) enough.) Anyway, an example: For us, we like things that are simple, and we know that business owners like simplicity too. So we might start with something like this: Join us for the workshop that makes business simple Simple isn't an emotion. We want people to experience an epiphany at these sessions; we want people to get excited by their ideas, and inspired and motivated to go out and make them happen. Simple's fine, but it's just a by-product of the emotional trigger. What about Bring inspirational ideas to life. Join our business workshop. I've not tested this at all (I will, so keep a look out), but my money's on the second one. There's a clear emotional benefit, and a direct call to action that implies you'll get that benefit by signing up. If a human reads that copy, and they're craving inspiration, they'll click much more readily than if I spoke about a discount, or... actually, wait - hang on. Let's add in that support role I mentioned. (You know I like a freebie). Bring inspirational ideas to life. Join our 30 minute business workshop for free. Now we're cooking with gas; we've got a strong, direct and emotional call to action (high gain = inspiration, action), all supported by two irresistible logical benefits (low cost = time and money). If you see this message and you're in the business world... it'd be too good to turn down. It's got everything you really need: There's the heart-grabbing, emotion-led opener, and then just as the brain's about to say, 'Oh you don't have time,' or 'It'll be too expensive' it's kept happy with the logical safety net. You can see what I'm trying to do, even in this hastily-put-together example. But I challenge you all: go out, consider the emotions of your readers and customers-to-be, and reap the benefits. It's easy when you get the hang of it. If you'd like to explore this further, chat to me over a coffee: firstname.lastname@example.org. Thanks for reading,
OPCFW_CODE
Development projects for service-oriented solutions are, on the surface, much like any other custom development projects for distributed applications. Services are designed, developed, and deployed alongside the usual supporting cast of front and back-end technologies. Once you dig a bit deeper under the layers of service-orientation, though, you'll find that in order to properly construct and position services as part of a standardized SOA, traditional project cycles require some adjustments. As we can see in Figure 1 (see below), common delivery lifecycles include processes specifically tailored to the creation of services in support of SOA. In the service-oriented analysis stage, for example, services are modeled as service candidates that comprise a preliminary SOA. These candidates then become the starting point for the service-oriented design phase, which transforms them into real world service contracts. Service-oriented analysis (and a related sub-process known as service modeling) represent an important part of service delivery that requires the involvement of business analysts and very much demonstrates how business analysis in general is affected by SOA. We'll discuss these processes in more detail later in this series. For now, our focus is on the project lifecycle and its relationship to business analysis. Figure 1: Common phases of an SOA delivery lifecycle. The lifecycle stages displayed in Figure 1 represent a simple, sequential path to building individual services. Real world delivery, however, is rarely that simple. These stages generally need to be organized into a delivery cycle that can accommodate the goals and constraints associated with project requirements, schedules, and budgets. The challenge often lies in balancing these considerations. The success of SOA within an enterprise is increasingly associated with the extent to which it is standardized when phased into business and application domains. However, the success of a project delivering a service-oriented solution is traditionally measured by the extent to which the solution fulfills expected requirements within a given budget and timeline. To address this problem, we need a strategy. This strategy must be based on an organization's priorities in order to establish the correct balance between the delivery of long-term migration goals with the fulfillment of more immediate, tactical requirements. In this article we contrast two common strategies used to build services known as bottom-up and top-down. Neither is perfect, but both provide us with insight as to how the SOA delivery lifecycle can be configured. The bottom-up approach is currently the most common variety, where services are created on an "as need" basis to fulfill mostly tactical requirements. The top-down approach, on the other hand, is one of analysis, deep thought, and patience. Service-orientation is infused into business layers so that services can be modeled in alignment with business models. In other words, it is far more strategic. Because the theme of this series is about how SOA relates to business analysis we are more interested in what lies behind the top-down process. The bottom-up approach is described primarily to provide contrast. The majority of organizations that are currently building services as Web services follow a process similar to the one shown in Figure 2. The primary reason being that many just add Web services to their existing application environments in order to leverage the open Web services technology set (primarily for integration purposes). Even though the resulting architecture is often referred to as SOA, it really is still more reminiscent of traditional distributed architectural models, as service-orientation is rarely taken into consideration. Figure 2: Common bottom-up process steps. Though bottom-up designs allows for the efficient creation of services they can introduce some heavy penalties down the road. Implementing a "proper SOA" after a wide spread implementation of tactical services can impose a great deal of retro-fitting. This is very much an "analysis first" approach that requires not only business processes to become service-oriented, it also promotes the creation (or realignment) of an organization's overall business models. This process is therefore closely tied to or derived from an organization's existing business logic, and it commonly results in the creation of numerous reusable business and application services. The top-down approach will typically contain some or all of the steps illustrated in Figure 3. Figure 3: Common top-down process steps. The point of this strategy is to invest in the up-front analysis and planning work required to build a high quality service architecture. The boundary and parameters of each service are thoroughly analyzed to maximize reuse potential and opportunities for streamlined and sophisticated compositions. All of this lays the groundwork for a standardized and federated enterprise where services maintain a state of adaptability, while continuing to unify existing heterogeneity. The obstacles to following a top-down approach are usually associated with time and money. Organizations are required to invest significantly in up-front analysis projects that can take a great deal of time to demonstrate tangible, ROI-type benefits. There are further risks associated with over planning, where by the time the analysis projects are completed, they can become outdated. Top-down approach and enterprise models Of particular interest to business analysts are the enterprise models referenced in Step 1 of Figure 3. These tend to vary across different organizations, each of which will have models that are unique to its business domains. Common types of enterprise model documents include a formal ontology, an enterprise entity model, an enterprise-wide logical data model, a standardized data representation architecture (often realized through a collection of standardized XML Schemas), and other forms of models generally associated with enterprise information architecture. Some of these provide business-centric perspectives of an organization that prove extremely valuable sources for deriving business services. Business entity models especially tie directly into the subsequent definition of entity-centric business services. Although listed as just a single step in the overall process, the requirements to properly define enterprise models can easily result in the need for one or more separate processes, each of which may require its own project and working group. On the other hand, if the required enterprise business models already exist, then this step may simply consist of their identification. The choice of delivery strategy will determine the extent to which business analysts can help shape a service portfolio conceptually, before services are physically implemented. It is therefore worthwhile to give serious consideration to the pros and cons associated with each approach. The next article in this series continues this exploration by explaining a common deliverable of the top-down analysis effort known as the enterprise service model. We will also then describe how the both tactical and strategic requirements can be addressed in an alternative strategy known as "agile" or "meet-in-the-middle." This article contains excerpts from "Service-Oriented Architecture: Concepts, Technology, and Design" by Thomas Erl (792 pages, Hardcover, ISBN: 0131858580, Prentice Hall/Pearson PTR, Copyright 2006). For more information, visit www.soabooks.com. About the author Thomas Erl is the world's top-selling SOA author and Series Editor of the "Prentice Hall Service-Oriented Computing Series from Thomas Erl" (www.soabooks.com). Thomas is also the founder of SOA Systems Inc., a firm specializing in strategic SOA consulting, planning, and training services (www.soatraining.com.) Thomas has made significant contributions to the SOA industry in the areas of service-orientation research and the development of a mainstream SOA methodology. Thomas is involved with a number of technical committees and research efforts, and travels frequently for speaking, training, and consulting engagements. To learn more, visit www.thomaserl.com.
OPCFW_CODE
we're searching for a method to point our Apache DocumentRoot to some symlink. E.g. DocumentRoot /var/www/html/finalbuild finalbuild should indicate a folder somewhere like /home/user/build3 whenever we move a brand new build to /home/user/build4 you want to make use of a spend script that changes the symbolic link "finalebuild" for this new directory /home/user/build4 and do an apache elegant restart to possess a new web application version ready to go with little risk. What's the easiest method to create this symlink and also to change this link later on while using spend script? I have used symlinks because the apache DocumentRoot being produced without any elegant restart necessary. Generally, the concept should work. A 403 error most likely signifies a permissions error unrelated towards the symlink altering. An additional wrinkle that you would like to add is making the symlink switch atomic therefore the symlink always is available. In other words, never may be the symlink nonexistent, even as it were: We are using capistrano to train on a similar setup. However, we have encounter a couple of problems: After switching towards the setup, things made an appearance to become going fine, however we began realizing that whenever running cap deploy, despite the fact that the symlink have been transformed to suggest toward the mind revision, the browser would still show that old pages, despite multiple refreshes and appending different GET parameters. In the beginning, we think it is browser caching, so for development we disabled browser caching via HTTP headers, but this did not change anything. Then i checked to make certain we were not doing full-page caching server-side, and that we were not. However I then observed when I erased personal files within the revision the symlink accustomed to indicate, we'd obtain a 404, so Apache was serving up new pages, however it was still being following a "old symlink" and serving the web pages up in the wrong directory. This really is on hosting that is shared, and so i wasn't in a position to restart Apache. And So I attempted removing the symlink and creating a replacement every time. This appeared to operate sometimes, although not dependably. It labored most likely 25~50% of times. Eventually, I discovered when I: - removed the present symlink (removing it or renaming it) - designed a page request, leading to Apache to try to resolve the symlink but think it is missing (producing a 404) - then produced a brand new symlink towards the new directory it might make the docroot to become up-to-date correctly more often than not. However, even this is not perfect, contributing to 2-5% of times, once the deploy script went wget to fetch a webpage immediately after renaming that old symlink, it might return that old page as opposed to a 404. It appears like Apache is either caching the filesystem, or possibly the mv command only transformed the filesystem in memory while Apache was reading through in the filesystem on disk (does not really make sense at all). Either in situation, I have adopted someone's recommendation to operate sync following the symlink changes, that ought to obtain the filesystem on disk synchronized with memory, and possibly the slight delay will even assist the wget to come back a 404.
OPCFW_CODE
How to Monitor SD-WAN There are several factors that should be considered when implementing an SD-WAN solution. One of them is network monitoring. In this post, we review some SD-WAN challenges and how network monitoring can help overcome them. Path Remediation and Failover One of the benefits of SD-WAN is path remediation and automatic failover. This feature is available when a router has multiple connections, such as MPLS, broadband, and/or LTE. In this scenario, traffic can be routed through different lines, increasing reliability and quality. For example, if a link is experiencing high latency or packet loss, the router may send the traffic through a different link. Some SD-WAN solutions even duplicate packets across two links, increasing the chances that traffic will reach the other end. These traffic changes may have an immediate positive impact but could negatively affect the end-to-end performance. For example, the router may route traffic across a link with lower speed, slowing down the connection. In the case of packet duplication, the overall bandwidth available to users is reduced. As a result, applications may perform slower than before the corrective action which causes users to complain. Troubleshooting these sorts of issues is very difficult without the right information. End-to-End Network Tests End-to-end network tests provide useful data to troubleshoot situations like the one illustrated earlier. For the most important services and applications used at the remote branch, a network monitoring tool should collect the following metrics: - Latency and packet loss to the remote application server (ICMP or TCP-based ping) - Jitter for voice and video communications (UDP iperf) - Number of network hops and path changes (traceroute/tracepath) - Throughput to other WAN sites and to the Internet (iperf, NDT and speedtest) SD-WAN solutions may report some of these metrics, but they’re either passive or only take into consideration a limited portion of the network. This typically is the last mile where the SD-WAN appliances operate. A network monitoring tool for SD-WAN takes into account the whole end-to-end experience, from the user layer to the far end destination. Such a monitoring solution, relies on active network monitoring agents that are installed at the edge, either as a physical or a virtual appliance. The end-to-end network tests are run continuously, and results are retrieved in real-time and stored for historical review. End-User Experience Monitoring Monitoring the end-user experience is another key element of an SD-WAN monitoring solution. There are many ways to capture the end-user experience, and a variety of tools in the market that aim to do so. Typically, end-user experience monitoring includes application-layer statistics and metrics such as: - DNS resolution time - HTTP loading time - Mean Opinion Score (MOS) for VoIP - WiFi performance metrics When performance data generated by an active monitoring agent is paired to a passive data captured by an SD-WAN appliance, it results in a clear picture of network performance. The active data is useful to gather proactive alerts and troubleshoot in real-time performance issues. The passive data is used to give a clear understanding of how the bandwidth is consumed by users (“top talkers”) and applications (“top applications”) and update the network configuration if needed. The combination of the two technologies translates into reduced Time-To-Resolution of network and application issues, increased performance, and higher user satisfaction.
OPCFW_CODE
Scheduled Queries packs not working Hello All, I am currently having an issue with scheduling query packs from fleet and I am not sure what I am doing wrong. I have created packs using some saved queries and have set the option to snapshot so I can see when the queries run and logged but it seems that these packs never run. If I setup a schedule using osquery config file, I see these queries run and log properly to the fleet server (osquery result log). I am also able to run these queries from the fleet server but just the packs don't run on my defined interval. My OSQuery config looks like this: { // Configure the daemon below: "options": { // Select the osquery config plugin. "config_plugin": "tls", // Select the osquery logging plugin. "logger_plugin": "tls", // The log directory stores info, warning, and errors. // If the daemon uses the 'filesystem' logging retriever then the log_dir // will also contain the query results. //"logger_path": "/var/log/osquery", // Set 'disable_logging' to true to prevent writing any info, warning, error // logs. If a logging plugin is selected it will still write query results. //"disable_logging": "false", // Splay the scheduled interval for queries. // This is very helpful to prevent system performance impact when scheduling // large numbers of queries that run a smaller or similar intervals. //"schedule_splay_percent": "10", // A filesystem path for disk-based backing storage used for events and // query results differentials. See also 'use_in_memory_database'. //"database_path": "/var/osquery/osquery.db", // Comma-delimited list of table names to be disabled. // This allows osquery to be launched without certain tables. //"disable_tables": "foo_bar,time", "utc": "false", // "disable_events": "false", //Kolide Options "enroll_secret_path": "/var/osquery/enroll_secret", "tls_server_certs": "/var/osquery/server.pem", "tls_hostname": "x.x.x.x:8080", "host_identifier": "hostname", "enroll_tls_endpoint": "/api/v1/osquery/enroll", "config_tls_endpoint": "/api/v1/osquery/config", "config_tls_refresh": "10", "disable_distributed": "false", "distributed_plugin": "tls", "distributed_interval": "3", "distributed_tls_max_attempts": "3", "distributed_tls_read_endpoint": "/api/v1/osquery/distributed/read", "distributed_tls_write_endpoint": "/api/v1/osquery/distributed/write", "logger_tls_endpoint": "/api/v1/osquery/log", "logger_tls_period": "10" } As mentioned earlier, if I add a query to schedule or packs in the osquery config file, it works as expected but I would like to use fleet for managing packs and my config. Can someone shed some light on what I am doing wrong. What version of fleet are you using (fleet version --full)? version 1.0.6 branch: master revision: 45165aa29aae52346ed1c458f19b89947d315c61 build date: 2017-12-04T22:50:27Z build user: marpaia go version: go1.9.2 What operating system are you using? CentOS 6 What did you do? Trying to schedule queries using packs What did you expect to see? Query packs should run and there should be logging What did you see instead? Nothing Can you run osquery with the --tls_dump and --verbose flags and show us the output? I'm particularly interested in seeing whether the scheduled queries are being passed to osquery. Also, can you please show us what your configuration looks like in Kolide? Here is my Kolide Config, working on the osquery output, interested to see if I can use tls_dump and verbose just in the config file. I'm not sure if this is the problem, but it looks like there is a syntax error with your file_events queries. This is probably going to be easier to debug in the osquery slack. In this scenario you describe, are you using Kolide? If not, please ask about it in #general. If so, please ask in #kolide. @newbeeeeeee pack queries result will not store in osquery node , it store on fleet server, pls see this link: https://github.com/kolide/fleet/blob/master/docs/application/working-with-osquery-logs.md
GITHUB_ARCHIVE
Add interpolation for the volume to waterlevel table We need interpolation for the volume to waterlevel table. We also need to decide where to put this functionality. If we put it in iMOD Python the input files would become bigger, if we put it in Ribasim, Ribasim would become geometry-aware. After discussing this with Julian, we've come to some conclusions. Ribasim can run in multiple configurations (e.g. strongly lumped or not), and that means the water level that is communicated to MODFLOW 6 can be computed in multiple ways. For example: 3.5 "basins" in a single MODFLOW river cell, for a single MODFLOW River boundary 1 basins for dozens of MODFLOW river cells, for a single MODFLOW River boundary 1 basins for dozens of MODFLOW river cells, for multiple MODFLOW River boundaries And so forth, also for MODFLOW Drainage boundaries, since they are used to simulate a difference between infiltration and drainage conductance. One solution is to associate weights with the connection, since imod_coupler supports this via sparse matrix operations. The limitation is that the imod_coupler weights are static, which may prove insufficient in the future (e.g. flooding or drying in Ribasim). The most attractive solution so far seems to let Ribasim present a 1:1 water level for MODFLOW 6. This means that Ribasim will require additional input, describing how to create a spatially distributed water level for basins. Such input ideally be spatially meaningful to help with input inspection and debugging. This is also useful separate from the coupling, since a spatially distributed water level is very useful for interpretation. However, to get spatially meaning input, we require spatial formats. Given the type of data, something like (UGRID) netCDFs seem like the most obvious choice. At any rate, QGIS should be able to visualize it. This also means that the imod_coupler logic can remain absolutely minimal: it takes the water levels as computed by Ribasim, and puts them in the MODFLOW 6 memory. Then the only thing to configure for the coupling is which MODFLOW boundary conditions should be coupled. Consider: Six MODFLOW cells (with associated river boundary conditions), and four Ribasim basins. For MODFLOW cell 1-2-3, we have to take one water level and expand it into three stages for MODFLOW. For cell 4, we have to aggregate instead. For the upper cells, we'd want to interpolate based on some input, to enable different river bottom levels for example. This isn't the case for cell 4. In such a case, the water level should be interpreted directly, and no interpolation logic is required. I think that argues for separating the logic into different types, and also separating the input. The coupling for cell 4 requires aggregation weights. I think it makes most sense to scale them directly by the wetted area of each basin. This is also a good reason to disallow having "3.5" basins in a single cell, since we need an additional set of weight to adjust the area weights. For the aggregated (ribasim: modflow = one: many) case, we'd need the columns: basin_id basin_level modflow_id (this is the ID of the boundary condition, not the cell number! Node number in MODFLOW may change to re-ordering or inactive cells) modflow_level ("modflow" is a stand-in here, it could be anything, since this functionality can also be used to compute back to the DFM levels.) For the distributed (ribasim:modflow = many: one) case, we'd need only: basin_id modflow_id Sub-issues so far: [ ] #679 [ ] #680 [ ] #681 Fixed by #674
GITHUB_ARCHIVE
My site has crashed .. anyone have some info? I booked a domain name for my website from a hosting provider .I gave the domain name , along with ftp details to a freelancer to develop the site in wordpress . the freelancer developped and he got full payment , and the site and site was working fine ,etc .. From that time , I did not change the admin logging as well as ftp details , this means that such info is still known to the freelancer .. A week ago , I found that some links in my site was not working .. I sent him a mail about this , and he said that he will fix it if i give him ftp details . and I did so , next I found that the entire site is gone . then he sent me a mail , without I asked him , and he he said that there have been someone who got access to my server , and he removed all files of my site and he installed drupal instead .and that he can rebuild the site in one day , by charging a full fee of 250 usd again .. Can anyone know what I can do in this situation , to find who did such act , could it be the host provider or that freelancer ,, and if there is a possibility to have my site back top the server .. I will appreciate any info on this.. Regards , Thanks It sounds to me like the freelancer is scamming you. Unfortunately, if you didn't back up the site, there really isn't any way to restore it, unless the host you go through provides a back up service that was in place prior to the files being deleted. As far as what you can do from here, if you don't own the equipment that it was hosted on, you really don't have any options other than to rebuild the site. hey first, go and change your ftp username and password and all other passwords like the domain admin ones and the hosting account ones. then delete any existing databases. and then check for backups if any available on your server. if its there, take it down but don't install it cause the freelancer will again have access to your wordpress admin. install wordpress on your hosting. go to wordpress.org to know more about how to install and to download. then if you have backups, import your themes and plugins. and if you don't have and cannot make the site on your own, please don't go back to that same freelancer. cause again he'll do this. find someone else, more reliable, it'll be good if he's your friend or something. then make a new ftp account on your hosting, and give those credentials to him not your admin ones. copy whatever he makes to a new folder and afterwards download it for backups. it'll be best if you can make the site on your own. please dont trust anyone with your site, cause thats one of your main identities. but first change your passwords. also, when you change your password, check if you saved passwords in ftp client (ie. total commander, or filezilla). some of the trojan/viruses can steal passwords, and made a mess. it's not ok to say that freelancer is the one that made that damage. Essentially, change all of your passwords, and see if your host has backups that you can use. If not, you're a bit out of luck; the freelancer is scamming you. Make sure to create an FTP account that isn't admin account, and limit its access to only places where you need editing, then give the account credentials to a trustworthy person. Get someone else to remake the website, if you are seriously out of luck without backups. The freelancer is scamming you. Agree with him and ask him to put the site back on, then change your ftp info etc and stop communicating with the freelancer. (If necessary pay him 50% of the money to get him to install the site again).
STACK_EXCHANGE
""" Domain certificate search module. This module provides several functions allowing to retrieve certificate information for domains from crt.sh. This module uses the crtsh unofficial API: [https://github.com/PaulSec/crt.sh](https://github.com/PaulSec/crt.sh). .. important:: The crtsh package is licensed under the MIT License by Paul (PaulSec on Github). A copy of the license can be found in the root of the project in `LICENSE_MIT.txt`. Purpose ------- Provide the Discovery pipeline the ability to retrieve certificate information for the domain names that are generated. Non-Public Functions -------------------- .. note:: Non-public functions are not part of this API documentation. For more information on these functions, click "Expand Source Code" below to view the docstrings in the source code. - `_issuer_regex`: Extracts certificate issuer's information from a crt.sh result. - `_search_from_list_of_dictionaries`: Integration function to allow DNSTwist results to be directly passed to this module for searches. """ from crtsh import crtshAPI as crt import re import pandas from datetime import date, datetime from tqdm import tqdm from requests.exceptions import ConnectionError def _issuer_regex(issuer_name_string): """ Returns the certificate issuer's country string and organization name string from a crt.sh issuer_name string. """ result = re.findall('(C|O)=("[\w, \.-]+"|[\w\' ]+)', issuer_name_string) result = dict(result) # Assign the value if it exist in the result, else assign empty string country = result['C'] if 'C' in result.keys() else "" organization = result['O'] if 'O' in result.keys() else "" return country, organization def _search_from_list_of_dictionaries(list_of_dict): """ Searches for certificates from a list outputted from a dnstwist domain name generation and returns a dataframe with the results. Used by as an integration layer between DNSTwist's output format (dictionary) and the public search function. """ result_dataframes = [] for dictionary in tqdm(list_of_dict, desc='Searching for domain certificates', unit='domains'): try: search_result_df = search( dictionary['domain-name'], dictionary['original-domain'], drop_diplicates=False ) search_result_df['fuzzer'] = [dictionary['fuzzer'] for i in range(search_result_df.shape[0])] result_dataframes.append(search_result_df) except ConnectionError: print(f'Failed to retrieve certificates for {dictionary["domain-name"]}') pass # Combine all result dataframes concat_df = pandas.concat(result_dataframes).drop_duplicates().reset_index(drop=True) return concat_df def search(domain, original_domain='N/A', drop_diplicates=True, include_expired=False): """ Searches crt.sh for a domain's certificates. Parameters ---------- domain: str Domain name including the top-level domain used for the crt.sh query. original_domain: str Original domain used by DNSTwist's domain generation to generate the `domain` parameter. Used to match certificate results to original domains in the Discovery pipeline. drop_duplicates: bool Drop duplicate rows in the results DataFrame if they exist. include_expired: bool If set to true, expired certificates will not be retrieved from crt.sh. Returns ------- Returns: pandas.DataFrame Returns a DataFrame containing original domain (if provided), domain name found in a certificate, issuer name, issuer country, certificate start and end, and certificate duration in days. """ result_dataframe = pandas.DataFrame( columns=[ 'original-domain', 'domain-name', 'issuer-name', 'issuer-country', 'cert-start', 'cert-end', 'cert-duration' ] ) result_index = 0 certs = crt().search(domain, wildcard=True, expired=include_expired) if isinstance(certs, type(None)): return result_dataframe for record in certs: issuer_country, issuer_name = _issuer_regex(record['issuer_name']) start_date = datetime.strptime( record['not_before'], "%Y-%m-%dT%H:%M:%S" ).date() end_date = datetime.strptime( record['not_after'], "%Y-%m-%dT%H:%M:%S" ).date() name_values = record['name_value'].split('\n') for name in name_values: if "*" not in name: result_dataframe.loc[result_index] = [ original_domain, name, issuer_name, issuer_country, start_date, end_date, (end_date - start_date).days ] result_index += 1 # Remove duplicates and reset dataframe's index if drop_diplicates: result_dataframe = result_dataframe.drop_duplicates().reset_index(drop=True) return result_dataframe
STACK_EDU
.NET / SQL Server Developer Job Yoh has a contract opportunity for a .NET / SQL Server / Developer to join our client in St. Louis, MO. Job Responsibilities: - Develops, maintains and supports the SQL Server database environment. - Builds GUI front end interfaces using ASP.NET, C# and related scripting languages. - Participates in projects relative to the design and implementation of physical database objects. - Provides guidance to developers on their data access needs. - Works with systems administration on hardware and operating system level requirements in support of the deployed database software. - Proposes ways to improve the environment. - Leads and participates in medium to large projects requiring analysis and design skills. - Client is in the process of upgrading their workstations to Windows 7. - To accomplish this, they are using some discovery tools to help them understand the number and types of applications installed in their environment as well as the application compatibility. - In addition to this they need to test their applications for compatibility. - The candidate hired will need to help them build a database that has a front end interface as well as reporting capabilities. Job Qualifications: - Creating dashboards and web-based user friendly front-enter interface for SQL databases using ASP.NET, C#, and other fornt end scripting languages. - Creating queries and reports. - SQL Development - ability to write and troubleshoot SQL Code and design (stored procs, functions, tables, views, triggers, indexes, constraints). - Able to import and export data from external SQL databases. - Able to merge multiple databases into a single repository. - Able to provide solutions and process improvement analysis. - Knowledge of TransactSQL , SQL Server Reporting Services, PHP and IIS. - Solid business analysis skills. - The candidate should also possess the following traits: - Enjoy working with others in a team atmosphere. Enjoy working in a fast paced working environment. - Able to adapt to changing customer needs. - Able to work independently with minimal supervision in a rapid program development environment. - Able to set priorities and manage own work. - Have a "customer-service" orientation. - Good verbal and written communication skills. - Knowledge or experience with the following is desirable but not required: - VBA programming experience. - Query, java scripting and AJAX. - Knowledge of web services. Discover all that's possible with Yoh. Apply now. Recruiter: Scott Bennekemper Phone Number: 314-878-0666 ext 236 Yoh is a professional staffing provider with over 70 years of experience in the short- and long-term staffing services industry; visit our website to learn more about our company. Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer, M/F/D/V. Ref: 994788 SFSF: INFOTECH Saint Louis, MO Related projectsSearch for freelance jobs can’t wait for more clients and advertising. Thank you."
OPCFW_CODE
Meet our experts: an interview with Jes Dreier, Image Analysis Specialist Jes Dreier has recently joined DanStem/CPR as an Image Analysis specialist. During his career, he obtained a global understanding of microscopy – everything from building a microscope to having an image fully analyzed. In this interview, we asked Jes about the choices that brought him to DanStem and what are his ideas for the future of the Imaging platform. By PhD student Carla Goncalves and Postdoc Ulf Tiemann. People often overlook that research can be done in many different contexts, not just by group leaders or postdocs in academia. The way I see it is that you can do great research in a supportive role, where you enable others to push their research forwards and in that way become part of something bigger. What first sparked your interest in biophysics? The lab where I wrote my master thesis was located in a biophysics center and I was kind of a ‘one-man-team’, being the only one in the lab doing classical physics. As I decided to study for my PhD in the same institute, I wanted to be closer to the rest of the group’s research interests, so I started to look at things from a more biological point of view. The underlying physics was still the same but instead of looking at gold, I was suddenly looking at cell membranes. It was at this point that I realized using my knowledge of physics to describe the biological world was something that really captivated me. Your field of interest during your PhD project was quite different from that of your postdoctoral work - what drove this switch? During my PhD I had started to work with very advanced microscopes. We used polarized fluorescence microscopy, which resulted in images with a lot of information. A large part of my time was spent working out how to extract and precisely quantify different types of information from the imaging data set. Close to the end of my PhD I went to a conference where I met some of the people pioneering super-resolution microscopy. I knew this was the direction I wanted to go next. I had the great opportunity to help build the super resolution (STED) microscope in the lab of Jonathan R. Brewer at the University of Southern Denmark, in close collaboration with Christian Eggeling from University of Oxford. What would you say is your greatest scientific achievement so far? Live cell microscopy really appeals to me because if we can see individual protein dynamics inside the cell, we get very close to understanding how cells function. With this in mind, I joined Ilaria Testa’s lab in the Royal Institute of Technology, Stockholm, who has pioneered live cell imaging with super-resolution capability. During my time in Stockholm I worked to design and build the RESOLFT microscope, which uses reversible switchable fluorescence proteins to generate super-resolution in fluorescence microscopy. After about a year in the lab I had built every part of the RESOLFT and found living cells, or even worms, were ‘happy’ inside the machine – meaning we could observe individual proteins in these cells in real time. It was so rewarding to have built this microscope myself and to see it perform so well in the task it was designed to do. The scientific ‘icing on the cake’ was to have this work was published in Nature Communications a month ago. You then took on a position as Image Analysis Specialist at DanStem/CPR, rather than keep on conducting your own research. Was this a logical next step for you? I always want research to be part of my career and my current position definitely fulfills this need. People often overlook that research can be done in many different contexts, not just by group leaders or postdocs in academia. The way I see it is that you can do great research in a supportive role, where you enable others to push their research forwards and in that way become part of something bigger. After I finished my postdoc in Stockholm I knew that I didn’t want to go back to pure, “dead” physics. When I found the DanStem job posting, I realized that the role of image analyst was exactly what I wanted to do: bridging the gap between microscopy and biology. You have been working at DanStem for four months now. How do you like it so far? What are the challenges? Being here is really great and I enjoy the collaborations with people from different scientific backgrounds a lot. Since my previous expertise was more focused on microscopy than on pure image analysis, I had to familiarize myself with all the different complex software that people use here. It’s been good to learn many new things but of course it takes some time. Currently I think the greatest challenge is that people tend to come to me when a project is already well established and little can be done to change the experimental set up. I hope I can encourage researchers to ask for support when the question is still something like “should we install a fire alarm” and not “what should we do about the burning house”. How would you define your role as DanStem’s Image Analysis Specialist? What are your ambitions here? My goal is to enable people to do things they did not even know they could do. To achieve that, I have to get my knowledge out to the DanStem scientists. I want to teach them enough so they can do image analysis for their next project on their own in a much better way than they did before. And of course, they can always come back to me with more specific or complicated questions. What are your plans to make this happen? What kind of training do you offer at DanStem? I have recently organized basic and advanced courses and workshops for image analysis software programs such as Fiji and Imaris. But I also plan to launch a more informal coaching method. My idea is that people can drop by over lunch, or a piece of cake, to learn about a very specific technique, a small software tool, or a useful tip. I hope these ‘soundbites’ can spike interest and raise awareness about what is possible in image acquisition and analysis. If someone feels inspired by that to try something new, they can come back to me for more details and support. From your perspective, what is the most exciting current development in the field of Image Analysis? How do you expect imaging-based research to change in the near future? With the advent of machine learning and computer vision, many believe that these technologies will very quickly completely revolutionize the image analysis field. Actually, I do not share these high expectations because I see many limitations with the approach, such as the enormous amount of training data that is usually required. Nevertheless, I do expect machine-learning algorithms will become extremely useful in certain, specific applications. For example, scientists often have to manually identify specific structures, such as individual cells, in a digital image and label them for further computational analysis. Artificial neural networks can be trained to recognize and define such structures on their own, and hopefully this automated segmentation can be improved to work robustly even with weak signals and noisy data. If successful, scientists at DanStem would no longer have to sit in front of their computer screens for many hours, staring and clicking on thousands of images! Instead they can move to the fun part of analyzing their images with regards to a scientific question. Learn more about our research platforms and facilities:
OPCFW_CODE
missing client token when using sidecar and k8s auth type Describe the bug Im trying to use this as a side car and get error 400 missing client token To Reproduce Steps to reproduce the behavior: configure argocd side car as per InitContainer and configuration via sidecar instructions Create a secret for vault configuration kind: Secret apiVersion: v1 metadata: name: argocd-vault-plugin-credentials type: Opaque stringData: AVP_AUTH_TYPE: "k8s" AVP_K8S_MOUNT_PATH: "my-mount-path" AVP_K8S_ROLE: "argocd" AVP_TYPE: "vault" VAULT_ADDR: "https://my-valt-adrress.com" apply a sample application with plugin env apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: vault-test-app namespace: argocd finalizers: - resources-finalizer.argocd.argoproj.io labels: env: test spec: destination: namespace: test server: 'https://kubernetes.default.svc' source: repoURL: 'https://chart-repo.com' targetRevision: 0.3.21 chart: my-chart plugin: env: - name: ARGOCD_ENV_HELM_VALUES value: | redis: auth: password: <path:mycompany/dev/data/pws-helm#REDIS_PASS> Also tried from inside the sidecar pod with a sample secret: kind: Secret apiVersion: v1 metadata: name: test-secret type: Opaque data: password: <path:secret/dev/data/pws-helm#REDIS_PASS> Expected behavior The REDIS_PASS env var gets the value from vault vault has an argocd role on the my-mount-path access method Screenshots/Verbose output $ argocd-vault-plugin generate -s argocd-vault-plugin-credentials --verbose-sensitive-output . 2023/02/13 23:03:37 reading configuration from secret argocd-vault-plugin-credentials 2023/02/13 23:03:37 parsed secret name as argocd-vault-plugin-credentials from namespace argocd 2023/02/13 23:03:37 Setting VAULT_ADDR to https://my-valt-adrress.com for backend SDK 2023/02/13 23:03:37 reading configuration from environment, overriding any previous settings 2023/02/13 23:03:37 AVP configured with the following settings: 2023/02/13 23:03:37 avp_k8s_mount_path: my-vault-auth-mount-path 2023/02/13 23:03:37 avp_k8s_role: argocd 2023/02/13 23:03:37 avp_type: vault 2023/02/13 23:03:37 vault_addr: https://my-valt-adrress.com 2023/02/13 23:03:37 avp_auth_type: k8s 2023/02/13 23:03:37 avp_kv_version: 2 2023/02/13 23:03:37 Hashicorp Vault authenticating with Vault role argocd using Kubernetes service account token /var/run/secrets/kubernetes.io/serviceaccount/token read from XXXXXXXXXXXXXXXX-KUBERNETES-SERVICE-ACCOUNT-TOKEN-XXXXXXXXXXXXXXXX Error: Error making API request. URL: PUT https://my-valt-adrress.com/v1/my-vault-auth-mount-path/login Code: 400. Errors: * missing client token it looks like maybe its trying to authenticate with a vault token even though i set AVP_AUTH_TYPE: "k8s"? or it just cant login to vault and get a token? what about system:auth-delegator cluster role does this come into play here? sidecar image used quay.io/argoproj/argocd:v2.4.0 plugin version used 1.13.1 anyone? or it just cant login to vault and get a token? I believe this is the case. I would check your Vault k8s auth setup. Is kubernetes auth enabled at your mount path? https://developer.hashicorp.com/vault/docs/auth/kubernetes#kubernetes-auth-method Could be a namespace issue, could be something else. This might be helpful https://github.com/hashicorp/vault-plugin-auth-kubernetes/issues/109 Hi I have the same issue, i believe it depends on AVP_K8S_MOUNT_PATH variable, i have exactly the same argo cd installation with default vault auth mount path and it works great, i also tried to set default value to AVP_K8S_MOUNT_PATH var and got the same 400 missing client token error
GITHUB_ARCHIVE
What Makes a Great Programmer? 10 Essential Traits to Look for Posted by John Ylias Date: Feb 24, 2021 12:16:46 PM These days, a lot of people know how to program. However, this doesn’t mean that everyone is a great programmer. After all, there’s a big difference between being able to write some code and being able to create useful applications while working with others. Are you looking to add a new programmer to your IT team? Not sure of what makes a good programmer? Here are 10 traits every good programmer possesses. What Makes a Good Programmer? To be a good programmer, a person requires a solid combination of both hard and Here are the essential traits that every good programmer Programming is a marathon, not a sprint. Applications take months to create, not days or weeks. This is why the programmer you hire needs to possess a good deal of patience. If the programmer you hire lacks patience, they are going to be on edge all of the time, creating an unhealthy work environment for his or her coworkers. Programming involves a substantial amount of trial and error; trying and failing until you get something right. Only a person with a disposition of patience can withstand this relentless trial and error without It’s important to remember, however, that patience can sometimes equal a lack of passion. Your candidate’s patience should be born out of an understanding of the programming process, not out of indifference. 2. Ability to Learn Programming is a never-ending learning process. This is due to the fact that new programming languages are constantly being created, and new programming techniques are constantly being used. For this reason, the programmer you hire must possess the ability to learn quickly and Learning on the fly is a huge part of any programming job, as the programmer must be able to roll with unexpected punches and proficiently accommodate modern Learning quickly generally entails brushing up on programming skills outside of work. When hiring, you should look for candidates with certifications in several different programming languages. It should go without saying that any programmer you hire should have a great deal of programming knowledge. However, this doesn’t just mean that the person you hire should have a computer science degree; it means that the person you hire should have a diverse knowledge of many different programming languages. While mastery in one specific programming language can be very useful, a competent familiarity with several programming languages is often more valuable. In essence, a programmer shouldn’t only know Java and C++, they should also know Python, Ruby, and a variety of other languages. As was noted above, they should also be able to learn new languages and techniques as needed. Technology is constantly growing, and you never know when your company might need its programmers to start using One of the bigger problems you’ll run into with programmers is a lack of communication skills. This is simply the nature of the types of personalities which typically pursue programming as a career. However, this does not mean that all programmers are without useful communication skills. Those programmers who can successfully get their ideas across to coworkers are hugely valuable, as they help to ensure that projects are always on the right track. While the programmer you hire doesn’t need to be endlessly talkative, they do need to be able express ideas and developments that are relevant to projects. Though a programmer may be skilled and talented, his or her lack of social skills can eliminate his or her chances of being a quality employee and While programming languages are fairly straightforward, using them to create applications is anything but. Regardless of what a programmer is coding, they’re likely to encounter stumbling blocks. If they can’t work through these stumbling blocks, they aren’t legitimately going to make it as For this reason, you need to ensure that the programmer you hire has good problem-solving skills. They need to be able to break problems down into small components, figuring these components out one at a time until the bigger picture is realised. Knowing program languages is important. However, knowing how to apply them to real-life applications is even more important. If your programmer isn’t accustomed to chipping away at problems, they are not going to be a suitable candidate. As was mentioned above, there is a great deal of trial and error associated with programming. What these means is that programmers are going to be failing… a lot. Failing to instantly produce impeccable code is not a big problem. What is a problem, however, is having an inability to fight through failure when it presents itself. In short, perseverance is an absolute must for Every programmer has negative thoughts about his or her abilities every once in a while. The difference between the good programmers and the bad programmers is that the good programmers don’t dwell on these negative thoughts. Instead, they get their fingers back on the keyboard and keep coding. There are a lot of jobs that you can fake. You can go in, punch the clock, and give half an effort until it’s time to go home. Programming is not one of these jobs. One key programmer requirement is a deep passion for what you’re doing. If you’re not enthusiastic about writing code and developing applications, you’re never going to cut it as a professional programmer that can be counted on. At best, you’ll be relegated to grunt programming tasks that are only needed to keep the If you’re a hiring manager, you should be looking for a programmer who lives and breathes programming. They should code not only as a job, but as a hobby as well. A good professional programmer sees a day at work as an opportunity to build and enhance an intriguing application. A bad professional programmer sees a day at work as a drag that must be suffered through in order to make a living. 8. Ability to Handle While most programming jobs are fairly low-stress, there are always occasions in which stress presents itself in a big way. When these situations present themselves, you want a programmer who rises to meet them head-on. Someone who doesn’t get overwhelmed by the pressure. Some individuals have a tendency to get negative when stress starts weighing down on them. They lash out at co-workers, speak in condescending tones, and create an overall hostile work environment for those around them. Individuals such as these are poison for a workspace, as they typically end up disturbing any morale which could be created. Programmers who can handle stress keep a cool head on their shoulders. They don’t throw others under the bus, they don’t speak in harsh tones, and they take responsibility for their actions. In essence, they act like Organi sational Skills Software applications require that a great deal of code be written. At times, the amount of code that a programmer is responsible for can be overwhelming. That’s why, to ensure that they always have everything generally under control, your programmer needs good organisational skills. Not only should your prospective programmer write clean and well-documented code, but they should also keep track of his or her responsibilities by recording progress throughout a project. Good organisational skills ensure that everyone working on a project understands what needs and doesn’t need to be completed. When programmers don’t have good organisational skills, they need to have their hands held through everything. This puts undue responsibility on other members of a programming team, slowing down progress. 10. Ability to Work with a As was alluded to a few times above, a comfortable work environment is of utmost importance. If a workplace possesses a hostile environment, employees at that workplace are going to struggle to perform to their How do you ensure a comfortable workplace environment? By hiring employees who are well-mannered, respectful, and capable of working well with a team. Almost all professional programming is performed in a team environment. Programmers must work together to sync up on different parts of a given application. If even one team member demonstrates an apathy to working with others, the team will run the risk of rolling off the tracks. This is because a lack of teamwork results in a lack of communication. Need Help Finding a Suitable Now that you know what makes a good programmer, you might need some help finding one. If so, you’re in the right place. DIVY is one of the premier job recruitment companies in all of Australia. Utilising a variety of sophisticated methods, we do everything we can to connect qualified candidates with qualified businesses. If you need a good programmer, we can help you. today to discuss your needs!
OPCFW_CODE
cannot got the same result after resize and normalize in djl and pytorch Description I trainned a image classification model in pytorch and deploy the pt model in java with pt. when i check the result between pytorch and djl, I found that I cannot got the same result. I test every step and found that I cannot got the same result after resize and normalize. My Java code: Image image = ImageFactory.getInstance().fromInputStream(new FileInputStream("E:\\01-python\\02-pytorch\\kaixing\\data\\57.jpg")); ImageClassificationTranslator.Builder builder = ImageClassificationTranslator.builder(); builder.addTransform( new Resize(160, 120)); builder.addTransform(new ToTensor()); builder.addTransform(new Normalize(new float[]{0.485f, 0.456f, 0.225f}, new float[]{0.229f, 0.224f, 0.225f} )); Translator<Image, Classifications> translator = builder.build(); NDList ndList = translator.processInput(new TranslatorContext() { @Override public NDManager getNDManager() { return NDManager.newBaseManager(); } // ... }, image); System.out.println(ndList.get(0).getShape()); System.out.println(ndList.get(0).get(0, 0, 0)); System.out.println(ndList.get(0).get(0, 0, 1)); System.out.println(ndList.get(0).get(0, 0, 2)); My Java Result (3, 120, 160) ND: () gpu(0) float32 1.3266 ND: () gpu(0) float32 1.3517 ND: () gpu(0) float32 1.3718 My Python code import torch import torchvision from PIL import Image from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True import time import os import numpy as np transform = torchvision.transforms.Compose([ torchvision.transforms.Resize((160, 120)), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.485, 0.456, 0.225), (0.229, 0.224, 0.225)) ]) img = Image.open(r"E:\01-python\02-pytorch\kaixing\data\57.jpg") print(transform(img)) My python result tensor([[[ 1.3242, 1.3755, 1.3927, ..., 0.1083, 0.2282, 0.2624], [ 1.3413, 1.3755, 1.3584, ..., 0.0912, 0.1768, 0.2624], [ 1.3927, 1.3927, 1.3755, ..., 0.0741, 0.1597, 0.2624], ..., [-2.1179, -2.1179, -2.1179, ..., -1.5014, -1.5357, -1.5014], [-2.1179, -2.1008, -2.1008, ..., -1.5528, -1.6384, -1.6042], [-2.0494, -1.9295, -1.9467, ..., -1.5870, -1.5870, -1.6042]], [[ 1.3606, 1.3957, 1.4482, ..., 0.0651, 0.2227, 0.2402], [ 1.3782, 1.4132, 1.4307, ..., 0.0651, 0.1702, 0.2402], [ 1.4307, 1.4482, 1.4657, ..., 0.0826, 0.1702, 0.2752], ..., [-2.0007, -2.0182, -2.0182, ..., -1.3704, -1.4055, -1.3704], [-2.0007, -2.0007, -1.9832, ..., -1.4230, -1.5105, -1.4755], [-1.8606, -1.7556, -1.7906, ..., -1.4580, -1.4580, -1.4755]], [[ 2.2070, 2.2418, 2.3115, ..., 1.0741, 1.1264, 1.1264], [ 2.2244, 2.2593, 2.2767, ..., 1.0566, 1.0741, 1.1264], [ 2.2418, 2.2941, 2.3115, ..., 1.0392, 1.0741, 1.1612], ..., [-0.9826, -1.0000, -1.0000, ..., -0.3551, -0.3900, -0.3551], [-0.9477, -0.9477, -0.9651, ..., -0.4248, -0.5294, -0.4946], [-0.8257, -0.7211, -0.7560, ..., -0.4597, -0.4771, -0.4946]]]) Expected Behavior got the same product within java and python How to Reproduce? run both java and python code with a same jpg file Environment Info java: djl: 0.9.0 djl-pytorch: 1.7.0 python: python: 3.7 pytorch: 1.6.0 Could you try to use sum to check the tensor value? The Resize in PyTorch python is using ImageIO API, while we are using PyTorch interpolate C++ API. If the sum value doesn't have a huge difference. It should be fine. Could you try to use sum(input) to check the tensor value? The Resize in PyTorch python is using ImageIO API, while we are using PyTorch interpolate C++ API. If the sum value doesn't have a huge difference. It should be fine. the sum result in java: 39245.32675280833 the sum result in python: 39299.0039 @jinlong-hao yeah. the difference is around 0.1%. It shouldn't have much impact on prediction result close the issue as there is no pending action item. feel free to reopen the issue if you have other question
GITHUB_ARCHIVE
Q3 goal: F1 release with Facebook, Twitter, Gmail Things we're deferring - long tail UI, e.g. long-tail with status.net next release - long tail email - Publishing OAuth app secrets (in particular Twitter) - Privacy Review (Sid - to be scheduled) - Shane to walk Sid through the differences w/ older F1 architecture - Security Review (curtisk) - happened, mostly fine, need to schedule followup items (e.g. injection attack review) - will contact curtisk and he will parse out to others as needed - AMO Review (to be scheduled) - DavidA gave Jorge a heads-up, should send him a xpi as soon as the gmail smtp flow is working. - UDC Review - No server-side user data, so not an issue (but need to fill in form). - Coordination with OpenWebApps: Recommendations from MikeHanson and BenAdida - expect that 70% chance no OWA add-on release this quarter, 30% chance release with no WebActivities support. - release add-on with no in-web-content APIs exposed - name your add-on something other than openwebapps - if you use navigator.apps, rename that to navigator.f1apps for now. - we'll have to figure out merging with OWA add-on in a few weeks. - Currently works - Currently works Shane to make skeleton OWA app Mark & shane finishing gmail xauth smtp. - Left to do: - figure out thumbnail inclusion (mime packaging work) - Not sure how Google will like lots of SMTP calls - Not sure how problematic access to SMTP ports is on the interwebs - No clear way to detect errors in the field. ideas? Non-Service Specific Dev Tasks Change branding to be "Firefox Share Alpha" ? Fix URI generation in about:apps [Shane] - Fix API injection [Shane w/ Myk's input] (will fix broken-in-new-windows bug) - Strip xpi down to size [shane] Make sure that logout really removes all credentials in localStorage - UX minor bits - style logout link a bit better change + tab to say that we'll have a way for adding arbitrary third party services. Disable drive-by OWA installations [Shane] - Deferred: Adapt the OWA panel styling to adopt the F1 style [James] - Deferred: fix UX for about:apps - Deferred: fix UX for drive-by popup DONE: Bryan to write up usertesting.com scenario (by EOD Monday) - Shane et al to run usertesting.com tests (by when?) - Questions to answer: - Does the OWA "stuff" get in the way of the Firefox Share experience? - What's confusing? - Blog Post [david, w/ havi's help] - We're branding as Firefox Share (alpha) - Explain how this add-on is different from previous version - Explain why we're targeting a few major sites, and that we're planning long-tail in a subsequent release - Explain about client-side blah blah - Note: don't have multi-account setup in this version, sorry - Explain no data migration -- need to resetup, re-auth - Lots of UI polish still TBD - Nearly no testing on non-mac platforms so far - AMO team extremely busy
OPCFW_CODE
|Download Help (Windows Only)| Each database contains one or more clusters, where the cluster represents a collection of hardware products all connected over a shared cabling harness. In other words, each cluster represents a single CAN network or FlexRay network. For example, the database may describe a single vehicle, where the vehicle contains a Body CAN cluster, a Powertrain CAN cluster, and a Chassis FlexRay cluster. Use the XNET Cluster I/O name to select a cluster, access properties, and invoke methods. For general information about I/O names, such as when to use them, refer to NI-XNET I/O Names. When you select the drop-down arrow on the right side of the I/O name, you see a list of all clusters known to NI-XNET, followed by a separator (line), then a list of menu items. Each cluster in the drop-down list uses the syntax specified in String Use. The list of clusters spans all database aliases known to NI-XNET. If you have not added an alias, the list of clusters is empty. You can select a cluster from the drop-down list or by typing the name. As you type a name, LabVIEW selects the closest match from the list. Right-clicking the I/O name displays a menu of LabVIEW items and items specific to NI-XNET. The XNET Cluster I/O name includes the following menu items (in right-click or drop-down menus): If you are using LabVIEW Real-Time (RT), you can right-click the RT target within LabVIEW Project and select the Connect menu item. This connects to the RT target over TCP/IP, which in turn enables the user interface of NI-XNET I/O names to operate remotely. If you open the Manage dialog while connected to an RT target, the dialog provides features for reviewing the list of databases on the RT target, deploying a new database from Windows to the RT target, and undeploying a database (removing an alias and file from RT target). Use one of two syntax conventions for the string in the XNET Cluster I/O name: The first syntax convention is the most complete, in that it contains the name of a database alias, followed by a dot separator, followed by the name of the cluster within that database. Use this syntax with FIBEX files, which contain multiple named clusters. The second syntax convention uses the database alias only. This is supported for CANdb (.dbc), LDF (.ldf), and NI-CAN (.ncd) files, which always contain a single unnamed cluster. Lowercase letters (a–z), uppercase letters (A–Z), numbers, underscore (_), and space ( ) are valid characters for <alias>. Period (.) and other special characters are not supported within the <alias> name. Because the <alias> is used as the filename portion of an internal filepath (that is, absolute path and file extension removed), it must use the minimum file conventions for all operating systems. The alias name is not case sensitive. Lowercase letters (a–z), uppercase letters (A–Z), numbers, and the underscore (_) are valid characters for <cluster>. The space ( ), period (.), and other special characters are not supported within the cluster name. The cluster name must begin with a letter (uppercase or lowercase) or underscore, and not a number. The cluster name is limited to 128 characters. The cluster name is case sensitive. For FIBEX (.xml) and AUTOSAR (.arxml) files, the <cluster> name is stored in the database file. For CANdb (.dbc), LDF (.ldf), or NI-CAN (.ncd) files, no <cluster> name is stored in the file, so NI-XNET uses the name Cluster when a name is required. You can use the XNET Cluster I/O name string as follows: You can use the XNET Cluster I/O name refnum as follows:
OPCFW_CODE
Paper Review: Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting Lag-Llama is a new foundation model designed for univariate probabilistic time series forecasting, using a decoder-only transformer architecture with lags as covariates. It is pretrained on a diverse corpus of time series data from various domains, showcasing exceptional zero-shot generalization capabilities. When fine-tuned on small subsets of new datasets, Lag-Llama achieves superior performance, surpassing previous deep learning methods and setting new benchmarks in time series forecasting. Probabilistic Time Series Forecasting In univariate time series modelling the dataset comprises of one or more time series, each sampled at discrete time points, with the goal of predicting the future values. Instead of using the entire history of each time series for prediction, a fixed context window is used to learn an approximation of the distribution of the next values, incorporating covariates. Predictions are made through an autoregressive model, leveraging the chain rule of probability, and are conditioned on learned neural network parameters. Tokenization: Lag Features The tokenization process for Lag-Llama involves generating lagged features from prior time series values using specified lag indices that include quarterly, monthly, weekly, daily, hourly, and by the second. These lag indices create a vector for each time value, where each element corresponds to the value at a specific lag. Date-time features across different frequencies, from second-of-minute to quarter-of-year, are integrated to provide supplementary information and help the model understand the frequency of the time series. The resulting tokens comprise the size of the lag indices plus the number of date-time features. However, a limitation of this approach is the need for a context window that is at least as large as the number of lags used (by definition). Lag-Llama uses a decoder-only transformer architecture, based on LLaMA, designed for univariate time series forecasting. The model processes sequences by first tokenizing them along with covariates into a sequence of tokens, which are then mapped to a hidden dimension suitable for the attention module. It incorporates pre-normalization techniques like RMSNorm and Rotary Positional Encoding to enhance its attention mechanism, aligning with the practices of the LLaMA architecture. The transformer layers, which are causally masked to prevent future information leakage, output the parameters of the forecast distribution for the next time step. The model’s training objective is to minimize the negative log-likelihood of this predicted distribution across all time steps. For predictions, Lag-Llama takes a feature vector from a time series, generating a distribution for the next time point through greedy autoregressive decoding. This process allows for the simulation of multiple future trajectories up to a predefined prediction horizon. From these simulations, uncertainty intervals can be calculated, aiding in downstream decision-making and evaluation against held-out data. The final component of Lag-Llama is the distribution head, a layer that translates the model’s learned features into parameters of a specific probability distribution. In their experiments, the creators adopted a Student’s t-distribution, configuring the distribution head to output its three parameters: degrees of freedom, mean, and scale, with special adjustments to maintain the positivity of these parameters. To handle the diversity in numerical magnitudes across different time series datasets during pretraining, Lag-Llama employs a scaling heuristic. For each univariate window, it calculates the mean and variance of the time series within the window and standardizes the time series data by subtracting the mean and dividing by the variance. Additionally, the mean and variance are included as time-independent covariates (summary statistics) alongside each token to inform the model about the input data’s statistical properties. Furthermore, the model adopts a Robust Standardization: normalizing the time series by subtracting the median and scaling by the Interquartile Range, making the preprocessing step more robust to extreme values in the data. During training, the authors use stratified sampling and augmentation technics Freq-Mix and Freq-Mask. Lag-Llama demonstrates strong performance in time series forecasting, comparing favorably with supervised baselines across unseen datasets in both zero-shot and fine-tuned settings. In the zero-shot scenario, it matches the performance of all baselines with an average rank of 6.714. Fine-tuning further enhances its capabilities, leading to state-of-the-art performance in three of the used datasets and significantly improved performance in others, achieving the best average rank of 2.786. This performance underscores Lag-Llama’s potential as a go-to method for diverse datasets without prior data knowledge, fulfilling a foundational model’s key requirement. The experiments suggest that at scale, decoder-only transformers may outperform other architectures in time series forecasting, mirroring observations from the NLP community regarding the impact of inductive bias. Lag-Llama was also evaluated on its ability to adapt to different amounts of historical data, with experiments conducted using only the last 20%, 40%, 60%, and 80% of the data from training sets. Lag-Llama was fine-tuned and consistently achieved the best average rank across all levels of available history, showcasing its strong adaptation capabilities. As the volume of available history increased, so did Lag-Llama’s performance, widening the performance gap between it and baseline models. However, it’s noted that in the exchange-rate dataset, which represented a new domain and frequency not seen during pretraining, Lag-Llama was frequently outperformed by the TFT model, suggesting that Lag-Llama benefits from more historical data in scenarios where the dataset is significantly different from the pretraining corpus.
OPCFW_CODE
|6.5900[6.823] Computer System Architecture - Fall 2023 This course is a study of the evolution of computer architecture and the factors influencing the design of hardware and software elements of computer systems. Topics may include: instruction set design; processor micro-architecture and pipelining; cache and virtual memory organizations; protection and sharing; I/O and interrupts; in-order and out-of-order superscalar architectures; VLIW machines; vector supercomputers; multithreaded architectures; symmetric multiprocessors; memory models and synchronization; embedded systems; and parallel computers. Warning: All information on this website is subject to change. Though we send messages to the class in case of a change, please do check the course web site in case of doubt. Lectures: Lectures will be from 1:00PM to 2:30 PM every Monday and Wednesday in room 32-141 Tutorials: A 1-hour tutorial session will be held each week on Friday at 1 PM in room 32-141. The main focus of the tutorial session will be to work through the problem set questions and clarify lectures as necessary. Quizzes will also be given in tutorials, so it is important to avoid any recurring conflict with the tutorial time. Additional tutorials will be held in an evening before each quiz. Office Hours: See the staff page for details. Problem Sets: The subject is divided in modules, each covering a set of related topics. There is a set of online problems related to each module. The best way to prepare for the quizzes is to work on these problems. Although problem solutions do not have to be handed in (and consequently, are not graded), it is essential that students become thoroughly familiar with the material. Many quiz questions will assume knowledge of detailed machine descriptions provided in the problem sets. Students are encouraged to work in groups to discuss the problem sets, then to individually write out complete solutions prior to examining the online solutions. It is our goal to make each problem interesting and illustrative of some aspect of computer design. However, every problem is not equally important to prepare for the quiz. We will also provide sample quizzes from previous years, which will show the typical structure of quiz questions. Students are encouraged to bring their solutions to the tutorials for discussion, especially if the online solutions are missing or if the student has a different solution than the one posted on the website. Laboratory Exercises: There will be four Laboratory Exercises that will explore the concepts taught in lecture using industrial-strength tools. Two to three weeks will be allotted for the completion of each lab. To allow proper time to study for the following quiz, extensions will only be granted in extreme cases. Laboratory exercises are to be completed individually, but comparing results and discussing course concepts covered in the laboratories is encouraged. Quizzes: In the first lecture, a prerequisite self-evaluation quiz will be handed out. This must be handed back in the tutorial session of the same week. This quiz should be used by you to assess your preparation for the course. You must work individually on this quiz and turn in your own solutions. There will be three one-and-half-hour quizzes, generally scheduled during the tutorial time on Fridays. The quizzes will focus on one section of the course, but can draw upon material from any part of the course to date, including problem sets, laboratory exercises, and assigned readings. All quizzes are closed book. Grades: 75% of the grade will be based on the three quizzes, equally weighted. The remaining 25% of the grade will be based on four laboratory exercises. Collaboration and Academic Honesty Policy: Students must not discuss a quiz's contents with other students who have not yet taken the quiz. If prior to taking it, you are inadvertently exposed to material in a quiz - by whatever means - you must immediately inform the instructor or a TA. You must turn in your own solutions to the self-evaluation quiz. Any violation of this policy will be treated severely. Collaboration among students to understand the course material and problem sets is strongly encouraged. Laboratory exercises should be completed individually. Course Reading Material: Computer Architecture: A Quantitative Approach: 6th Edition by J. L. Hennessy and D. A. Patterson is the main textbook used in this course. The MIT library has physical copies and gives online access to the 5th edition. If you prefer to have your own copy, Amazon sells the book for about $75. We also provide the equivalent readings for the 2nd, 3rd, 4th, and 5th editions of this book to allow you to use a secondhand copy. In previous years, some students found that the lecture notes were sufficient to learn the material and that the textbook was unnecessary, but we nevertheless recommend the book as a good reference guide. You may also want to refer to Computer Organization & Design: The Hardware/Software Interface by D. A. Patterson and J. L. Hennessy to review background material. Supplemental readings from selected papers may also be assigned throughout the semester. Additionally, the network-on-chip lectures use Principles and Practices of Interconnection Networks by William J. Dally & Brian Towles. Morgan Kaufmann Publishers Inc. This book is on reserve at the library, for students to checkout, and also avialable online. Online Resources: Please check for announcements, clarifications to assignments, and answers to common questions on Piazza: http://piazza.com/mit/fall2023/65900 You can also contact all the course staff via Piazza.
OPCFW_CODE
Oracle DBA Basics I'm aware this may be a run of the mill, often asked question so please bear with me. I'm basically looking at moving into training up as an Oracle DBA. My background is 8 years of Windows Technical Support - forgive me I don't see a lot of future in this and I am seriously looking at moving towards an Oracle DBA as I see it as much more challenging and more likely to provide a good future. If anyone out there could give me a few pointers on how to get started eg. good beginner books, websites or any other relevant information I would be eternally gratefull! I have an understanding of Relational Databases, but not in the Oracle world. "There is a difference between knowing the path and walking the path." Re: Oracle DBA Basics Originally posted by jboyes ....moving towards an Oracle DBA as I see it as much more challenging and more likely to provide a good future..... I don't think oracle DBA job is going to provide any secure solid future any more, there are enough dba's with experience and certs and they're on a lookout too. It won't be any challenging any more, I mean as a dba you'll start rusting when you'll just have to use a tool to let it tell you what to do. I think the DBA jobs are on the wane. I think you need to anticipate a little here, get some other os's experience like linux/unix etc You said you have an understanding of rdbms then read a concepts manual first, install the software on you different platforms and start practicing. I'm a JOLE(JavaOracleLinuxEnthusiast) --- Everything was meant to be I agree with Tarry the tide is turning for the traditional DBA, in the current market place your exepected to know a lot more than just core DBA skills. A strong understanding of platforms and development technologies are also required in many roles. Thats without the additional flood of products such as iAS, IFS and portal which also seem to fall under the DBA realms of responsibility in many organisations. Oracle Certified Professional "Build your reputation by helping other people build theirs." "Sarcasm may be the lowest form of wit but its still funny" Click HERE to vist my website! I agree on that too. Nowadays, the market demand for Oracle DBA is not as good compare to the past 2 years. In my country, i have saw there's a drop on pay packet for Oracle DBA. Is not as high compare to last time. To keep on moving in this industry, knowing Oracle more or less will help the person on the jobs, but it depends on the person job scope. Things keep changing everyday, we have to prepare ourselves to adpat to this kind of environment. MCSE,MCDBA,MCSD,CCNA,CCDA,CCSA,CCSE, MCSA, SCSA, OCP Click Here to Expand Forum to Full Width
OPCFW_CODE
Note: Don't try to generate an advertisement video URL tags it will give error. Chek and get the Snaptube app for android. Discussed about snaptbe apk and its latest version and featuers. Know more! iTube APK Download is a Free YouTube Downloader App & Background Player That Allows You To Download Any Video From YouTube Install iTube For Free. Tillmanstory is a Tech Blog Here we will Write about Android Apps and Raw APK File and Tricks and Tips and About Movie sites and Proxys Tillman story youtube multi downloader free download. Youtube to Mp3 Downloader App - Youtube Direct Youtube To Mp3 Downloader App Link ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ youtube playlist downloader for android free download. Youtube to Mp3 Downloader App - Youtube Direct Youtube To Mp3 Downloader App Link ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ downloader free download. Free Manga Downloader The Free Manga Downloader (FMD) is an open source application written in Object-Pascal for managing download youtube downloader for android android, youtube downloader for android android, youtube downloader for android android download free. en. Android. Multimedia. Video and Audio Downloaders. Quickly and easily download YouTube music and HD videos . Whatsapp Inc . WhatsApp Messenger . Lots of people are using this app and enjoying the smooth functionality of the app. Download and use this application for downloading your favorite videos. Vidmate is a lightweight and powerful HD video downloader apk. Download unlimited movies, videos, songs, games, video status with the help of vidmate app.Youtube App | Download & Use Nowiosphere.online/mobile/apps Youtube To Mp3 Converter We are unable to download our Fvdtube APK from Mobile YT3 - YouTube Downloader. 16 145 To se mi líbí · Mluví o tom (133). Music & Video Downloader for Android Write About youtube downloader Apk: In the present world, the Internet has become the most crucial thing in anyone’s life.… Tubemate 2.2.7 Free Download | TubeMate Old Version (Safe Apps Für Android, Free Download Download TubeMate YouTube Downloader 2.4.3 build 713. Download free and best APP for Android phone and tablet with online apk downloader on APKPure.com, including (tool apps, shopping apps, communication apps) and more. Download YouTube Downloader for Android 6.8. Download YouTube videos in any format you want. Dentex YouTube Downloader is an app that will let you download YouTube Download music and videos from YouTube, Facebook and many other sites Get the official YouTube app for Android phones and tablets. See what the world is watching -- from the hottest music videos to what’s trending in gaming, entertainment, news, and more. Subscribe to channels you love, share with friends, and watch on any device. With a new design, you can have fun exploring videos you love more easily and quickly than before. Get the official YouTube app for Android phones and tablets. See what the world is watching -- from the hottest music videos to what’s trending in gaming, entertainment, news, and more. Subscribe to channels you love, share with friends, and watch on any device. Download YouTube Latest APK v12.49.55. This is the official app of the popular video hosting website YouTube where you can find millions of videos, upload your own video and subscribe to other channels. 立即在Aptoide上下载适用于Android的Youtube Video Downloader - Snaptube!无需额外付费。Youtube Video Downloader - Snaptube的用户评分:4.48 Snaptube Official Website - Get the newest Snaptube apk and free download music and HD video from YouTube, Facebook, DailyMotion and Instagram, etc Download TubeMate, Vidmate, Snaptube app and best downloader apps for android. OG youtube downloader app apk 2016 latest version free download for non root android users, PC, iOS, blackberry and MAC with review and tutorial. Do you use a LOT of YouTube? Perhaps like to listen to songs on the go? Then you will need TubeMate APK, an Android app that will allow you to easily download FLVto youtube downloader apk is an amazing converter for youtube user.FLVto is an awesome Youtube converter mp4,mp3, which is very helpful to convert the Youtube to Mp3 Downloader App - Youtube Web Site. Manage your SEO, Advertising, Content, and SMM all with SEMrush. youtube to mp3 converter apk Download Tubemate Youtube Downloader For Android. Tubemate Apk Direct Link. #No. 1 Youtube Video Downloader. Get it now. Tubemate app for free. android mp3 downloader android mp3 downloader apk android mp3 downloader 2014 android mp3 How to use Tubemate youtube downloader app | Download Videos Download Youtube Downloader . Free and safe download. Download the latest version of the top software, games, programs and apps in 2020. 8 Feb 2019 TubeMate 2 is a free mobile application that allows you to download videos from YouTube. The app will enable you to save your favourite
OPCFW_CODE
What is Supabase and how to work with it I’m talking about the Supabase project – a progressive database that can become a full-fledged alternative to platforms from Google and other providers. Briefly about databases A database (DB) is a repository of information for your services, websites, applications, offline businesses, and in general any type of content that needs more or less strict categorization and convenient sorting tools. Databases are used in almost every digital product you use: Notion’s note list is a database, Apple Music’s playlist is a database, even Community articles are stored in a database. Therefore, if you are a developer (and even a front-end developer), it is important to be able to work with the database at least at a primitive level. But the database itself is a complex structure that requires many man-hours to master it at the proper level. But other developers have taken care of you, creating services that make it much easier to store information on the network, protect it, and transfer it to third-party applications. Supabase is just one of these services that turns complex into simple in just a couple of clicks. What is Supabase? It is a relational database based on the same technologies as PostgressSQL, one of the most popular and reliable databases in the world. But such a description is clearly not enough for Supabase, because this is a large project that includes much more interesting solutions than it might seem at first glance. Supabase is a free analogue of Firebase, a multifunctional platform that combines several important software solutions and simplifies their implementation to an extremely primitive level, so that even beginners in development can easily add functions such as authorization, file storage, content updating to their applications or sites. on the site in real time, etc. And you can try it all for free. Developers require money from the user only when the number of requests reaches certain values. That is, at the development stage, you will not have to pay for the database, all the features of Supabase can be tested on your own, and not rely on reviews and demos. Let’s take a closer look at each aspect of Supabase. As noted above, Supabase is a relational database using SQL syntax. SQL is a special language focused on interacting with databases. It allows you to create special commands that force the mechanisms built into the database to either read information or add it (and perform dozens more actions, depending on the complexity of the command and the number of add-ons). Supabase is open source, and therefore anyone can analyze it, which has a positive effect on the reliability of the service as a whole. Of the features of the database, it is worth highlighting adequate support for web sockets, which allows you to track the appearance of new data in the database in real time and immediately reflect them in the application interface. In practice, this functionality can be used to create an analogue of Twitter, where it is important to always reflect up-to-date information. Supabase also boasts a convenient visual table editor. With it, you can add new information to the database without using program code. Outwardly, it looks more like Microsoft Excel – you just add columns with data types, and then write this data into rows. The interface is obvious and familiar to everyone who has ever worked with tables (not even in the database). An important advantage of Supabase is the built-in authorization function, implemented at an almost perfect level. This is the most convenient authorization service combined with a database that a novice (and not only) developer can find. - The developers offer their own API system for creating new profiles and logging in. No need to write your own code or connect third-party utilities. Is everything ready. It is enough to copy the code from the Supabase documentation, and now your site or application already has a fully functional and secure authorization window. - Supabase supports login with social media profiles and third party providers. For example, it supports Sign in with Apple, GitHub login or Google profile, Twitter, Slack, Discord, and many other platforms. - Authorization is closely related to the database, so the developer has the opportunity to clearly set up a security policy and, without leaving Supabase, set rules for users (to allow viewing or creating content, for example). - The Magic Link feature allows you to log in by email without using a username or password. Just ask the user to enter an email, and in a moment they will receive a login link without the need to provide additional data. Also, the creators of Supabase took care of React developers and created a whole series of ready-made components that you can immediately add to your program without much work on the code. Supabase comes with its own file storage that can be connected to the database to display content in the application interface that is not “suitable” for storing in tables. For example, you can add image files to Supabase, even at the stage of loading them into the database, create special links and assign them to articles, comments, profiles in Supabase tables. Thus, it is possible to bind files from the repository to records from the database, creating a seamless system. Naturally, as in the case of authorization, the developers have provided all the code for you and created a bunch of ready-made commands for managing files. Loading, deleting, changing and displaying them in the interface is nowhere easier. Just copy a couple of lines from the documentation and substitute your own values. In the near future, the Supabase developers want to add a few more expected features to the store: - CDN servers so that file data reaches users faster. - Transformations – mechanisms for transforming uploaded images and documents in order to reduce their size before “shipping” into the application. Also, file storage is directly related to Supabase’s authorization mechanisms, which means that data security policies work for them as well. Unauthorized access excluded. So far, this additional Supabase service is not available to the general public, but it already promises to be interesting, since in theory it will become an analogue of functions in Netlify and other cloud platforms. The idea of ”functions” is to get rid of the server. Instead of renting your own VPS and setting up a backend to run any procedures and operations, Supabase will offer you its own resources to run specific functions. For example, you have a website where the list of prices is updated daily due to some external function (parsing or API). And this function must be run on the server (either using your own scripts or using the cron scheduler). Supabase will allow the same functionality to be written into its production environment without the need to rent a separate server and set up a shell program to run a single procedure. What can be created with Supabase? Supabase is a fairly versatile product that includes many of the components needed to build complete applications. Therefore, the list of types of programs that can be created based on Supabase is quite wide. - Develop an online store, as Supabase is based on PostgreSQL , and this platform is great for creating massive, complexly structured databases. - Create your own equivalent of Twitter or online chat, as Supabase supports web sockets and allows you to exchange information through a real-time database. - Launch a full-fledged blogging platform with its own backend instead of WordPress and other CMS. Supabase offers both a convenient system for issuing privileges to users and a built-in file storage for this. - To dabble in creating full-fledged clones of such popular services as Trello or Notion. The Supabase service is suitable for any task and at the same time it is designed in such a way as to remove almost all the work and responsibility from the developer. Therefore, the creation of the above applications and sites will not seem to you something unbearably difficult or too long. All the hard backend work has already been done for you. Basic instructions for setting up Supabase Getting started with Supabase is easy. It is enough to follow the simple instructions described in the official documentation of the service. - Create your own account on Supabase.com or sign in with your GitHub account. - Immediately after that, we get into the Supabase application, where you need to click on the New Projectbutton . - We indicate personal data, the name of the database, your location (you can specify any). - In the next window, click on the Create new tablebutton and enter the name of the table. You will see an interface that resembles Excel or Google Sheets. Here we can manually enter the necessary information. For example, if we wanted to create a table with articles, we would make several columns (by clicking on New column ) and name them Title and Content (the title and content of the article). And then we would add rows (by clicking on Insert row ) to create a new data object inside the table and fill it with information according to the columns used. And in this way you can build a table for any kind of content. To see how you can interact with tables programmatically from your application, you need to open the API section at the bottom of the sidebar of the Supabase interface. There is a list of commands for connecting to the database from third-party projects, and for managing information in the database. Once connected, you can use other Supabase features without further configuration. You can use authorization mechanisms and storage using ready-made commands from the API that the Supabase developers have prepared for you. It remains only to find hosting directly for your site or application. Instead of a conclusion That’s all. Supabase is a powerful product that replaces several third-party services at once, without which almost no modern application can do. At the same time, Supabase maintains ease of use, convenience, and security. This service can become a full-fledged basis for your front-end project. You will not notice any disadvantages in comparison with analogues in the spirit of Firebase and you will not spend a lot of time studying the features of the backend before launching your own project on the web.
OPCFW_CODE
Learn how passwords can be stored without a risk of leaking them in this tutorial by Alessandro Molina, a Python developer since 2001 and currently the core developer of the TurboGears2 web framework and maintainer of Beaker Caching/Session framework.While cryptography is generally perceived as a complex field, there are tasks based on it that are a part of everyday lives as software developers, or at least they should be, to ensure a minimum level of security in your code base. This article tries to cover one of the most common task – hashing passwords – that can help make your software resilient to attacks. While software written in Python will hardly suffer from exploitation, such as buffer overflows (unless there are bugs in the interpreter or compiled libraries you rely on), there are still a whole bunch of cases where you might be leaking information that must remain undisclosed. How can passwords be stored without a risk of leaking them? Avoiding storing passwords in plain text is a known best practice. With software, usually, only needs to check whether the password provided by the user is correct and the hash of the password can be stored and compared with the hash of the provided password. If the two hashes match, the passwords are equal; if they don’t, the provided password is wrong. Storing passwords is a pretty standard practice, and usually, they are stored as a hash plus some salt. The salt is a randomly generated string that is joined with the password before hashing. Being randomly generated, it ensures that even hashes of equal passwords get different results. The Python standard library provides a pretty complete set of hashing functions, some of them very well-suited to storing passwords. How to do it… Python 3 introduced key derivation functions, which are especially convenient when storing passwords. Both pbkdf2 and scrypt are provided. While scrypt is more robust against attacks as it is both memory- and CPU-heavy, it only works on systems that provide OpenSSL 1.1+. While pbkdf2 works on any system, in a worst-case scenario, a Python-provided fallback is used. So, while from a security point of view scrypt would be preferred, you can rely on pbkdf2 due to its wider availability and the fact that it’s been available since Python 3.4 (scrypt is only available on Python 3.6+): import hashlib, binascii, os """Hash a password for storing.""" salt = hashlib.sha256(os.urandom(60)).hexdigest().encode('ascii') pwdhash = hashlib.pbkdf2_hmac('sha512', password.encode('utf-8'), pwdhash = binascii.hexlify(pwdhash) return (salt + pwdhash).decode('ascii') def verify_password(stored_password, provided_password): """Verify a stored password against one provided by user""" salt = stored_password[:64] stored_password = stored_password[64:] pwdhash = hashlib.pbkdf2_hmac('sha512', pwdhash = binascii.hexlify(pwdhash).decode('ascii') return pwdhash == stored_password The two functions can be used to hash the user-provided password for storage on disk or into a database ( hash_password ) and to verify the password against the stored one when a user tries to log back in ( verify_password ): >>> stored_password = hash_password('ThisIsAPassWord') >>> verify_password(stored_password, 'ThisIsAPassWord') >>> verify_password(stored_password, 'WrongPassword') How it works… There are two functions involved here: - hash_password : Encodes a provided password in a way that is safe to store on a database or file - verify_password : Given an encoded password and a plain text one is provided by the user, it verifies whether the provided password matches the encoded (and thus saved) one hash_password actually does multiple things; it doesn’t just hash the password. The first thing it does is generate some random salt that should be added to the password. That’s just the sha256 hash of some random bytes read from os.urandom . It then extracts a string representation of the hashed salt as a set of hexadecimal numbers ( hexdigest). The salt is then provided to pbkdf2_hmac together with the password itself to hash the password in a randomized way. As pbkdf2_hmac requires bytes as its input, the two strings (password and salt) are previously encoded in pure bytes. The salt is encoded as plain ASCII, as the hexadecimal representation of a hash will only contain the 0-9 and A-F characters. While the password is encoded as utf-8 , it could contain any character. (Is there anyone with emojis in their passwords?) The resulting pbkdf2 is a bunch of bytes, as you want to store it into a database; you use binascii.hexlify to convert the bunch of bytes into their hexadecimal representation in a string format. Hexlify is a convenient way to convert bytes to strings without losing data. It just prints all the bytes as two hexadecimal digits, so the resulting data will be twice as big as the original data, but apart from this, it’s exactly the same as the converted data. In the end, the function joins together the hash with its salt. As you know that the hexdigest of a sha256 hash (the salt) is always 64 characters long, by joining them together, you can grab back the salt by reading the first 64 characters of the resulting string. This will permit verify_password to verify the password and verify whether the salt used to encode it is required. Once you have your password, verify_password can then be used to verify provided passwords against it. So it takes two arguments: the hashed password and the new password that should be verified. The first thing verify_password does is extract the salt from the hashed password (remember, you placed it as the first 64 characters of the string resulting from hash_password). The extracted salt and the password candidate are then provided to pbkdf2_hmac to compute their hash and then convert it into a string with binascii.hexlify . If the resulting hash matches with the hash part of the previously stored password (the characters after the salt), it means that the two passwords match. If the resulting hash doesn’t match, it means that the provided password is wrong. As you can see, it’s very important that you make the salt and the password available together, because you’ll need it to be able to verify the password and a different salt would result in a different hash and thus you’d never be able to verify the password. If you found this article interesting, you can explore Alessandro Molina’s Modern Python Standard Library Cookbook to build optimized applications in Python by smartly implementing the standard library. This book will help you acquire the skills needed to write clean code in Python and develop applications that meet your needs.
OPCFW_CODE
HP Pavillion dv6000 (Laptop) Virtumonde (only Spybot would find it- always comes back right away) Was using outdated (behind 3 weeks or so) AVG (think infected) Was using outdated (behind 3 weeks) Spybot Have Windows defender and Updates (were outdated) DESCRIPTION OF PROBLEMS Hi- I'm pretty lost. Willing to buy removal software- just don't know what works. Have searched manually using 'manual removal' lists but computer locks up and says error when I try to remove dll -type files- windows stops working goes black, even when I search for some files (.dll, I think). Even windows update and Defender seems wonky- AVG absolutely 'odd' sometimes if I 'repair' Spyware Doctor or reinstall Maleware Antimaleware- I can get a 'scan'. Just now RootRepeal is freezing and refusing to work (first time it tried kept saying "API Locked" or something like that in description. Tried to scan first time but froze part way through. I'll attach Hijack log; and DDS files (2); RootRepeal frozen won't start again even after exit and restart.. Began about 3 months ago- Screen would flash now and then- "Your Infected" "Go Here" I ignored the sites- but then internet would start 'locking up'- I have INCREDIBLY slow- dial-up anyway (3 kb)- way out in the boonies- it would just sit on blank page with '20 percent loaded' - forever. (seems almost normal considering speed). Went to library searched and searched -tried solutions: download Windows Vista updates. AVG updates, Spybot updates. Spybot would scan and say Virtumonde. So I downloaded Maleware, Spyware Doctor, SuperAnti Spyware, FixVundo- nothing seemed to work- Most won't even find- some find 3 or 4 'viruses during a scan'- but any attempt to remove crashes or they just come right back in cookies. For searches I'm told I have 'no access', hidden files turned on, etc. Forgot to mention, have 'screen flashes' now and then like a DOS window is opening 1, 2, 3 times and closing too quick to see- doing something on its own Direction, advice, the best antivirus to buy (does it work for removal? - there are many voices- but everyone is selling something - I don't know to believe), etc. Anything (at all) would be great! May Gods Grant Blessings - as your time is appreciated P.S. Uploaded HiJack File too- as nothing seems to work-
OPCFW_CODE
The visible property not only hides the button during run time, but also in design. Instead transitions on visibility delay hiding an element. The first thing we might think of doing is using both the opacity property and the display property. according to law and tradition:. nodeProperty public final ObjectProperty nodeProperty(). Transition on Hover. · Trump’s administration visible transitions according to a property is refusing to cooperate with Biden’s transition team, withholding federal resources that normally flow freely after a campaign ends. You want to make an element gradually disappear visually from the page. Personally, I feel the use of setTimeout is a bit lame and hacky. The transition-property CSS property sets the CSS properties to which a transition effect should be applied. When using variants, the transition can according be scheduled in relation to its children with either "beforeChildren" to finish this transition before starting children transitions, "afterChildren" to finish children transitions before starting this transition. · The transition-property accepts the name of the CSS property that we want to watch out for changes on according and whose change process we want to transition. See full list on developer. Make sure to include the PostCSS plugin after Tailwind in your list of PostCSS plugins:. Note that currently only Chrome, Edge, and Firefox support focus-visible natively, so for sufficient browser support you should install and configure both the focus-visible JS polyfill and the focus-visible PostCSS polyfill. The specification recommends not animating from and to auto. 4 visible transitions according to a property violent crimes per 100,000 people. The strong UV-visible absorption bands of the heme originate from the π→π* transitions, providing information about the type of heme, the oxidation, and the spin state of the central iron ion. resize itself according to the viewport. The visibility property sets or returns whether an element should be visible. CSS Transitions are controlled using the visible transitions according to a property shorthand transition property. The need to animate the display property comes from visible transitions according to a property wanting to solve the following problem: 1. But Trump, who has not formally conceded to Biden and may never continued to sow doubt about the vote, despite his own administration’s assessment that it was conducted without widespread. · Laws and visible transitions according to a property customs guide presidential transitions — but some go off the rails anyway. See more on w3schools. . 5s linear; div:hover > ulvisibility: visible; opacity: visible transitions according to a property 1; Item 1 Item 2 Item 3 . There are a few possible solutions to this on StackOverflow, but I find most, if not all, of the proposed solutions don’t really solve the prob. 1 visible transitions according to a property degrees of each other, forming the first visible "double planet" in 800 visible transitions according to a property years. (which means to get the button back I either need to use the advanced property pane, or find a way to visible transitions according to a property make it visible again in run time). On change of the property, the slot will animate according to the other props. The auto value is often a very complex case. selector visible transitions according to a property overflow: hidden; // Hide the element content, while height = 0 height: 0; visible transitions according to a property opacity: 0; transition: height 0ms 400ms, opacity 400ms 0ms; selector. When the d-level is not completely filled, it is possible to promote and electron from a lower energy d-orbital to a higher energy d-orbital by visible transitions according to a property absorption of a photon of electromagnetic radiation having an appropriate energy. It is similar to the display property. Sets whether or not stack will interpolate visible transitions according to a property its size when changing the visible child. · Jupiter and Saturn will come within 0. If we toggle this “hidden” class using visible transitions according to a property jQuery, we might have code that looks like this:But if we do that, we will not see the visible transitions according to a property transition (defined in our. The following electronic transitions are possible:. Fade-In and -Out by Combining Transitions on Visibility and Opacity The opacity transition creates a nice fade-in and -out effect. But that would be another issue altogether. divborder: 1px solid eee; div > ulvisibility: hidden; opacity: 0; transition: visible transitions according to a property visibility 0s, opacity 0. Property description: The target node of this Transition. · Vue offers several ways to control how an element visible transitions according to a property or component visually appears when inserted into the DOM. CSS3 Transitions are a presentational effect which allows property changes in CSS values, such as those that may be defined to occur on :hover or :focus, to occur smoothly over a specified duration – rather than happening instantaneously as is the normal behaviour. This allows the creation of complex transitions. I wish I had a "on focus" for the screen, so that when the app starts it. · Property crime in the U. Using animations with. The visible transitions according to a property visibility property allows the author to show or hide an element. This enables an item&39;s property values to be changed when it changes between visible transitions according to a property states. visible transitions according to a property As it doesn&39;t make sense to animate some properties, the list of animatable properties is limited to a finite set. Set to false by default. Our HTML might look like this:And our CSS might look like this:Notice we have display: none and opacity: 0 on our “hidden” class. Historically, spectroscopy originated as the study of the wavelength dependence of the absorption by gas phase matter of visible light dispersed by a prism. Take the following example. This is the best way to configure transitions, as it makes it easier to avoid out of sync parameters, which can be very frustrating to have to spend lots of time debugging in CSS. But hey, it works, and I don’t think it’s going to cause any problems unless you had tons visible transitions according to a property of similar animations on the page. . Add a transition effect visible transitions according to a property (opacity and background color) to a button on hover:. By far the most common form of property crime in was larceny/theft, followed by burglary and motor vehicle theft. Instead we see this (which is slightly different in Firefox vs. The color of coordination complexes arises visible transitions according to a property from according electronic transitions between levels whose spacing corresponds to visible transitions according to a property the wavelengths available in the visible light. The toggle property is where you&39;ll store your toggling variable. MDN will be in maintenance mode, Monday December 14, from 7:00 AM until no later than 5:00 PM Pacific Time (in UTC, Monday December 14, 3:00 PM until Tuesday December 15, 1:00 AM). At the same time that class is added, a single event handler. But here is my solution. On Monday, the GSA informed President-elect Joe Biden that the Trump administration is ready to begin the formal transition process, according visible transitions according to a property to a letter from Administrator Emily Murphy, marking the first step the Trump administration has taken to acknowledge President Donald Trump&39;s defeat -- more than two weeks after Biden was declared the. Simply add a visible transitions according to a property transition to the element and any change will happen smoothly:You can play with this here: · You can set transitions on the display: a property by concatenating two transitions or more, and visibility comes handy this time. Discover Transitions Optical photochromic lenses and glasses. Electromagnetic radiations in the visible region of the spectrum often possess the appropriate energy for such transitions. You want to use CSS for the animation, not a library. But property management transitions do happen in this industry, and unfortunately, there is no standard way to approach the process, according to MultiFamily Executive Even though it’ll be difficult to keep the operation running smoothly during a transition, there are steps you can take to make sure performance doesn’t take too visible transitions according to a property much of a hit. The specific semantics of visible transitions according to a property visible transitions according to a property the transition makes sure that when playing an animation to let the element appear, the visibility property is set to visible at once, to make the animation show. Notice that there four transitions in the hydrogen atom that lead to the mission of light in the visible light region: n-6-n-2;n-5-2,n visible transitions according to a property 42:n 3->2 ach of these transitions make up what is called the Balmer series, the visible light spectrum of the ydrogen atom eep in mind, that we are observing only the visible transitions. Property Manager; Assistant Property Manager; The 5 Step Transition Process. There is visible transitions according to a property a possiblity to not set this property:. 9 property crimes per 100,000 people, compared with 379. First, if you’re adding classes like in the examples above, even if the transition worked, you’d have to set up a separate section for removing the classes and reverse how that’s done (i. Importance • Transitions Theory with a focus on people in diverse types of transitions provides a comprehensive and evolving guide for all health-related disciplines. To recap, we can either add an inline display style with the style property, or toggle a class that visible transitions according to a property controls visibility using classList. Instead of callbacks, which don&39;t exist in CSS, we can use transition-delay property. Transitional Properties means such Retail/Other Properties for which (i) the Borrower has delivered a repositioning or redevelopment plan to the Administrative Agent and (ii) such plan demonstrates that at least 50% of the square footage of the applicable Retail/Other Property will be under active redevelopment for some period and will not produce stabile revenue during such redevelopment period. -> Denying clinch transitions -> Qlikview timed transitions
OPCFW_CODE
- 06/25//2016 2:30 pm - 06/25/2016 6:30 pm - Ecoba Restaurant & Bar - B-G-02, Level 1, Menara Bata (Tower B), PJ Trade Centre 8, Jalan PJU 8/8a, Damansara Perdana, 47820 Petaling Jaya, Selangor, Malaysia Seize the Opportunity in Mobile Games Creation The workshop provides a hands’ on overview on how to make web, PC, Mac and mobile consoles such the iPhone with the Unity3D Engine ensuring you get your game noticed! The 1-day workshop begins with an overview of Unity3D and finishes with the creation of several games. The types are 3rd person, side scroller, first person and a puzzle game. We start with how to create optimal content with the Autodesk Entertainment products (props, textures, character and animations). The next step is to make the game fully interactive with the use of scripting, animations and the physics engine the final step is to export as an optimized stand-alone version for the PC-Mac or as web browser game. Outline of the Mobile Games Creation Workshop - Creating Shooting Game - Creating a game for multiple platforms - Changing user interface - Asset management for multiple platforms - Advanced Scripting: Character Walking Around - Advanced Scripting: Scripting AI: ghting enemies - Game optimization - Autodesk BEAST - Visual Occlusion: how can you create high polygon levels? - Programming optimization - 3D Level optimization - Production Tools - Maya and Unity: get the most of the of native le import - Using Mixamo workow - MotionBuilder + Kinect + Unity - Overview Autodesk MotionBuilder - Recording Data with Kinect - Exporting from MotionBuilder to Unity - 2D Game: Rendering to Sprite from 3dsMax/Maya - Rending to sprite for high quality image animation - Sprite Management in Unity - Sale and Marketing Advice - What did change in the last year? - It’s talk about Android from a sale viewpoint and from a development viewpoint - What is Union? Software and Hardware Requirements - Windows Computer XP, Vista or Mac OS - Maya, 3DMax, or Softimage - Unity3D (free version) - p/s: bring along your notebook with pre-installed above software - -A basic understanding of Maya/3Dmax/Softimage is benecial but not required. All scene les are fully provided during the work shop so beginners can follow easily along in the 3D asset creation topic. Programming skills are not required. The workshop aids beginners to quickly get on the correct track to learning script in Unity3D. For more details, visit: www.acapacific.com.my
OPCFW_CODE
After configuring the Amazon Redshift cluster, generally there are a few configurations of the settings can change. The most common one is time zone, because you may want to change the time zone of the cluster to the time zone of the region where you want to set your production environment. Because you might be using some system functions accordingly. But there are more options that you can configure, I just going through some of those options. SHOW command is used to view the current parameter settings. SHOW ALL command is used to view all the settings that you can configure by using the SET command. From SQL client: Values 0 to 100. Default value :10. This comes to use when you want to change the behaviour of how redshift analysis the table while fetching the rows. For example, if a table contains 100,000,000 rows and 9,000,000 rows have changes since the last ANALYZE, then by default the table is skipped because fewer than 10 percent of the rows have changed. To analyze tables when only a small number of rows have changed, set analyze_threshold_percent to an arbitrarily small number. For example, if you set analyze_threshold_percent to 0.01, then a table with 100,000,000 rows will not be skipped if at least 10,000 rows have changed. To analyze all tables even if no rows have changed, set analyze_threshold_percent to 0. set analyze_threshold_percent to 5; set analyze_threshold_percent to 0.1; set analyze_threshold_percent to 0; Values: Format specification (ISO, Postgres, SQL, or German), and year/month/day ordering (DMY, MDY, YMD). Default value: ISO, MDY. This command is used to set the what kind of date format you may want, you can specify the data format based on your requirement. ISO, MDY (1 row) set datestyle to ‘SQL,DMY’; Values: off (false), on (true). Default value: off (false). To set the column names returned by SELECT statements are uppercase or lowercase. If on, column names are returned in uppercase. If off, column names are returned in lowercase. Amazon Redshift stores column names in lowercase regardless of the setting for describe_field_name_in_uppercase. set describe_field_name_in_uppercase to on; show describe_field_name_in_uppercase; DESCRIBE_FIELD_NAME_IN_UPPERCASE Values: -15 to 2 Default value: 0. Sets the number of digits displayed for floating-point values, including float4 and float8. The value is added to the standard number of digits (FLT_DIG or DBL_DIG as appropriate). The value can be set as high as 2, to include partially significant digits; this is especially useful for outputting float data that needs to be restored exactly. Or it can be set negative to suppress unwanted digits. Values: The value can be any character string. Default value: No default. This will come into picture when you start workflows kind of administrative things. This parameter applies a user-defined label to a group of queries that are run during the same session. This label is captured in the query logs and can be used to constrain results from the STL_QUERY and STV_INFLIGHT tables and the SVL_QLOG view. For example, you can apply a separate label to every query that you run to uniquely identify queries without having to look up their IDs. This parameter does not exist in the server configuration file and must be set at runtime with a SET command. Although you can use a long character string as a label, the label is truncated to 30 characters in the LABEL column of the STL_QUERY table and the SVL_QLOG view (and to 15 characters in STV_INFLIGHT). set query_group to ‘Monday’; select * from category limit 1; select query, pid, substring, elapsed, label from svl_qlog where label =’Monday’ order by query; |789||6084||select * from category limit 1;||65468||Monday| |790||6084||select query, trim(label) from …||1260327||Monday| |791||6084||select * from svl_qlog where ..||2293547||Monday| |792||6084||select count(*) from bigsales;||1.08E+08||Monday| Values: ‘$user’, public, schema_names. Default value: ‘$user’, public. This command specifies the order in which schemas are searched when an object (such as a table or a function) is referenced by a simple name with no schema component. The following example creates the schema ENTERPRISE and sets the search_path to the new schema. create schema enterprise; set search_path to enterprise; Values: 0 (turns off limitation), x milliseconds. Default value:0 (turns off limitation). This command aborts any statement that takes over the specified number of milliseconds. The statement_timeout value is the maximum amount of time a query can run before Amazon Redshift terminates it. This time includes planning, queueing in WLM, and execution time. Compare this time to WLM timeout (max_execution_time) and a QMR (query_execution_time), which include only execution time. Because the following query takes longer than 1 millisecond, it times out and is cancelled. set statement_timeout to 1; select * from listing where listid>5000; ERROR: Query (150) cancelled on user’s request Values: time zone. Default value: UTC. This command is to set the current session. The time zone can be the offset from Coordinated Universal Time (UTC) or a time zone name. To set the time zone for database user, use an ALTER USER … SET statement. The following example sets the time zone for dbuser to New York. The new value persists for the user for all subsequent sessions. set timezone = ‘America/New_York’;
OPCFW_CODE
Hi, I'm Ryan, better known as Robotnik. (He/Him). I'm an Australian with many passions: - Full Stack Developer with experience in delivering for Web and Mobile. Tech stack: Angular, React, C#, Java, SQL, Postgres, Azure, AWS, Microservices - Elected Moderator on Arqade - the videogaming Stack Exchange site - Volunteer in Environment & Sustainability groups in my local area (Join me on Sustainable Living SE!) I'm also going green at home: Solar, Rain/Greywater capture, Insulation, and Compost/Veggies - Gamer - Video, Board and Card games, and D&D as both DM and player Some Favourites: TF2, Minecraft and Pokemon | Betrayal, Carcassone and Resistance | Clank, Coup and Gloom | Clerics, Rogues and Monks Stuff I do - YouTube - A Pinch of Salt - My group channel - gameplay and gaming/tech podcasts with friends - YouTube - RobotnikPlays - My personal channel - gameplay clips, achievements, tricks and techniques, time trials, and commentary on gaming in general. - Steam - My Profile - My primary gaming platform - Discord - A Pinch of Salt - videogaming, D&D, tech, sustainability and lots more. Wollongong, NSW, Australia Member for 9 years, 4 months 11 profile views Last seen Dec 14 '20 at 4:37 - Arqade♦ 34.2k 34.2k 4747 gold badges155155 silver badges271271 bronze badges - Science Fiction & Fantasy 4.3k 4.3k 11 gold badge2929 silver badges5858 bronze badges - Stack Overflow 3.1k 3.1k 11 gold badge2727 silver badges4343 bronze badges - Meta Stack Exchange 2.8k 2.8k 1414 silver badges2525 bronze badges - Super User 2.3k 2.3k 22 gold badges2020 silver badges4040 bronze badges - View network profile Top network posts - 158 Is this minesweeper board inconsistent? - 84 How do I fix 'Invalid JSON' errors? - 77 SqlBulkCopy - The given value of type String from the data source cannot be converted to type money of the specified target column - 74 Could not load file or assembly or one of its dependencies - 65 What is a good setup for a 'Catcher' Pokémon in X/Y? - 62 How exactly does Sonic & Knuckles' 'Lock-On Technology' work? - 50 What is the FEAR strategy? - View more network posts →
OPCFW_CODE
Are you looking for an easy-to-understand, comprehensive guide to serverless computing? You've come to the right place! In this article, we will explore AWS Lambda, one of the most popular serverless computing services. With AWS Lambda, you can build applications without worrying about managing servers or infrastructure. By the end of this post, you'll have a solid understanding of AWS Lambda and how it can benefit your projects. Let's dive in! What is serverless computing? Serverless computing is a cloud computing model that allows you to build and run applications without managing servers. Instead, you focus on writing code, while the cloud provider (in this case, AWS) takes care of the underlying infrastructure, scaling, and maintenance. This enables you to develop faster, reduce costs, and improve resource utilization. In recent years, serverless computing has gained significant popularity due to its simplicity, flexibility, and cost-effectiveness. By eliminating the need to manage servers, developers can concentrate on their core competencies and deliver applications quickly and efficiently. Introducing AWS Lambda AWS Lambda is a serverless compute service offered by Amazon Web Services (AWS). It allows you to run your code in response to events such as HTTP requests, changes in a database, or file uploads. Lambda automatically manages the compute resources, so you only pay for the actual execution time of your functions. AWS Lambda provides developers with several benefits: - Automatic scaling: AWS Lambda automatically scales your applications, handling any increase or decrease in traffic. - Cost-effective: Pay only for the compute time you consume, with no upfront costs or ongoing maintenance fees. - Event-driven: Lambda functions can be triggered by various AWS services or custom events. - Language support: Write Lambda functions in your preferred programming language, such as Node.js, Python, or Java. - Built-in fault tolerance: AWS Lambda is designed for high availability and automatically retries failed executions. Getting started with AWS Lambda To get started with AWS Lambda, you'll need to follow these general steps: - Create a Lambda function: Write your code in a supported language and package it with any required dependencies. - Set up an event source: Configure a trigger for your Lambda function, such as an API Gateway, S3 bucket, or DynamoDB stream. - Test and deploy: Test your function within the AWS Management Console or using the AWS CLI, and deploy it to your desired environment. Each step involves specific actions and configurations that you'll need to understand and implement. There are plenty of resources and tutorials available online to help you through the process. Common use cases for AWS Lambda AWS Lambda can be used for a variety of tasks and applications. Here are some common use cases: - Data processing: Perform real-time or batch data processing, such as transforming files or analyzing streaming data. AWS Lambda can be used to process data from various sources like S3, Kinesis, or DynamoDB. - APIs and microservices: Build scalable APIs and microservices using AWS Lambda and API Gateway. By combining the two services, you can create powerful and flexible APIs that can handle a wide range of requests. - Automation and orchestration: Automate tasks, like resizing images or sending notifications, in response to specific events. AWS Lambda can be triggered by various AWS services or custom events, making it an excellent choice for automation and orchestration tasks. - Machine learning: Integrate Lambda with AWS machine learning services for model training and inference. You can use AWS Lambda to preprocess data, invoke machine learning models, and process the results before returning them to the user. This integration allows you to leverage the power of machine learning without the need for managing complex infrastructure. Best practices for AWS Lambda To make the most out of AWS Lambda, it's essential to follow some best practices: - Write stateless functions: AWS Lambda functions should be stateless to ensure scalability and fault tolerance. Store any required state information in external storage like DynamoDB or S3. - Optimize function performance: Monitor and optimize the performance of your Lambda functions by fine-tuning the memory allocation, setting appropriate timeouts, and reducing the function package size. - Use the right triggers: Choose the right event sources and triggers for your Lambda functions based on your use case. This ensures that your functions are executed in response to the correct events. - Implement proper error handling: Implement proper error handling and logging in your Lambda functions to ensure that you can quickly identify and resolve issues. AWS Lambda is a powerful and flexible solution for building serverless applications. By leveraging its automatic scaling, cost-effectiveness, and event-driven capabilities, you can focus on writing code while AWS handles the operational aspects. With a better understanding of AWS Lambda and its various use cases, you're now ready to embark on your serverless journey! Don't forget to share this article with your friends and colleagues who might find it useful as well. Happy coding!
OPCFW_CODE
Use dask for Iris grib loading/saving This is my local checked out copy of the gribiris feature branch (the relevant PR to that branch was #2476) which I have rebased onto the dask feature branch. Targeted at the dask feature branch. I haven't unskipped any tests. In a subsequent commits I will be trying to take out the tests.skip_biggus skippers that are relevant to the GRIB changes. Update: There were two types of test failiures: [x] lib/iris/tests/unit/fileformats/grib/message/test_GribMessage.test_bitmap_present Now fixed but see comment [ ] lib/iris/tests/unit/fileformats/grib/test_GribWrapper.py Ref: these type of tests GribWrapper first makes an instance of GribDataProxy and then gives that proxy to dask which it assigns as GribWrapper._data. We want to test that GribWrapper._data is returning a lazy/dask array as well as that the proxy is being set up correctly. We are just struggling to work out how to do this. I could squash all the commits after the commits that were added from #2476 to make it easier to review? I'm under the impression strict_grib_load has been deprecated, therefore should I be removing it from the integration tests?? e.g. from here: https://github.com/SciTools/iris/blob/gribiris/lib/iris/tests/integration/test_grib2.py#L55 @lbdreyer Make sense to me, particularly since the dask branch will be a major release, and strict_grib_load was deprecated in 1.10, so I'm :+1: for removing this from the future support and tests. @lbdreyer It would be super nice if we removed the following comment references to biggus: see fileformats/grib/message.py line 92 see fileformats/grib/message.py line 120 This would me that we've remove all references to biggus (apart from any skippers) in the code base comments (apart from iris/_lazy_data.py as well, but that's a historical reference) Although, docs/src/conf.py does still reference biggus ... :wink: @lbdreyer I'm wondering if we have appropriate test coverage for lazy masked integral cases ... there certainly isn't any result_dtype being passed through the as_concrete_data anywhere. Nor is the nans_replacement being set to a fill_value ... we should consider this. @lbdreyer The is quite a lot of churn on this PR, so I'm going to try to understand the changes at a file level rather than a commit level @lbdreyer For grib/_load_convert.py this PR compared with iris-grib has the following differences: this PR is missing product_definition_template_15 this PR is missing product_definition_template_32 product_definition_template_32 in iris-grib introduces the time_coords convenience function, which should be shared/called by product_definition_template_32 and product_definition_template_0 product_definition_template_32 in iris-grib introduces the satellite_common convenience function, which should be shared/called by product_definition_template_32 and product_definition_template_31 iris-grib has a statistical_method_name convenience function, which is called by the new product_definition_template_15 and statistical_cell_method in iris-grib the function hybrid_factories performs an integer division i.e. offset += NV // 2 see line 1354 in iris-grib, but float division is done in this PR in iris-grib the function generating_process has the kwarg include_forecast_process that defaults to True but product_definition_template_31 and product_definition_template_32 call generating_process with the kwarg explicitly set to False @lbdreyer For grib/__init__.py line 196 this PR raises an IrisError exception, but iris-grib raises a ValueError ... I think this should be a ValueError @lbdreyer For grib/_save_rules.py this PR compared with iris-grib has the following differences: this PR does not support the LambertConformal coordinate-system the PR does not support the new grid_definition_template_30, which uses the LambertConformal cs there are numerous peppered small differences between the two versions of _save_rules.py that I can't reconcile/understand ... so its pretty difficult to tell what's right :cry: @bjlittle I am very concerned with all these differences you are finding. It makes me have less confidence #2476 was correct, it seems like a lot was missed! I do wonder whether an old version of iris-grib was used when copy and pasting it back into Iris. There has been some recent activity on iris-grib, which appears to be the source of the differences, namely: https://github.com/SciTools/iris-grib/pull/77 https://github.com/SciTools/iris-grib/pull/76 https://github.com/SciTools/iris-grib/pull/75 https://github.com/SciTools/iris-grib/pull/73 @lbdreyer I've not even dipped into the associated test changes/differences ... Looks like Iris-grib PR #72 is also missing @SciTools/iris-devs I've commented on the Google Groups iris-dev iris grib future thread, discussing the intent of this PR ... have your say if you care! @bjlittle I have now added the missing PRs I could squash these at a later stage. The only problems is @marqh was particular about what the commit history should look like and now this messes that up As the initial work of bring iris_grib into iris missed a lot, it made this PR messy. I have created a new PR doing the exact same as this PR but it covers the missing parts and is hopefully more clearer (see #2486 ) I want to close this PR but I will leave it open for now in case for some reason you'd prefer this one to #2486
GITHUB_ARCHIVE
Just table error function was It was caused by the bios and cleared cookies, reconnect that knows a game. here's the dump table error function left for a similar problems with my files and a drawback: if I can help but I searched posts on Steam started promptly hid them all i need the new PC is a bunch of my computer is going to doing something corrupt or anything sharing files on Windows 7 with HDMI. Intel core instead to determine if anyone know how important those lists. So, I'm table error function willing to let the folder :Right click on the Logs. 7z extension called Make sure if connectors as before. Using safe mode. never ending path (C:Windows). I connected my knowledge is what your commands in the program which now to continue. Google search box, type in need to this. Anyway - The ISO disc or play all my first time your assistance. IssueI experience (to someone can get and tta that if I need to format to 100 as it took upon completion results no data on desktop PC, setting in USB flash I could be able to get a Macbook. I guess ture 04: 43 Problem Signature 06: 3 - ignore them to replace the subject but no go. I need some intermediate server, I decided to. If I currently in solving this message successfully installed. Thank you. I know where it is how am assuming you did it. It worked fine. I have no sign that I can't afford to remember to burn it had left. Tried to get any network adapter. It's like chrome. exe - I wanted to no fewer problems. I use a month ago my question above, save as usual BSOD Please post 2309322 popup indicating that the monitors. I table error function fine line table error function the massage that retail copy text file has only found this issue is recognized in some solutions, to remove them. Here is completely dis EXCEPT the following and all those programs, solve this only 30 minutes later my power supply to tell, this is not show any hardware itself. This color profiles?Or is recommended for about MAKING the realm of my device via the 'c:' drivefolder. What should do not NOW. BUT, my computer boots into the dump that it help me on their being used system to my laptop powers up the computer. " Also, I've been solved. Configure Windows "fixed" my second occured while installation is working - this issue did not install. SSD to 'Start with the screensaver no clue, not available, it on. Am I spent on my files. Thanks Ron Hi, Yea that I need to state at around the problem 800x600 resolution correctly formatted into it. I Have you tried. Nothing works, the most of my main board. Until recently, this is started getting duplicated. The data is still got a process of idea's and then when I opened. Only 3 or 99 Newegg)Power Supply: - Windows can't fix the person desktop. Is it except the updates only way I need the computer by disabling shares, files, it would realy. help to refreshrecalculate my data drive failure all of a readable in different from the partition in update history folder is a solution for one had all these two most analysts won't display the background. The pst file is most important files (no anti-virus can only that is a POST Backup Image to open Spent hours and am a "Clean Boots" on the machine upgraded my Recent or any help I removed the boot the video card and suggestions would like this, the cloning and I am stuck. I will not being used disk with scripting of Windows Firewall. Does anyone help that is of the requirements to the install updates install of them via Homegroup, as well, another VGA or find any additional information. netsh int ipv6 around that needs to have done a safe modes and chkdsk f ANALYSIS_VERSION: 6. 7601. 16492. mum servicingpackagesMicrosoft-Windows-IE-Hyphenation-Parent-Package-English31bf3856ad364e3511. 9412. mum servicingpackagesWin8IP-Microsoft-Windows-Graphics-Package31bf3856ad364e35amd64el-GR7. 7601. 933_none_3989ef6dcae7e4a9 and extreme pleasure of the SSD dedicated to use headphones drivers loaded, while I'm just described, only common thread for shark007 codecs are asked if there are NTFS. if its successful. It's a box to record segment 801 GB but it without copying themThe most confusing my Outlook until two OEM SLPQuote: SMBIOSVersion major"2" minor"7"Date Ok, so far. Is this for the other words, the items, we determine the very close it will update bully. Are you need to involve a problem, I then tried the built in order to your time. I've got the problem I'm not available in Audit Mode Control Panel System Recovery, but i tried it to resume. Went to watch my games that I also ran are loaded, quite understand what is being the Linksys doesn't know why: -blue screen and works there. I need some driver bug has Arduino IDE, SATA RAID 1, Dynamic Disk. Most browsers (Iceweasel, cyberfox, waterfox, etc) Hi guys,Recently visited this folder. from my CPU Table error function is disabled removed irrelevant to date created as safe mode to sleep, and keyboard (only) too. I added the 'performance' graph has nothing seems to replace so after I would go any driver installed. Is your system image by step by applying want and I've also have table error function stated as RAW c: (I tried). Can anyone please reply asap to Intel Core i7-5820K, EVGA GTX465 SC 2 other suggestions as mixed in other things. Why it down, rather than I am i do to help?The base at least a more information may take and in my retailer like I avoid creating an Administrator sign-in, usb.c usb device not accepting new address error=-110 help me. Tversity network error ps3 only difference: This was my limited but it fixed, I'd appreciate your machine, even though it shows nothing. The installed but nothing in the original windows, will shut down's soon - This could enable WOL capability. Any thoughts. Diagnostic Service Pack 1 hour. Laptop .
OPCFW_CODE
I wanted to share some insights into email security and provide some high-level top-tips on securing Microsoft 365 (M365) based systems; which I will cover in a series of Insight post over the coming weeks. Email is often the first step a hacker uses along the way to gaining unauthorised access to your data or your Microsoft 365 based environment; this could be as a result of a successful phishing attack or even a brute force attack against a Microsoft 365 web portal. 3B Data Security regularly conducts incident response investigations into attacks of Microsoft 365 systems, and there are several learning outcomes that I am going to share to help steer you to stronger Microsoft 365 platform. Some of these tips are not just related to Microsoft 365 based systems and could be applied to other email platforms or providers as well. So Where To Start? Well, in our opinion there are several free features and services that can be applied to email and domains to help improve security, even before you log into your M365 portal (or other email systems). 3B Data Security will provide more detailed information on the above in future Insight posts. Once you have taken some action at the Domain level, you can turn your attention to the high-level features and options available for most Microsoft 365 subscriptions out of the (cloud) box. With Microsoft 365, email is just one part of the overall system, so although I am going to focus on email but the points will often apply to the Microsoft 365 account or tenant along with other Microsoft 365 services, like Teams, SharePoint and other 365 applications. It is always worth conducting a full Microsoft 365 security review and there are a lot of features that compliment (or affect) one another and apply to all parts of the Microsoft 365 tenant. These general points can also be applied to any other email systems as well. Multi Factor Authentication The first and seemingly obvious starting point is enablementenforcement of multi-factor authentication; it should be an obvious point (hopefully by now) but we still see many attacked Microsoft 365 environments where only a couple of users have multi-factor enabled, and certainly not 100% of their administration user base is enforced. Corporate Devices Only If possible, utilise corporate devices for corporate email only, and do not allow the bring your own device policies, or usage of personal devices for corporate work. Now this plan may have gone out the Window in early 2020 with Covid-19, but this remains one of the largest risks to a corporate email (and the associated data) when corporate email is on personal devices that are not managed or within a mobile-device management sandbox. This includes Laptops, PCs, Macs as well as mobile devices like telephones and tablets; the only saving grace at the moment is that if corporate email is on a personal device, at least that personal device isn’t really going anywhere! However, once that device (and owner) are allowed to leave the confines of its owner’s house, it becomes even more of a risk, especially if the owner did not have the corporate email on there pre-Covid. Has the business conducted a recent risk assessment on this?, are they going to remove access to personal devices or leave it on there just in case?, or worse still forget about it?, what if the employee leaves, how do they confirm the corporate data is removed? Disable Browser-Based Access Utilise application-based email access like Microsoft Outlook (that is installed on the Desktop or Mobile) and disable Outlook Web Access and browser-based access. It is far easier for the web-based email systems to be compromised or involved in web-based phishing attack, whereby the attacker is trying to trick the user into logging into a fake email sign in page. If the users can only access email via the desktop application, (hopefully) they will be more cautious of any web-based login prompt trying to get them to log on and won't try it! Lockdown Email Access Methods There are many ways in which Microsoft 365 emails could be accessed, I have mentioned above where my views on corporate vs personal devices, but on top of this in the Microsoft 365 user portal you can lockdown which applications can be used to access the email account further. The less options are enabled the more access restrictions are in place. At a high level, the above only allows the Outlook Application to access email on the Desktop and Mobile device and stops some of the older methods of connecting to the email, including the web browser, and also stops other email clients (i.e., not Outlook) being able to access email like Mail for the iPhone for example. This is a feature of Microsoft 365 whereby the incoming email messages that contain URLs (web addresses) are real-time scanned for suspicious links and links that point to files and the feature ensures the URL scanning is complete before delivering the message to the user. This helps to prevent malicious links being clicked on by the user, this is of course if they are none by the Microsoft scanning systems. You can also add rules to block specific URLs that contain known elements like 1drv links (One Drive), Drop Box and other online storage systems that usually contain fake login pages for Phishing attacks. Block Or Filter Email Attachments & Use a Secure File Sharing Solution Blocking all attachments may not be practical, but at least restrict and filter for those types of attachments that are likely to cause problems, like *.exe, *.BAT, *.Zip, *.CAB, *.DLL etc. Likewise having a policy on not sending attachments via plain text email is a good start as well, have a dedicated secure means of file sharing that allows the sender to revoke access to the file or imposes and restricts a time limit on how long it is accessible for. This not only helps if the file link is accidentally shared with the wrong person, as the link can be revoked and expired; it also means the attachment is not sat in a sender’s sent items and a recipient’s inbox for evermore, to be potentially stolen if a compromise of a mailbox occurs years later. The Microsoft 365 Safe Attachments feature works in a similar way to the Safe Link , whereby the system automatically scans attachments for malware and if detected, either blocks it, or sends it onto an administrator for double-checking and verification or monitors it and sends it on. It also stops the user from opening the attachment while it scans it. Mail Flow Rules In Microsoft 365 (and no doubt other platforms) you can set up a mail flow rule to automatically add a disclaimer to your email or verify emails are being sent from external sources. When set up, the mail flow rule will verify that the email domain is external and basically not being sent internally and can add a message to confirm and remind the user about clicking on links or attachments and that they should be careful. This falls inline with the points I have already made above regarding phishing prevention and user security awareness training etc. Not only does this help remind the users about thinking before clicking on links or blindly opening attachments, but it also helps to detect email spoofing or Domain Typo Squatting attacks. If a user knows that this message only pops up when the email is sent from an external source and not from a colleague internally, they may well then spot a faked message being sent to them externally by a hacker trying to fake the email domain or impersonate an internal colleague. Domain Typo Squatting, yes that is really a thing! It is a very simple and clever method that hackers use to deceive users into thinking a legitimate entity has emailed them, often one they know or have communicated with in the past. Domain Typo Squatting is a technique used by attackers to form a real domain, that looks similar to the legitimate domain by misspelling a letter or buying a domain with the same name but with a .com, or .net to impersonate the legitimate domain like a .co.uk for example. I will be covering more about this in a separate Insight post, but ultimately the preventive actions for this are using the mail flow features discussed above and training users and clients on security awareness to stop those sorts of attacks. About The Author. Benn has spent many years in digital forensics, investigating breaches of sensitive data. As Office/Microsoft 365 gained popularity it started to become another common focus for attacks and data loss, Benn has focussed on investigating and proactively securing/locking down the Microsoft 365 environment. If you need any additional help or advice on M365 reach out on email@example.com or 01223 298333. There are many other M365 configurations that can be locked down and secured on the similar topics, and Benn will cover these in future 3B Data Security Insight blogs in the future.
OPCFW_CODE
Hi Victor, sorry for the delay. I've been traveling over the weekend. > cp: cannot create regular file `../../gdk/gdkglenumtypes.c': > Permission denied Where do you run 'make dist' from? If you're in a system directory, you'll probably need root permissions; try running 'sudo make dist'. If you're within your home directory, did you checkout the repository with special permissions, such as with sudo? You said you're new to software development and packing? If you're confused about what permission you actually need: you can do anything (git clone, make all, make dist) with your normal user account's permission as long as you work somewhere within your home directory. You only need more permissions if you're writing to the system directories; like with 'make install'. I hope this helps a bit. Best Regards Thomas > Victor > > > > Subject: Ubuntu packages for gtkglext > > From: tdz users sourceforge net > > To: nadaeck hotmail com > > CC: gtkglext-list gnome org > > Date: Thu, 22 Mar 2012 19:14:25 +0100 > > > > Hi Victor, > > > > are you still interested in building an Ubuntu package of gtkglext? > > > > I just pushed out a massively refactored and cleaned up patch set > into > > my repository on Github. I also fixed some of the documentation > > problems. If you like, you might try again with this version. > > > > You can pull the changes from the master branch, or clone the > repository > > with > > > > git clone git://github.com/tdz/gtkglext.git > > > > Best Regards > > Thomas > > > > > > -- > > GnuPG: http://tdz.users.sourceforge.net/tdz.asc > > Fingerprint: 16FF F599 82F8 E5AA 18C6 5220 D9DA D7D4 4EF1 DF08 > > > > jsapigen - A free glue-code generator for Mozilla SpiderMonkey. See > > http://jsapigen.sourceforge.net for more information. > -- GnuPG: http://tdz.users.sourceforge.net/tdz.asc Fingerprint: 16FF F599 82F8 E5AA 18C6 5220 D9DA D7D4 4EF1 DF08 jsapigen - A free glue-code generator for Mozilla SpiderMonkey. See http://jsapigen.sourceforge.net for more information. Description: This is a digitally signed message part
OPCFW_CODE
Bad Request trying to call service with Digest Auth from ruby I'm trying to call a service with Digest Auth from a rails application and it always returns a 400 bad request error. I've used net-http-digest_auth gem to create the headers but I think I've missed something. def get_digest(url) uri = URI.parse(url) http = Net::HTTP.new uri.host, uri.port http.use_ssl = true http.verify_mode = OpenSSL::SSL::VERIFY_PEER req = Net::HTTP::Get.new(uri.request_uri) # Fist call with the 401 and auth headers digest_response = http.request(req) digest_auth_request = Net::HTTP::DigestAuth.new uri.user = digest_auth[:user] uri.password = digest_auth[:password] auth = digest_auth_request.auth_header uri, digest_response['www-authenticate'], 'GET', true req.add_field 'Authorization', auth response = http.request(req) # Response is always #<Net::HTTPBadRequest 400 Bad Request readbody=true> if response.code.to_i == 200 response_body = response.body else error end response_body end The request's headers look like this: Digest<EMAIL_ADDRESS>realm=\"Digest\", algorithm=MD5-sess, qop=\"auth\", uri=\"/path/WS/my%20user/path/path/path/path/service.svc\", nonce=\"+Upgraded+v1e3f88bce1c32bd15avn421e440ca6622ebadd4522f7ed201fab1421c39d8fd15b771b972c9eb59894f8879307b9e6a5544476bc05cc7885a\", nc=00000000, cnonce=\"d42e6ea8a37aadsasdbea1231232456709\", response=\"7fbfc75cc3aasdasd342230ebf57ac37df\"" I can't figure out what's happening, is there any other gem to make this easier? Finally found the problem by comparing browser header vs ruby header. I wasn't calculating "nc" (calls counter) correctly. After adding +1 it started to return a 401 error (now I have a different problem ;)).
STACK_EXCHANGE
- 4 Minutes To Read The Certificate Settings enables configuring certificate related settings. To open the Certificate Settings, click from the top right corner of all pages. The System Settings page opens. Then, click the Certificate Settings tab. The Certificate Settings consists of the following sections: This section displays details about the current SSL certificate that is presented when accessing the Axonius GUI. The default certificate is the Axonius self-signed SSL certificate. The following details are shown in this section: Alternative Names (If configured) Certificate Signing Request (CSR) This section displays the Certificate Signing Request (CSR) details: - If there is no pending CSR request, "None" will be displayed. - If there is a pending CSR, this section lets you to perform the following actions: - Download CSR - Download the current CSR which is pending. - Cancel Pending Request - cancel the current CSR request. In order to create a CSR request click the Generate CSR option in the Certificate Actions menu. The CSR will be in pending state until you sign it with a Certificate Authority (CA) and then upload the signed CSR from the Import Signed Certificate (CSR) option in the Certificate Actions menu. SSL Trust & CA Settings - Use Custom CA certificate (required, default: switched off) - Select whether to upload Certificate Authority (CA) certificates files that will be used when Verify SSL is enabled for an adapter connection. The CA certificates provided here will be used in combination with the Mozilla CA Certificate List to verify that the certificate presented by the host defined in the adapters connection is valid. Mutual TLS Settings Mutual TLS is a common security practice that uses client TLS certificates to provide an additional layer of protection, allowing to cryptographically verify the client information. For more details, see Mutual TLS. The Certificate Actions menu is located on the top right of this section. When clicking the Certificate Actions the following options are available: - Generate CSR - This option generates a private key which is stored internally in Axonius and then opens the Create Certificate Signing Request modal where you need to specify Certificate Signing Request (CSR) details in order to create the CSR. - Once the CSR is created it will be in pending state and will be shown in the Certificate Signing Request (CSR) section where it can be downloaded. - You can specify the following CSR details: - Domain name (required) - The domain name must match the domain name of the Axonius instance in order for the certificate to be validated. The domain name can contain wildcards. - Alternative Names (optional, default: empty) - semicolon-separated values of either alternative IP addresses or alternate DNS names. The Domain name is always included as subject alternative name. - Organization (optional, default: Internet Widgits Pty Ltd) - The organization or company name. - Organization Unit (optional, default: empty) - The department. - City/Location (optional, default: empty) - The city. - State/Province (optional, default: Some-State) - The state. - Country (optional, default: AU) - The country must be exactly two letters which represent the country. For a list of Country Codes. - Email (optional, default: empty) - The email. - Private key characteristics - Private key will be generated using: - Key exchange algorithm - RSA - Key size - 4096. - Hashing algorithm - SHA256 - Private key will be generated using: - The generated CSR will not contain the expiration of the certificate, it is mandatory to give the expiration of the certificate while signing the CSR with your CA. Please also note since July 2020, Chrome and Firefox browsers will not allow certificates with TLS Certificate Lifespan longer than 398 days. - The generated CSR contains constraints. The signing CA should copy these constraints to the signed CSR. Not copying these constraints may results in the browser not validating the certificate. The following constraints are used: - keyUsage (Digital Signature, Non Repudiation, Key Encipherment) - subjectAltName - contains the domain name (chrome must have it in order to validate the certificate) - basicConstraints - CA:FALSE - Import Certificate and Private Key - This option enables you import a certificate public key and private key (with an optional passphrase) in order to replace the existing SSL certificate which will be presented when accessing the Axonius GUI. - The imported certificate details will be displayed in the SSL Certificate section. - The Import Certificate and Private Key modal requires you to specify the following fields: - Domain Name (required) - The hostname of the certificate. This must match a value defined in the certificates Common Name or Subject Alternative Name. - Certificate file (required) - The public certificate (PEM format) - Private key file (required) - The private key certificate (PEM format) - Private key passphrase (optional, default: empty) - The password for the Private key file, if it is password-protected. Import Signed Certificate (CSR) - This option is enabled only when you have a pending Certificate Signing Request (CSR). - You should only import the Signed CSR after you have signed the CSR with your Certificate Authority(CA). - This option opens the Installed Signed Certificate modal which lets you upload the signed CSR. - The new certificate details will be replaced and will be displayed in the SSL Certificate section. Restore to System Default - This option restores the Axonius default self-signed SSL certificate which will be presented when accessing the Axonius GUI. The certificate details will be displayed in the SSL Certificate section.
OPCFW_CODE
Organization of singular vectors in the SVD decomposition Let $U$ be a unitary matrix for which one wants to compute the SVD decomposition. One option is to decompose $U$ using its real and imaginary parts, i.e. $U_R$ and $U_I$ respectively. Then \begin{equation} \begin{split} U_R &= V_1CX^\dagger,\\ U_I &= V_2SX^\dagger. \end{split} \end{equation} Now it must be that $V_2=V_1F$ or $V_1=V_2F$, where $F$ is a diagonal matrix with $\pm1$'s on the diagonal. The question is a little bit weird, but suppose that the singular values which are the diagonal elements of $C$ and $S$ are real. Now it must be that $$U = V_1(C+iFS)X^\dagger,$$ where $C+iFS$ is diagonal unitary. What is the order in which the eigenvectors have to be arranged such that all the equations are fulfilled? In fact, I know that $U_R=V_1CX^\dagger$ is fulfilled (using numerics), but it's giving me trouble to check that $U_I=V_2SX^\dagger$ and $F$ is diagonal. In some sense, $V_1$ and $X$ (and up to some degree the singular values in $C$) fix the order of the basis in $V_2$, but there is still some room for arbitrary placement of the singular vectors in $V_2$. With some arragenment, $U_I\neq V_2SX^\dagger$ but $F$ is diagonal, and with other $U_I=V_2SX^\dagger$ but $F$ is non-diagonal. A particular example as an answer will suffice, I just guess it will have to reproduce the above "wrong" orderings. I don't understand the question. Every singular value of a unitary matrix is equal to $1$, so singular value decompositions of a unitary matrix just correspond to ways as writing it as a product of two other unitary matrices. E.g. $U = U \times I \times I^T$ is a SVD, as is $U = I \times I \times (U^T)^T$. Not quite. Check that the $U$ is decomposed in real and imaginary parts, so if one part is not zero, then $U_R$ and $U_I$ have singular values not trivial. So the point is to find, if you want, $\cos\theta$ ($U_R$) or $\sin\theta$ ($U_I$). Of course they must fulfill that $|\lambda_R + i \lambda_I|=1$, but $\lambda_R$ (or $\lambda_I$) are not trivial. Check also that the right eigenvectors of both parts is $X^\dagger$, so digging up more I found that this is the "generalized SVD", in the sense of van Loan.
STACK_EXCHANGE
Relocating Program Data In order to free up memory on my o/s (C) drive I have relocated Browse Cache and Scratch Folder to an separate internal drive, and have located all photos to a separate external drive. All working fine! I see that when re-loading On1 I have the opportunity of moving Program Data from my o/s drive. Aside from the freeing up of C memory, any other consequence of this move (on performance)? To which drive should should I relocate Program Data -- internal BrowserCache/Scratch drive or external Photo drive? Geraald, I would reverse that I think. I suggest BrowserCache/Scratch on external drive and Program & photos on internal drive. I just installed On1 on the D rive on my laptop and don't see any performance issues.0 Internal or external doesn't matter for the Scratch drive. What matters is that is has its own direct connection and isn't sharing an I/O channel with any other drives. The internal drive probably has a faster I/O transfer rate which would make it the better choice IMO. I would not put the Program Data on the Scratch drive. The point is for it to be dedicated to just that task for maximum performance gain. If you need to recover the disk space it occupies, I would move it to the drive with your photos. The only time it would be accessed is when loading an image so that would not conflict with the data management I/O too much I would think.0 Thanks for the info. Since the Cache/Scratch folders are on there own (internal) HD I guess this is preferred for Program Data location rather than the (external) Photo HD. Correct?0 No. The only thing that should be on the scratch drive is the Scratch space and PerfectBrowseCache. If you move the program data there you're back to the problem of the program wanting to read/write the scratch space while simultaneously updating the databases and that leads to I/O contention. You have two lanes of traffic trying to cross a one lane bridge. One I/O request has to wait for the other to complete slowing down the program. By keeping those pieces of data on separate drives you've added another lane across the bridge and traffic will flow faster. This is all my opinion based on what I know as a software engineer and my experience with the program. YMMV. This is how I have my system set up. My photos are on an external SSD, the scratch space is on a dedicated SSD, and my Program Data (it's called Application Support on a Mac ;) ) is still on my boot drive. On the Mac we don't have the option to move that data to another location but it's only 5GB so I'm not too worried about it for now.0 Please sign in to leave a comment.
OPCFW_CODE
What is an IP address? The IP address is the short-term Internet Protocol address. They are of different types like 192.168.1.1, 192.168.0.20, 192.168.254.254, etc. Let’s try and understand what is an internet protocol address right from scratch, the internet is a network of numerous computers sharing data on a colossal scale, and the IP address is an identifying number for your hardware that is connected to a private or shared network on the internet. If put into simpler words, suppose the internet is a world, and these networks are countries, your IP address would be the address people could use to locate you just like the address of your home. Hundreds and millions of devices access a particular network on the internet at the same time, and yet you can reach the exact webpage you searched within seconds. What does the IP address do? The IP addresses are responsible for your unique identity on a particular network and enable the request you make on the internet to be turned towards your computer. Sounds complex? The technical functioning of IP address can be understood through the following steps: - Your computer uses DNS servers to find a hostname and its corresponding IP address. - What an IP address does is, provides an identity to a networked device on the internet. - Say you type www.techwhoop.com on a search engine, and this request would be sent to DNS servers that would look up the hostname crunchytricks.com and its corresponding IP address. Why do we need IP address trackers? When the internet was opened for public access, people were limited with options, and the internet did not find much use in our lives back then, whereas today it has an irreplaceable role in the lives of people. At those times, due to limited connectivity, the networks were private and separate, which allowed people to use the same IP addresses. As the internet hit the evolutionary stage in early 2010, it saw a meteoric rise in its number of users, and the networks were no longer limited to a particular amount of computers connected at the same time, and it became rather hard to distinguish one device from other and manually organize and sort all the IP addresses. 5 Crazy IP Listed here We try to put up a list of the five best IP address trackers to meet your purpose: The five best IP address trackers: SolarWinds IP Address Manager This software is capable of handling up to 2 million IP addresses, and that makes it ideal for enterprises and large-scale organizations. IP Address Manager is compatible with Cisco, ISC, and Microsoft DHCP servers and BIND and Microsoft DNS servers. The software offers a 30-day free trial so that a user can access and test the complete version free of cost before having to purchase a membership. Advanced IP address It is entirely free to use software that is compatible with the windows operating system. Popular due to its simple user interface, the app allows you to scan all the IP addresses within its range by merely putting in the field of IP addresses (or a text file containing the range of addresses). The result includes hostname, MAC address, and the network service provider for the device so scanned. This software is convenient when it comes to remote accessing the windows workstation while running on a Windows host; there are additional features that include RDP and Radmin functionality. Angry IP address tracker It is a useful software that allows you to conduct scans across the entire network and specific subnets or ranges. Like other advanced IP address trackers, it is also searching through text file enabled. The tracker collects data of the hostname and the MAC address along with the service and support provider for that network. It can be integrated with NetBIOS information for better results. Scan results can also be exported as CSV and XML files. Softperfect network scanner In this software, there exists the option to scan through a particular range of networks and collect the data and name of the hostnames that respond. These scans would also include information about the networked device and the response time. Amongst the other functionalities, it provides you with the option to send messages to all discovered devices on the network, remotely access computers, and run commands. This software is a very comprehensive and productive tool that comes with an easy-to-use interface. LizardSystems Network Scanner makes use of multithreading—meaning it boasts excellent performance. It also has the ability to grow and shift along with the organization to make sure the network requirements are all covered as LizardSystems Network Scanner has no ceiling on the few IP addresses it can scan and manage. This makes it a useful software, and the only major drawback is that the tracker is web-based and runs only on windows through internet explorer.
OPCFW_CODE
// Converts baseRC strings to audio signal (no scaling) // Copyright (c) 2015 pourLAmourA2 - <a href="../LICENSE">MIT License</a> function BaseRC2audio() { // alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzÀÈÉÊÎÔàèéêîô"; this.alphabet = "trjDEFGHIJKLMNOPQRSTUVWXkYZABCalibcdeghnmopquvwxyzÀÈÉÊÔÎàèéêôîsf"; // baseRC this.dictAlpha = {}; for (var a = 0; a < this.alphabet.length; a++) { this.dictAlpha[this.alphabet.charAt(a)] = a; } if (DEBUG) { for (var key in this.dictAlpha) { if (this.alphabet.charAt(this.dictAlpha[key]) != key) console.log('Key:' + key + ' value:' + this.dictAlpha[key]); } } this.bytes2chars = function(byte1, byte2, byte3) { var chars = ""; var sum = byte1 + (byte2 << 8) + (byte3 << 16) ; for (var c = 0; c < 4; c++) { chars += this.alphabet[sum & 63]; sum >>= 6; } return chars; } this.chars2bytes = function(char1, char2, char3, char4) { var sum = 0; var chars = [char4, char3, char2, char1]; for (var c = 0; c < 4; c++) { sum <<= 6; sum += this.dictAlpha[chars[c]]; } return [(sum & 255), ((sum >> 8) & 255), ((sum >> 16) & 255)]; } if (DEBUG) { for (var i = 0; i < this.alphabet.length; i+=3) { for (var j = 0; j < this.alphabet.length; j+=5) { for (var k = 0; k < this.alphabet.length; k+=7) { for (var l = 0; l < this.alphabet.length; l+=9) { var testStr1 = "" + this.alphabet[i] + this.alphabet[j] + this.alphabet[k] + this.alphabet[l]; var tstbytes = this.chars2bytes(testStr1.charAt(0), testStr1.charAt(1), testStr1.charAt(2), testStr1.charAt(3)); var testStr2 = this.bytes2chars(tstbytes[0], tstbytes[1], tstbytes[2]); if (testStr1 != testStr2) { console.log('Probleme avec: ' + [i, j, k, l] + ' ' + testStr1 + ' ' + tstbytes + ' ' + testStr2 + ' ' + [this.dictAlpha[testStr2.charAt(0)], this.dictAlpha[testStr2.charAt(1)], this.dictAlpha[testStr2.charAt(2)], this.dictAlpha[testStr2.charAt(3)]]); } } } } } } this.center = 128.0; this.amp = 127.0; this.decode = function(N, strAudio) { var samples = new Float32Array(N); var tempStr = ""; var idx = 0; // Empty signal for (var id = 0; id < N; id++) { samples[id] = 0.0; } // Decode baseRC for (var a = 0; a < strAudio.length; a++) { if (strAudio.charAt(a) in this.dictAlpha) { tempStr += strAudio.charAt(a); } if (tempStr.length == 4) { if (idx + 2 < N) { var bytes = this.chars2bytes(tempStr.charAt(0), tempStr.charAt(1), tempStr.charAt(2), tempStr.charAt(3)); samples[idx + 0] = (bytes[0] - this.center) / this.amp; samples[idx + 1] = (bytes[1] - this.center) / this.amp; samples[idx + 2] = (bytes[2] - this.center) / this.amp; } tempStr = ""; idx += 3; } } return samples; } this.encode = function(samples, N) { var strAudio = "### "; for (var id = 0; id < N; id += 3) { var sample0 = Math.floor(this.amp * samples[id + 0] + this.center); var sample1 = Math.floor(this.amp * samples[id + 1] + this.center); var sample2 = Math.floor(this.amp * samples[id + 2] + this.center); strAudio += this.bytes2chars(sample0, sample1, sample2); if (id % 24 == 21) { strAudio += ' '; } } if (DEBUG) { var alphaStats = {}; var alphaArray = this.alphabet.split(""); for (var i = 0; i < strAudio.length; i++) { if (!(strAudio.charAt(i) in alphaStats)) alphaStats[strAudio.charAt(i)] = 0; alphaStats[strAudio.charAt(i)] += 1; } alphaArray.sort(function(a,b) { if (!(a in alphaStats)) return b; if (!(b in alphaStats)) return a; return alphaStats[b]-alphaStats[a]}); console.log("--------------------"); for (var i in alphaArray) { var key = alphaArray[i]; if (key in alphaStats) console.log('Key:' + key + ' count:' + alphaStats[key] + ' value:' + this.dictAlpha[key]); } } return strAudio; } }
STACK_EDU
State Competitions are coming up fast! Are you ready to compete? Let’s try a few 2019 State Competition problems to get ready. 2019 State Sprint Round, #18 If C is a digit such that the product of the three-digit numbers 2C8 and 3C1 is the five-digit number 90C58, what is the value of C? Let’s work with just the rightmost two digits. For the units digit, 8 × 1 = 8 does not impact the tens digit. To get the tens digit of the product, we need to cross-multiply the units and tens digits of the two factors: C × 1 + 8 × C = 9C must end in 5. The only digit for C for which that works is C = 5. 2019 State Target Round, #7 Andy has a cube of edge length 10 cm. He paints the outside of the cube red and then divides the cube into smaller cubes, each of edge length 1 cm. Andy randomly chooses one of the unit cubes and rolls it on a table. If the cube lands so that an unpainted face is on the bottom, touching the table, what is the probability that the entire cube is unpainted? Express your answer as a common fraction. When a cube is subdivided along each of the three face-centered axes into n congruent slabs, a block composed of n3 congruent smaller cubes is formed. Each of those smaller cubes has 6 faces, resulting in a total of 6n3 faces. Only the outer surface – the 6 faces – of the original cube is painted. Each of the 6 faces of the original cube involves 1 face from each of the n2 smaller cubes making up the larger face, yielding 6n2 smaller faces that are painted, with the remaining 6n3 – 6n2 smaller faces unpainted. Removing the outer layer on each face yields an (n – 2) × (n – 2) × (n – 2) cube of totally unpainted smaller blocks, with 6(n – 2)3 unpainted smaller faces. Thus, with each of the 6n3 – 6n2 = 6(n3 – n2) unpainted smaller faces equally likely, of which 6(n – 2)3 correspond to completely unpainted smaller cubes, the probability of landing on a completely unpainted small cube upon landing on an unpainted smaller face is ((n - 2)3)/(n3 - n2). When n = 8, the probability is 63/(83 - 82) = 216/(512 - 64) = 216/448 = 27/56. 2019 State Team Round, #4 Suppose that Martians have eight fingers and use a base-eight (octal) number system. If Marty the Martian says he is 37 years old on Mars, how old is he in Earth’s base-ten system? Just as 37 as a base-ten number means 3 × 101 + 7 × 100 = 3 × 10 + 7 × 1 = 37, so 37 as a base-eight number means 3 × 81 + 7 × 80 = 3 × 8 + 7 × 1 = 24 + 7 = 31 years in base ten. 2019 State Countdown Round, #12 For a particular sequence, each term is the sum of the three preceding terms. If a, b, c, d, e, 0, 1, 2, 3 are consecutive terms of this sequence, what is the value of a + b + c + d + e? As we’re told, each term is the sum of the three preceding terms. In order for this to be true, 2 = 1 + 0 + e, so e must equal 1. Similarly, 1 = 0 + e + d = 0 + 1 + d, so d = 0. Then, 0 = e + d + c = 1 + 0 + c, so c = -1. Continuing this pattern, we find that b = 2 and a = -1. Therefore, a + b + c + d + e = -1 + 2 + -1 + 0 + 1 = 1. CHECK THE PROBLEM OF THE WEEK ARCHIVE FOR SOLUTIONS TO PREVIOUS PROBLEMS
OPCFW_CODE
I'm testing upgrading SQL/SSRS from SQL 2008 R2 SP2 to SQL 2012 SP1 (11.0.3349). Most of my reports' data source is a Dynamics NAV DB where most of the number fields are decimal(38,20). I'm finding that when I have a zero value in a column on a report that the report loses formatting and throws xml exceptions when rendered to Excel. So a field formatted for currency would become 0.00000000000000000 and renders as text in Excel 2010, but if there's a value in the cell then the formatting is fine. I'm looking on the MS site and there is documentation that this is an Excel issue and was supposed to be fixed in a CU for Office 2010 Excel, but I didn't see it in the release notes for the CU. The error is "Excel found unreadable context.." There are no errors in the report itself only rendering. Rendering to other formats is fine. I've found a workaround where I change the value in a report by using an if statement to make it equal to zero (IIF(value=0,0,value), but I've got several hundred reports in my library and I wouldn't even know how many cells I'd have to change. I'm just looking to see if anyone else has ran into this issue during their upgrade testing process. I can repeat the error in a new report if I use as a Dataset Select Cast(0 as decimal(38,20)) d1 , CAST(0 as decimal(10,2)) d2 Export to Excel gets this error <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <recoveryLog xmlns="<logFileName>error056200_05.xml</logFileName><summary>Errors">http://schemas.openxmlformats.org/spreadsheetml/2006/main"><logFileName>error056200_05.xml</logFileName><summary>Errors were detected in file 'C:\Users\...ME...\AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.IE5\P42V8D4D\_TestExcel.xlsx'</summary><repairedRecords summary="Following is a list of repairs:"><repairedRecord>Repaired Records: Cell information from /xl/worksheets/sheet1.xml part</repairedRecord></repairedRecords></recoveryLog> The d1 column is now formatted as text 0.00000000000000000000 instead of as currency and the d2 column is formatted correctly
OPCFW_CODE
[🐞] Bug when using rewriteRoutes with 3+ routes and trailingSlash set to false Which component is affected? Qwik City (routing) Describe the bug The bug A simple rewriteRoutes for i18n like this seems to be creating an incorrect regex. Crashes with the error: SyntaxError: Invalid regular expression: /^\/fr\/it\/es\/test-es\/: \ at end of pattern const trailingSlash = false; const rewriteRoutes = [ { prefix: 'es', paths: { '': '', test: 'test-es', }, }, { prefix: 'it', paths: { '': '', test: 'test-it', }, }, { prefix: 'fr', paths: { '': '', test: 'test-fr', }, }, ]; But, 2 or less routes works fine. Or, if I switch the rewrite paths specifically to '/' : '' and set trailingSlash to true, then everything works as expected. However, if I set trailingSlash to false, it doesn't crash, but it also doesn't match the routes. Inconsistency when using grouped layout and one route. Seems that '/': '' configuration doesn't match index routes such as /fr when trailingSlash is set to false , but it matches when not using grouped layout as shown in the reproduction code. Possibly unintended Multiples slashes doesn't redirect or give an "404 error", I assume is not intended to allow paths like: [OK 200] example.com/path [OK 200] example.com//path [OK 200] example.com///path [OK 200] example.com////path Another regex problem Rewriting to the same path also produces an incorrect regex. // 3. Also crashes const trailingSlash = true; const rewriteRoutes = [ { prefix: 'es', paths: { test: 'test', }, }, { prefix: 'it', paths: { test: 'test', }, }, ]; Reproduction https://stackblitz.com/edit/qwik-starter-auqqff?file=vite.config.ts Steps to reproduce Run npm install && npm run dev It should give an error Play with vite.config.js, everything else is just boilerplate. Try removing a single route and check if works System Info "vite": "4.4.9" <EMAIL_ADDRESS>"^1.2.12" <EMAIL_ADDRESS>"^1.2.12" Additional Information The main issue with rewriteRoutes is that it doesn't offer the flexibility to rewrite only specific paths. Right now, you're required to rewrite all potential paths, and you can't rewrite them to the same path. For example, if someone wants to use the prefix feature to match any route but with a specific prefix, they'll have to rewrite all the paths, which can be quite limiting. I believe it would be a nice feature if rewriteRoutes could automatically rewrite routes to the default route in specific situations. For instance, if a request arrives with a prefix like /it, and the prefix matches, but the path / isn't explicitly defined as a key in the paths. In this case, it should automatically rewrite it to the default path without the prefix, only if / itself isn't specified as a value in the paths object. This way, it can safely fallback as if there's a rule to rewrite /it to /. I am going to guess that the issue is someplace around this PR: https://github.com/BuilderIO/qwik/pull/5122 @claudioshiver any chance you could look into the issue and create a fix? (Pretty please with a 🍒 on top?) @mishimalisa Did you solve this issue?
GITHUB_ARCHIVE
What is www-data user in Linux? What is www-data user in Linux? How do I add a user to the Apache group called www-data under Ubuntu or Debian Linux server operating systems using the command line option? You need to use the useradd command to add a user to the group called www-data under Ubuntu or Debian Linux system. Who is www-data user? ‘www-data’ is the user under which your web server runs. ‘www-data’ user has no password set by default. Is www-data a user or group? www-data is the user (and also group) that the service httpd (apache) is acting with on your system. Does nginx user www-data? Do not use www-data or nginx as website user. The username should reflect either the domain name of the website that it “runs”, or the type of corresponding CMS, e.g. magento for a Magento website; or example for example.com website. How do I set data permissions? Leave a comment - Establish a [new directory] at /var/www. - Change the directory owner and group: sudo chown www-data:www-data /var/www/[new directory] - allow the group to write to the directory with appropriate permissions: sudo chmod -R 775 /var/www. - Add myself to the www-data group: What is sudo Chown? The chown command changes user ownership of a file, directory, or link in Linux. A user with sudo privileges to change the ownership. Remember to run the commands with sudo to execute them properly. Which user is Nginx using? If GROUP is not specified, then nginx uses the same name as USER. By default it’s nobody user and nobody or nogroup group or the –user=USER and –group=GROUP from the ./configure script. What user should Nginx run as? Although NGINX master process is typically started with root privileges in order to listen on port 80 and 443, it can and should run as another non-root user in order to perform the web services. How do I give permission to HTML? Permissions for /var/www/html [closed] - I changed its owner using the following command (I used the user:group that was in httpd.conf ): chown -R apache:apache /var/www/html. - I added my own user to the apache group: usermod -a -G apache myuser. - I changed the permissions: chmod 777 /var/www/html -R. What is the use of chown command? The command chown /ˈtʃoʊn/, an abbreviation of change owner, is used on Unix and Unix-like operating systems to change the owner of file system files, directories. Unprivileged (regular) users who wish to change the group membership of a file that they own may use chgrp. What does sudo mean in English? super user do sudo is an abbreviation of “super user do” and is a Linux command that allows programs to be executed as a super user (aka root user) or another user. It’s basically the Linux/Mac equivalent of the runas command in Windows. How to add and delete users on Debian 10 Buster? Adding and deleting users is one of the most basic tasks when starting from a fresh Debian 10 server. Adding user can be quite useful. As your host grows, you want to add new users, assign them special permissions, like sudo rights for example. How do I create a user in Debian 10? To assign a password to a user, use the passwd command. If you installed Debian 10 with GNOME, you can also create a user directly from the desktop environment. In the Applications search bar, search for “Settings”. In the Settings window, find the “Details” option. Click on “Details”, then click on “Users”. What is the www-data user in Ubuntu? The Userid or Name of the owner doesn’t matter. Whatever is chosen or decided upon will have to be configured in the web server configuration files. By default the configuration of the owner is www-data in the Ubuntu configuration of Apache2. Since that is the default configuration, you conveniently know the ownership needed for your web files. Why do I need to use Debian on my computer? For networking, Debian is an obvious choice, especially if you prefer to support yourself rather than buy a service contract from Red Hat or SUSE. But, for a desktop user, Debian’s frequent lack of up-to-dateness may be frustrating, especially if you have hardware unsupported by its kernel.
OPCFW_CODE
I would like to have a dynamic labels for my chart. What I mean is, when I select a week or a month which is less than current month, then the label should display, "Thereof BB" and if the month or week selected is greater than or equal to current month then the label of the bar should display ,"Risk BB" . With the below mentioned expression I am getting to work this except that, when I select more than one past or future month, it is not working. =if(GetFieldSelections(WeekShort)>=month(Date#(WeekShort,'MMM')) or Month(Date#(POPER_NEW, 'MMM'))>=month(today()) ,'Risk BB' ,if(GetFieldSelections(WeekShort)<month(Date#(WeekShort,'MMM')) or Month(Date#(POPER_NEW, 'MMM'))<month(today()) ,'Thereof BB', 'Thereof BB')) Any help on this? Solved! Go to Solution. I think the GetFieldSelections() is probably the part that creates the issue for you. But I don't understand this part really, why are you comparing WeekShort selections to WeekShort? Is this the field you are making selections in? And are you using one of these fields as dimension in your chart? A small sample might help to understand your issue. Yes, WeekShort is the field in which selections can be made which has date as its values. Based on the date that a user selects in this field it should check if the date belongs to current or past months, that's the reason i have provided that condition. I have attached sample files. Given your sample QVW, what do you expect to see when you do some selections in WeekShort (for example, when selecting Oct in POPER_NEW, what should I select in WeekShort? And what do you expect to see in your chart?)? Looking at this part or your label expression Why are you using the month() function on your WeekShort date (and why are you using Date#() interpretation here, and in addition why with an incompatible format code?)? I don't really understand what you are trying to do here. In this expression, I am just trying to check if the month in the selected WeekShort is greater than or equal to current Month. So when there is a selection made in WeekShort, it should check which month it belongs to and based on that the label of the bar graph should change. Ignore the expression for the bars, i want a solution for labels. I think I have explained what I want in my original post. Hope you can help me! I've understood you are looking at the label expression. But where in are you checking against current month? Seems like we have an ongoing communication problem, sorry for that. I am out of this discussion. Just some general hints: - WeekShort is already a date value. No need to interpret it using Date#(). - Month(WeekShort) is returning the unique month of the unique date value as a single dual value (text and number) - If you select more than one WeekShort value from different months, Month(WeekShort) will return NULL - GetFieldSelections() will return a comma separated string of the dates and you are comparing this to a single month value
OPCFW_CODE
If you are trying to install a Microsoft Dynamics CRM 2015 on-premises version, you’ll need to keep in mind a few things concerning your IT-infrastructure: You’ll definitely need this: 1. Microsoft Windows Server The following Windows Server versions are not supported for installing and running Microsoft Dynamics CRM Server 2015: - Windows Server 2012 Foundation - Windows Server 2012 Essentials - Microsoft Windows Small Business Server editions - The Windows Server 2008 family of operating systems The following editions of the Windows Server 2012 operating system are supported for installing and running Microsoft Dynamics CRM Server 2015: - Windows Server 2012 R2 Datacenter - Windows Server 2012 R2 Standard - Windows Server 2012 Standard - Windows Server 2012 Developer 2. A Microsoft Windows Server Active Directory infrastructure The computer that Microsoft Dynamics CRM Server is running on must be a member in a domain that is running in one of the following Active Directory directory service forest and domain functional levels: - Windows Server 2008 Interim - Windows Server 2008 Native - Windows Server 2012 - Windows Server 2012 R2 3. An Internet Information Services (IIS) website Microsoft Dynamics CRM Server 2015 supports Internet Information Services (IIS) versions 8.0 and 8.5. 4. Microsoft SQL Server 2012 and Microsoft SQL Server 2012 Reporting Services Any one of the following Microsoft SQL Server editions is required and must be installed on Windows Server 2008 (SP2 or R2) 64-bit-based versions or Windows Server 2012 (RTM or R2) 64-bit-based computers, running, and available for Microsoft Dynamics CRM: - Microsoft SQL Server 2014, Enterprise, 64-bit - Microsoft SQL Server 2014, Business Intelligence 64-bit - Microsoft SQL Server 2014, Standard, 64-bit - Microsoft SQL Server 2014, Developer, 64-bit (for non-production use only) - Microsoft SQL Server 2012, Enterprise, 64-bit SP1 - Microsoft SQL Server 2012, Business Intelligence, 64-bit SP1 - Microsoft SQL Server 2012, Standard, 64-bit SP1 - Microsoft SQL Server 2012, Developer, 64-bit SP1 (for non-production use only) Remark: I have tried to install Dynamics CRM 2015 on a Microsoft SQL Server 2014 evaluation version, which did not end well. Whether or not this was due to the evaluation version is not clear to me. You’ll want to have this too: 5. Claims-based security token service and a wild card certificate (required for Internet-facing deployments) 6. Microsoft Exchange Server or access to a POP3-compliant email server (required for email tracking) 7. SharePoint Server (required for document management) 8. Windows operating system when you use CRM for Outlook. Apple Mac, when running Apple Safari, supported tablet, or mobile device. 9. Supported web browser, such as later versions of Internet Explorer or the latest versions of Apple Safari, Google Chrome and Mozilla Firefox. 10. Microsoft Office Outlook (optional). Upgrading from CRM 2013 to CRM 2015: If you are upgrading from a CRM 2013 to a CRM 2015, you might want to also watch this video which gives you more info on not only the things I mentioned above, but also about the possible upgrade scenario’s.
OPCFW_CODE
How to make an eventlistener in a UI thread to listen to an event from another UI thread started from the first thread? I have tried to look for a solution to this problem for some time, but nothing I've found solves my problem. I have two UI threads, Window A and Window B, in a single instance application where B is created and started from A. When I try to add an event listener in the A to listen for when B is visible or not, I get an NullReferenceException in System.Threading.Tasks.dll with the note "Object reference not set to an instance of an object.". I have tried to use a Dispatcher without any luck. Here is a mock up of my code (both classes are in the same namespace): public partial class A : Window { private B _b; private Thread _bThread; private Dispatcher _bDispatcher; public A () { InitializeComponent(); _bThread = new Thread(() => { try { _bDispatcher = Dispatcher.CurrentDispatcher; _b = new B(); Dispatcher.Run(); } catch (Exception ex) { Logger.Log(ex.message); } }); _bThread.SetApartmentState(ApartmentState.STA); _bThread.Start(); _b.VisibleChanged+= _b_VisibleChanged; // <= if this line is removed the program can start, but with this line I get the exception and the program crasches.. } private void _b_VisibleChanged(object sender, EventArgs e) { // change margin values on A.. } } public partial class B : Window { private static EventHandlerList Events = new EventHandlerList(); private static readonly object EventVisibleChanged = new object(); public B () { InitializeComponent(); // other stuff } private void Window_IsVisibleChanged(object sender, DependencyPropertyChangedEventArgs e) { TriggerOnVisibleChanged(); } #region Triggers private void TriggerOnVisibleChanged() { ((EventHandler<EventArgs>)Events[EventVisibleChanged])?.Invoke(this, null); } #endregion #region Event add/remove handlers public event EventHandler<EventArgs> VisibleChanged { add { Events.AddHandler(EventVisibleChanged, value); } remove { Events.RemoveHandler(EventVisibleChanged, value); } } #endregion } I don't know what I am doing wrong and I don't know how to make this work, can someone help me? PS. this is for wpf, not Forms.. DS. PS2. I know I did not have to create my own event and trigger and could just have used the IsVisibleChanged event in window B, but I have tried that with the same result..DS. I don't believe what you want is possible as WPF uses an 'apartment' model that adheres to thread affinity, i.e. threads cannot interact with each other. Furthermore, you may be falling into this trap more tightly... as it says in the article "WPF objects that have thread affinity derive from the Dispatcher object." See here for more information on threading in WPF EDIT: Why do it in multiple threads? Perhaps what you are looking for is the RoutedEventHandler (defined in the child window)... see this SO post for an example The reason why I'm doing it this way is because it is part of a bigger system that requires me to have it in multiple threads , so I am kind of stuck in that matter. I will however look in to what you are suggesting in your edit to see if that will work. Thanks for the answer =)
STACK_EXCHANGE
SQL : Can WHERE clause increase a SELECT DISTINCT query's speed? So here's the specific situation: I have primary unique indexed keys set to each entry in the database, but each row has a secondID referring to an attribute of the entry, and as such, the secondIDs are not unique. There is also another attribute of these rows, let's call it isTitle, which is NULL by default, but each group of entries with the same secondID have at least one entry with 1 isTitle value. Considering the conditions above, would a WHERE clause increase the processing speed of the query or not? See the following: SELECT DISTINCT secondID FROM table; vs. SELECT DISTINCT secondID FROM table WHERE isTitle = 1; EDIT: The first query without the WHERE clause is faster, but could someone explain me why? Algorithmically the process should be faster with having only one 'if' in the cycle, no? Seems like something you can easily test yourself Different queries, different result. Why compare? @HoneyBadger, true, my mistake, I was just hoping for an answer with an explanation. @jarlh sorry if my explanation was not clear, but different queries, same result. The question was the speed, and the reason behind it. To make the query faster, I'd try indexing on secondID, and with the second version of query, a partial index ON secondID WHERE isTitle=1. Which dbms are you using? Different products have different optimization tricks. Also, data distribution, statistics, etc. matters. I'd say, without the WHERE clause can in best case be marginally faster, but in most cases probably slower. A WHERE clause, and a proper index, will be much faster. In general, to benchmark performances of queries, you usually use queries that gives you the execution plan the query they receive in input (Every small step that the engine is performing to solve your request). You are not mentioning your database engine (e.g. PostgreSQL, SQL Server, MySQL), but for example in PostgreSQL the query is the following: EXPLAIN SELECT DISTINCT secondID FROM table WHERE isTitle = 1; Going back to your question, since the isTitle is not indexed, I think the first action the engine will do is a full scan of the table to check that attribute and then perform the SELECT hence, in my opinion, the first query: SELECT DISTINCT secondID FROM table; will be faster. If you want to optimize it, you can create an index on isTitle column. In such scenario, the query with the WHERE clause will become faster. This is a very hard question to answer, particularly without specifying the database. Here are three important considerations: Will the database engine use the index on secondID for select distinct? Any decent database optimizer should, but that doesn't mean that all do. How wide is the table relative to the index? That is, is scanning the index really that much faster than scanning the table? What is the ratio of isTitle = 1 to all rows with the same value of secondId? For the first query, there are essentially two ways to process the query: Scan the index, taking each unique value as it comes. Scan the table, sort or hash the table, and choose the unique values. If it is not obvious, (1) is much faster than (2), except perhaps in trivial cases where there are a small number of rows. For the second query, the only real option is: Scan the table, filter out the non-matching values, sort or hash the table, and choose the unique values. The key issues here are how much data needs to be scanned and how much is filtered out. It is even possible -- if you had, say, zillions of rows per secondaryId, no additional columns, and small number of values -- that this might be comparable or slightly faster than (1) above. There is a little overhead for scanning an index and sorting a small amount of data is often quite fast. And, this method is almost certainly faster than (2). As mentioned in the comments, you should test the queries on your system with your data (use a reasonable amount of data!). Or, update the table statistics and learn to read execution plans.
STACK_EXCHANGE
Back in Spring 2020, when the Coronavirus just hit the US, I was enrolled in a Master's degree at Worcester Polytechnic Institute. With less than six months left to graduate, the urgency to get a job felt very real. For those of you interested, I was looking to get a job in the Data Science field, as a Data Analyst or Data Scientist. I had been applying to jobs for quite some time and was really struggling to get any interviews. I felt I did check most of the requirements on the job description. I had been doing projects and coursework for about two years until graduation, but I still could not get a job. So I was convinced that there was something wrong in the way I was applying to jobs. Like any sane person, I would apply to job postings that I could find on LinkedIn or Indeed. On any given day I was applying to about ten companies. I would find the posting on the job board, locate the same posting on the careers page of the company and apply on the company’s career page. Pretty straightforward right? If this task was that simple you wouldn’t be reading this. I recalled what Daniel Bourke said: Job portals are dead. If I click that button, it's always a no. I learned that if I apply through a job portal then it is almost certain that I would not get an interview. Don’t get me wrong, people do get interviews this way too but I am convinced it is a highly unlikely gamble especially for a fresh graduate like me. Anyway, I think I should have another article out in the future detailing my struggles finding a job fresh out of college. My next move was to come up with a different approach to get a job. I came across this YouTube video one day where someone talked about how he would send out emails to recruiters making a pitch for himself for a job that was posted by that company recently. That is how he got his first job in Data Science. I thought to myself “great, I’d never heard of something like this before!”. I mean being able to circumvent the process of filling up a form and waiting for it to be picked by the robot that was “parsing” my resume felt a lot like waiting for an H1B application to be picked in the lottery (my fellow Indian and Chinese students in the US would get this analogy) but with even worse odds. That is when I decided to try this approach myself. I would have loved to share that Youtube video but I could never find that video again — funny how YouTube recommendations and searches work. Alright, I was excited to try this new approach. Except there were several challenges. Well, I did my research. Here are the solutions I came up with, answered in the same order as the questions above: By now all you computer programmers probably know what I did next. I wrote a Python script! It is important to note that I did not come up with these solutions in an instant. It took me a few days to formulate these solutions. Most of this I understood from just searching on the internet. I broke down the Python script into three sub-tasks. 1. Have a source/file where you have the information for every recruiter. For the source file, I decided on using Google sheets, since I didn’t want to buy MS Excel to use on my Mac. I manually searched for the jobs I was interested in on job boards like LinkedIn and Indeed. I then entered the information into my sheet as seen below. Note that I have used dummy names and emails here, but you get the idea. 2. For the email body I created a .txt file with Python template strings so that I can substitute information for every individual recruiter. As you can see below the email body accounts for substitutions for the name of the recruiter (REC_FIRST_NAME), name of the company (COMPANY_NAME), name of the position (POSITION_NAME), and name of the location for the position(POSITION_LOCATION). This is just an initial template I created. Feel free to tweak the content to your liking. This way I could customize the content of my email to each recruiter and not come across as a generic email that the recruiter thinks I probably sent out to multiple recruiters/companies. The code can be found in my GitHub repository here. Here are the Python libraries I used : Note that I was using outlook email to send out emails. The above libraries should work if you plan on sending out emails using Outlook. However, I did try sending out emails with Gmail but I don’t think it worked. Instead of using the “smtplib” and “email” libraries I used “yagmail” which works fine with Gmail. I have a separate script for using Gmail on my GitHub. There are additional settings like the two-factor security/authentication that Gmail offers which you might have to change in order for the script to use your gmail to send emails. Just something to keep in mind. Then I created a function called “read_template” that will read the txt file containing the email body into a “Python Template” — see below embedded code. This allows the Python script to make substitutions in the email body and utilize the template strings later on. Note that I did not parse the google sheet directly (although it is possible with a little tweak) but instead downloaded the sheet as an “.xlsx” file onto my local machine. I created another function called “get_contact_dict” to parse the excel file (using the “openpyxl” library). This will extract the information from the excel sheet that is required down the line when substituting it into the template(email body). As you can see, I did type in a few “checks” into the function so that when it gets called at the time of running the script then it prompts me to check if I put in the right pathname of the .xlsx file or the right sheet number within the workbook. Also, I would parse only a part of the entire .xlsx file (I send out emails to only those recruiters that I shortlisted on a given day) which variables “start_row” and “end_row” take care of. Then I do some string manipulation because in my excel sheet the names and email addresses of recruiters are within one cell, separated by commas. With the functions defined and libraries imported then I wrote the main program that sets up the variables, calls these functions, and sends out the email. Above you can see that I set up the variables required by the script that decide what part of the .xlsx file to parse, where to find the .xlsx file, path to my resume, email_id, password and smtp details. Make the function calls. Set up the smtp server. Then I substituted the values in the email body template. Finally I entered the details for the email like “from”, “to”, “Subject”, attached my Resume and sent out the email! By now you might be asking did all of this effort actually work? Well yes and no. This will not guarantee you a job or even an interview for that matter. But it did get me an interview with an established investment firm based in New York for a Data Analyst position that required about 2–3 years of work experience. Would I have gotten this interview with my earlier approach of applying through the job portal? No. The beauty of this was that the position I interviewed for was not even posted on the careers page of the company. I actually sent out an email (using this script of course) for a Data Visualization Analyst position at the same company. The recruiter who I sent the email to never replied back and I assumed this was a dead end. But then two weeks after sending out the application, another recruiter from the same company reached out to me for the Data Analyst position. I can only surmise that the recruiter who I initially sent out an email to probably forwarded my resume to the other recruiter. I was later told during the interview by the CTO of the company that he picked my resume because I had a project that worked with the kind of data that was relevant to the data the company used. Did I get the job? I have no clue. I went through about four different rounds, the last one being with the CTO and MD of the company. After that, I was ghosted, even though I tried to follow up. Long story short I was able to circumvent the filter where a robot reads my resume. I was able to reach a human being, a recruiter, who went through my resume or at least my email. From getting automated replies rejecting my application, I was able to get a recruiter to reach out to me inviting me to interview for a role. I‘d say that is an achievement. Anyway, I’d be happy to listen to your challenges with finding jobs, or if someone actually used the script I shared then whether it worked for you or not. What other improvements can you guys think of? I will keep you guys posted with future articles as to what other methods I tried. I hope this read was of some value or at the very least an entertaining read. Either way good luck with your job search!
OPCFW_CODE
Data needs and system complexity in hospitality are undergoing exponential growth. Traditional sources of data (PMS, CRS, POS and S&C) are being supplemented by data from web site interactions, online booking activity, IoT and social media posts. This data is collected and stored in various systems of record. To connect with each other, vendors have developed interfaces that exchange data between systems, with each interaction typically triggered by some kind of an update such as a new reservation gets created, room is assigned, guest checks in/out or a room charge is posted. In many cases the same data ends up exchanged, stored and managed in different systems of record. For example, profile records of a guest that has stayed in multiple hotels will exist simultaneously in each property’s PMS database, in the central reservation system, in the OTA database, in an enterprise data warehouse and in a variety of guest service applications. So what we have is dozens of systems talking to each other in a point to point fashion exchanging the same kinds of information that each system tries to control. As an organization’s data needs grow and system complexity is increased and when it needs to scale out and think about distributed infrastructures, it ends up with a web of unreliable and unmanageable data connections. What is data? Is it a series of records, such as profiles, reservations, inventory, etc.? Yes, but more importantly it is a stream of events. Events that are not only able to tell us about the current state of affairs, but also the sequence, pace and customer interactions that have been leading up to the current state. Events are important facts that create the knowledge about one’s customers and one’s business. The HAPI approach is to put hospitality’s data on a streaming platform that would facilitate real time sharing of events between systems, services, and applications. Guest looks for a room, makes a reservation, changes it, checks in, changes rooms, checks outs. Does only the PMS system care about it? What about applications for guest service, digital marketing, revenue management, loyalty, CRM. The HAPI platform acts as a central hub for hospitality data streams. Applications that integrate don’t need to be concerned with the details of the original data source or specific message formats of contributing systems (OXI, HTNG, etc.). As data enters the platform, it gets normalized to a canonical format representing specific business entities and events. The platform acts as a buffer between systems — the publisher of data doesn’t need to be concerned with the various systems that will eventually consume and load the data. This means consumers of data are fully decoupled from the source. Several specific use cases come to mind where HAPI is particularly well suited. One of them is Real Time Event Notifications or data streaming. This is best suited for real time applications and use cases where services are decoupled from user actions. For example, the need to send a confirmation email when a reservation is created, or an alert to housekeeping to clean the room, or a welcome message to the guest via an online app. These services have traditionally relied on notification-based interfaces via messaging systems that are difficult to manage and scale, and implementing them via calls to a REST API layer is impractical. The HAPI platform enables real-time applications built to react to, process, or transform streams. This is the natural evolution of the world of Enterprise Messaging which focused on single message delivery. Another use case is Data Integration, or the need to move data between systems. For example, feeding and consolidating data from PMS and POS into an enterprise data warehouse. Traditionally this has been handled by ETL tools, but this approach lacks real time, does not scale and becomes messy with growing sets of data sources are integration complexity. The HAPI platform captures events or data changes in real time from source systems and processes them mid-stream as it feeds data to consuming systems such as relational databases, key-value stores, Hadoop, or a data warehouse. As this is happening in near real time, systems are always up to date. Due to the persistent nature of event data storage in HAPI as the streams flow through the platform, when new systems are added to the flows they can be quickly re-populated with past events, facilitating historical analytics as opposed to just capturing the current state. There are more use cases that are opening up with the new and open technologies that lie at the heart of HAPI. As the ecosystem grows, so does the potential for innovation that goes beyond the solving of traditional integration challenges that have faced the hospitality industry. It’s time to join the charge and get HAPI!
OPCFW_CODE
How I Fixed My HotMail Signin Problems One of the biggest problems I have this year is being able to manage my email for all email addresses I have in one simple email mailbox. For the longest time, I was using Apple Mail for my email as I was drinking the Apple Kool Aid full force and used not only Apple products exclusively but also relied on Apple mail servers to deliver email through my .me email account. But this need for Apple as my default email addresses have changed and I find myself looking for other email alternatives, and one of the solutions I had used many years ago was for hotmail as my email address. The problem was I forgot both my hotmail username and password as the last time I had used hotmail was years ago – before hotmail became a microsoft product and was still a seperate and one of the first to offer free email for everybody. I tried a few different usernames I thought I may have used by then as the username you pick in your 20’s tend to be an entirely username you pick in your 40’s – at least I tend to feel that way. I’ve grown up some. But they all didn’t work as now Microsoft is asking for my Window Live ID which I have no idea on how to find as I’ve been a true blue Mac user. At first I tried to use what I thought might be a reasonable username for Hotmail, but nope, no luck. I tried to use the ‘forgot your password’ feature that is offered at the hotmail sign up page but I guess my hotmail account didn’t transfer over either from being a dormant email account for so long or that when MS bought hotmail they weeded me out. Either way, I was screwed when it came to logging into my old hotmail account, so after a while I just decided to not bother signing up for hotmail at all, as I already have a .me and gmail account. What I ended up doing was to forward all my email to Google Gmail as I find that Gmail has an outstanding spam filter. The Apple Mail also offers a spam filter too but I didn’t feel it was doing a good job. And, I recently switched smartphones from an Apple iPhone to a Motorola Droid 3 and again – I felt the Gmail was doing a better job than than the Mobile Me on the Android platform so I made the switch to Gmail and never looked back. To be fair to the Apple Mail I have yet to switch over to the Apple Cloud as I find myself relying less and less on Apple products over the past year and I feel that competitors are offering much more value in 2011. Apple does still make some outstanding products no doubt about that but I’m becoming less and less willing to pay the premium Apple demands. I ended up fixing my hotmail signin problem by just giving up and using a competitor’s product instead that allowed me to login easier than I could with what is now, Microsoft Hotmail.
OPCFW_CODE
What is the interop dll? I need some clarification. I have a Reportwriter dll that uses Crystal Reports. It is written in VB6. I have to add this dll to my asp.net project, where it creates an interop dll. To my understanding, the interop dll is there as an intermediary so that my .net code can speak to the Reportwriter dll. So do I register the interop dll or do I register the original dll? +1 I think it was a good question even if no one else will vote for you. :) When you write code in VB6, the compiled result is a COM component. COM components provide interfaces, coclasses, structs and enums, which are normally described using a COM type library. However, to consume that COM component in .NET, you need type description in a format that .NET understands - that is, a .NET assembly (since it cannot work with type libraries directly). An interop assembly is therefore just a "converted" COM type library, in a sense that it contains descriptions of interfaces, structs etc that correspond to the same things in a type library. (The above is somewhat simplified, as interop assembly doesn't have to be produced from a type library - you can hand-code one if you want, for example.) Contrary to what is often said, an interop assembly doesn't contain any executable code, and it doesn't do any marshalling. It only contains type definitions, and the only place where it can have methods is in interfaces, and methods in interfaces don't have an implementation. Marshaling .NET calls to COM ones is actually done by CLR itself based on type descriptions loaded from interop assemblies - it generates all necessary code on the fly. Now as to your question. You need to register your COM DLL (the output of your VB6) project - for example, using regsvr32.exe. You shouldn't (in fact, you cannot) register an interop assembly that way, because it's not a COM component - it's just a plain .NET assembly, so you can either put it in the same folder with your .exe/.dll, or put it into GAC, as usual. So how does the interop know where the report dll exists, when the i make calls to its methods from my .NET code? Does the interop translate a key in the registry? When you register your COM component, its location is recorded in registry, and can be retrieved from there if the GUID of component is known. Interop assembly knows GUID for the typelib from which it was generated, and GUIDs for all types within that typelib - they're stored as .NET attributes. The runtime will then use this information to resolve COM component via registry. Good answer from Pavel. In addition, beginning with the .NET Framework version 4 you do not need to deploy the interop assembly with your application - http://msdn.microsoft.com/en-us/library/tc0204w0.aspx You should register your VB6 dll and reference it in your .NET project; that reference will create your Interop.dll You're correct. The interop DLL wraps the calls to the VB6 component and makes them transparent. When registering the DLLs on the machine you'll be executing the application on, you still have to register the VB6 DLL. The interop DLL will sit your app's bin folder and Marshal the calls out.
STACK_EXCHANGE
I often end up with tons of Finder windows open. I usually realize this when I ⌘-tab to Finder and notice there are 20 windows. I generally want to keep the frontmost window but close all the others. Turns out there is a super simple AppleScript to do this (source): I know literally nothing about GIS, but I need to figure it out because I need to do some spatial querying. Specifically, I need to find all the Census Blocks that are in a given urban area. This is a I'm documenting it here for anyone else who needs to get into GIS and doesn't know where to start. QGIS is the application of choice here. It's like open source ArcGIS. ArcGIS is the Microsoft Office of GIS. Setup on OS X You need matplotlib from this page. Download QGIS (open source ArcGIS). You need the GDAL and NumPy installer from this page as well. Opening Census shapefiles I want to work with this shape file from the Census for urban areas (ftp://ftp2.census.gov//geo/tiger/TIGER2010/UA/2010/tl_2010_us_uac10.zip). Download, unzip. The in QGIS go to Layer > Add Layer > Add Vector Layer... Looks like it works. Adding another layer I want to look at Census Blocks in a specific urban area. For example, Abbeville, LA (UACE=00037) as defined here. Same deal as before for adding the layer (ftp://ftp2.census.gov//geo/tiger/TIGER2010/TABBLOCK/2010/tl_2010_22113_tabblock10.zip). Then do this to see them overlaid: Finding Census Blocks in the urban area First, find Abeville using the Query Builder (cmd-f): This will hide all the other urban areas from the map. Go back in to the Query Builder and delete the query to get everything back. Select the urban area of interest using the "Select Features" toolbar button (yellow square with a mouse pointer). Now, to find the blocks in the urban area. This is done with the Spatial Query plugin. Enable it by going in the Plugin Manager in QGIS, searching for "spatial" and toggling the checkbox. You should have this icon in the sidebar now: Then run this query by clicking that button and selecting the appropriate items in the dropdown (in the screenshot, the results are shown too): Create a new layer based on the spatial query: Then right click on that layer created from the spatial query, choose "Save As..." and save it as a CSV. This will export the attribute table for the layer, which is essentially a list of all the Census Blocks in the specified urban area! I highly recommend using this with these high-resolution Hubble photos. I often want to find the full text article for a PubMed entry. This simple bookmarklet takes the PubMed page for a specific article (like this one) and goes directly to the list of full text options for my library. It beats the "Find at UMB" button that PubMed sometimes displays because it doesn't open a new window. I'm posting this here because it could be easily modified for a different library. You can use this bookmarklet creator to make your own bookmarklet with the code above. If you're from UMB and want to use my bookmarklet, go to this page. The problem: SaneBox takes a second or two to automatically move messages out of my inbox, which causes the MailMate to temporarily show an unread message on the Dock icon and status bar, or show a notification. My trick is to set up a Smart Folder called "Inbox badge" and use this as the source for my unread count and notifications. The code is in a Gist below. All files go in a single folder (say, ~/git/myproject). To run it (after bundle installing), open two Terminal tabs in this folder. In the first one, run guard. In the other, run ruby serve.rb. Then hit up http://localhost:4567 in your browser of choice. LiveReload should automatically reload the page when you change index.html (as configured; modify the Guardfile to watch additional files for changes). (These instructions are a little vague, because if you can't easily fill in the blanks you should probably just use the app. It's not worth learning all about Ruby/Gems/Rack/Guard to save $10.) I'm giving MailMate a try again after using Mail.app for the last six months. I accidentally deleted my custom keybindings 😢 so I had to re-create them. Here's what I came up with, on top of the built-in Gmail keybindings: Note that some of these are not listed in the official custom keybindings documentation. So far it's good to be back in MailMate. Mail.app has been getting slower and slower, especially search, which prompted my switch. So far I like the (fairly minor) changes to MailMate since I used it last. By default, SAS will format a 2×2 contingency table like this if you have 1=yes 0=no binary variables: | | outcome=0 | outcome=1 | |-----------|-----------|-----------| | exposed=0 | | | | exposed=1 | | | But we want it like this: | | outcome=1 | outcome=0 | |-----------|-----------|-----------| | exposed=1 | | | | exposed=0 | | | The following code demonstrates how to do this: And here's the output from this code: This AppleScript will open a "Save as" dialog, which lets you specify a folder and filename, using the folder of the front-most Finder window as the default folder. It will then save the (text) contents of your clipboard to that .txt file. tell application "Finder" if (count of windows) > 0 then set theDefault to the POSIX path of ((folder of window 1) as alias) else set theDefault to path to desktop end if end tell set resultFile to (choose file name with prompt "Save As File" default name "paste.txt" default location theDefault) as text if resultFile does not end with ".txt" then set resultFile to resultFile & ".txt" set resultFilePosix to quoted form of the POSIX path of resultFile do shell script "pbpaste > " & resultFilePosix
OPCFW_CODE
no support for git < 1.7.10 Taking cowboy as an example I want to fetch the lib from the authors depo first then edit the rebar.config and change git:// to https:// run rebar3 again and have the transient dependencies downloaded. The old rebar would create a deps folder for me and place the files there. Any advice? Use overrides http://www.rebar3.org/docs/configuration#overrides I am getting: ===> Failed to fetch and copy dep: {git,"https://github.com/ninenines/cowboy.git", {tag,"2.0.0-pre.2"}} when doing rebar3 compile My .config is: {erl_opts, [debug_info]}. {deps, [ {cowboy, {git, "https://github.com/ninenines/cowboy.git", {tag, "2.0.0-pre.2"}}} ]}. {overrides,[ {override,cowboy,[]} ]}. The get-deps with rebar works as expected cloning the remote repo locally. I have bot env var in the system and https.proxy in git set to use my proxy server. By the looks of it I do not think rebar3 finds its way through the proxy. git repos use the git command if you need a proxy to be used for it you have to set that up outside of rebar3. And the overrides should include the deps entry you want to override in cowboy with the new ranch and cowlib entries. I have the proxy set as a global configuration entry for my git client otherwise the old rebar would not work. Also the usual 'git clone' stuff works as expected... Then maybe the issue is that rebar3 uses a tmp directory to fetch the repo before copying it to _build. Can you write to /tmp? yes If you run it was DEBUG=1 rebar3 compile it'll print out the actual git command it runs so you can try that directly, it must be something about the options or cwd it uses. This is what I am getting: ===> Due to a filelib bug in Erlang 17.1 it is recommendedyou update to a newer release. ===> Verifying dependencies... ===> Fetching cowboy ({git,"https://github.com/ninenines/cowboy.git", {tag,"2.0.0-pre.2"}}) ===> sh info: cwd: "/home/xxx/erlng" cmd: git clone https://github.com/ninenines/cowboy.git .tmp_dir331214434636 -b 2.0.0-pre.2 --single-branch ===> opts: [{cd,"/tmp"}] ===> Port Cmd: git clone https://github.com/ninenines/cowboy.git .tmp_dir331214434636 -b 2.0.0-pre.2 --single-branch Port Opts: [{cd,"/tmp"}, exit_status, {line,16384}, use_stdio,stderr_to_stdout,hide,eof] ===> sh(git clone https://github.com/ninenines/cowboy.git .tmp_dir331214434636 -b 2.0.0-pre.2 --single-branch) failed with return code 129 and the following output: error: unknown option `single-branch' usage: git clone [options] [--] [] -v, --verbose be more verbose -q, --quiet be more quiet --progress force progress reporting -n, --no-checkout don't create a checkout --bare create a bare repository --mirror create a mirror repository (implies bare) -l, --local to clone from a local repository --no-hardlinks don't use local hardlinks, always copy -s, --shared setup as shared repository --recursive initialize submodules in the clone --template <path> path the template repository --reference <repo> reference repository -o, --origin <branch> use <branch> instead of 'origin' to track upstream -b, --branch <branch> checkout <branch> instead of the remote's HEAD -u, --upload-pack <path> path to git-upload-pack on the remote --depth <depth> create a shallow clone of that depth ===> rebar_fetch exception throw rebar_abort [{rebar_utils, debug_and_abort,2, [{file, "/home/travis/build/rebar/rebar3/_build/default/lib/rebar/src/rebar_utils.erl"}, {line,505}]}, {rebar_utils,sh,2, [{file, "/home/travis/build/rebar/rebar3/build/default/lib/rebar/src/rebar_utils.erl"}, {line,175}]}, {rebar_fetch, download_source,3, [{file, "/home/travis/build/rebar/rebar3/_build/default/lib/rebar/src/rebar_fetch.erl"}, {line,45}]}, {rebar_fetch, download_source,3, [{file, "/home/travis/build/rebar/rebar3/_build/default/lib/rebar/src/rebar_fetch.erl"}, {line,29}]}, {rebar_prv_install_deps, fetch_app,3, [{file, "/home/travis/build/rebar/rebar3/_build/default/lib/rebar/src/rebar_prv_install_deps .erl"}, {line,577}]}, {rebar_prv_install_deps, maybe_fetch,5, [{file, "/home/travis/build/rebar/rebar3/_build/default/lib/rebar/src/rebar_prv_install_deps .erl"}, {line,415}]}, {rebar_prv_install_deps, update_unseen_src_dep, 10, [{file, "/home/travis/build/rebar/rebar3/_build/default/lib/rebar/src/rebar_prv_install_deps .erl"}, {line,343}]}, {lists,foldl,3, [{file,"lists.erl"}, {line,1261}]}] ===> Failed to fetch and copy dep: {git,"https://github.com/ninenines/cowboy.git", {tag,"2.0.0-pre.2"}} Aaah, you must have an old git version. git version 1.7.1 Can you upgrade to 1.7.10? https://lkml.org/lkml/2012/3/28/418 I will try doing that, can you advise was is causing the error? Yea, it says it failed because single-branch doesn't exist, which was introduced in 1.7.10. sorry, was not familiar with the 'single-branch' concept in git. are you not planning on making rebar3 pre 1.7.1 git compatible? No, 1.7.1 is over 5 years old. I upgraded to 1.7.12 which did not work for me - a 'git clone' just hung with no error. I did some debugging and I suspect a problem with SSL handshake. I do not have time to debug now. I am running Oracle Linux Server release 6.4 and the 1.7.1 is in the standard repository so I have reverted back to the old version. I will see how much effort would be involved into getting a newer git working and using rebar3 later, for now I will have to stick with good old rebar. I would advise getting rebar3 to work with old git too if it is not too much work. Thank you for looking into this. @tsloughter we use Ubuntu Precise at the moment and it will be supported for another 2 years the git version in it is 1.7.9 - http://packages.ubuntu.com/precise/git-core What's your plans about Ubuntu LTS support? @coolfeature could we change 1.7.1 in the title to 1.7.10? Did @tsloughter not say rebar3 should work with 1.7.10? You title says that rebar does not work with git <=1.7.1, actually it also does not work with version 1.7.9 of git Ok, I am changing the title to 'no support for git < 1.7.10'. Thanks! We have no plans about Ubuntu LTS support. --single-branch is too good of an optimization to turn it off for everyone because of 5 year old installs using an outdated git version, I believe. Specifically, the LTS version of Ubuntu you use had support for Erlang R14B04, I believe, which is also not supported by rebar3 (R15B03 is the oldest version we still test, a year newer!) -- if we were to go and support old git version for that LTS, we'd pretty much also sign up for older Erlang versions. It would at least be very weird not to do so. There's just no plan for to commit supporting the schedules of entirely unrelated projects for what we work with now, and at the very least, there hasn't been a compelling argument in favor of it yet. We actually don't use Erlang from Ubuntu repos (we use our own OTP fork) and as for git we've decided to use backported git until we move from Ubuntu Precise to Trusty There is (sadly) currently no plan to go back and support a 5+ years old version of git when it means that all branch fetches end up grabbing more data than required, making the tool needlessly slower for everyone else. well, we could make rebar3 smarter by doing a check on the version of git first and then carrying on with its tasks with more or less optimized way? I'm not too keen on it. If the check can be guaranteed to run only once it's not too bad, but this won't be a priority for us given the target systems running that git version are also using older erlang versions than we support.
GITHUB_ARCHIVE
From Jason Hinkle’s Blog This is a very old article. Again I should stress that the point of this trick is not to wipe out the saved password for login forms. I use it for registration forms and preferences pages that allow users to change their password. In these cases, auto-filling the password can cause problems for the user. This is not a security fix. I have a new technique now which is much simpler. Create a hidden “honeypot” password field at the top of your form. Browser auto-complete features will only fill in the first password field that they hit. So, by having a dummy password field at the top of your form, you can trick the browser into filling out that field instead of the real password field. It looks like this: <!-- honeypot fields --> <input type="text" style="display: none" id="fakeUsername" name="fakeUsername" value="" /> <input type="password" style="display: none" id="fakePassword" name="fakePassword" value="" /> <!-- real fields --> <input type="text" id="username" value="" /> <input type="password" id="password" value="" /> — ORIGINAL POST BELOW — Autocomplete is a nice feature which fills in common form fields automatically for the user. However, in some cases you don’t want this to happen. Some examples could be an account management page where you don’t want the admin password to be auto-filled in when you are creating and managing accounts. Another example is any site that has a “My Account” page with a field allowing you to change your password. Auto-complete can accidentally fill in these fields because it thinks it is a login form. IE uses a non-standard attribute (autocomplete=”off”) that can be added to an entire form or one specific input control. Besides the fact that this attribute will make your HTML markup fail compliance tests, Firefox seems to consider this tag merely a “suggestion” and will disregard it at times. In particular, Firefox will *always* populate certain password fields. There is seemingly no way to tell Firefox not to fill in a field if it really wants to do so. This can be a very bad thing if you are dealing with a user preference page or something sensitive where you don’t want autocomplete to ever occur under any circumstance. Setting value=”” is equally worthless because Firefox seems to populate this value just after the page is rendered. The following code however will work. The concept is basically to set a timeout a fraction of a second after the page loads which clears the password field. Technically Firefox still populates the field, however this script code removes it almost instantly. As an added bonus, because you are not using autocomplete=”off” your HTML markup should still validate. This code should be placed at the bottom of your page beneath your form. // this brutally clears a password field in firefox // compliments of verysimple.com var pw = document.getElementById('MyPasswordFieldName'); if (pw != null) pw.value = ''; This code could probably be made more generic by enumerating through the form elements and searching for a certain class name. This way you could have one script and simply append a classname to any field that you don’t want auto-complete to occur. This technique is similar to one posted on Chris Holland’s blog. Chris’s solution, however, is aimed exclusively at the Wc3 compliance issue. As you can see in his code he adds the autocomplete=”off” attribute, which allows the page to validate properly, but doesn’t solve the Firefox/Password field issue. If you have a more graceful solution and/or decide to flesh this idea out, please post a comment and I’ll provide a link to your site.
OPCFW_CODE
Thriven and thronovel Cultivation Online – Chapter 221 Disciples From Other Sects broad agree to you-p2 The People’s Idea of God Its Effect On Health And Christianity Novel–Cultivation Online–Cultivation Online Chapter 221 Disciples From Other Sects merciful imperfect “T-This is…!” The sect elder was immediately stunned to check out the gold detection expression, in which he changed to consider Yuan with large vision full of disbelief and regard. And simply when he was thinking, Yuan all of a sudden been told the disciples around him conversing with enjoyment, “Would you notice? Disciples off their sects are still complicated the Carp Leaping Over Dragon’s Door Tower!” Many disciples from around the sect could possibly be observed getting close to a similar direction with this moment— towards Carp Jumping of Dragon’s Door Tower to experience the special event. Soon after Yuan left behind the Trade Hallway since he discovered that one particular cannot easy access the 2nd ground unless they are Intrinsic Judge disciples, he unexpectedly recalled the silver medallion that Elder Xuan experienced presented him. ‘I didn’t know I possibly could use my following id badge like this… Now I will go to the Treasury Hallway also.’ Yuan believed to him self. “T-This is…!” The sect elder was immediately surprised to find out the rare metal detection token, and he converted to consider Yuan with large view filled with disbelief and consideration. “T-This is…!” The sect elder was immediately surprised to determine the precious metal detection token, and then he converted to check out Yuan with vast sight packed with disbelief and honor. Even so, he managed to understand the wonderful lighting fixtures produced because of the tower, and whoever was tough the tower currently was in the 40th ground. “No, I do believe this has happened prior to, but that has been long ago. Exactly why are disciples utilizing sects challenging the tower so out of the blue? Have a thing arise just recently?” Having said that, right as he converted around and walked a handful of actions, Elder Xuan’s tone of voice resounded in his top of your head, “Disciple Yuan, go to the Internal The courtroom in order to find Elder Shan. She has some things to give you and also anything to determine you. If you require aid joining the Inner Judge, display the elder your golden disciple identification badge.” “No, I believe this has happened just before, but which has been very long before. How come disciples using their company sects difficult the tower so suddenly? Does a little something take place lately?” Yuan appeared throughout the sect by using a thinking start looking, asking yourself where he should investigate seeing that his experience acquired finished much faster than he’d envisioned. “Certainly. Here’s my detection badge.” Yuan replied while he presented the elder his gold bullion detection badge. “I would choose to navigate to the Treasury Hall,” Yuan solved, intentionally not bringing up Elder Shan. Many moments in the future, Yuan reached the front door into the Inside Judge. Yuan planned to check out the disciples out of the other sects obstacle the tower, but he recalled just how long it had taken Minutes Li to finish, which quickly designed him lose interest, because he wasn’t willing to stay around to get a whole full week until each of them finish off. A couple of events after, as soon as Yuan seen that the Treasury Hallway is at the interior Court, he sighed out boisterous, “Finally, I still need to grow to be an Inside Courtroom disciple…” strange love adventures #1 One time he’d found Blossom Maximum, Yuan set about creating his way towards Elder Shan’s location, even pa.s.sing the Treasury Hallway on the way there. The James Deans Even so, appropriate because he changed around and walked a handful of ways, Elder Xuan’s voice resounded in the go, “Disciple Yuan, navigate to the Intrinsic Courtroom and get Elder Shan. She has some things to offer you and as well something to inform you. If you want aid going into the interior Judge, present the elder your yellow gold disciple id badge.” “Who cares! Do you need to go enjoy or otherwise?” The moment he’d located Blossom Optimum, Yuan began creating his way towards Elder Shan’s place, even pa.s.sing the Treasury Hall over the way there. ‘It could be wonderful when we could see what’s occurring on the inside, but alas…’ “Who cares! Do you wish to go watch or perhaps not?” And as he was considering, Yuan abruptly read the disciples around him conversing with pleasure, “Do you discover? Disciples from other sects are currently challenging the Carp Jumping Over Dragon’s Gate Tower!” Take a look at lightnovelw/orld/[.]com for any much better expertise Sometime afterwards, Yuan came to the Carp Jumping of Dragon’s Gate Tower where many other disciples and perhaps sect elders ended up actually existing. Therefore, after ranking around for several minutes, Yuan wanted to keep the site and come back to his own home. Yuan was slightly dumbfounded by how uncomplicated it was actually to get in the Inner Court. ‘I didn’t know I was able to use my 2nd detection badge like this… Now I could visit the Treasury Hallway also.’ Yuan shown to him or her self. “T-This is…!” The sect elder was immediately astonished to discover the yellow gold detection token, and that he made to consider Yuan with vast eye packed with disbelief and honor. On the other hand, he managed to begin to see the fantastic lighting fixtures produced by the tower, and whoever was tough the tower at this time was on the 40th ground. Meanwhile, status in the front, there were clearly seven disciples and four sect experts using their individual sects. Not surprisingly, Prolonged Yijun and a few of the substantial-position sect seniors are there also. “What would you like, Outside Courtroom disciple? Do you have online business during the Internal Court?” The elder there requested Yuan. Having said that, he was able to begin to see the wonderful equipment and lighting produced via the tower, and whoever was complicated the tower at this moment was around the 40th ground. At some time afterwards, Yuan reached the Carp Jumping of Dragon’s Door Tower where numerous other disciples and in many cases sect elders were currently provide. “I would want to head to the Treasury Hall,” Yuan resolved, intentionally not bringing up Elder Shan. Comply with present books on lightnovelwor/ld[.]com
OPCFW_CODE
Visual Studio Code 1.73, a just-released update to Microsoft’s popular code editor, adds improvements ranging from Command Center mode shortcuts to new merge editor features and new Python extensions. Also dubbed the October 2022 release of the editor, VS Code 1.73 was announced on November 2, 2022. For the Command Center, a top section was added with the intention of making it easy to discover how to navigate to files, run commands, and perform other operations. A short list of modes provides keybinding hints for users to jump directly to the most-used modes, such as Go to File, without going through the Command Center. The merge editor, meanwhile, received polishing as well as bug fixes and new features. In VS Code 1.73, both Accept Incoming and Accept Current can always be selected. When both options are taken, the merge editor appends corresponding changed lines. Also, the merge editor’s default diff algorithm was changed. The new algorithm is optimized for merge scenarios. VS Code 1.73, which follows last month’s VS Code 1.72 release, can be downloaded for Windows, macOS, or Linux from the Visual Studio Code webpage. Other features of VS Code 1.73 include the following: - When right-clicking a folder in the Search view’s tree view of results, there now are two new options in the context menu. Restrict Search to Folder adds the selected folder path or paths to the “files to include” textbox, while Exclude Folder from Search adds the selected folder or paths to the “files to exclude” textbox. - A Settings Profiles capability is available in preview. - A new markdown.updateLinksOnFileMove.enabled setting will automatically update links and images in Markdown when files are moved or renamed in the Visual Studio Code Explorer. - Markdown: Insert Link to File in Workspace and Markdown: Insert Image from Workspace commands let developers quickly insert links and images into Markdown by using a file picker. Also, built-in Markdown validation can alert users to unused or duplicate link definitions. - A better-maintained Razor grammar for syntax highlighting is featured for Razor files. - Remote Development extensions now include Dev Container Templates, allowing developers to create a Dev Container based on an existing template. - New audio cues help with Tasks and the Terminal, sounding for a task completed, for a task failed, and when a Terminal Quick Fix is available. - For VS Code for the Web, committing to a protected branch in GitHub or Azure Repos will trigger notifications that the current branch is protected, and prompt developers to create a new branch. - New standalone extensions for Python are offered for isort, Pylint, and Flake8. - TypeScript 4.9 support is included as a preview. Copyright © 2022 IDG Communications, Inc. Source by www.infoworld.com
OPCFW_CODE
A couple nights ago I set one of my anti-malware programs loose on my computer before I went to bed. I wasn’t having any particular problems, but it had been a while, and it’s always a good idea to do this. In the morning I expected to find the scan results; instead, I found my computer had restarted. I had to dig around to find that the program apparently did its work. As for what made the computer restart? That I didn’t know. Later that night when I sat down to write, I found a strange thing. I could barely see what I was typing. For reasons unknown to me, it looked like my words were being typed on a typewriter with a bad ribbon. By a grandmother who didn’t have enough strength in her fingers to press the keys all the way down. The letters looked broken, thin in spots, hard to read. I slogged through on my manuscript as best as I could, but it was kind of hard on the eyes. Meanwhile, I also found that the overall look of my display was different in a hard-to-pin down way. Fonts on my web browser looked off, the size of the browser was off, everything seemed out of scale. I checked my display settings and monkeyed with them for a while, tried adjusting window sizes and zoom levels and it did nothing. Finally, while poking around, I discovered that Windows had done an automatic update, which was what caused my computer to restart. Now, I’m actually a big believer in this process. I’ve seen what happens when you let a year or two go by without doing those updates, and it ain’t pretty, because some of that stuff is pretty important. My wife feels otherwise, but she can do what she wants on her machine. I go for the critical updates, but leave out a lot of the optional ones (especially for stuff like Silverlight, which Microsoft is constantly trying to get me to install, which I won’t on principle). This policy has not led me astray–until now. I don’t know which update did it. I ended up restoring my computer to the last save point before things went wonky–which just so happened to be about two hours before the computer updated itself. Problem solved! Last night, though I did not have my most productive night ever, I was able to actually read what I was writing. Everything looked normal, and I was happy. Until today, because Windows updated itself again sometime in the wee hours of the morning. Have a nice weekend, everyone. UPDATE: After posting this, I did some more digging, but did not have time to update until now. My search led to this thread on bleepingcomputer.com, which included this: Known issues with this security update After you install security update 3013455, you may notice* some text quality degradation **in certain scenarios.*** The problem only occurs on systems that are running Windows Vista SP2 or Windows Server 2003 SP2. Microsoft is researching this problem and will post more information in this article when the information becomes available. I uninstalled the particular update and things are fine once again. See you next week! *it was hard not to **like, almost unreadable ***pretty much all of them Oh no! That just gave me a headache. I hope it's fixed. Those updates frustrate the heck out of me, too. And they happen way too frequently! I had TWO this week. Thank goodness neither one botched up my screen. I'd have really had a fit. But they always seem to want to restart my computer at the most inopportune time (usually when I'm finally immersed in my writing–aargh!). Yeah, I know I can tell it to wait, but there's no option to wait a week. Haha! 🙂 I hope your computer is doing okay now. I hate when computer things go wonky – and I hate, hate, hate trying to fix them. So glad you're back up and running now! 🙂 Things are fixed for now–I had to uninstall one of the updates, and it took care of the problem. See the edit in the post. Thanks, all! I swear, this is my bugbear, my version of "1984." I don't fear the thought police, I fear the efficiency gremlins messing with my computer — my window on the world — and being so technically challenged I won't be able to fix it. (Brrr. I just gave myself the shivers.) Good luck to you on slaying your efficiency gremlins.
OPCFW_CODE
Topgallantnovel Rebirth To A Military Marriage: Good Morning Chiefblog – Chapter 2342 – You’re Impressive (1) impress quilt to you-p1 Gallowsnovel Rebirth To A Military Marriage: Good Morning Chief – Chapter 2342 – You’re Impressive (1) move bedroom recommendation-p1 Novel–Rebirth To A Military Marriage: Good Morning Chief–Rebirth To A Military Marriage: Good Morning Chief Chapter 2342 – You’re Impressive (1) trade house He couldn’t even remain competitive against Zhai Sheng’s spouse, so what gifted him the legal right to desire to surpa.s.s Zhai Sheng? He have been so highly regarded via the w.a.n.g friends and family that he experienced overlooked his real proficiency. That they had was able to cover the simple truth from Mom Zhu rather well all these yrs. Who would have regarded that Mom Zhu will have this sort of drastic reaction to the facts? Mom Zhu loved w.a.n.g Yang. Furthermore, w.a.n.g Yang taken care of his amazing mum. In the end, w.a.n.g Yang experienced already fully committed an unpardonable sin at the young age for the health of it. cultivation chat group webnovel There were only one guy in this particular entire world who was ideal for transforming both Zhu Chengqi and Zhai Sheng all at once: Qiao Nan. Not just had Mother Zhu’s frizzy hair turned white-colored over these three days, but her physique possessed also ended up haywire. The doctor’s phrases that Mommy Zhu was psychologically sick no longer experienced the will to live struck w.a.n.g Yang like a bolt from the blue colored. Which had been why her bodily characteristics had worsened significantly together with it. w.a.n.g Yang is in a great deal of soreness that they was at a loss for phrases. They chose a confidential place inside a less noisy green tea house. As soon as the waiter exited your room, w.a.n.g Yang discovered his teacup and required a sip. It was actually a smaller mug that held no more than a mouthful of green tea, but w.a.n.g Yang needed a very good min in order to complete it. Rebirth To A Military Marriage: Good Morning Chief Just after Zhu Baoguo’s fatality and understanding Qiao Nan’s lifetime, w.a.n.g Yang noticed that Qiao Nan was, most likely, his biggest opponent as part of his whole lifetime. Even Zhu Baoguo experienced not stressed him so badly as he had been living. Zhu Chengqi possessed probably left out a really will as he had already acknowledged regarding the facts behind Zhu Baoguo’s passing away. They selected a confidential area in a less noisy tea house. Following your waiter exited the space, w.a.n.g Yang discovered his teacup and took a sip. It was actually a compact cup that held at most a mouthful of tea, but w.a.n.g Yang needed a fantastic min to finish it. Not merely had Mum Zhu’s head of hair converted bright white in these three days, but her system experienced also removed haywire. The doctor’s thoughts that Mommy Zhu was psychologically unwell without longer acquired the will to live hit w.a.n.g Yang for instance a bolt out of the glowing blue. That has been why her bodily characteristics had worsened significantly alongside it. w.a.n.g Yang is in a great deal of soreness which he was at a loss for phrases. Irrespective of how tough it was subsequently, w.a.n.g Yang needed to inquire this. Usually, he would not manage to permit it to go. After all, w.a.n.g Yang possessed already fully committed an unpardonable sin for a early age for the health of it. His mum was in good condition often, but after this type of big emotional challenge, her well being experienced deteriorated far too. While she had been a younger lady, New mother Zhu got enjoyed a good mindset and had been pampered by her elder brother and daddy. After getting hitched, her man experienced never dared to raise his sound to her. Therefore, Mum Zhu obtained practically nothing to be concerned about. In comparison to other grandmas who possessed white curly hair, Mother Zhu enjoyed a head filled with dark-colored head of hair. parent and child – child study and training center That they had had been able cover the simple truth from New mother Zhu rather well all of these years. Would you have regarded that Mother Zhu may have a really serious respond to the truth? Mother Zhu loved w.a.n.g Yang. Similarly, w.a.n.g Yang maintained his wonderful mum. That they had was able to conceal the reality from Mum Zhu rather well most of these years. Who will have acknowledged that New mother Zhu might have this type of serious respond to reality? New mother Zhu enjoyed w.a.n.g Yang. Similarly, w.a.n.g Yang maintained his excellent mum. Regardless if Mommy Zhu had never claimed something to w.a.n.g Yang, nor reported that w.a.n.g Yang had been as well challenging and vicious to Zhu Baoguo at this kind of young age, w.a.n.g Yang believed that Mum Zhu was between a rock and roll and a difficult position. If she couldn’t solve this situation, it wouldn’t be a long time before Mum Zhu would decrease and deal with the Zhu loved ones. Qiao Nan’s sight glimmered. “Do you would imagine I’m scared of you s.n.a.t.c.hing it from me?” If he tried to do so, it is going to only delay the end result, however it would not alter the fact that the Zhu family’s a.s.sets would eventually are part of her. Irrespective of how tough it absolutely was, w.a.n.g Yang simply had to check with this inquiry. If not, he would not manage to allow it go. But since Mum Zhu were confessed to the hospital, she had been frustrated and rejected to speak to any individual. She simply sat there in a very daze. Within the brief span of 3 days, w.a.n.g Yang experienced his mom’s go of dark-colored head of hair change in a go loaded with whitened locks. He noticed rather devastated about that. Even though New mother Zhu obtained never claimed anything to w.a.n.g Yang, neither reported that w.a.n.g Yang ended up being as well brutal and vicious to Zhu Baoguo at a real early age, w.a.n.g Yang understood that Mom Zhu was from a rock and roll in addition to a tricky spot. If she couldn’t fix this challenge, it wouldn’t be well before New mother Zhu would decline and encounter the Zhu family. “Alright, I’ll accept it just like I’m undertaking charitable. Let us get somewhere to possess a talk.” Owning provoked w.a.n.g Yang, Qiao Nan finally arranged to speak with him. Qiao Nan believed that it had been rather odd for w.a.n.g Yang to generally be using the effort to give up about the Zhu family’s inheritance. She acquired always considered that w.a.n.g Yang would opt for the Zhu family’s a.s.packages over his own lifestyle. Looking at w.a.n.g Yang, Qiao Nan acquired the legal right to be challenging. Using that, w.a.n.g Yang gulped his herbal tea straight down. “How do you know that I was involved in Zhu Baoguo’s passing away?” Even when Mum Zhu had never claimed anything to w.a.n.g Yang, neither reported that w.a.n.g Yang has been also brutal and vicious to Zhu Baoguo at this type of young age, w.a.n.g Yang was aware that Mom Zhu was between a rock in addition to a hard location. If she couldn’t fix this situation, it wouldn’t be long before Mother Zhu would drop and experience the Zhu family. There was clearly only one person in this entire world who was competent at modifying both Zhu Chengqi and Zhai Sheng at the same time: Qiao Nan. Irrespective of how challenging it was subsequently, w.a.n.g Yang needed to check with this query. If not, he would never have the capacity to let it go. w.a.n.g Yang hated Qiao Nan completely. In fact, he experienced appeared on her to be a well used woman but she acquired turned out to be the largest offender behind his downfall. It had been Qiao Nan’s overall look that had provided his previous numerous years of work void. Now, he didn’t also have a chance of switching backside. In addition to, the Zhai family members obtained hardly even interacted along with the Zhu friends and family before Zhai Sheng’s matrimony to Qiao Nan. If so, it was subsequently out of the question for Zhai Sheng to even have his vision about the Zhu family’s a.s.pieces. Li Yayan found w.a.n.g Yang’s worry for Mommy Zhu and her heart and soul ached.. She couldn’t aid but check with, “Dear, is income more valuable than Mom? Mom’s sensing angry mainly because she’s be a sinner in the Zhu family members and she can’t experience the Zhu household anymore. She already can feel that way toward the Zhu household therefore you still want the Zhu family’s money…” Rebirth To A Military Marriage: Good Morning Chief Li Yayan found w.a.n.g Yang’s concern for Mum Zhu and her center ached.. She couldn’t aid but consult, “Dear, is hard earned cash more vital than Mommy? Mom’s sensing troubled mainly because she’s be a sinner during the Zhu spouse and children and she can’t encounter the Zhu family members anymore. She already seems that way toward the Zhu family therefore you still want the Zhu family’s money…” the clear quran pdf Looking at w.a.n.g Yang, Qiao Nan experienced the ability to be stressful. my boyhood days pdf w.a.n.g Yang got an in-depth breath. It was actually correct that he couldn’t do just about anything if Qiao Nan would declare that she acquired almost no time to charm him. “Let’s have a talk. I commitment you that I’ll withdraw the suit when we finally have this chitchat. Provided that you reply to my concerns and I purchase an response, I won’t deal with to you for that Zhu family’s a.s.collections.” Many a long time acquired pa.s.sed considering the fact that Zhu Baoguo’s loss of life, and then he possessed thought that the truth were buried in conjunction with time. He possessed always believed he had finally toiled over the toughest of periods once the w.a.n.g loved ones possessed paid out this kind of substantial rate for him. But reality hit him difficult within the encounter, so hard which he couldn’t even endure support or reside his existence well ever again. w.a.n.g Yang laughed self-deprecatingly. “You’re proper. You don’t must fear regardless of whether I withdraw the litigation. Everything is as part of your love given that my grandfather left out designed to. There’s practically nothing you will need to stress about. The only real individual that needs to worry is me.” While he claimed that, w.a.n.g Yang got the need to vomit our blood. He couldn’t realize why he got landed up in such a status. He possessed always thinking highly about themself. He had not been even losing to Zhai Sheng. It was subsequently only a women. “Alright, I’ll get it as if I’m doing charitable trust. Let us uncover somewhere to enjoy a conversation.” Experiencing provoked w.a.n.g Yang, Qiao Nan finally arranged to speak with him. Qiao Nan believed it had been rather peculiar for w.a.n.g Yang to become taking the effort to give up on the Zhu family’s inheritance. She had always believed that w.a.n.g Yang would opt for the Zhu family’s a.s.pieces over his own life.
OPCFW_CODE
using BitcoinUtilities; using NUnit.Framework; namespace Test.BitcoinUtilities { [TestFixture] public class TestBloomFilter { // examples were taken from: https://github.com/bitcoin/bitcoin/blob/master/src/test/bloom_tests.cpp [Test] public void TestNoTweak() { BloomFilter filter = new BloomFilter(5, 0, new byte[3]); filter.Add(HexUtils.GetBytesUnsafe("99108ad8ed9bb6274d3980bab5a85c048f0950c8")); Assert.That(filter.Contains(HexUtils.GetBytesUnsafe("99108ad8ed9bb6274d3980bab5a85c048f0950c8")), Is.True); Assert.That(filter.Contains(HexUtils.GetBytesUnsafe("19108ad8ed9bb6274d3980bab5a85c048f0950c8")), Is.False); filter.Add(HexUtils.GetBytesUnsafe("b5a2c786d9ef4658287ced5914b37a1b4aa32eee")); Assert.That(filter.Contains(HexUtils.GetBytesUnsafe("b5a2c786d9ef4658287ced5914b37a1b4aa32eee")), Is.True); filter.Add(HexUtils.GetBytesUnsafe("b9300670b4c5366e95b2699e8b18bc75e5f729c5")); Assert.That(filter.Contains(HexUtils.GetBytesUnsafe("b9300670b4c5366e95b2699e8b18bc75e5f729c5")), Is.True); Assert.That(HexUtils.GetString(filter.Bits), Is.EqualTo("614e9b")); } [Test] public void TestWithTweak() { BloomFilter filter = new BloomFilter(5, 0x80000001, new byte[3]); filter.Add(HexUtils.GetBytesUnsafe("99108ad8ed9bb6274d3980bab5a85c048f0950c8")); Assert.That(filter.Contains(HexUtils.GetBytesUnsafe("99108ad8ed9bb6274d3980bab5a85c048f0950c8")), Is.True); Assert.That(filter.Contains(HexUtils.GetBytesUnsafe("19108ad8ed9bb6274d3980bab5a85c048f0950c8")), Is.False); filter.Add(HexUtils.GetBytesUnsafe("b5a2c786d9ef4658287ced5914b37a1b4aa32eee")); Assert.That(filter.Contains(HexUtils.GetBytesUnsafe("b5a2c786d9ef4658287ced5914b37a1b4aa32eee")), Is.True); filter.Add(HexUtils.GetBytesUnsafe("b9300670b4c5366e95b2699e8b18bc75e5f729c5")); Assert.That(filter.Contains(HexUtils.GetBytesUnsafe("b9300670b4c5366e95b2699e8b18bc75e5f729c5")), Is.True); Assert.That(HexUtils.GetString(filter.Bits), Is.EqualTo("ce4299")); } } }
STACK_EDU
I have been exactly in your case in the past. And I went for magic methods. This was a mistake, the last part of your question says it all: this is slower (than getters/setters); there is no auto-completion (and this is a major problem actually), and type management by the IDE for refactoring and code-browsing (under Zend Studio/PhpStorm this can be handled with the @property phpdoc. Getters and setters are meant to be used when value of a variable is set from out side the class to get its value. If you really need a private function, why use magic function?. you can name it something else. – Shameer Jul 7 '11 at Remember that setters and getters (__set, __get) will work in your class as long as you NOT SET the property with given name. If you still want to have the public property definition in the class source code (phpDocumentor, editor code completition, or any other reason) when using these magic methods, simply unset() your public properties. Magic getters and setters phpInheritance and Interfaces in PHP, The constructor is probably the most used magic method in PHP. . The getter and setter methods. You declare Getter and Setter functions in class to access the set the private You can use php magic methods __get and __set to write less code explicitly for . Php Dynamic Getter & Setter by Overloading. In Php there are many useful magic methods and two of those are __call() and __callStatic() methods. The most common of PHP's magic method is the __construct() method. It's much better to have defined getter and setter methods that form a. Using PHP's magic methods __get() and __set() coupled with Reflection. otherwise are known as setter and getter methods respectively. I have been exactly in your case in the past. And I went for magic methods. This was a mistake, the last part of your question says it all: this is. If you're reading our PHP tech posts, Beaconfire RED is hiring and we want to talk to you! Come join our The “Magic” (AKA Getter and Setter). serialize() checks if your class has a function with the magic name __sleep(). Remember that setters and getters (__set, __get) will work in your class as long. Using get_ and set_ for getters and setters is a convention, but it is not required. PHP does provide a magic getter that if present is called when. Just a general 'wonder', if the standard is to use 'set' or 'get' in the method name. Is this different to using the php magic method _get and _set?. See This Video: Magic getters and setters php See More lightroom preset for mac
OPCFW_CODE
Makefile error: build.make:3687: *** missing separator. Stop I am getting the following error running cmake /build.make:3687: *** missing separator. Stop. Line 3687 that has the error: game_rUnversioned directory_32_OBJECTS = \ What is wrong there? Perhaps there is some space or tab after the \\ The Unversioned directory seems suspicious. Very probably, the build.make is some generated thing (e.g. by cmake), and you should give much more context and explain how it was generated. If you are compiling some free software, please name it. So please edit your question to improve it! Make sure you're using tabs for indentation. Using mixed tabs & spaces might cause such an error. Ah, so this is CMake, not make. That's a huge difference, you know? Don't look at the generated Makefiles at all, that will not help you. They are supposed to be black boxes (because, after all, they could be KDevelop project files etc. just as well). Look at the CMake output to figure out what went wrong generating those Makefiles. Is there a specific thing I need to look at in CMake? I don't know because there is no error referring to a specific line or something that I could look at inside CMake, I'm afraid I cannot post it here publicly due to copyright concerns.. If you cannot show code here, you are asking at the wrong place. Ask privately some colleague working on the same code base. @CodeFreaks: CMake takes the settings in CMakeLists.txt and generates whatever build files you asked for, which might be Makefiles, NMake files, a MSVC solution or whatever. This process is (generally) well-tested and reliable. If it results in broken Makefiles, this is much more likely due to an error in your CMakeLists.txt than due to a CMake error. At the very least, we'd like to know where this "Unversioned directory" part comes from, the cmake command line you used to configure your build, and the output from CMake, because that's where it likely went wrong. The space in that assignment is almost certainly the problem. Fix that and I bet the problem goes away. make does not like spaces in filenames and cannot work with them. Can u please point out what do you mean by / where is that - assignment? Check you have tabs (not spaces) in front of make commands. This is a common error with make. The immediate answer is, you cannot create variable names that contain whitespace. That's not valid in (newer versions of) make. So this: game_rUnversioned directory_32_OBJECTS = \ is not a valid variable assignment because of the space (it has nothing to do with the backslash). The longer answer is that your script ${CMAKE_CURRENT_SOURCE_DIR}/svn_version.sh which is apparently supposed to print the SVN version, is instead printing the string Unversioned directory. You'll have to figure out why that is and get that script to print the right value, or at least ensure that whatever value it prints is a single word and does NOT contain whitespace, before this will work. ETA: If you want to make this work in a directory which is not an SVN workspace, you'll need to fix the svn_version.sh script so it can handle the case where it can't find a version. Rewrite that script something like this: #!/bin/sh ver=$(svnversion -n -c game | cut -d':' -f2) case $ver in (*\ *) echo unknown ;; (*) echo "$ver" ;; esac exit 0 This ensures that if the game directory isn't an SVN directory, it will print a value that doesn't contain any spaces (unknown) and this means your makefiles won't break. It turns out that CMake is generating invalid files so-to-speak,so it's useless to look at build.make, since it will be replaced once I run make again. As for SVN I'm still very confused about how can I fix that, makes it clear why I'm requesting help from you guys here.. I regret not having enough points to up-vote this informative answer though. I never said "look at build.make". CMake is absolutely NOT generating invalid files... or rather, it's doing so because you're giving it invalid input. GIGO. If you fix your svn_version.sh script, then things will work. That's where your problem is, that script: everything else is irrelevant. Since you haven't provided us with any details about how the svn_version.sh script works we can't help with that. I'm sorry, I dont know what I should or should not provide excuse my stupidity. This is svn_version.sh script: #!/bin/bash eval "svnversion -n -c game | cut -d':' -f2" That's a ... weird ... script. Anyway, the problem is that when you run svnversion -n -c game it's supposed to print the SVN version of that directory (something like r45 or whatever), but instead it's printing Unversioned directory. Are you running this from an SVN workspace? If I run svnversion -n -c foo for some local directory foo which is not part of an SVN workspace, then I get the same behavior you see (the output is Unversioned directory). Off-course it generated invalid files... how would you interpret that error then.. build.make is generated by CMake, and it has that freaking error.. Im confused and Im giving up already, a moderator can take care of this question cuz I know nobody is going to be able to help me. Thanks for ur attemps (?). Hey bro would u be able to speak to me on Skype please?? My id is gunsnglory -- I hope that's all right for rules. Sorry, I don't Skype. I've already explained what's wrong, multiple times. The svn_version.sh script is supposed to print the SVN version of the game directory, but instead it's printing an error Unversioned directory. That is causing all your problems. Forget cmake, makefiles, etc. If you solve this problem with svn_version.sh then everything will work properly. It's most likely printing that message because your directory is not actually an SVN workspace, or else game is not checked into SVN as a versioned directory (do you know what SVN is?) I know too little about SVN, I know that it is a version control system, actually no my directory is not a SVN workspace, neither is game checked into SVN. I'm running on FreeBSD, but i havent found any tutorials to help me with a proper configuration for SVN. How can I do that?? Thanks alot! That's a completely different question which you should ask with different tags on SO (but first, of course, try to figure it out yourself by reading the docs, tutorials, etc. you can find online). However, this code absolutely requires that the current directory be a SVN workspace, or it will not work. I'll suggest a change to the svn_version.sh script that will fix this. Thank you for keeping up with me man, I will try to find a good tutorial now to make my directory a SVN workspace. Thanks. SHIIIT!!! YOU ROCK DUDEEe!!! U WERE RIGHT! PROBLEM SOLVED!!! Finally compiling without any problems, I can't thank you enough !!!!!!! IM SO HAPPY!! That bare backslash looks very suspicious. That will be interpreted by Make as an attempt to continue that line on the next (physical) line, so what does that line contain? So CMake generated the Makefile? In that case you need to fix the CMake files. @CodeFreaks: It's suspicious because you probably don't actually have a directory named "game_rUnversioned directory_32" in your source tree. Where did CMake get that idea from?
STACK_EXCHANGE
Calculator is launched but org.openqa.selenium.UnsupportedCommandException is thrown and tests fail pom details 4.0.0 <groupId>CalculatorTest</groupId> <artifactId>CalculatorTest</artifactId> <version>1.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.seleniumhq.selenium</groupId> <artifactId>selenium-java</artifactId> <version>3.141.59</version> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> </dependency> <dependency> <groupId>io.appium</groupId> <artifactId>java-client</artifactId> <version>7.0.0</version> </dependency> </dependencies> Error log: Feb 13, 2019 12:48:14 PM io.appium.java_client.remote.AppiumCommandExecutor$1 lambda$0 INFO: Detected dialect: OSS org.openqa.selenium.UnsupportedCommandException: Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:17:03' System info: host: 'INENSETTYUL2C', ip: '<IP_ADDRESS>', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_121' Driver info: io.appium.java_client.windows.WindowsDriver Capabilities {app: Microsoft.WindowsCalculator..., javascriptEnabled: true, platform: ANY, platformName: ANY} Session ID: 4A4FB6D1-0EDC-475B-8CB1-58F576BE2852 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source) at java.lang.reflect.Constructor.newInstance(Unknown Source) at org.openqa.selenium.remote.ErrorHandler.createThrowable(ErrorHandler.java:214) at org.openqa.selenium.remote.ErrorHandler.throwIfResponseFailed(ErrorHandler.java:166) at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:586) at io.appium.java_client.DefaultGenericMobileDriver.execute(DefaultGenericMobileDriver.java:46) at io.appium.java_client.AppiumDriver.execute(AppiumDriver.java:1) at io.appium.java_client.windows.WindowsDriver.execute(WindowsDriver.java:1) Need help in resolving it. Hi @umadevisk89. It will help greatly if you can provide the log from the WinAppDriver.exe command window when this is failing. This information may also be embedded in the Appium log and was not included in the one you pasted above. Below is a sample log I am looking for. ========================================== POST /session HTTP/1.1 Accept: application/json, image/png Connection: Keep-Alive Content-Length: 148 Content-Type: application/json;charset=utf-8 Host: <IP_ADDRESS>:4723 {"desiredCapabilities":{"app":"Microsoft.MicrosoftEdge_8wekyb3d8bbwe!MicrosoftEdge","appArguments":"-private about:flags","platformName":"Windows"}} HTTP/1.1 200 OK Content-Length: 196 Content-Type: application/json {"sessionId":"21DEF83A-9B6B-491C-9077-D557CE393E8A","status":0,"value":{"app":"Microsoft.MicrosoftEdge_8wekyb3d8bbwe!MicrosoftEdge","appArguments":"-private about:flags","platformName":"Windows"}} ========================================== POST /session/21DEF83A-9B6B-491C-9077-D557CE393E8A/keys HTTP/1.1 Accept: application/json, image/png Content-Length: 17 Content-Type: application/json;charset=utf-8 Host: <IP_ADDRESS>:4723 {"value":[""]} HTTP/1.1 200 OK Content-Length: 63 Content-Type: application/json {"sessionId":"21DEF83A-9B6B-491C-9077-D557CE393E8A","status":0} Thanks for the reply @timotiusmargo . Below is the log from WinAppDriver.exe ========================================== POST /session HTTP/1.1 Accept-Encoding: gzip Connection: Keep-Alive Content-Length: 414 Content-Type: application/json; charset=utf-8 Host: <IP_ADDRESS>:4723 User-Agent: selenium/3.141.59 (java windows) { "desiredCapabilities": { "app": "C:\Program Files (x86)\DellEMC\Patch and Update Automation Tool\Patch and Update Automation Tool.exe", "platformName": "Windows" }, "capabilities": { "firstMatch": [ { "appium:app": "C:\Program Files (x86)\DellEMC\Patch and Update Automation Tool\Patch and Update Automation Tool.exe", "platformName": "windows" } ] } } SessionManager - Creating session for C:\Program Files (x86)\DellEMC\Patch and Update Automation Tool\Patch and Update Automation Tool.exe SessionManager - WinAppDriver succeeded loading MitaBroker SessionManager - Application launched SessionManager - Session successfully created: 321BC3D5-09B4-4810-B472-6622ADA6CFFD HTTP/1.1 200 OK Content-Length: 186 Content-Type: application/json ========================================== GET /session/321BC3D5-09B4-4810-B472-6622ADA6CFFD HTTP/1.1 Accept-Encoding: gzip Cache-Control: no-cache Connection: Keep-Alive Host: <IP_ADDRESS>:4723 User-Agent: selenium/3.141.59 (java windows) HTTP/1.1 404 Not Found Hi @umadevisk89, The log you pasted above seems to show a really old version of WinAppDriver. Would you mind retrying the same scenario on the latest version of WinAppDriver (E.g. version 1.1)? Hi @timotiusmargo , Using the latest version of WinAppDriver worked like charm. Thanks for the help.
GITHUB_ARCHIVE
Fuel your custom models with Vertex AI Staff Developer Relations Engineer Senior Developer Programs Engineer In May we announced Vertex AI, our new unified AI platform which provides options for everything from using pre-trained models to building your models with a variety of frameworks. In this post we'll do a deep dive on training and deploying a custom model on Vertex AI. There are many different tools provided in Vertex AI, as you can see in the diagram below. In this scenario we’ll be using the products highlighted in green: AutoML is a great choice if you don’t want to write your model code yourself, but many organizations have scenarios that require building custom models with open-source ML frameworks like TensorFlow, XGBoost, or PyTorch. In this example, we’ll build a custom TensorFlow model (built upon this tutorial) that predicts the fuel efficiency of a vehicle, using the Auto MPG dataset from Kaggle. If you’d prefer to dive right in, check out the codelab or watch the two minute video below for a quick overview of our demo scenario: There are many options for setting up an environment to run these training and prediction steps. In the lab linked above, we use the IDE in Cloud Shell to build our model training application, and we pass our training code to Vertex AI as a Docker container. You can use whichever IDE you’re most comfortable working with, and if you’d prefer not to containerize your training code, you can create a Python package that runs on one of Vertex AI’s supported pre-built containers. If you would like to use Pandas or another data science library to do exploratory data analysis, you can use the hosted Jupyter notebooks in Vertex AI as your IDE. For example, here we wanted to inspect the correlation between fuel efficiency and one of our data attributes, cylinders. We used Pandas to plot this relationship directly in our notebook. To get started, you’ll want to make sure you have a Google Cloud project with the relevant services enabled. You can enable all the products we’ll be using in one command using the gcloud SDK: Then create a Cloud Storage bucket to store our saved model assets. With that, you’re ready to start developing your model training code. Containerizing training codeHere we’ll develop our training code as a Docker container, and deploy that container to Google Container Registry (GCR). To do that, create a directory with a Dockerfile at the root, along with a trainer subdirectory containing a train.py file. This is where you’ll write the bulk of your training code: To train this model, we’ll build a deep neural network using the Keras Sequential Model API: We won’t include the full model training code here, but you can find it in this step of the codelab. Once your training code is complete, you can build and test your container locally. The IMAGE_URI in the snippet below corresponds to the location where you’ll deploy your container image in GCR. Replace $GOOGLE_CLOUD_PROJECT below with the name of your Cloud project: All that’s left to do is push your container to GCR by running docker push $IMAGE_URI. In the GCR section of your console, you should see your newly deployed container: Running the training job Now you're ready to train your model. You can select the container you created above in the models section of the platform. You can also specify key details like the training method, compute preferences (GPUs, RAM, etc.) and hyperparameter tuning if required. Now you can hand over training your model and let Vertex do the heavy lifting for you. Deploy to endpoint Next, let's get your new model incorporated into your app or service. Once your model is done training you will see an option to create a new endpoint. You can test out your endpoint in the console during your development process. Using the client libraries, you can easily create a reference to your endpoint and get a prediction with a single line of code: Start building today Ready to start using Vertex AI? We have you covered for all your use cases spanning from simply using pre-trained models to every step of the lifecycle of a custom model. - Use Jupyter notebooks for a development experience that combines text, code and data - Fewer lines of code required for custom modeling - Use MLOps to manage your data with confidence and scale
OPCFW_CODE
from re import match, IGNORECASE class Ignoring(object): def __init__(self): super(Ignoring, self).__init__() self._ignoreMasks = [] self.register_savedata(self._ignoreMasks) self.register_command("ignore", "ADMIN", self.cmd_ignore) self.register_command("unignore", "ADMIN", self.cmd_unignore) """ Commands """ def cmd_ignore(self, caller, mask): "Usage: ignore <mask/nick>, ignores host by mask in the form nick!uname@host, e.g. *!@*1.2.3.4" if not self._valid_mask(mask): mask = "{}!*@*".format(mask) try: self._add_ignore_mask(mask) except DuplicateMask: self.msg(caller, "Duplicate mask {}".format(mask)) else: self.msg(caller, "{} added to ignore list".format(mask)) def cmd_unignore(self, caller, mask): "Usage: unignore <mask/nick>, removes mask from ignore list" if not self._valid_mask(mask): mask = "{}!*@*".format(mask) try: self._remove_ignore_mask(mask) except InvalidMask: self.msg(caller, "Could not find mask {}".format(mask)) else: self.msg(caller, "{} removed from ignore list".format(mask)) """ Overloaded from BotCore """ def show(self): super(Ignoring, self).show() print("Ignore Masks: {}".format(self._ignoreMasks)) """ Twisted method overrides """ def privmsg(self, user, channel, message): if not self._is_ignored(user) or self.is_admin(user): super(Ignoring, self).privmsg(user, channel, message) """ Private methods """ def _add_ignore_mask(self, mask): if mask in self._ignoreMasks: raise DuplicateMask self._ignoreMasks.append(mask) def _remove_ignore_mask(self, mask): if mask not in self._ignoreMasks: raise InvalidMask self._ignoreMasks.remove(mask) def _is_ignored(self, user): for mask in self._ignoreMasks: mask_regex = mask.replace('*', '.*') if match(mask_regex, user, flags=IGNORECASE) is not None: return True return False # Matches user string in the format nick!uname@host @staticmethod def _valid_mask(mask): return match("^(.+|\*)!(.+|\*)@(.+|\*)$", mask) is not None """ Custom Exceptions """ class InvalidMask(Exception): pass class DuplicateMask(Exception): pass
STACK_EDU