In 2012 one of my readers contacted me to ask whether business rules and user story acceptance criteria could be considered the same thing. I answered in a blog post that they should not (http://www.chellar.com/AnalysisFu/?p=1699).
However, in 2013 I learned decision modelling, specifically The Decision Model (TDM), which I was taught by Larry Goldberg and Barb Von Halle. More recently I have learned Decision Model & Notation (DMN) from James Taylor.
Decision modelling dramatically changed how I view business rules. In a nutshell, business rules are not particularly useful on their own. For example, if the business has a rule that a person must be aged 18 or over in order to have an account, that does not give us the full picture. What decision is the business trying to make when taking into account a person’s age? What other data (fact types) does the business take into account when making that decision? At which points in which processes can that decision be made? What other decisions take into account a person’s age?
A decision model takes into account all the fact types needed (there may be hundreds), organises them into logical groups (e.g., relating to a person’s health, employment history, credit rating, etc.), relates those groups to each other and shows how they all drive towards the decision that needs to be made. TDM also goes down to the individual statements (rows) of logic beneath the groups of fact types (DMN does not, although it doesn’t stop you doing it). Each of those statements of logic is essentially a business rule. The corresponding process model shows at what point the decision is made and what the consequences are.
Decision models sit within business architecture. They are technology-agnostic business assets that not only state what the business's logic is in relation to particular decisions, but also allow a business to experiment with and test changes to that logic before deciding whether to implement them. Bear in mind that decision logic is not always implemented by software. It is often implemented directly by human beings; for example, when you ask a shop assistant whether you may get a refund on a purchased item, the assistant applies decision logic based on certain fact types (purchase date, current date, whether the item was bought in a sale, condition of the item, etc.), the company's policy and consumer legislation.
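To make the refund example concrete, here is a minimal sketch of that decision logic as code. The fact types come from the example above, but the specific values, thresholds and outcomes are illustrative assumptions, not any real company's policy:

```python
from datetime import date

# Illustrative decision logic only -- a real refund policy would come from
# the business's own validated decision model, not from this sketch.
def refund_decision(purchase_date, today, bought_in_sale, item_condition):
    days_since_purchase = (today - purchase_date).days
    if item_condition == "damaged_by_customer":
        return "refuse"
    if bought_in_sale:
        return "exchange_only"
    if days_since_purchase <= 28:
        return "full_refund"
    return "refuse"

# Each branch above is, in TDM terms, a single row of logic -- a business rule.
print(refund_decision(date(2024, 1, 10), date(2024, 1, 20), False, "good"))
# -> full_refund
```

The point of the model is not any single rule but how the fact types combine to drive one decision.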
Atomic tasks in process models are a good source of candidate user stories, and since decision models correspond to atomic tasks (of type “business rule” in BPMN terms), for each one that needs to be automated we would have a candidate user story along the following lines:
As a [business role], I need the logic that allows me to decide Y to be automated, so that [expression of business value – this is essential, as it justifies spending money on automating the decision logic].
User stories are placeholders for conversations between the business and the people building and testing the software. In the case of a decision that is to be automated, the conversation needs to be about the logic of the decision. Of course, if the business comes to the conversation A) not knowing what the logic is and/or B) not having tested that the logic works, then it’s going to be a frustrating and fruitless conversation. The conversation needs to be about transferring trusted business logic to the software development team, not figuring out what that logic is.
That figuring out is called “decision modelling”. It sits entirely within the business domain and should be done before the software implementation technology is even chosen. The logic should have been worked out and validated, and business test scenarios defined and executed against that logic. That way, when it comes to the software development sprint, all focus is on implementing the automation of trusted logic. Failure to do so means stepping into a sprint with unvalidated and untested business logic.
And so we come to acceptance criteria. Back in 2012, I argued that acceptance criteria were not business rules, rather they were test scenarios designed to check whether a software implementation executed business rules (in this case) correctly. My position on that has not changed. I’ve recently discussed this with Agile coaches and decision modellers and we agreed that acceptance criteria (for a decision model) are essentially the test scenarios that the business should already have executed against their decision model. However, because the logic behind a particular decision can be quite sophisticated, that sophistication must also be reflected in the corresponding test scenarios. Documenting all of them, potentially hundreds, as acceptance criteria within a user story strikes me as an unnecessary duplication of effort. Instead, I propose a single acceptance criterion to state that the software will be acceptable if it passes the same test scenarios that the business previously applied to the decision logic.
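The proposed single acceptance criterion can be made concrete with a small harness: replay the business's own scenario set against the implementation. Everything here (the scenario format, the `decide()` stand-in) is a hypothetical sketch, not a real decision service:

```python
# Hypothetical scenario set: inputs plus the outcome the business already
# validated against its decision model before the sprint began.
scenarios = [
    ({"age": 17}, "reject"),
    ({"age": 18}, "accept"),
    ({"age": 65}, "accept"),
]

def decide(facts):
    # Stand-in for the implemented decision service under test.
    return "accept" if facts["age"] >= 18 else "reject"

def passes_business_scenarios(decide_fn, scenarios):
    # The software is acceptable iff every previously executed business
    # scenario produces the same outcome from the implementation.
    return all(decide_fn(facts) == expected for facts, expected in scenarios)

print(passes_business_scenarios(decide, scenarios))  # True
```

The business owns and maintains the scenario list; the development team only wires it up against the implementation, avoiding any duplication of the scenarios inside the user story.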
Not only does this approach avoid duplication of effort in documenting test scenarios, it also places the onus on the business to ensure that only tested logic goes into a software development sprint.
What are your thoughts?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>

#include "utils.h"
#include "config.h"

/* Yes, that's three *'s, a triple pointer! */
static char ***dict = NULL;
static unsigned long wordsOnPage[DEF_MAX_WORD_LENGTH];

void SigHandler(int sig) {
    if (sig == SIGINT)
        exit(0);
}

void InitProgram(void) {
    FILE *fd = NULL;
    char fname[16];
    int i, npages, wordlen;
    unsigned long j;

    /* Load the dictionary into memory if we're not searching on the disk */
    if (!searchOnDisk) {
        npages = maxWordLength - minWordLength + 1;

        /* Allocate memory for however many pages we're going to use */
        dict = (char***)malloc(npages * sizeof(char**));
        if (dict == NULL) {
            fprintf(stderr, "%s : Error allocating space for dictionaries%s", PROG_NAME, ENDL);
            exit(1);
        }

        /* For each page, allocate space for all of the words on that page */
        for (i = 0; i < npages; i++) {
            /* First, initialize the pointer to NULL */
            dict[i] = NULL;
            /* Get the word length for this page */
            wordlen = minWordLength + i;

            /* Open the appropriate dictionary file */
            sprintf(fname, "dict%cword%02d.txt", DIR_SLASH, wordlen);
            fname[15] = '\0';
            if ((fd = fopen(fname, "r")) == NULL) {
                fprintf(stderr, "%s : Error opening dictionary file '%s'%s", PROG_NAME, fname, ENDL);
                exit(1);
            }

            /* Figure out how many words are in the file */
            wordsOnPage[i] = FileSize(fd) / (wordlen + 2);

            /* Allocate space for that many words */
            dict[i] = (char**)malloc(wordsOnPage[i] * sizeof(char*));
            if (dict[i] == NULL) {
                fprintf(stderr, "%s : Error allocating space for page %d%s", PROG_NAME, wordlen, ENDL);
                fclose(fd);
                exit(1);
            }

            /* Allocate space for each word */
            for (j = 0; j < wordsOnPage[i]; j++) {
                /* First, initialize to NULL */
                dict[i][j] = NULL;
                /* When allocating, remember space for the null terminator */
                dict[i][j] = (char*)malloc((wordlen + 1) * sizeof(char));
                if (dict[i][j] == NULL) {
                    fprintf(stderr, "%s : Error allocating space for page %d, word %ld%s", PROG_NAME, wordlen, j + 1, ENDL);
                    fclose(fd);
                    exit(1);
                }

                /* Read the word from the file into memory */
                fseek(fd, j * (wordlen + 2), SEEK_SET);
                if (fread(dict[i][j], wordlen, 1, fd) != 1) {
                    fprintf(stderr, "%s : Error reading dictionary file '%s'%s", PROG_NAME, fname, ENDL);
                    fclose(fd);
                    exit(1);
                }
                dict[i][j][wordlen] = '\0';
            }

            /* Done loading that page, close the file */
            fclose(fd);
        }
    }

    /* Print some initial report information */
    if (!quiet) {
        printf("Minimum Word Length: %d%s", minWordLength, ENDL);
        printf("Maximum Word Length: %d%s", maxWordLength, ENDL);
        if (stopAfterHits || stopAfterWord || stopAfterLength)
            printf("%s", ENDL);
        if (stopAfterHits)
            printf("Stopping after %ld words are found in the dictionary.%s", xHits, ENDL);
        if (stopAfterWord)
            printf("Stopping after %s is randomly generated.%s", xWord, ENDL);
        if (stopAfterLength)
            printf("Stopping after word of length %d is generated.%s", xLength, ENDL);
        printf("%s", ENDL);
    }
}

void ExitProgram(void) {
    int i;
    unsigned long j;
    short stop = 0;

    if (!quiet) {
        printf("%s", ENDL);
        printf("Tries: %ld%s", totalcount, ENDL);
        printf("Hits: %ld%s", numfound, ENDL);
        printf("Misses: %ld%s", totalcount - numfound, ENDL);
    }

    if (!searchOnDisk) {
        /* Free up all of that memory */
        if (dict != NULL) {
            for (i = 0; i < maxWordLength - minWordLength + 1 && !stop; i++) {
                if (dict[i] != NULL) {
                    for (j = 0; j < wordsOnPage[i]; j++) {
                        if (dict[i][j] != NULL)
                            free(dict[i][j]);
                        else {
                            stop = 1;
                            break;
                        }
                    }
                    free(dict[i]);
                }
                else
                    break;
            }
            free(dict);
        }
    }
}

/* Returns the number of bytes in a file */
unsigned long FileSize(FILE *fd) {
    unsigned long curpos, length;

    curpos = ftell(fd);
    fseek(fd, 0, SEEK_END);
    length = ftell(fd);
    fseek(fd, curpos, SEEK_SET);
    return length;
}

int CheckWord(const char *aword, int buflen) {
    return (searchOnDisk ? CheckOnDisk(aword, buflen) : CheckInMem(aword, buflen));
}

/* TODO: This function will NOT work with UNIX files. It expects a carriage return
 * after each word. This should probably be fixed. */
int CheckOnDisk(const char *aword, int buflen) {
    char filestr[DEF_MAX_WORD_LENGTH + 1];
    char tfname[16];
    long lwr, upr, cur;
    int cmp;
    FILE *fin;

    sprintf(tfname, "dict%cword%02d.txt", DIR_SLASH, buflen);
    tfname[15] = '\0';
    if ((fin = fopen(tfname, "r")) == NULL) {
        fprintf(stderr, "%s : Error opening dictionary file '%s'%s", PROG_NAME, tfname, ENDL);
        exit(1);
    }

    /* Binary search over the fixed-width records in the file */
    lwr = -1;
    upr = (FileSize(fin) / (buflen + 2)) + 1;
    cur = (upr + lwr) / 2;
    while ((cur != upr) && (cur != lwr)) {
        fseek(fin, cur * (buflen + 2), SEEK_SET);
        if (fread(filestr, buflen, 1, fin) != 1)
            break;
        filestr[buflen] = '\0';
        cmp = strncmp(aword, filestr, buflen);
        if (cmp == 0) {
            fclose(fin);
            return 1;
        }
        else if (cmp > 0)
            lwr = cur;
        else
            upr = cur;
        cur = (lwr + upr) / 2;
    }
    fclose(fin);
    return 0;
}

int CheckInMem(const char *aword, int buflen) {
    long lwr, upr, cur;
    int cmp;
    int idx = buflen - minWordLength;

    /* Binary search over the in-memory page for this word length */
    lwr = -1;
    upr = wordsOnPage[idx];
    cur = (upr + lwr) / 2;
    while ((cur != upr) && (cur != lwr)) {
        cmp = strncmp(aword, dict[idx][cur], buflen);
        if (cmp == 0)
            return 1;
        else if (cmp > 0)
            lwr = cur;
        else
            upr = cur;
        cur = (lwr + upr) / 2;
    }
    return 0;
}
A few simple steps for the absolute beginner to get you on your way. If you're totally new to programming, or to Linux and FOSS in general, I recommend glancing through the glossary before trying any of the programs.
What you need
To get going with the practical tutorials and examples on this site, you will need:
- Linux operating system
- Python programming language
There are lots of reasons to use Linux! Here are just a few:
- It's based on software which promotes your freedoms.
- It's also free as in 'free beer' - you are free to download and use it at no monetary cost (though as you'll see in the licenses section some conditions do apply). Even so, a typical FOSS license is only a page or two long if that!
- It has an active and supportive community. An answer to any problem you may have is likely to be easily found online.
- It's easy to work with and practically infinitely customisable.
- It's not ashamed of its 'terminal' (or 'command line'). While knowing how to use the text-only interface to your computer is no longer strictly necessary, learning the basic commands will open your eyes to fast and efficient new ways of working.
Here's an example of how useful a terminal can be. The following command will create three folders instantly named one, two and three. Simple as that - no need to open up your folders, right click and select 'create folder' three times; although you can do it that way too if you prefer.
mkdir one two three
Choosing a distro
Because anyone is free to copy or edit code under a FOSS license, there are lots of different versions of the Linux operating systems available, known as 'distros'. This site is intended to be beginner inclusive, so I'm going to recommend using one of the more popular distros out there:
- Ubuntu is the most popular Linux distro, and is typically the safest bet for a beginner. That said, the way menus and the desktop work is a little different from what you'll be used to on Windows or Mac.
- Fedora is another of the most popular distros, with a bit of a reputation for being more 'cutting edge' by getting new software available sooner. In the past this meant compromising on stability, but those days are past and you should have no problem at all installing and running Fedora for any computing needs. This is the distro I currently use!
- Linux Mint is something of a smaller operation compared to Ubuntu or Fedora. However, it is solid and extremely user friendly. Use the 'Cinnamon' desktop, which is similar enough to Windows to be intuitive.
Any one of those will do nicely, but it probably won't be long until you're feeling more adventurous!
Download and install
Here are the steps to installing a distro. Use online tutorials and videos to help you - there is no shortage of resources and tutorials covering this.
- Go to the homepage for the distro and download the 'ISO'.
- Create a 'live installation' by writing the ISO on to a USB stick. There is lots of software out there to do this - a popular one for Windows is Win32 Disk Imager, and on Linux I use the excellent GParted.
- Boot into the live USB on the computer you want to install Linux on. Usually this just involves plugging in the USB, restarting the computer and changing the boot order in your BIOS so that you run what's on the USB before the currently installed operating system. Sometimes this involves extra steps depending on your machine, so just check the web if you are having problems.
- Once the USB boots, you will get to try out the operating system before you install it (that's why it's called a 'live' installation). There will be an obvious option to open an installer program - I recommend connecting to the WiFi in the 'live' session before you start installing.
- Follow the instructions on the installer program. This is straightforward, but again there are lots of videos online taking you through each step. At some point you will get a choice to remove your existing operating system entirely and replace it with your chosen distro. I recommend you do so, but you can also install it alongside another operating system (known as 'dual booting').
Check Python is installed
Once you've booted into your new distro, open up a terminal (it will be a program named something like "terminal"). In these tutorials we'll use the latest major version of Python, python3. Type that into your terminal, and it should open up the Python REPL and tell you the current version you have installed, like so:
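A typical session looks something like the following; the version banner will vary with your distro (the one below is illustrative):

```shell
$ python3
Python 3.7.3 (default, ...)
[GCC ...] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
```

Type `exit()` (or press Ctrl-D) to leave the REPL.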
As you can see, I'm currently using Python version 3.7.3.
Thanks. So when Peter says "1600x1200 is easily handled by any 17" CRT made in the last five years" what he actually means is that it can be handled by a small handful of premium 'pro' series monitors that were released /more/ than five years ago and are no longer produced. Ok. I'll try and avoid Peter's fact needle in future.
Could you check your facts and supply some references please?
A quick look at Iiyama's website ( [link] ) showed only 2 17'' monitors and they both have a 'maximum resolution of 1280x1024@65Hz'. If they can be encouraged to get to 1600x1200 the refresh rate would be truly terrible.
The story is very similar over at Viewsonic ( [link] ) where the max a 17'' does is 1280x1024@66Hz. The recommended resolution for 'flicker free' operation of these monitors is 1024x768.
Going up to a 19'' CRT tells a different story however, and this is where you'd really want to know if the A9home can do your monitor justice.
Yes, I take what I said back; 1600x1200 is beyond what most people use simply because most people with newish computers are probably using 15-19" LCDs with resolutions of 1024x768 to 1280x1024. Most people with CRTs probably have ones that are only capable of up to 1280x1024 at reasonable refresh rates as well.
Anyway, 1600x1200@60Hz is perfect for a 20+'' LCD (pity no DVI output unless I missed a spec). My only query remains 1600x1200 on a CRT which at 60Hz wouldn't be acceptable to most people - if the designers are going to say it does something in their FAQ they ought to be completely honest about it and not mislead people who might have CRTs capable of 1600x1200 at high refresh rates.
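For what it's worth, the bandwidth arithmetic is easy to sketch. The 25% blanking overhead below is a rough assumption (real VESA timings differ a little), but it shows why 1600x1200 at CRT-friendly refresh rates needs a serious pixel clock:

```python
def approx_pixel_clock_mhz(width, height, refresh_hz, blanking_overhead=1.25):
    # Active pixels per second, inflated by an assumed ~25% for
    # horizontal/vertical blanking intervals.
    return width * height * refresh_hz * blanking_overhead / 1e6

# 1600x1200 at 60 Hz versus the ~85 Hz most CRT users would actually want:
print(round(approx_pixel_clock_mhz(1600, 1200, 60)))  # 144 (MHz)
print(round(approx_pixel_clock_mhz(1600, 1200, 85)))  # 204 (MHz)
```

Going from 60 Hz to 85 Hz at that resolution pushes the pixel clock up by over 40%, which is exactly the kind of spec the FAQ ought to state plainly.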
You said you /thought/ it was 60Hz, and after Stuart's email I rather assumed I was wrong about it being 60Hz. If I was correct, evidently Stuart likes sending people pissy emails for no particular reason :/
There was a 19"er that did it, but typically you find this resolution at 20.1" and higher. No, that's not big. My 21" LCD is only 2" wider than my 17" CRT and 9" less deep. If comparing to another LCD surely the A9 being so small means you now have extra desk space for a larger monitor?
"As I understand it the A9 produces a resolution beyond what is generally used and well within the specs of the chips. Not an issue."
I've heard no claim that it produces beyond 1600x1200, and if you're saying that 1280x1024 (next lower common resolution) is 'beyond what is generally used' you're mixing things up. If 1280x1024 or lower is in common use it is because people are using ancient computers not capable of higher. They aren't willing to pay for a viewfinder or upgrade to an Iyonix, but a cheaper computer finally able to display modern resolutions (that everyone outside the RISC OS community enjoys) would be ideal. This rather breaks down if the new cheap computer doesn't actually do what its designers say it does in an acceptable way (1600x1200 on a CRT over 60Hz), which is what we were confused about.
Why not be helpful and encourage Stuart Tyrrell to reveal the refresh rate at 1600x1200 and then we can all be happy and just look forward to the machine's release.
/*
 * File:   curl_multi.cpp
 * Author: Giuseppe
 *
 * Created on March 25, 2014, 11:02 PM
 */

#include <algorithm>
#include "curl_multi.h"

using std::for_each;
using curl::curl_multi;

// Implementation of constructor.
curl_multi::curl_multi() : curl_interface(curl_multi_init()) {
    this->active_transfers = 0;
    this->message_queued = 0;
}

// Implementation of overloaded constructor. Note: calling curl_multi() in the
// body would only construct a discarded temporary, so the members are
// initialized directly here.
curl_multi::curl_multi(const long flag) : curl_interface(curl_multi_init(),flag) {
    this->active_transfers = 0;
    this->message_queued = 0;
}

// Implementation of destructor.
curl_multi::~curl_multi() {
    for_each(this->handlers.begin(),this->handlers.end(),[this](curl_easy &handler) {
        curl_multi_remove_handle(this->get_url(),handler.get_url());
    });
    curl_multi_cleanup(this->get_url());
}

// Implementation of addHandle method. Throws curl_error on failure, so it
// must not be declared noexcept.
curl_multi &curl_multi::addHandle(const curl_easy &handler) {
    CURLMcode code = curl_multi_add_handle(this->get_url(),handler.get_url());
    if (code != CURLM_OK) {
        throw curl_error(this->to_string(code),__FUNCTION__);
    }
    return *this;
}

// Implementation of addHandle overloaded method. Stores the handlers and
// registers each of them with the multi handle. (std::move on a const
// reference would silently copy, so a plain copy is made explicitly.)
curl_multi &curl_multi::addHandle(const vector<curl_easy> &handlers) {
    this->handlers = handlers;
    for (curl_easy &handler : this->handlers) {
        this->addHandle(handler);
    }
    return *this;
}

// Implementation of removeHandle overloaded method. Throws curl_error on failure.
curl_multi &curl_multi::removeHandle(const curl_easy &handler) {
    CURLMcode code = curl_multi_remove_handle(this->get_url(),handler.get_url());
    if (code != CURLM_OK) {
        throw curl_error(this->to_string(code),__FUNCTION__);
    }
    return *this;
}

// Implementation of getActiveTransfers method.
const int curl_multi::getActiveTransfers() const noexcept {
    return this->active_transfers;
}

// Implementation of getMessageQueued method.
const int curl_multi::getMessageQueued() const noexcept {
    return this->message_queued;
}

// Implementation of perform method.
bool curl_multi::perform() {
    CURLMcode code = curl_multi_perform(this->get_url(),&this->active_transfers);
    if (code == CURLM_CALL_MULTI_PERFORM) {
        return false;
    }
    if (code != CURLM_OK) {
        throw curl_error(this->to_string(code),__FUNCTION__);
    }
    return true;
}

// Implementation of infoRead method.
const vector<curl_multi::CurlMessage> curl_multi::infoRead() {
    vector<curl_multi::CurlMessage> info;
    CURLMsg *msg = nullptr;
    while ((msg = curl_multi_info_read(this->get_url(),&this->message_queued))) {
        if (msg->msg == CURLMSG_DONE) {
            for (auto &handler : this->handlers) {
                if (msg->easy_handle == handler.get_url()) {
                    info.push_back(CurlMessage(msg->msg,handler,(msg->data).whatever,(msg->data).result));
                }
            }
        }
    }
    return info;
}

// Implementation of fdSet method.
void curl_multi::fdSet(fd_set *readFdSet, fd_set *writeFdSet, fd_set *executeFdSet, int *maxFd) {
    CURLMcode code = curl_multi_fdset(this->get_url(),readFdSet,writeFdSet,executeFdSet,maxFd);
    if (code != CURLM_OK) {
        throw curl_error(this->to_string(code),__FUNCTION__);
    }
}

// Implementation of timeout method.
void curl_multi::timeout(long *milliseconds) {
    CURLMcode code = curl_multi_timeout(this->get_url(),milliseconds);
    if (code != CURLM_OK) {
        throw curl_error(this->to_string(code),__FUNCTION__);
    }
}

// Implementation of assign method.
void curl_multi::assign(curl_socket_t sockFd, void *socketPtr) {
    CURLMcode code = curl_multi_assign(this->get_url(),sockFd,socketPtr);
    if (code != CURLM_OK) {
        throw curl_error(this->to_string(code),__FUNCTION__);
    }
}

// Implementation of wait method.
void curl_multi::wait(curl_waitfd extraFds[], unsigned int numberExtraFds, int timeoutMilliseconds, int *numberFds) {
    CURLMcode code = curl_multi_wait(this->get_url(),extraFds,numberExtraFds,timeoutMilliseconds,numberFds);
    if (code != CURLM_OK) {
        throw curl_error(this->to_string(code),__FUNCTION__);
    }
}

// Implementation of socketAction method.
bool curl_multi::socketAction(curl_socket_t sockFd, int bitMask, int *runningHandles) {
    CURLMcode code = curl_multi_socket_action(this->get_url(),sockFd,bitMask,runningHandles);
    if (code == CURLM_CALL_MULTI_PERFORM) {
        return false;
    }
    if (code != CURLM_OK) {
        throw curl_error(this->to_string(code),__FUNCTION__);
    }
    return true;
}

// Implementation of to_string method.
const string curl_multi::to_string(const CURLMcode code) const noexcept {
    return string(curl_multi_strerror(code));
}
Limit shapes for cube groves with periodic conductances
We study solutions to the cube recurrence that are periodic under a certain change of variables. These solutions give us partition functions for cube groves. We study large groves sampled with the corresponding probability measure and show that they satisfy a limit shape theorem that generalizes the arctic circle theorem of Petersen and Speyer. We obtain a larger class of limit shapes that are algebraic curves with cusp singularities. We also compute asymptotic edge probabilities by analyzing generating functions using methods developed by Baryshnikov, Pemantle and Wilson.
A function satisfies the cube recurrence (also known as the Miwa equation [Miwa] or the discrete BKP equation) if for all
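In the notation of Carroll and Speyer [CS], with $f$ indexed by lattice points, the recurrence reads:

```latex
f_{i,j,k}\,f_{i-1,j-1,k-1}
  \;=\; f_{i-1,j,k}\,f_{i,j-1,k-1}
  \;+\; f_{i,j-1,k}\,f_{i-1,j,k-1}
  \;+\; f_{i,j,k-1}\,f_{i-1,j-1,k}
```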
Define the lower cone of to be . Let be a subset of such that is finite and if then we have . Let . A set of initial conditions is defined to be . Let denote the set of all sets of initial conditions. The set of initial conditions corresponding to will be denoted by and is called the standard initial conditions of order .
If we assign formal variables for in a set of initial conditions and solve for where , we obtain rational functions in . In [FZ], Fomin and Zelevinsky showed using cluster algebra techniques that these rational functions are Laurent polynomials in with coefficients in .
In [CS], Carroll and Speyer studied a more general version of the cube recurrence, which they call the edge-variables version. Define edge variables for each . A function satisfies the edge-variables version of the cube recurrence if
for . Carroll and Speyer constructed combinatorial objects called groves (see Figure 1 for an example), which they showed are in bijection with the monomials in the Laurent polynomial generated by the edge-variables version of the cube recurrence. This was used to give a combinatorial proof of the Laurent property.
The cube recurrence is related by a change of variables from to conductance variables (defined in Section 3) to the transformation for resistor networks discovered by Kennelly [K] (see [FZ], [GK]). Petersen and Speyer showed in [PS] that random groves sampled with the uniform probability measure satisfy a limit shape theorem. In this paper, we extend their result to a larger class of probability measures on groves, namely those that arise from solutions to the cube recurrence that are periodic in the conductance variables. We construct such solutions to the cube recurrence and follow the method of Petersen and Speyer in [PS] to obtain generating functions for the probabilities of each of the three types of edges being present or absent at each site in a random grove. The periodicity assumption on the conductance variables leads to a system of linear equations (equation 7.3), whose solution gives the generating function. We then use the general theory of asymptotics of multivariate generating functions developed by Baryshnikov, Pemantle and Wilson ([PW1], [PW2], [BP] and [PW3]) to find the limit shape and compute the asymptotic edge probabilities (equation 11.1). The non-periodic probability measures on groves studied by Petersen and Speyer have limit shapes that are ellipses, whereas we obtain a larger class of algebraic curves having cusp singularities reflecting the periodicity (see Figure 2).
Acknowledgements. I would like to thank my advisor Richard Kenyon for numerous discussions and ideas that led to this paper and Ian Alevy for reading through the paper and for many helpful suggestions.
2. Groves and the cube recurrence
We recall some terminology and basic properties of groves from [CS]. A rhombus is any set in one of the following three forms for :
We call the edges and the long diagonal and the short diagonal of the rhombus respectively and similarly define the diagonals for the other two types of rhombi.
Let be the graph with vertex set and edges the long and short diagonals appearing in each rhombus in . Then an -grove is a subgraph of with the following properties:
The vertex set of is all of .
For each rhombus in , exactly one of the two diagonals occurs in .
There exists such that if all the vertices of a rhombus satisfy , the short diagonal occurs.
For large enough, every component of contains exactly one of the following sets of vertices, and each such set is contained in a component of .
, , and for all with and ;
, , and for .
It is shown in [CS] that groves are completely determined by their long diagonal edges. Therefore, we can represent groves as a spanning forest of a finite portion of the triangular lattice (see Figure 1), which is called a simplified grove.
Suppose is a set of initial conditions. The edge-variables version of the cube recurrence gives as a rational function in the variables . The following is the main result of [CS].
Theorem 2.1 ([CS]).
where is the degree of the vertex in the grove .
Now suppose satisfies the cube recurrence. Since are positive real numbers, by Theorem 2.1,
defines a probability measure on . Therefore any function that satisfies the cube recurrence induces a family of probability measures .
3. Conductance variables and the Y-Δ move
We define the conductance of the long diagonal to be
and that of the short diagonal to be
Similarly we define the conductances of the other diagonals. The action of the Y-Δ move (see for example [GK]) gives
Each of these equations becomes the cube recurrence when written out in terms of . Therefore the Y-Δ move on the conductance variables is related to the cube recurrence on the variables by this change of variables.
Note that if
We can therefore take the edge-variables to be the conductances. Let
Note that (3.1) is equivalent to
and using this, we get
4. Grove Shuffling
Suppose we have a solution to the cube recurrence such that the conductance variables satisfy . Grove shuffling is a local move that generates groves with measure and couples the probability measures for different initial conditions in a convenient way (see Figure 4). Grove shuffling takes a cube in , removes it and replaces a configuration on the left in Figure 4 with the corresponding configuration on the right. The only random part is (A), where the configuration on the left is replaced with one of the configurations on the right with the probabilities indicated on the arrows. Note that the probabilities sum to 1 by .
We can generate a random grove on initial conditions as follows. Start with the unique grove on . Use grove shuffling to remove cubes until you end up with initial conditions . The following lemma shows that this can always be done.
Lemma 4.1 ([CS]).
Suppose . Then there exist such that (and so )
The key ingredient in computing the limit shape is the following theorem that shows that substituting the conductances for the edge-variables gives an algebraic interpretation for the measures .
Suppose is a solution to the cube recurrence such that the conductances satisfy (3.1). Then grove shuffling generates groves with measure , regardless of the order in which cubes are shuffled. Moreover in the edge-variables version of the cube recurrence we have
The proof is by induction on . If then it is clear. Suppose is not empty. Choose as in lemma 4.1. We obtain the initial conditions by shuffling the cube with vertex in . Consider any grove . Since belongs to three rhombi, it has degree 3, 2 or 1.
Suppose has degree 1. Then belongs to a triple of groves, say in the order shown in Figure 4 (A), all of which are obtained from a single grove by shuffling with the probabilities indicated. Without loss of generality, we may assume that is . Since each shuffle is independent of the others, the probability of obtaining is
In vertices and have degree one more than they did in , while had degree 3 in and has degree 1 in . Therefore, by Theorem 2.1
Therefore the probability of obtaining by shuffling is . By the induction hypothesis,
There are two new long edges in and therefore, by Theorem 2.1,
When has degree 2, all the vertices have the same degree as before and both and have the same long edges, and therefore and are identical. Since is obtained deterministically upon shuffling from , is the same as .
Suppose has degree 3. Then there are three groves, say , and (in the order in Figure 4(B)) that upon shuffling the cube at yields . Hence the probability of obtaining the grove is
In , vertices and have degree one greater than they do in , has degree 1 in and in , has degree in and in , and all the other vertices have the same degree. Therefore,
Similarly we obtain equations for and and summing and using the fact that satisfy the cube recurrence, we get
and so the probability of obtaining is . Let denote the term in corresponding to the grove . Then by the induction hypothesis, . By [CS] Theorem 1, we have
An analysis similar to the case of degree one above shows that this is ∎
5. Creation rates
Let . Let be a set of initial conditions such that and let have distribution . Define the long edge probability
These are well defined since if is another set of initial conditions, then we can use grove shuffling to move between and leaving the rhombus intact. Similarly define
and the creation rates
It was shown in [PS] that
and therefore by Corollary 4.3,
The following result lets us obtain the generating function for from that of .
Theorem 5.1 ([PS, Theorem 2]).
The edge probabilities are given recursively by
6. Periodic conductances
We are interested in solutions to the cube recurrence whose conductance variables are periodic. For the sake of clarity, we compute the limit shape for a particular fundamental domain (see Figure 5), but the same method can be used to compute the limit shape for any choice of fundamental domain. Suppose we have satisfying the conditions
The function given by
is a solution to the cube recurrence with conductances
Let denote the family of probability measures induced by . Similarly we define with conductances
and probability measures .
Let and denote the respective creation rates.
Let and be such that is in the lower cone . Then
We translate so that moves to , that is define to be . Then satisfies the edge-variables cube recurrence with conductances if is even and if is odd. Therefore,
The formula now follows from (5.1). ∎
7. Generating functions for the creation rates and the limit shape
For with even, the edge-variables version of the cube recurrence for is
Differentiating with respect to , we obtain
Let and . Setting for all and using (6.1) we get
for . Similarly, taking odd yields
for . On the boundary, we have the following recurrences
and similarly for .
Solving this, we obtain
“The ability for a system to always be functioning and accessible, with minimal downtime”
For any business, it is vital that customers can access its systems at all times, with no downtime. (Downtime is any time the system is not available.)
For every second a customer-facing system is down, the business may miss sales and suffer reputational damage through lowered customer satisfaction. Even slow responses can have this impact: if a request takes too long, the customer will be dissatisfied with the service and may go elsewhere. The system must always be available and operating smoothly to facilitate business success.
High availability is not just important for customer facing systems. If an internal system is consistently down then productivity will be reduced and time will be taken from important tasks until the system is functioning again.
It is clear why High Availability is vital for systems, but how is it achieved? The answer is Fault Tolerance.
“The ability to remain fully operational, even if some components of the system fail”
The key point to understand is that Fault Tolerance allows for High Availability in systems. The concepts are linked, but still different. Systems designed with Fault Tolerance in mind can have areas fail without impacting the functionality of the system, thus achieving High Availability.
The core principle when designing a fault tolerant system is ‘redundancy’, always having alternative servers that can continue functioning and handle traffic if another server goes down.
Hypothetically, a business could have thousands of additional servers set up and dealing with requests, which would prevent any one server becoming overloaded. However, there are other considerations. For example, how does the system know when a server is down and that it should direct traffic elsewhere, saving customers from accessing an unhealthy server and getting a failure response?
In reality, the cost of this setup would be astronomical due to the purchase and maintenance of such a high number of servers. It is therefore not feasible - cost is the main barrier to achieving Fault Tolerance.
Everything required for a Fault Tolerant system is possible and simplified in the cloud.
Servers can be accessed on demand and you are only charged for the resources you use. There is no longer a need for an extensive backup infrastructure in which servers sit underutilised, held in reserve in case they are needed to handle the load of a failed server. Using Auto Scaling in the cloud, capacity can be increased as needed, and you only pay for the capacity used.
Health checks can be easily created to monitor the running of a server, alert when there is a failure and redirect traffic to a healthy server. This instantaneous traffic redirection means there is no downtime if a server fails. Furthermore, a new server will instantly be provisioned to replace the failed resource. This Autohealing keeps the system at the required baseline for successful functionality.
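The health-check-and-redirect idea can be sketched in a few lines. This is a minimal illustration, not any provider's actual implementation: the server names and boolean health states are hypothetical stand-ins for what a real load balancer learns by probing health-check endpoints.

```python
# Minimal sketch of health-check-based traffic redirection.
# Server names and health states are illustrative; a real load balancer
# probes each server's health endpoint and replaces failed instances.

servers = {
    "server-a": True,   # passing health checks
    "server-b": False,  # failed its last health check
    "server-c": True,
}

def route_request(servers):
    """Return the first healthy server, skipping any that failed checks."""
    for name, healthy in servers.items():
        if healthy:
            return name
    raise RuntimeError("no healthy servers available")

print(route_request(servers))  # server-a

# If server-a then fails, traffic is redirected with no downtime:
servers["server-a"] = False
print(route_request(servers))  # server-c
```

In a real system the health map is updated continuously by the monitoring layer, and autohealing provisions a replacement for any server that drops out.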
A catastrophic event such as a natural disaster could destroy an entire data center full of servers, instantly wiping out every resource and causing irreparable failure of the system if every server were housed there. In the past, avoiding this meant incurring the huge cost of spreading servers across two isolated data centers, which also added complexity to managing the system. Within the cloud, resources can be spread across a multitude of data centers with ease, and even expanded across regions, to maximise Fault Tolerance and guarantee High Availability.
For advice on making your Cloud environments fault tolerant and highly available, contact one of our Solution Architects. Email [email protected]
There is currently no news.
Trustworthy Machine Learning
Machine learning has made great advances over the past years and many techniques have found their way into applications. This leads to an increasing demand for techniques that not only perform well - but are also "trustworthy".
- Interpretability of the prediction
- Robustness against changes to the input, which occur naturally or with malicious intent
- Privacy preserving machine learning (e.g. when dealing with sensitive data such as in health applications)
A proseminar’s primary purpose is to develop presentation skills, so the seminar will feature two presentations from each student. As presentation and writing skills are highly interlinked, a very short report - at most 2 pages - must also be handed in for each presentation.
In the first half of the semester, we will have presentations of two topics each week. After each presentation, fellow students and lecturers will provide feedback on how to improve the presentation. This general feedback must then be taken into account for the second half of the semester, where again each student will present.
The *first presentation and report* will count towards 30% of the overall grade; the *second presentation and report* will count towards 70%. Attendance in the proseminar meetings is mandatory. At most one session may be skipped; beyond that, you need to bring a doctor’s note to excuse your absence.
The date for the meeting is fixed to Thursday, 14-16. All meetings will be virtualized via Zoom. Here are the details for joining the virtual meetings.
Meeting ID: 942 9134 5987
|May 7th||Kick off Meeting and topic overview (slides, video)|
|May 14th||How to present and write (slides, video)|
|May 28th||Analysing and dissecting writing and presentation (paper1, video1, paper2, video2, seminar video)|
|June 4th||no seminar|
|June 18th||(first round) Interpretability, Adversarial Examples, DeepFakes, Model Stealing|
|June 25th||(first round) Uncertainty, Privacy, Fairness, Causality|
|July 2nd||no seminar|
|July 9th||(second round) Interpretability, Adversarial Examples, DeepFakes, Model Stealing|
|July 16th||(second round) Uncertainty, Privacy, Fairness, Causality|
First Round Papers (in random order)
- Uncertainty:
Dropout as a Bayesian approximation: Representing model uncertainty in deep learning
Y Gal, Z Ghahramani - International conference on machine learning, 2016
- Model Stealing:
Stealing Machine Learning Models via Prediction APIs
Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart, USENIX Security, 2016
- Interpretability:
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda B. Viégas, Rory Sayres, ICML 2018
- Adversarial Examples:
Towards Evaluating the Robustness of Neural Networks.
Nicholas Carlini, David Wagner, S&P 2017
- DeepFakes:
Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints
Ning Yu; Larry Davis; Mario Fritz, ICCV 2019
- Privacy:
Deep Learning with Differential Privacy
Martín Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang, CCS 2016
- Fairness:
Fairness Constraints: A Flexible Approach for Fair Classification
Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, Krishna P. Gummadi, JMLR 2019 (previously AISTATS 2017)
- Causality:
Discovering Causal Signals in Images
David Lopez-Paz, Robert Nishihara, Soumith Chintala, Bernhard Schölkopf, Léon Bottou, CVPR 2017
Initially not working with multiple devices
I have two Wemo Insights and both of them are detected by wemo-homecontrol, but when you go to HomeKit only one of them appears. It seems that you create a new IPTransport per device:
t, err := hc.NewIPTransport(hcConfig, acc.Accessory)
I ran a test with all the devices added to the same IPTransport, and in that case all devices appear properly in HomeKit. When you register the device it appears as a gateway, and once connected you have both of the switches.
I don't know if the problem is in the usage of hc (github.com/brutella/hc) or is indeed a problem in hc itself. Looking into it, hc doesn't allow adding new accessories (devices) after an IPTransport is created.
Apart from that all is working like a charm, thanks for it. If you need any help with the code, let me know.
Something I am working on:
Force the addition of devices without discovering them, in order to add devices from different networks
Be able to run wemo-homecontrol in a docker container
Thanks for the report. I only have one of these devices, which would explain why it doesn't work with multiples. I suspect it's a problem with my usage of hc. I'll take a stab at fixing it, and ask you to test it when it's ready.
By the way, I'd be happy to take a Dockerfile for this.
Hmm, not a lot of other hc users do multiple accessories, but some that do (including one by the author) do it similarly to how I'm doing it in this module. An example, https://github.com/brutella/hklifx/blob/master/hklifxd.go#L128-L162
Well, I don't know. I am testing with an "out of the box" configuration on all elements:
Configure the switches with the mobile wemo app
Both of them are visible on the app and you can turn them on and off
Clone your git repo, compile and run
Both switches are detected by the program
Open HomeApp on the iPhone
It detects only one of them as a switch, you can add it to your HomeKit environment
But now that I am writing this, I think the problem may be somewhere else. I will need to test it again when I am back home, but I think that the one detected is not the one added. I mean, it detects one, but then the one you see in HomeKit is the other one - though I am not 100% sure about this. Looking into the function itself: https://github.com/brutella/hc/blob/master/ip_transport.go#L52-L68 it mentions that the database can't be the same:
// *Important:* Changing the name of the accessory, or letting multiple
// transports store the data inside the same database lead to
// unexpected behavior – don't do that.
What I am 100% sure of is that if, instead of calling it once per found device, I add them as a list of accessories, both of them appear properly in the HomeKit app.
t, err := hc.NewIPTransport(hcConfig, acc1.Accessory,acc2.Accessory)
Let me check a little bit more and I will let you know.
Ah! Thanks for the tip. I think the issue is that I am setting the storage path to be ~/.homecontrol/wemo, https://github.com/joeshaw/wemo-homecontrol/blob/master/main.go#L461. I think I was under the assumption that was a base path, and that hc would create a directory per-accessory under that. This should be a relatively simple fix, I will pass in a new config each time with the serial number tacked onto the end of the directory.
Confirmed, changing the StoragePath in the hcConfig solves the problem.
One last thing you could check for me: do your accessories have exactly the same name?
No, each one of them has a different name; I have not checked whether you can even name them identically in the Wemo app. Anyhow, if you want to be sure they are different, you can use the serial number.
Or sorry, were you asking me to test using the same device name for both switches?
No, I was just curious if they had the same name, for the purpose of the storage directory (since hc's default is to use the accessory name). But it'll just be simpler to use the serial number.
Covered by this topic
The following page defines data and fields that may be imported into MIE systems (WebChart, Enterprise Health) to create structured text (HTML) summary documents using the Summary Documents CSV API.
The abstract that follows should be presented to decision-makers or stakeholders interested in a general explanation of the Summary Documents CSV API. Technical details are provided in the remaining sections.
The Summary Documents CSV API imports non-discrete text data as an HTML document.
It is valuable to recognize the following terminology as it pertains to MIE systems:
- A document in EH is a way of storing information in patient charts. This includes patient photographs, insurance cards, physician or nurse notes, imaging studies, past medical histories, physician tasks for a patient, CCDs and CDAs, email correspondence about a patient, injections, and many other forms of data.
- A chart is a patient’s electronic medical information organized in tabular form. A chart is simply a way to collect different information on one topic, just like a physical patient chart would contain a variety of information on an individual patient.
- Free text refers to text that is entered free-form into a system and is not subject to any type of formatting or standards.
CSV refers to the type of file and format of data needed to import information into an EH system. API refers to how the data interacts with the EH system. See the Import Overview page for a more detailed explanation of terminology.
The following screenshots show a simple CSV file, and the resulting summary document in an EH system. Example data is available on the tab “DB_Example” in the specification (see link in Specification section of this page).
The first several columns in the example CSV dictate some discrete metadata for the document, such as the Chart ID (documents.pat_id) and External ID (documents.ext_doc_id).
Following the discrete fields, a Section Header (section_header), several Name Value Pairs (name_value.NAME), and a Narrative with Prefix (narrative.PREFIX) follow to create the body of a case summary.
Each column in the CSV above corresponds to a line of text in the resulting summary document. In this example, there are two section headers centered at the top of the document, several field name and value pairs, and at least one narrative block of text under each section header. These fields are all optional, repeatable, and may be ordered in any way to create a custom document to fit the needs of the client’s data. Keep in mind that any of the data in the body of the document composed of the section headers, field name and value pairs, and narratives are not discrete and not searchable.
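A file of this shape can be produced programmatically. The sketch below is illustrative only: the header strings follow the field names described above, but the exact header syntax, the chart ID, and the name-value column label are hypothetical placeholders — consult the specification for the authoritative format.

```python
import csv

# Illustrative headers: discrete metadata columns first, then the
# repeatable body columns (section header, name/value pair, narrative).
headers = ["documents.pat_id", "documents.ext_doc_id",
           "section_header", "name_value.Status", "narrative"]

# One example row; IDs and text are made up for illustration.
rows = [
    ["12345", "EXT-001", "Case Summary", "Open",
     "Patient seen for annual physical; no concerns noted."],
]

with open("summary_documents.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(headers)
    writer.writerows(rows)
```

Each body column then becomes a line of formatted text in the resulting summary document, in the column order given.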
The last screenshot displays a list of the Document Summary in EH. This is a listing of all of the documents in a patient’s (employee’s) chart. This view allows a user to see at a glance service dates, locations, document types, and the document title or subject of all of the documents on a chart.
The following subsections outline situations in which summary documents are useful, and when they are not.
Summary documents are stored in EH as formatted text documents. They are most useful for storing notes (physician or nurse notes, emails regarding an individual, contact notes, etc.), or two-column structured content that is not used for reporting, such as lists of medications or injections.
Summary documents can be used to store any type of information relatively quickly. There is no mapping of discrete fields to MIE’s API or converting internal codes to MIE’s specifications, which is tedious in some applications. Use summary documents for quick conversions to create reference documents for use in the clinic.
The data in the body of a summary document is not discrete. Only information in the document header (Chart ID (documents.pat_id), Service Date (documents.service_date), Location (documents.location), etc.) is stored as discrete data. The contents of the document body are not stored discretely and may not be searched or reported on in EH.
Additionally, only text may be stored by this API. While documents in EH may be anything from images, PDFs, Word and Excel documents, or many other storage types, this API is only for structured text documents stored as HTML.
The prior section on Disadvantages of Summary Documents gives an overview of the main drawback of using this import: the data in the body of the document is not discrete data. Some data is stored discretely on each document. Typically this data is useful for categorizing and quickly finding a summary document. As discussed later in this document, many of these fields are considered best practice to specify data.
- Chart ID (documents.pat_id): A discrete identifier used to connect the summary document to a specific chart.
- External ID (documents.ext_doc_id) and Interface: The interface name entered at the time of data import, as well as the External ID (documents.ext_doc_id) (typically the autoincrementing or unique key from the source database or spreadsheet), is stored discretely, although it cannot be viewed from the front end of EH.
- Author (Originator) ID (documents.origin_id) and Entering User ID (documents.user_id): Both the creator of the content and the one who enters the data are stored discretely.
- Service Date (documents.service_date), Origin Date (documents.origin_date), and Enter Date (documents.enter_date): These dates are all stored discretely for each document.
- Location (documents.location): The service location may be used to specify either a clinic location or the system from which the data came (e.g., Medgate, OHM, and so on). If a clinic or service location is readily available to map to a location in EH, that is typically preferred.
- Document Type (documents.doc_type): The document type classifies the contents of a document. This helps to quickly understand at a glance what kind of data may be found in the document. Additionally, classifying documents with document types creates the ability to search for and report on documents in the system. Examples of document types include insurance cards, patient photos, nurse notes, and so on.
- Document Title (documents_txt.subject): The title or subject of the document is a short text field that is unindexed in the database, but it is stored separately from the body of the document. It is used for quick reference to specify the contents of the document from an EH list without actually opening the document. Thus, it is included as a key piece of discrete data to quickly provide a preview of the document.
The CSV Summary Document headers Section Header (section_header), Name Value Pair (name_value.NAME), Narrative (narrative), and Narrative with Prefix (narrative.PREFIX) are all used to build content for the body of the document. Any data included in the CSV under these headers is not discrete, searchable, or reportable.
Examples of Discrete and Non-Discrete Data
The first screenshot shows a list (listview) in EH. This can be seen by selecting the Document Summary tab from a chart. A list of documents is displayed showing dates, location, doc type, and title.
The second screenshot shows a document’s header, which contains much of the discrete data discussed above.
Document properties display discrete information about a document that is not available in the document header.
Documents may be searched using the following criteria in EH: patient name, entering user, authoring user, interface name, location, service date, creation date, revision date, subject, storage type (this API always creates HTML files), and Doc ID, which is an internally assigned identifier.
Many clients have opted to create summary documents for their medications, injections, or other data that is discretely coded in EH. In the legacy systems that are converting to EH (Medgate, OHM, spreadsheets, etc.) data is often entered as free text, including typos, without a coding system, or with incomplete data. Clients may not want to take the time to map free text or incomplete data from the legacy system to MIE’s coding standards at the time of conversion. In those instances, the client creates summary documents for the data, since creating summary documents may be much quicker to create with EH’s API. Then the data is reviewed with the patient/employee during the first clinic visit after EH is in use. At that time, a clinician can go through the legacy summary document and quickly add the relevant data discretely using EH’s fast autocompleting fields and drop-down menus, ensuring proper coding, and facilitating reporting on the data.
It is sometimes valuable to import questionnaire data as a summary document. In this example, the Name Value Pairs (name_value.NAME) columns function as questions from a questionnaire with the responses listed in the corresponding column.
The questionnaire document is listed for the specified patient (Dolly Bacon).
Questions and corresponding responses are listed in the questionnaire summary document.
The following sections provide insight for technical personnel working with the provided import specifications. Although the specifications provided include details on each field utilized in the import, the sections below include further discussion on best practices for imported data to provide the best functionality in Enterprise Health.
Additionally, user instructions are available for importing data in EH.
Definitions for the columns utilized in the specification, as well as commonly used specific coded values appear on the Data Import Standards page.
The following fields (indicated in the Data Name column) are noted as required (R) or are recommended as best practice (BP) in the Summary Documents CSV API specification. Additional details and considerations are provided here.
The following fields are required:
- Chart ID (documents.pat_id) and Chart ID Type (documents.pat_id_type) are used to correctly identify a chart.
- External ID (documents.ext_doc_id): Identifies a record in the original data source (i.e., this is often the primary or unique key on the table of the legacy database that is being migrated to the MIE system).
- Document Type (documents.doc_type): Used to categorize documents, as mentioned above. All documents in EH have a document type. Note that the document type listed in the CSV must exist in the EH system (in EH, go to Control Panel > Document Types) or that line will be rejected. For further reading on document types, refer to the Enterprise Health online help titled Document Type Tab.
Although this information is not required, it is considered a best practice to use at least some of these fields to populate information in the header of a document for identification and organizational purposes:
- Service Date (documents.service_date): Shows the pertinent date for the summary document, and it is displayed in the document list view.
- Location (documents.location): Shows where the service took place. This piece of metadata may be used in reporting.
- Document Title (documents_txt.subject): Displays in the listing of documents in EH and helps identify a document quickly.
- Section header (section_header), Name Value Pair (name_value.NAME), and Narrative (narrative) are used to structure the contents of the document.
The following optional fields are needed to link the document to a patient encounter:
- Encounter External Identifier (encounters.ext_id)
- Encounter Interface (encounters.interface)
Including the field encounter order_id will also create an encounter order of the type identified in the field.
For complex queries (one-to-many) that generate CSV content, you may concatenate multiple rows into a single document in EH.
Documents are grouped by required fields. The screenshots below show an example that creates two documents.
The combined summary documents display on the patient’s chart.
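The grouping step behind this one-to-many concatenation can be sketched as follows. The grouping key (Chart ID plus External ID) and the sample rows are illustrative assumptions; the actual grouping is performed by the import over the required fields.

```python
from itertools import groupby

# Illustrative one-to-many rows: (chart_id, ext_doc_id, narrative).
rows = [
    ("12345", "EXT-001", "First medication entry."),
    ("12345", "EXT-001", "Second medication entry."),
    ("12345", "EXT-002", "Separate visit note."),
]

# groupby only merges adjacent rows, so sort by the grouping key first.
rows.sort(key=lambda r: (r[0], r[1]))

documents = []
for key, group in groupby(rows, key=lambda r: (r[0], r[1])):
    # Concatenate the narrative cells of all rows sharing the key
    # into a single document body.
    body = " ".join(r[2] for r in group)
    documents.append({"chart_id": key[0], "ext_doc_id": key[1], "body": body})

# Result: two documents -- EXT-001 with both narratives merged, and EXT-002.
```

Rows that share the required fields collapse into one document; rows with a distinct key produce separate documents, matching the two-document example above.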
Examples using sample data are provided on separate tabs in the specification.
Unless otherwise specified, validation between the previous system and the new EH system requires the client to provide a number of test patients. This data can be compared in the previous system and EH using the validation test script.
Questions about combustion and isotopic abundance
the combustion of 0.214g of sulfur provided exactly 0.428g of sulfur dioxide. After a second try, 0.782g of SO2 was obtained. What mass of sulfur was used?
The combustion of lithium forms lithium oxide. Combustion involving a 1.451g sample of oxygen results in obtaining 2.710g of Li2O. What mass of lithium was involved in this reaction?
Natural argon has three isotopes: Ar-36 (35.967u), Ar-38 (37.962u), and Ar-40 (39.962u). if the isotopic abundance of Ar-36 is 0.006%, find the isotopic abundance of the other two
After pulling my hair out:
I still have no idea how to do this question.
I balanced the equation $4Li + O_2 \rightarrow 2Li_2O$, then found the moles of both oxygen and lithium oxide, which were 0.090688 mol and 0.090333 mol. Using the smaller number of moles, I set up the ratio $\frac{4Li}{2Li_2O} = \frac{x}{0.090333\ \text{mol}}$
x = 0.1807 mol, which equals 1.2647 g of Li
Is this correct?
35.967(0.00006) + 37.962(x) + 39.962(1-x) = 39.948
0.00215802 + 37.962x + 39.962 - 39.962x = 39.948
-2x = -0.01615802
x = 0.00807901
which means Ar-38 is 0.8%, but the answer clearly says it is 0.588%
StackExchange should not do your homework for you, as that would be cheating.
@EricBrown First, this is not homework; these are just extra practice problems for the exam. Second, I don't know how to do this, and there is nothing wrong with asking for help. There are more questions in this format; I only asked one of each type so that I can apply the same method to the others. If I intended to have people do my homework for me, I would have posted all of them.
Great. I'm sure that you can understand why I made my previous statement, as there are two camps of folks who answer questions on SE -- the very few who will answer homework questions outright, and those who will answer given some demonstration of effort by the person asking.
So 'demonstration of effort' means something like... showing what I got up to and let people correct my mistakes? If so, then sure I'll do that.
I checked a couple of references and they say the natural abundance of Ar-36 is 0.334%; Ar-38 is around 0.06%
Yes the answers at the back says Ar-38 is 0.588% and Ar-40 is 99.406% But I have trouble getting these numbers.
I don't get those numbers either. In the first question, write the equation for the burning (oxidation) of sulfur in oxygen and you'll see that 1 mole of sulfur generates 1 mole of sulfur dioxide. Convert the masses of sulfur and sulfur dioxide to moles and you'll see that 0.007 moles of sulfur produced 0.007 moles of sulfur dioxide. Then they did a second experiment and obtained 0.782 g of sulfur dioxide. How many moles is that; so then how many moles of sulfur did they start with, and how many grams of sulfur is that?
whoops, wouldn't 3 involve (1-.00006-x)?
BTW, in problem 3, because you are dealing with such widely different isotope natural abundances (e.g Ar-38 = 0.006% while Ar-40 = 99.6%) it is critical to use isotope masses out to at least 6 decimal places. Otherwise your answer will be way off.
omg that's right I completely ignored the whole 0.00006 thing. Should be 1-0.00006-x. And I tried to keep a lot of decimal places too but the problem is that, usually during the exam, at the top of the page it would say something like 2 marks for keeping all significant figures. Sometimes there isn't any choice of how many decimal places I can keep. Anyway, thanks for the help.
$\frac{0.214g}{0.428g} = \frac {x}{0.782g}$
Your answer is ok, but on an exam you want to solve quickly.
2.710g - 1.451g = ?
wasn't that easier?
3.
35.967(0.00006) + 37.962(x) + 39.962(1-x) = 39.948
This is slightly wrong. The abundances should be 0.00006, x, and 0.99994 - x, so that they sum to 1.
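The arithmetic in this thread can be checked directly. Note that with the three-decimal isotope masses given in the problem, the corrected equation yields about 0.69% for Ar-38 rather than the textbook's 0.588% — consistent with the comments above that the book's numbers don't reproduce.

```python
# Q1: sulfur combustion, S + O2 -> SO2, a 1:1 mole ratio,
# so the mass of S scales linearly with the mass of SO2 produced.
s_mass = 0.214 / 0.428 * 0.782
print(round(s_mass, 3))  # 0.391 g of sulfur

# Q2: conservation of mass: Li mass = Li2O mass - O2 mass.
li_mass = 2.710 - 1.451
print(round(li_mass, 3))  # 1.259 g of lithium

# Q3: isotopic abundance, with abundances summing to 1:
#   m36*a36 + m38*x + m40*(1 - a36 - x) = average atomic mass
m36, m38, m40 = 35.967, 37.962, 39.962
a36, avg = 0.00006, 39.948
x = (avg - m36 * a36 - m40 * (1 - a36)) / (m38 - m40)
print(round(x * 100, 3))              # ~0.688 % for Ar-38
print(round((1 - a36 - x) * 100, 3))  # ~99.306 % for Ar-40
```

As noted above, using isotope masses to six decimal places changes the result noticeably, because the abundances differ by several orders of magnitude.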
Tabs can also be pinned/un-pinned using the keyboard shortcut ‘p‘.
New search keyword—pinned tabs
Use ‘is:pinned‘ to search only pinned tabs. ‘-is:pinned‘ searches only in unpinned tabs.
This keyword may be used together with regular search words and the ‘is:suspended‘ keyword.
New: Copy tabs to clipboard
Use the copy button, next to the tab count, in tab search window to copy tabs to clipboard.
Copies tab title and URL
Copies tabs filtered in tabs list
For suspended tabs, the original URL is copied
Use the setting on options page to specify the format of copied text—plain text, Markdown, or URLs only.
New: Ignore query when comparing duplicates
‘Ignore query’ is now available as an experimental option (bottom of the options page). When enabled, the query portion of the URL is ignored when comparing duplicates. E.g. ‘https://www.google.com/‘ and ‘https://www.google.com/?hl=hi‘ will be treated as duplicates.
Enabling this setting may cause unexpected behaviour on many sites. Using the query parameter to refer to distinct content is a common web practice.
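The comparison presumably works along these lines (a sketch of the idea, not the extension's actual code): drop the query component before comparing URLs, so two URLs differing only in their query string compare equal.

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url):
    """Drop the query string so URLs differing only in query compare equal."""
    scheme, netloc, path, _query, fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", fragment))

a = strip_query("https://www.google.com/")
b = strip_query("https://www.google.com/?hl=hi")
print(a == b)  # True -- treated as duplicates when the option is enabled
```

This also illustrates the risk: a site using queries for distinct content (e.g. `?page=2`) would wrongly be flagged as a duplicate.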
Suspend tab button is disabled for chrome:// and chrome-extension:// tabs as these can’t be suspended by The Great Suspender
Bug fix: Selected tab lost focus after an action in tab search
delete, home, end, and enter keys were not working in filter text input field
When The Great Suspender is detected as installed, the tab search window will show an additional tab action—to suspend a tab, or to activate a suspended tab.
Additionally, search for suspended tabs with ‘is:suspended’ tag in search. Search for active tabs with ‘-is:suspended’. These tags can be combined with other keywords, e.g. ‘amazon is:suspended’ to search for suspended amazon tabs.
Suspended tabs are shown with a dull grey background.
Finally, the selected tab may also be suspended using the keyboard shortcut ‘s‘ and unsuspended with keyboard shortcut ‘u‘.
Quickly search through all your tabs from extension popup ( keyboard shortcut1: alt + x). Switch to selected tab, or close them directly from the popup.
To switch to a tab, click on it in the list, or navigate to it with the up/down keys and then press enter,
To close a tab, click on the X icon next to its title, or navigate to it with up/down keys, and press delete,
Pressing escape clears the tab selection and the search box, and
Pressing F1 brings up the list of available keyboard shortcuts.
Tabs suspended by ‘The Great Suspender’ extension appear with a slightly greyed out background.
By default, the extension lists tabs from all windows. You may change this in settings under Options > Tab Search to show tabs only from the current window.
The actions in the old popup (open settings, whitelist page, duplicate page, et al) are available under the Actions header. The extension opens in the same section as you used last time – actions or tab search.
Clutter Free - Version 2017.0531.4.7
This shortcut may not be enabled if another extension is already using it. You may set another shortcut by following the link in Options > General settings > Keyboard shortcuts. ↩
You can specify the configuration information that can be used to create Elastic Compute Service (ECS) instances in a launch template based on your needs, and then use the launch template as the basis for creating ECS instances. This topic describes the notes for creating launch templates, how to create a launch template, and operations that you can perform by using launch templates.
You can create up to 30 launch templates in each region within an account.
When you create a launch template, all parameters are optional. However, if a launch template does not contain the parameters required to create an instance, such as the instance type or image, you must add them when you use the template to create instances.
You cannot modify launch templates after you create them. However, you can create new versions for launch templates.
Create a launch template in the ECS console
You can create launch templates beforehand to simplify the creation of ECS instances, scaling groups, and auto provisioning groups.
Log on to the ECS console.
In the left-side navigation pane, choose .
In the top navigation bar, select the region and resource group to which the resource belongs.
On the Launch Templates page, click Create Template.
On the Launch Template page, specify parameters in the Basic Configurations (Optional) and Advanced Configurations (Optional) steps.
For information about the parameter configurations and descriptions, see Create an instance by using the wizard.
Note: The first time you create a launch template, the Clone Template section is unavailable. If you have already created launch templates, you can select an existing template and a specific version and then modify the configurations.
In the Confirm Configurations step, enter a template name and a template version description. Then, click Create Launch Template.
Selected Configurations: You can click the icon in the Basic Configurations and Advanced Configurations sections to modify the parameters.
Note: The parameters in the Basic Configurations and Advanced Configurations sections are optional. Configuring them in the template saves you from re-entering them each time you create an instance.
Save As: You can specify how to save the current configurations based on your needs.
Create Template: If you select Create Template in the Save As section, the current configurations are saved as the default version of a new launch template.
Create Version: You can select an existing template and save the current configurations as the latest version of the launch template.
Template Name and Version Description: You can enter a name for the launch template and a description for the template version for future management.
Template Resource Group: You can select an existing resource group to assign the launch template to the resource group.
If you want to create a new resource group, click here to go to the Resource Group page and create a resource group. For more information, see Resource groups.
In the Success message, click View Template to go to the ECS console and view the launch template that you created.
Note: You can view all created launch templates in the template list on the Launch Templates page.
Create a launch template by calling an API operation
Call the CreateLaunchTemplate operation to create a launch template.
After you create a launch template, you can perform the following operations based on your business requirements.
Create an ECS instance
You can use an existing launch template to quickly create an ECS instance. This eliminates the need to repeatedly configure parameters.
Create multiple ECS instances at a time
You can use a launch template together with the RunInstances operation to create multiple ECS instances. This eliminates the need to enter a large number of parameters each time you create instances.
You must specify the LaunchTemplateId and LaunchTemplateVersion parameters when you call the RunInstances operation.
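As a minimal sketch, the parameters named above can be assembled like this (Python, with a hypothetical helper; the template ID and version are placeholders, and in practice the dict would be passed to the RunInstances operation through an SDK or the CLI):

```python
# Hypothetical helper that assembles parameters for the RunInstances
# API operation. Parameter names follow the documented API; the
# values here are placeholders, not real resource IDs.
def build_run_instances_params(template_id, template_version, amount=1):
    if amount < 1:
        raise ValueError("amount must be at least 1")
    return {
        "LaunchTemplateId": template_id,            # which launch template to use
        "LaunchTemplateVersion": template_version,  # which version of the template
        "Amount": amount,                           # how many instances to create
    }

params = build_run_instances_params("lt-exampleid123", "2", amount=3)
```

Any parameter supplied at call time overrides the corresponding value stored in the template version.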
Create a scaling group
You can use an existing launch template to quickly create a scaling group based on ECS instances. The system uses configurations defined in the launch template to create a scaling group. If specific configurations do not fulfill business requirements, you can modify the configurations when you create the scaling group. For example, you can modify the virtual private cloud (VPC) and vSwitch in the scaling configuration.
Create an auto provisioning group
Auto provisioning groups use specific versions of launch templates as instance configuration sources. Attributes such as instance images, security groups, and logon credentials from the launch templates are used by auto provisioning groups to create ECS instances. After an auto provisioning group is created, an ECS instance cluster is started and provisioned at the specified point in time, which improves the efficiency of offering a large number of ECS instances at a time.
Web 3 "Value Internet" Quantitative Indicators: From BTC to Ethereum to MakerDAO
Source: the WeChat official account Encrypted Valley Live
Original title: "Web 3 Series | Fabric Ventures: A Complete Overview of Quantitative Indicators for Value Internet"
After studying seven venture capital funds, we are convinced that key indicators play a pivotal role in assessing a company's performance, such as:
- “What is the current growth of MRR?”
- “What is churn like across your different customer segments?”
- “How many DAUs do you have?”
- “How much net profit does each of your customers contribute?”
These are all important metrics for measuring Web 2.0 business models, but they do not apply to every Web 3.0 business model. If you try to force them onto Web 3.0, you will not get meaningful results: just as people don't evaluate the performance of a bank through the number of "daily active mortgage users", we should not analyze the health of the MakerDAO ecosystem through the number of daily user interactions. Similarly, when no central company earns these revenues, the growth rate and churn rate of MRR are meaningless.
As an investor in the Web 3.0 space, Fabric Ventures has refined the different types of metrics used to evaluate networks based on their specific use cases and business models. From hash rates and gas usage to locked value, this article delves into these native Web 3.0 metrics.
BTC: the primary crypto asset, used mainly as a store of value
- Hash rate: The hash rate of the BTC network is the total computing power of all miners in the system and is a measure of network security. Because BTC uses probabilistic longest-chain consensus, any party controlling 51% of the hash rate (more than half of the network's computing power) can build the longest chain; the higher the total hash rate, the harder it is to attack the network.
- Number of miners controlling 51% of the hash rate: Indicates how concentrated the network is around certain miners and pools. The lower the number, the higher the risk of collusion to attack the network.
- Transaction volume: Can indicate use of the network as a medium of exchange, but does not fully reflect its use as a store of value. On one hand, it can be inflated by bots sending large numbers of low-value transactions, distorting the real figure; on the other, a single on-chain transaction can represent many transfers through techniques such as batching or sidechain/Lightning Network settlement.
- Total transaction value: The total value of all transactions over a period of time, showing the value transmitted over the network.
- Total market value: As a store of value, the total market value of BTC represents the value it carries (note that some BTC is already lost or dormant).
- Block rewards available to miners: Block rewards are the main source of income for the miners who secure the network. The block reward multiplied by the BTC price represents the value available in exchange for hash power.
- Transaction fees paid to miners: The other source of miner income, whose importance will grow as block rewards continue to halve. These fees are expected to become the main income of the miners securing the network in the coming decades.
BTC's hashing power has recently hit a record high – Source:
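The last two metrics combine into a single number: miner revenue per block is the block reward plus fees, valued at the BTC price. A quick sketch (all figures are illustrative, not current values):

```python
def miner_revenue_per_block(block_reward_btc, fees_btc, btc_price_usd):
    """Total miner income for one block, in USD."""
    return (block_reward_btc + fees_btc) * btc_price_usd

# Illustrative numbers only: a 6.25 BTC block reward, 0.5 BTC in fees,
# and a price of 10,000 USD per BTC.
revenue = miner_revenue_per_block(6.25, 0.5, 10_000)  # 67500.0

# As block rewards halve over time, fees form a growing share of revenue:
fee_share = 0.5 / (6.25 + 0.5)
```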
Lightning Network: BTC's Layer 2 Expansion Solution
(This part was written by Casa's Jeremy Welch)
- Network capacity: The total amount of BTC committed to Lightning Network channels. Because payment channels are fully private, this is the primary indicator of how much value is moving through the system.
- Number of nodes with active channels: The number of nodes actively transacting on the Lightning Network, which also indicates how many nodes can route funds to others on the network.
- Number of payment channels: The total number of channels indicates how many direct node-to-node connections have been created. More nodes also means more channels, and as Lightning Network use cases increase, channels should grow faster than the total number of nodes.
- Average number of channels per node: This may increase with more Lightning use cases and then decrease as network density increases (it becomes easier to route through other nodes than to open direct channels).
- Number of Tor onion service nodes: A default Lightning connection string includes the node's IP address and port. The IP address can reveal the node's physical location, which is a major operational security risk. It is now possible to run a node as a Tor onion service without exposing its geographic location.
Lightning Network Total Capacity and Maximum Capacity Node — Source: https://explore.casa/
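The average-channels-per-node metric above follows directly from the two totals, since every channel connects two nodes. A back-of-the-envelope sketch (numbers are illustrative):

```python
def avg_channels_per_node(num_channels, num_nodes):
    # Each channel has two endpoints, so it counts toward two nodes.
    return 2 * num_channels / num_nodes

# Illustrative: 30,000 channels across 5,000 nodes.
avg = avg_channels_per_node(30_000, 5_000)  # 12.0
```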
Ethereum: The largest smart contract platform for application developers
Ethereum shares some metrics with BTC, such as hash rate and transaction counts, but adds several of its own:
- Total gas usage: Represents the number of transactions/smart-contract executions and the complexity of the smart contracts used. This indicator also illustrates the overall use of the Ethereum network as a decentralized computing platform (note that spam transactions may seriously bias it).
- Average gas usage per transaction: Describes the complexity of the smart contracts/computations run on the Ethereum network.
- Average gas price: Depends on the number of transactions at a given point in time, representing the usage and congestion of the Ethereum network.
- Value locked in DeFi: The core decentralized finance (DeFi) applications are built primarily on Ethereum, with ETH as the system's main collateral asset. The value of the assets locked in them reflects the use of DeFi applications and has a positive impact on the ETH price. MakerDAO dominates, followed by Compound, Uniswap, Dharma and Synthetix.
- Number of developers using Truffle/Ganache/Zeppelin: For a smart contract platform, how many people are building applications on it is one of the most important indicators. The most easily quantifiable data may be the number of Truffle/Ganache/Zeppelin users (plus developers using other libraries); keep in mind that a download does not necessarily equal an additional active developer.
Currently, approximately 2% of ETH is locked into the DeFi ecosystem – Source: https://defipulse.com/
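Two of the ratios above are simple to compute from raw chain data. A small illustrative sketch (the numbers are made up, not current figures):

```python
def avg_gas_per_tx(total_gas_used, tx_count):
    """Average computational complexity per transaction."""
    return total_gas_used / tx_count

def defi_lock_share(eth_locked, total_eth_supply):
    """Fraction of all ETH locked in DeFi contracts."""
    return eth_locked / total_eth_supply

# Illustrative: 2M ETH locked out of a 100M supply gives the ~2% share
# mentioned in the caption above.
share = defi_lock_share(2_000_000, 100_000_000)  # 0.02
```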
MakerDAO & DAI: the largest decentralized stablecoin lending platform
- Total loans: The total outstanding borrowing on the MakerDAO platform, comparable to the loan book of a Web 2.0 lender.
- Number of open CDPs: The number of open CDPs is the closest thing to MakerDAO's "active users" and can serve as a proxy for the platform's total user count.
- Stability fee value: Derived from the current 16.5% annual stability fee and the total outstanding loan amount. Stability fees are paid in MKR, which is burned; the resulting reduction in supply should, in theory, increase the value of MKR tokens.
- Collateral value: The value of all assets locked in CDPs (currently only ETH) as collateral for outstanding loans.
- Collateralization ratio: The ratio of the total value locked in CDPs to the value of outstanding loans, representing the risk level of the entire system (CDPs below 150% are automatically liquidated).
- CDP default rate: The share of CDPs falling below 150% collateralization, used to assess the optimal stability fee and collateralization ratio.
- MKR voting participation: The MKR holder community participates in governance decisions such as changing the stability fee. For a token whose value rests largely on governance, this will be a useful indicator to track.
- DAI stability/peg: The result of market incentives designed by MakerDAO to keep DAI supply and demand in balance. Adjustments can be made by raising or lowering the stability fee and, in the future, the Dai Savings Rate.
MakerDAO's total collateral has grown steadily to $450 million — Source:
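The collateralization and stability-fee metrics above reduce to simple arithmetic. A minimal sketch, using the 150% liquidation threshold and the quoted 16.5% fee (all account values are illustrative):

```python
LIQUIDATION_RATIO = 1.50  # CDPs below 150% collateralization are liquidated

def collateralization_ratio(collateral_value_usd, outstanding_dai):
    return collateral_value_usd / outstanding_dai

def is_liquidatable(collateral_value_usd, outstanding_dai):
    return collateralization_ratio(collateral_value_usd, outstanding_dai) < LIQUIDATION_RATIO

def annual_stability_fee(outstanding_dai, fee_rate=0.165):
    """Fee accrued over a year at the quoted 16.5% rate, paid (burned) in MKR."""
    return outstanding_dai * fee_rate

# A CDP holding $300 of ETH against 100 DAI is collateralized at 300%:
ratio = collateralization_ratio(300, 100)  # 3.0
safe = not is_liquidatable(300, 100)       # True
fee = annual_stability_fee(100)            # ~16.5, denominated in DAI, burned in MKR
```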
Work token: supply-side tokens such as Keep, Augur or Livepeer
- Staking participation rate: Tracks how many token holders are active in the network's economy. Networks target a certain level of participation; when staking participation falls below a threshold, they can increase block rewards (for example, Livepeer raises its inflation rate every day until participation reaches 50%).
- Staking returns from block rewards: Rewards given to active participants in the network as a means of rebalancing value from passive holders to active ones. In a network that uses a token to incentivize suppliers, passive holders should not gain value; their stake should be reasonably diluted.
- Work activity: The amount of work done on a given network: protecting Keeps on the Keep Network, resolving markets on Augur, or transcoding video on Livepeer. This indicator shows the actual usage of the network and is still low or non-existent on most networks because they have only just launched.
- Total revenue flowing to validators: The value of the fees users pay to the network's suppliers, for example the total paid for Keep's security or for resolving prediction markets on Augur. This is the most important indicator for valuing a network: by discounting expected future cash flows, the tokens held by validators can be given a reasonably fair valuation.
Livepeer's participation rate is close to the target of 50% — Source:
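The Livepeer-style mechanism described above, nudging the inflation rate until staking participation hits a target, can be sketched as a simple feedback rule (the step size is illustrative; Livepeer's actual parameters differ):

```python
TARGET_PARTICIPATION = 0.50  # the network aims for 50% of tokens staked
STEP = 0.0005                # illustrative per-round adjustment

def adjust_inflation(inflation_rate, participation_rate):
    """Raise inflation when too few tokens are staked, lower it when too many."""
    if participation_rate < TARGET_PARTICIPATION:
        return inflation_rate + STEP   # make staking more attractive
    if participation_rate > TARGET_PARTICIPATION:
        return max(0.0, inflation_rate - STEP)
    return inflation_rate
```

Run once per round, this dilutes passive holders while participation is below target and eases off once the target is reached.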
We look forward to the healthy development of decentralized networks. This article is a broad introduction to quantitative indicators; all of them have limitations and none is universally applicable, but together they give a good indication of the direction of Web 3.0.
Author: Max Mersch
Translation: DUANNI YI
Edit: Sonny Sun
- More underground regulatory documents, the virtual currency industry goes to the bubble
- An Example of Government Affairs System: Blockchain Thinking in Public Affairs
- Central Bank Shanghai Headquarters Graphic Blockchain: Blockchain virtual currency, developing blockchain technology away from virtual currency speculation
- Professor Gong Yi of China Europe International Business School: the dusk of the company and the dawn of the blockchain
- Graphic: Why is cryptocurrency investment diversified?
- Bitcoin Core 0.19.0 official version released, what new changes?
- Sidechains and status channels that are confusing concepts: What is the difference, who is better?
Renpytom is creating the Ren'Py Visual Novel Engine Patreon
A game engine is a software-development environment designed for people to build video games. Developers use game engines to construct games for consoles, mobile devices, and personal computers. The Ren'Py Visual Novel Engine is a free software engine. Ren'Py is a portmanteau of ren'ai (恋愛), the Japanese word for 'love', a common element of games made using Ren'Py; and Python, the programming language that Ren'Py runs on.
Hatoful Boyfriend Wiki FANDOM powered by Wikia
An engine that can be used to manage and play Visual Novels with limited interactivity. It supports a simple scripting language so you can control the story from within your XML file (which is …
【Valkyrie Crusade】How to capture high quality card easily
One of our users and co-leader of the Lucid9 project, Diamon, has written a fantastic and extremely thorough guide about the process that goes into making a visual novel, from his experience, for anyone that's wondering about making one themselves. Visual Novel Maker is an all-in-one package where you pay for the convenience of elements of game creation that you'd like to use, but either don't want to spend the time figuring out or
Omega Quintet Free Download « IGGGAMES
TyranoBuilder and its sister tool TyranoScript are part of STRIKEWORKS' mission to bring the enjoyment of both making and playing visual novels to people around the world. While TyranoScript is a scripting-only multi-platform visual novel engine, TyranoBuilder is a complete visual interface that builds on TyranoScript's functionality, making producing multi-platform visual novels easier and
How long can it take?
API Documentation 0m3ga VN Engine
- Use Visual Novel Engine store.steampowered.com
- Introduction to Visual Novel Engines The Teacup
- 0m3ga VN Engine The 0m3ga Visual Novel Engine
- Omega Pattern HD Visual Novel for Android - APK Download
How To Use Omega Visual Novel Engine
I completed the Astral route but the corresponding achievement didn't unlock. I also completed the other 2 routes and appear to have all the scenes unlocked, but the "Scenes" achievement didn't unlock.
- Love at First Sight is an unvoiced visual novel by FreakilyCharming and licensed by Sekai Project. It tells the story of the protagonist Fukunaga Mamoru, who stumbles across a mysterious one-eyed girl covered in bruises crying on the stairs leading to the rooftop.
- Visual Novel Engine. A Visual Novel Engine using Corona SDK. If you have any recommendations on how to setup the ReadMe files, please email me or commit and I'll check it out.
- Visual Novel Maker is an all-in-one package where you pay for the convenience of elements of game creation that you’d like to use, but either don’t want to spend the time figuring out or
- Visual Novel Resources (character art, backgrounds, music, etc.) *We need your help, feel free to post any sites you have found to be helpful for backgrounds, music, or characters and as long its not from somewhere that would be a conflict in interest we will add it to the main post.
Languages are listed in alphabetical order in English, and Arabic is in the first column. To ensure that you have access to all the bi-directional language features in Office for Mac, check that you have the latest updates installed before proceeding.
If "cloud" computing comes far enough along, it is conceivable that everything could move to the cloud and that desktop editions of Microsoft Office could come to an end.
The next full version of Office for Mac may take another 2 years. A small pop-up window opens with a keyboard map, which you can move to the corner of your desktop while working in Word. The bi-directional language features in Office for Mac work only with the keyboards included with the Mac operating system, not with keyboards downloaded from third parties on the Internet.
Google Docs and Google Drive. Jordanian Arabic, for example, uses the Arabic keyboard layout. For right now, please give either SkyDrive or Google Docs a try. Click the Arabic language if you want to set it as the default language, and click the Set as Default button.
The Office for Mac user interface changes based on your operating system language preferences. Click the Apple menu, and then click System Preferences.
Type the text that you want, using the Keyboard Map as a guide. Click the Menu and select the Arabic keyboard layout supported by the country you chose. Select Arabic from the list of languages. Office is more robust and you have to pay for a subscription, but it does not have VBA support and is still not the complete suite that you get with the desktop Office versions installed on your Mac or PC.
If you want to change the default language of your operating system to the new language, select Use [Language Name]. Click the Add a Language icon.
The Show Input menu in menu bar check box is automatically selected when you add a new input menu, which will allow you to easily switch between input sources. Rather than new features and bug fixes, we saw commands shuffled from toolbars and menus to the Ribbon. Click the Not Installed link beside the Arabic language and follow the onscreen instructions.
The past 2 to 3 years have seen only small improvements to the desktop Office suites. I am interested in your reaction to the web applications, which have just been revised. Whether or not the full version will support RTL is not known at this time. So as of now, RTL users on Macs have language support except for those who need the full suite.
This past week Microsoft announced its Mac strategy, and it might be somewhat encouraging for users of RTL languages. The huge success of Apple now bigger than Microsoft and Google combined has been relatively fast in happening.
Add the language you want to your operating system. The same process works for Windows 7 and 8. University of Richmond Global Studio, June 1: TYPING IN ARABIC (MAC OS X). These instructions will help you set up your Mac for Arabic input.
First, you must enable an Arabic keyboard; once you have selected an Arabic keyboard, the Mac. Word will support Arabic if you do the following: 1- In your Gmail account open documents and create a word document and write any Arabic word like (عربي) 2- Go to File and Download as Word & save to your Desktop 3- Open Word in your Mac and open the save file that you created using Gmail.
I'm facing a problem with Office for Mac: for example, when you open Word and key in some Arabic letters, each word appears as discrete letters, not connected as one single word; Microsoft Office right-to-left (Arabic) support doesn't work.
Then each time you need to write text, open your template in MS Word.
How to Write Arabic in MS Word
March 31, By: David Weedmark. Arabic, as well as most languages, is fully supported in Microsoft Word. However, you have to add the language to your computer if you want to type in Arabic. The same process works in Windows 7. When you set a Word document's view to right-to-left, both the page order in Print Layout view and the text direction in Outline view will be in a right-to-left direction.
On the Word menu, click Preferences. For your department, you will need to modify the default, standard Word document. This handout is intended to show you how to use the tools to make the necessary modifications.
Is it possible to know last time a Row was updated in a table or view in Snowflake?
I'm working on an implementation of a view in Snowflake, and I want to know when a single row was last updated. Is there a column from Snowflake, or maybe another way, to do that?
For example. I have a user table with ID and NAME columns and a row just like this:
ID = 1
NAME = Allan
If I update the name from Allan to Juan, I would like to know exactly when that happened for that row. Is there a way to do that in the view?
I would like to add that "last update" column as a part of the view.
No is the answer: if you want to track row change times, you should implement an updated_timestamp column in your updates. Given that rows are stored as columns in partition files, and the partition write time is immutable, it would in principle be possible to know that a row has not been updated later than its partition's write time, but that timestamp is not accessible. For the short term, Streams can track changes. But it seems you just need/want to track it yourself.
Row level modification timestamp is not available in the information_schema views currently.
However, there is a column LAST_ALTERED in the SNOWFLAKE.ACCOUNT_USAGE.TABLES view that gives the Date and time when the table was last altered by a DDL or DML operation.
For the specific requirement you have shared as part of the example, you may consider implementing a stream.
Standard stream (also known as Delta stream) should do. The catch with Standard streams is it performs a join on inserted and deleted rows in the change set to provide the row level delta. As a net effect, for example, a row that is inserted and then deleted between two transactional points of time in a table is removed in the delta (i.e. is not returned when the stream is queried).
If there is a requirement to track each and every single modification to every row, then we could consider implementing a combination of two streams - a Standard stream and an Append Only stream.
An append-only stream exclusively tracks row inserts. Update, delete, and truncate operations are not captured by append-only streams. For instance, if 10 rows are initially inserted into a table, and then 5 of those rows are deleted before advancing the offset for an append-only stream, the stream would only record the 10 inserted rows.
Reference: https://docs.snowflake.com/en/user-guide/streams-intro
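For reference, the two stream types described above are each a one-line DDL statement in Snowflake SQL (the table and stream names here are illustrative):

```sql
-- Standard (delta) stream: returns the net row-level changes
-- between two transactional points in time.
CREATE OR REPLACE STREAM my_table_changes ON TABLE my_table;

-- Append-only stream: records INSERTs only; updates, deletes and
-- truncates are not captured.
CREATE OR REPLACE STREAM my_table_inserts ON TABLE my_table APPEND_ONLY = TRUE;
```

Consuming a stream in a DML statement advances its offset, so the next query against it returns only changes made after that point.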
@SimeonPilgrim appears to have the correct answer, but it may also be worthwhile adding UPDATED_BY and INSERTED_TIMESTAMP columns alongside the UPDATED_TIMESTAMP column so you can better track changes in your table, as is standard practice in S&P IHS Markit EDM (formerly CADIS), an ETL package used in financial services.
Something like this:
CREATE OR REPLACE TABLE T_EMPLOYEE(
EMPLOYEE_ID INTEGER,
TITLE VARCHAR(50),
FIRST_NAME VARCHAR(100),
LAST_NAME VARCHAR(100),
SALARY DECIMAL(10, 2),
INSERTED_TIMESTAMP TIMESTAMP DEFAULT CURRENT_TIMESTAMP(),
UPDATED_TIMESTAMP TIMESTAMP DEFAULT CURRENT_TIMESTAMP(),
UPDATED_BY VARCHAR(100) DEFAULT CURRENT_USER()
);
INSERT INTO T_EMPLOYEE (EMPLOYEE_ID, TITLE, FIRST_NAME, LAST_NAME, SALARY)
VALUES (1, 'DR.', 'Carrie', 'Madej', 100000);
SELECT * FROM T_EMPLOYEE;
Thanks for answering, @SnowPro_Engineer. I think in the example you provided the INSERTED and UPDATED columns are exactly the same: both of them get the current timestamp when we insert a new row. What about when we update a row?
When you update the row you need to manually update the UPDATED_TIMESTAMP and UPDATED_BY columns as opposed to an INSERT which will happen automatically due to the DEFAULT attributes. I hope that helps.
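To make that concrete, an UPDATE against the example table has to refresh the audit columns explicitly (the salary value here is illustrative):

```sql
UPDATE T_EMPLOYEE
SET SALARY            = 110000,
    UPDATED_TIMESTAMP = CURRENT_TIMESTAMP(),  -- must be set manually on UPDATE
    UPDATED_BY        = CURRENT_USER()
WHERE EMPLOYEE_ID = 1;
```

The DEFAULT clauses only apply when a row is inserted without an explicit value for those columns; they never fire on UPDATE.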
Not sure who formatted my SQL code but thank you very much - it looks beautiful now and I will be sure to use tildes in future - not sure how to access backticks on a UK keyboard. It looked great in Snowflake but then it lost the formatting on here. I am new to StackOverflow so please bear with me. :-)
Good answer from Suraj Ganiga above and thank you I learnt something new, however I prefer to add the auditing fields onto the table of interest for quicker access and to avoid joining onto the SNOWFLAKE.ACCOUNT_USAGE.TABLES every time.
As you might already know, Service Principal Names (SPNs) play an important role in the Active Directory authentication process. If you haven't watched my video on Kerberos, I suggest you have a look at it, because I explain Kerberos and the use of SPNs in Active Directory there. I will also produce a separate video dedicated to SPNs later on, so make sure to follow my YouTube channel for more videos. Anyway, in this article I am going to talk briefly about one of the common problems of authentication, and consequently of the "Secure Channel", which falls into the category of "Duplicated SPN".
Recently, I was working on the fun and interesting subject of customizing icons of security principals within ADUC, but I came across another problem which led me to dig deeper into the environment to find its root cause. While investigating different areas of Active Directory in my test environment, I wanted, as a test, to join a PC to the domain. The PC's operating system was 2008 R2 and my AD test environment was 2012 R2. The PC could join the domain with no issue, but right after I clicked 'OK', it threw an error indicating: "While processing a change to the DNS hostname for an object, the Service Principal Name could not be kept in sync."
The PC had joined the domain, but the error was a bit scary. Service Principal Name not kept in sync? That sounds like a bad issue! Although most articles suggested ignoring this problem, I was not comfortable with that approach, so I decided not to ignore the error and instead to dig into the problem.
So I tried to log in to the system to see what was going on over there. I tried to log in with an appropriate username and password, but this time the error was a bit funny!
At first I thought it might be another typical case of a "broken secure channel", but the problem was that I had joined the PC to the domain barely five minutes earlier; how could the secure channel have broken already? I guessed this was related to the first error indicating the SPN was not kept in sync. The best thing to do in this case was to check the SPN attribute of the computer account and verify that the SPNs existed. After navigating through the attribute editor and finding the servicePrincipalName attribute, I noticed the problem below:
Not a single SPN on this computer account? That was indeed the problem! When there is no SPN on a computer account, there can be no Kerberos processing for that PC. How can someone log in to a PC when no SPN is written to that PC's servicePrincipalName attribute?
So I decided to manually write the default SPNs into the computer account. Having exported the service principal names of a healthy computer account, I tried to import them into the corrupted computer account. These default SPNs were:
However, while I was importing the SPNs, I noticed that one of them could not be written to the attribute of the corrupted PC, throwing an error about a duplicate SPN. I wondered why: there was no computer in my forest with the same name, or so I thought. Since I could not rely on my memory alone, I searched the entire forest for a similar name and, amazingly, found a computer account with a name identical to that of the computer I was troubleshooting. Everything then became clear: SPNs must be unique across the entire forest; if they are not, Kerberos cannot issue tickets for them. So I removed the old PC from the remote domain and then rejoined the PC to the domain. This time there was no error and, of course, the servicePrincipalName attribute was populated.
Tip of the day: a computer is joined to the domain first; once it is joined and the computer account is created, the SPNs are written to that account. So when there is a duplicate SPN in your forest, the SPNs will not be written to the new account, because they would be duplicates. The PC is joined to the domain, but with no SPNs written.
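The forest-wide uniqueness rule in the tip above is easy to check programmatically. A minimal sketch in pure Python, standing in for the kind of duplicate search that `setspn -X` performs against the directory (the account names and SPNs are made-up examples):

```python
from collections import defaultdict

def find_duplicate_spns(accounts):
    """accounts: mapping of account DN -> list of SPN strings.
    Returns {spn: [account DNs]} for every SPN registered more than once."""
    owners = defaultdict(list)
    for account, spns in accounts.items():
        for spn in spns:
            owners[spn].append(account)
    return {spn: accs for spn, accs in owners.items() if len(accs) > 1}

# Two accounts claiming HOST/PC01 is exactly the situation in this article:
accounts = {
    "CN=PC01,OU=Old,DC=corp,DC=local": ["HOST/PC01", "HOST/PC01.corp.local"],
    "CN=PC01,OU=New,DC=corp,DC=local": ["HOST/PC01"],
}
dupes = find_duplicate_spns(accounts)  # flags "HOST/PC01"
```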
Update: A little after I found out about the issue, Guillaume LUANAIS pointed out that there are some more possible causes for this error. Things like wrong SRV records and missing FSMO roles can also be root causes of this problem. A full article on this issue can be found here: Duplicate SPN check on Windows Server 2012 R2-based domain controller causes restore, domain join and migration failures
This page is not finished yet.
You can help by improving it: this page needs to be linked somewhere, and the content needs to be updated.
Getting Involved with ReactOS
There are many different ways to get involved with ReactOS. Immense time and effort went into creating the NT family of operating systems, including Windows XP and 2003. As ReactOS aspires to be a replacement for Windows, the same amount of resources would ensure the rapid progress of the project. This is where you can play a part.
Getting involved with ReactOS is easy and straightforward! We only ask that you have not had access to Microsoft source code for the area you want to work on.
If, like most developers, the above does not apply to you, then consider yourself clean and good to go!
The best way to get involved is to start by installing an SVN client and downloading the source code. Next download and install RosBE, the build environment used to facilitate the ReactOS build process. From there one can either poke through the source code to get a feel for the project, or contact a developer responsible for one’s area of interest. A quickstart also exists that distills the very minimum of what is needed to get started developing ReactOS.
Submit your patches via our task tracker and after sending several high quality ones you will be offered commit access! This access may be to a specific branch or direct access to the trunk!
No operating system is usable if it is found to be unstable and prone to crashes. As ReactOS development work involves studying the behavior of an operating system that is not completely documented, testing should be of even greater importance in order to fulfill the objective of binary compatibility with Windows.
You can assist the ReactOS development effort by installing regular trunk builds available here and providing feedback on issues/problems you encountered during and after installing the OS via our bug tracker. More information with regard to debugging is available here and is recommended reading for those who wish to submit a report to our task tracker.
ReactOS aims to be usable by many people around the world. This requires that not only the operating system be translated into multiple languages, but the website too. This is especially an area in which the community can help, as the core project members cannot possibly cover all the languages out there. To help with translating the operating system, it is useful to know how to check out and build the source code. We also have a quick introduction about translating.
A freely accessible codebase is of little educational value if nobody is able to comprehend it. To this end, the ReactOS Wiki contains a great deal of information about the NT architecture and other technical matters. Well written documentation is essential in helping developers and would-be developers understand the codebase and the design underlying ReactOS. Documentation is also needed in the source code itself, to describe the intent of the code and also to allow tools like Doxygen to generate reports that people can use as references. The material posted on the Wiki should be the authors’ own work. Copyrighted work must adhere to the license it was released under.
A well-run project depends on there being a solid and usable infrastructure: website, mailing lists, bug tracking system, documentation systems and others. Those people who do not have experience programming in lower level languages but do know scripting languages like Python or PHP are welcome to assist the infrastructure and web teams in maintaining the build and test infrastructure and project websites respectively. In addition, people with experience in systems administration of Linux and web servers are also welcome to contribute in those areas.
Even if ReactOS achieves considerable technical success, that success matters little if people are not aware ReactOS exists as an alternative to Windows and Linux. A successful public relations campaign involves not only spreading the existence of ReactOS through word of mouth, but also through the production of media and holding of presentations. The project is always on the lookout for digital artists and writers who can come up with graphics and articles for the project.
The project would gratefully accept any donations offered. Such donations would go towards maintenance of the project’s server infrastructure and specific feature implementation projects.
If you have an idea that may help the project in any way, or would like some specific guidance as to what can be done, feel free to drop by our IRC channel. We hope to hear from you soon.
Must-have Firefox add-ons: every web developer needs useful add-ons and other plugins to do the job more efficiently. A web browser is the base, whether we're developing, managing projects, designing for the web, or just surfing the internet.
Browser-based web applications drove improvements to web browsers from different perspectives, each aimed at a specific job. Just as there are must-have extensions that assist a general browser user, there are add-ons that assist web developers.
You may already know about the Best Browser for Web Developers. Google Chrome is generally considered a good option for web development, but due to issues like Flash Plugin Crashing in Chrome, many developers prefer Mozilla Firefox as the second-best browser for development.
12 Best & Must Have Firefox Add-ons for Developers
No matter what browser you're using, you need its respective add-ons for web development. In this guide, you're going to learn about the best add-ons for the Firefox browser. Mozilla has introduced a version of Firefox built specifically for web developers, generally known as Firefox Developer Edition. The Firefox add-ons I'm going to discuss generally work great on Firefox Developer Edition.
#1 Usersnap
This is a very useful Firefox add-on that lets you capture and annotate any web page. Software engineers generally use it to track bugs in the browser or to manage and collect feedback on website prototypes. You can create screenshots and save them directly to your project; this way you can discuss changes or other issues with your teammates.
Download link: Usersnap
#2 Web Developer
The Web Developer add-on adds various web development tools to your Firefox browser. It gives web developers many great features and is a favorite of most developers who work in Firefox. It enhances your work in Firefox by allowing you to use multiple development tools at the same time. It assists developers with various CSS and image options and gives access to the structure of a page. This add-on is mostly used in the Developer Edition of Firefox, but it works well on the regular edition too.
Download link: Web Developer
#3 User Agent Switcher
This Firefox add-on allows web developers to switch the user agent of the browser. How does it work? It adds new options to the Tools menu for switching to another user agent.
Download link: User Agent Switcher
#4 Firebug
This is another famous and widely used Firefox add-on, which provides instant access to multiple development tools. You can edit, debug, and monitor CSS, HTML, and JavaScript using the Firebug add-on.
The reason it is so famous is its user-friendly environment. It works best on the regular edition of the Firefox browser.
Download link: Firebug
#5 HTTPFox
This is an HTTP analyzer for Firefox. It is used to monitor the incoming and outgoing HTTP traffic between the web browser and web servers. I recommend having this one too.
Download link: HTTPFox
#6 Ghostery
If you want to view the trackers, pixels, or snippets installed on any website, you should use the Ghostery add-on for Firefox. Though it is available for other browsers as well, the Firefox version is the most widely used. It displays the trackers that are collecting data on a web page. You can also block these trackers in case you don't want your sessions to be tracked.
Download link: Ghostery
#7 ColorZilla
ColorZilla is another Firefox add-on, used to pick colors with an eyedropper. The color of any pixel in the browser window can be selected and reused. Very simple and easy to use.
Download link: ColorZilla
#8 BuiltWith
The BuiltWith add-on analyzes a website according to the software and technology it is built with. It helps you analyze competitors' websites and also lets you check what is behind a webpage.
Download link: BuiltWith
#9 FireQuery
This add-on extends Firebug for jQuery. That means you first have to install Firebug in your Firefox browser in order to use FireQuery.
Download link: FireQuery
#10 Modify Headers
As the name suggests, this add-on allows a developer to modify the HTTP request headers sent to web servers. It is best suited to mobile web development and HTTP testing. If you're working on mobile web development, you should definitely use it.
Download link: Modify Headers
#11 Advanced Cookie Manager
If you're also interested in managing and monitoring cookies, you should install this add-on in your Firefox browser. A developer can add, view, modify, or delete cookies using this add-on. It also supports exporting and importing cookies.
Download link: Advanced Cookie Manager
#12 YSlow
This add-on analyzes the performance of webpages against a set of parameters and also gives suggestions for improving a page's performance. It runs in Firebug's environment, so you first have to install Firebug if you want to use the YSlow add-on in Firefox.
Download link: YSlow
These are some of the most commonly used and famous add-ons for web developers and designers in the Firefox browser. The importance and usefulness of an add-on, though, depends on a developer's focus.
Preference varies from developer to developer. You may not find many of the above-mentioned add-ons useful, while other developers rely on them daily. With this in mind, I compiled this list of the most famous and widely used web-development add-ons for the Firefox browser. I hope you enjoyed reading it.
History shows there are four important groups of operating systems which have played critical roles, past and present, such as:
Use an application firewall that can detect attacks against this weakness. It may be useful in circumstances where the code cannot be fixed (because it is controlled by a third party), as an emergency prevention measure while more comprehensive software assurance measures are applied, or to provide defense in depth. Effectiveness: moderate. Notes: an application firewall will not cover all possible input vectors.
!
!--- the classification of attack traffic
!
deny tcp any any fragments
deny udp any any fragments
deny icmp any any fragments
deny ip any any fragments
A system administrator or IT supervisor gathers information about the whole server: memory usage, available space, CPU load, performance, and so on. There are ways to collect this data for a group of systems using health-monitoring tools.
Steps that developers can take to mitigate or eliminate the weakness. Developers may choose one or more of these mitigations to fit their own needs. Note that the effectiveness of these techniques varies, and multiple techniques may be combined for greater defense-in-depth.
ICMP unreachable messages: packets that trigger ICMP unreachable messages due to routing, maximum transmission unit (MTU), or filtering are processed by the CPU.
The AAA framework is vital to securing interactive access to network devices. The AAA framework provides a highly configurable environment that can be tailored to the needs of the network.
CPU handling of special data-plane packets is platform dependent. The architecture of the specific Cisco NX-OS platform dictates what can and cannot be processed in hardware and what must be passed to the CPU.
The principal purpose of routers and switches is to forward packets and frames through the device onward to final destinations. These packets, which transit the devices deployed throughout the network, can affect the CPU operations of a device.
The feeling was incredible, and it became possible because of you guys. The tutoring course from your website turned out to be so effective that I can now confidently solve even difficult questions within seconds. Until now I had not found another tuition institute that helps students progress so rapidly.
The litre (also spelled "liter") is the unit of volume and was defined as one thousandth of a cubic metre. The metric unit of mass is the kilogram, defined as the mass of one litre of water. The metric system was, in the words of the French philosopher the Marquis de Condorcet, "for all people for all time".
This is especially important for many units of measure: for instance, a hearing aid requires about 1 mW (milliwatt), while the central air conditioning unit in a large office block may require 1 MW (megawatt).
ZENSOS v5.0 is an open-source network monitoring tool which has the capacity to monitor server health at the bit level and also monitor the entire network backbone. It sends email and SMS alerts in case any warning or event comes up.
Cisco NX-OS has the built-in capability to optionally enforce strong password checking when a password is set or entered. This feature is enabled by default and can prevent the selection of a trivial or weak password by requiring the password to match the following criteria:
SEOPress 5.0 is now available. We encourage you to update your site as soon as possible to take advantage of the latest features and improvements.
🎉 New – Universal SEO Metabox, a true revolution
SEOPress 5.0 introduces the first Universal SEO Metabox! Edit your SEO metadata (title, meta description, social, robots …) and analyze your content from any page builder. Yes, you read that right: no more back and forth between your favorite builder and the WordPress editor to optimize your SEO; everything is now centralized and accessible in one click.
A single metabox, a single code to maintain, completely independent of changes from page builders for greater maintainability, scalability and efficiency.
A metabox to rule them all: we have tested it on more than 20 different builders and themes with success, namely:
- Beaver Builder
- Zion Builder
- Block Editor (to be activated from the SEO, Advanced, Appearance settings page)
- Avada (theme) / Fusion Builder (plugin)
- Astra (theme)
- Enfold (theme)
- Extra (theme)
- GeneratePress (theme)
- PRO by ThemeCo (theme)
- Storefront (default WC theme)
- Thrive Theme builder (theme / builder)
- Twenty Nineteen (theme)
- Twenty Seventeen (theme)
- Twenty Twenty (theme)
- Twenty Twenty One (theme)
- Themify builder
Even if your theme or builder is not listed here, there is a 99% chance that it is already compatible, and you can use our new metabox without further delay.
Entirely built in React, the same library used by the Block Editor and by WooCommerce, this universal metabox is responsive and accessible, and it has its own REST API for better performance, increased interactivity, and a better user experience. This opens the way to "headless" setups and the creation of static websites, which we will talk about later in this article.
Best of all: you can edit your SEO without ANY editor!
Simply browse your site while logged in with content editing capability, then click on the SEOPress beacon to open our metabox, edit your metadata, save everything, and voila! Easy as pie!
How to enable the Universal SEO metabox?
Since update 5.0.4, the new metabox must be activated from SEO, Advanced settings page, Appearance tab.
Just uncheck “Disable the universal SEO metabox” option and save changes.
🎉 New – User interface
The second major novelty of this v5: a brand new design! Largely inspired by the WooCommerce administration, this new user interface is clearly more modern, practical, and easy to use while staying as native as possible, including in and around the universal metabox (especially with the Block Editor).
Let’s start with the “SEO” page, a real Dashboard split into two columns:
- the one on the left, with its brand new notification center,
- and the one on the right which includes management of the extension’s main functionalities, its Google Analytics integration to visualize your visit statistics, its Google Page Speed score visible at a glance, SEO news and tips.
And not forgetting the "Getting started with SEOPress" block, located above these two columns, which lets you launch the configuration wizard.
Each notification or section can be hidden and / or configured.
On all of the SEOPress settings pages, you will find an admin bar with breadcrumbs for easier navigation, as well as an icon for access to the side panel of the documentation. This will allow you to directly search our knowledge base if needed.
The SEO subpages have also been completely revised, standardized and refactored for more consistency and better maintainability. Modern, responsive, accessible, this new interface brings clarity and productivity without changing your habits: each option is always where it should be.
We have added multiple descriptions and additional help for a better understanding of each feature.
One example among many: it is now much faster to add dynamic variables for your title and meta description templates.
🎉 New – Road to headless, REST API and static websites
Introduced for and by the universal SEO metabox, the SEOPress REST API made its appearance with this version 5.0.
Intended above all for developers, this programming interface will allow you to expose your SEO metadata via “routes” in a secure way, all in JSON format.
Since version 4.7 of WordPress, you can retrieve, via routes and endpoints, your data for posts, pages, custom post types, etc. Now it is also possible to retrieve via our REST API:
- meta description
- Facebook title
- Facebook description
- Facebook image
- Twitter title
- Twitter description
- Twitter image
- meta robots
- canonical URL
This constitutes a first iteration: new routes, endpoints, data will be added gradually.
Developers building static sites will find their job much easier.
Read our Get started with the SEOPress REST API guide to learn more.
🎉 Other news
Many other new features and improvements have been added like:
- the possibility of removing /product-category/ from your URLs (or any other custom structure defined for your product categories in the Permalinks settings page),
- an IP anonymization option, a "referrer" column and bulk actions (mark as 301, 302 …) for the redirect manager,
- improved content analysis: detection of target keywords already in use, corrections to the severity score on nofollow links, correction of a bug on headings, etc.
- products are now ordered by category in the HTML sitemap,
- new hooks for developers:
- and many more changes to read in the changelog at the end of the article.
Two months of hard and intense work were necessary to design and develop this version 5! Thank you to all our collaborators, and of course to you, dear users, who supported us through your remarks, feedback, suggestions, contributions, opinions, etc. This new version is just the start, and we can't wait for the next step.
So stay connected by subscribing to our newsletter to receive SEO news, product news and useful resources to optimize your SEO with SEOPress.
Are you in love with SEOPress? Help us by writing a 5 star review on the official WordPress.org plugins directory!
This update contains the following changes (full changelog here):
= 5.0 (29/07/2021) =
* NEW <strong>[HUGE]</strong> Universal SEO Metabox: edit your SEO from all page builders 🎉🎉🎉
* NEW User modern interface 🎉
* NEW SEOPress REST API (first iteration) 🎉
* NEW Remove /product-category/ in your permalinks 🎉
* NEW Add bestRating / worstRating properties for Review schema (including SoftwareApp)
* NEW Reset count column for Redirections
* NEW Bulk actions for Redirections (mark as 301, 302, 307, 410, 451)
* NEW IP Logging options for Redirections with anonymization IP
* NEW Add Referrer column in Redirections if available
* NEW 'seopress_lb_widget_html' hook to filter Local Business HTML widget (https://www.seopress.org/support/hooks/filter-local-business-widget-html/)
* NEW 'seopress_can_enqueue_universal_metabox' hook to disable the SEO beacon (https://www.seopress.org/support/hooks/disable-seo-beacon/)
* NEW 'seopress_404_ip' hook to filter IP for 404 monitoring (https://www.seopress.org/support/hooks/filter-ip-address-for-404-monitoring/)
* NEW 'seopress_sitemaps_html_product_cat_query' hook to filter product categories query in HTML sitemap (https://www.seopress.org/support/hooks/filter-html-sitemap-product-category-query-for-products/)
* NEW Translation for "Author:" for Breadcrumbs
* NEW Notification if Swift Performance is caching your XML sitemap
* NEW Order products by category in HTML sitemap
* NEW Check if a target keyword is already used with our Content Analysis feature
* INFO Improve nofollow links analysis
* INFO Automatically strip protocol / domain name when adding a redirection origin
* INFO Add VetenaryCare subtype to Local Business schema (automatic / manual)
* INFO Add Quick tags to meta description template in global title settings
* INFO Allow webp images for Facebook / Twitter metas
* INFO Update i18n
* FIX Headings analysis issues
* FIX Notices in Redirections
* FIX IP logging in Redirections
* FIX Send full post thumbnail URL in XML sitemaps (props @cookingwithdog)
* FIX Close and Edit cookies button for WPML / Polylang configuration file
* FIX Warning preg_match(): Unknown modifier if "/" in category permalink structure
* FIX CSS conflict with Easy Digital Downloads and WooCommerce
* FIX Compatibility issue with Thrive Builder
* FIX Add @id property to Service schema (automatic / manual)
* FIX Fatal error in rare cases: Uncaught TypeError: end()
* FIX Cookie bar secondary button options
* FIX Google Analytics stats in dashboard slowdowns
* FIX Hide SEO columns in post type list if Advanced toggle is disabled
* FIX PHP 8 oembed notice
* FIX Quick tag buttons in Titles and Metas settings page
* FIX Broken link checker in specific cases
How to Create a Thread-Safe ConcurrentHashSet in Java?
Creating a thread-safe ConcurrentHashSet was not possible before JDK 8 because the java.util.concurrent package does not have a class called ConcurrentHashSet. Starting with JDK 8, the newly added keySet(default value) and newKeySet() methods of ConcurrentHashMap can be used to create a ConcurrentHashSet in Java that is backed by a ConcurrentHashMap.
A ConcurrentHashSet can be created from a ConcurrentHashMap, whose keySet(default value) and newKeySet() methods return a proper Set. This gives us access to necessary functions like contains(), remove(), etc. These methods exist only in the ConcurrentHashMap class and not in the ConcurrentMap interface, so we need a ConcurrentHashMap reference variable to hold the reference, or we can cast a ConcurrentHashMap object stored in a ConcurrentMap variable.
The problem with using the map directly is that we only have a map, not a set, and we cannot perform set operations on a ConcurrentHashMap with dummy values. When some method requires a Set, you can't pass the map, so it's not very useful.
Another option is to call the keySet() method, which actually returns a Set on which set operations can be executed and which can be passed around. But this method has a limitation: we cannot add new elements to the key set, because doing so throws an UnsupportedOperationException.
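To illustrate the limitation (a minimal sketch; the class name and sample entries are my own, not from the original article):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class KeySetViewDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("Java", 1);
        map.put("Threads", 2);

        // keySet() with no argument: a view that supports contains()/remove(),
        // but not add(), since there is no value to map a new key to.
        Set<String> view = map.keySet();
        System.out.println(view.contains("Java")); // true

        view.remove("Threads");                    // allowed: removes the mapping
        System.out.println(map.containsKey("Threads")); // false

        try {
            view.add("Kotlin");
        } catch (UnsupportedOperationException e) {
            System.out.println("add() is not supported on this view");
        }
    }
}
```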
Due to all these limitations, the newKeySet() method was introduced; it returns a Set backed by a ConcurrentHashMap whose values are of type Boolean.
How to create a ConcurrentHashSet using newKeySet():
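The original code sample did not survive here, so the following is a minimal sketch of the newKeySet() approach (class name and sample elements are my own):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class NewKeySetDemo {
    public static void main(String[] args) {
        // newKeySet() returns a thread-safe Set backed by a ConcurrentHashMap
        // in which every element maps to Boolean.TRUE internally.
        Set<String> set = ConcurrentHashMap.newKeySet();

        set.add("Java");
        set.add("Threads");
        set.add("Java");   // duplicate: ignored, as with any Set

        System.out.println(set.size());           // 2
        System.out.println(set.contains("Java")); // true

        set.remove("Threads");
        System.out.println(set);                  // [Java]
    }
}
```

Unlike the plain keySet() view, this Set fully supports add(), so it can be passed anywhere a mutable Set is expected.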
The above-mentioned approach is not the only way to create a thread-safe Set in Java.
ConcurrentHashSet using keySet(default value):
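The code for this variant is also missing from the article, so here is a hedged reconstruction (class name and data are my own; the sample output shown below, from the original article's run, was produced with different data). Passing a default value to keySet() yields a Set view that does support add(): new elements are inserted into the map with that default value.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class KeySetDefaultDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Boolean> map = new ConcurrentHashMap<>();

        // keySet(mappedValue): add() works, inserting (element -> Boolean.TRUE)
        Set<String> set = map.keySet(Boolean.TRUE);
        set.add("Cello");
        set.add("Reynolds");
        set.add("Flair");

        System.out.println(set.contains("Reynolds")); // true
        System.out.println(map.get("Cello"));         // true (the default value)

        set.remove("Cello");
        System.out.println(map.containsKey("Cello")); // false
    }
}
```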
Initial set: [Threads, Java, GeeksforGeeks, Geeks, Java 8]
before adding element into concurrent set: [Cello, Reynolds, Flair]
after adding element into concurrent set: [Cello, Classmate, Reynolds, Flair]
YES
true
after removing element from concurrent set: [Classmate, Reynolds, Flair]
These are the methods to create a ConcurrentHashSet in Java 8. Along with major features like lambda expressions and streams, the JDK 8 API includes small changes like these that make writing code easier.
The following are some important properties of CopyOnWriteArraySet:
- It is best suited for small sets where read-only operations vastly outnumber mutative operations and interference between threads during traversal must be prevented.
- It is thread-safe.
- Its iterators do not support the mutative remove() operation.
- Traversal through an iterator is fast and does not encounter interference from other threads.
- The iterators rely on an unchanging snapshot of the backing array taken when the iterator was constructed.
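The snapshot behaviour listed above can be sketched as follows (class name and data are my own, not from the original):

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArraySet;

public class CowSetDemo {
    public static void main(String[] args) {
        CopyOnWriteArraySet<String> set = new CopyOnWriteArraySet<>();
        set.add("a");
        set.add("b");

        // The iterator captures a snapshot of the backing array at creation time.
        Iterator<String> it = set.iterator();
        set.add("c"); // modified after the iterator was created: it will not see this

        int seen = 0;
        while (it.hasNext()) {
            it.next();
            seen++;
        }
        System.out.println(seen);       // 2 (the snapshot predates the add)
        System.out.println(set.size()); // 3
    }
}
```

Calling it.remove() in the loop would throw UnsupportedOperationException, in line with the property that the iterators do not support mutative removal.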
AllCards Lite is a free business card app for Windows 8 that allows you to store, manage, and organize business cards in a simple and intuitive UI. Just fire up this free app anytime and access all your important business cards within a single app in Windows 8. The app allows you to import the business cards via an image file or the camera.
Also, you can add all kinds of information about the business cards in a simple interface. This is the lite version of the app, and hence it only allows you to store 49 cards in 4 user-created groups, but for an average user 49 cards is good enough. So if you need a business card app for Windows 8, try out AllCards Lite and keep all your most used and important business cards safe, secure, and organized.
Download this free business card app for Windows 8 from the link provided at the end of the post. The link will re-direct you to the Windows Store from where you can download and install this free app onto your Windows 8 device. After installation you will see the interface in front of you as shown above. But obviously it will be completely empty. To add cards to this business card app for Windows 8 simply click on the ‘Add card’ button.
After you have clicked on the 'Add card' button, a dialog will pop up in front of you, as shown below in the screenshot. This dialog gives you two methods of selecting the image file of a business card: by file, or by camera. Select whichever of the provided options you need.
If you click the file option, the app takes you to a file browser that lets you browse and select image files from your computer; the camera option lets you take a photo using the built-in camera or webcam and then crop it so that just the image of the business card remains. After you have added a business card, the app asks for the title of the business card in a simple dialog. In this manner you can add business cards to the app and keep them in an easy-to-access interface.
When you open a card from within the app, you are shown a zoomed-in view of the card, as seen below in the screenshot. All the information about the card is shown on the right side of the app window. By default, though, this business card app for Windows 8 will only show the title of the card that you entered.
To add more information about the card simply click on the ‘Add card field’ button and you will be able to see dialog as shown below which will allow you to add more information about the card. In this manner you can add and store all kinds of information related to the business card.
Cuifeng Ying is a Senior Lecturer in Electrical Engineering at the Department of Engineering in the School of Science and Technology.
Dr. Cuifeng Ying received her Ph.D. in Physics in the research group of Prof. Jian-Guo Tian in 2013 from Nankai University, where she used photonic crystals and plasmonic nanocavities to enhance fluorescent signals for biosensing applications. She followed up her doctoral studies during a first postdoctoral appointment with Prof. Yongsheng Chen in Nankai University's Centre for Nanoscale Science and Technology, where she began performing single-molecule sensing using nanopores. In 2016, she joined the Biophysics group of Prof. Michael Mayer as a postdoctoral researcher at the Adolphe Merkle Institute at the University of Fribourg. Her research focused on characterizing single proteins using nanoplasmonic optical measurements and nanopore technology.
Nanopore technology: Single-molecule DNA and protein characterization by nanopore sensing; fabrication, characterization of solid-state nanopores; surface functionalization of synthetic nanopores.
Nanophotonic biosensing: plasmonic nano-tweezers for interrogation of single-protein conformational dynamics; photonic nanocavities to enhance fluorescent signals for biosensing applications.
Co-guest Editor of the journal Nanomaterials (2020-2021)
Reviewer for journals: ACS Sensors, ACS Applied Materials & Interfaces, ACS Applied Nano Materials, Chemical Communications.
A. Saurabh, P. Sriboonpeng, C. Ying, J. Houghtaling, I. Shorubalko, S. Marion, S. J. Davis, L. Sola, M. Chiari, A. Radenovic, M. Mayer, Polymer Coatings to Minimize Protein Adsorption in Solid-State Nanopores. Small Methods 2000177 (2020)
D. Schrecongost, Y. Xiang, J. Chen, C. Ying, H. Zhang, M. Yang, P. Gajurel, W. Dai, R. Engel-Herbert, C. Cen, Rewritable Nanoplasmonics through Room-Temperature Phase Manipulations of Vanadium Dioxide, Nano Lett. 20, 7760 (2020)
J. Houghtaling, C. Ying, O. Eggenberger, A. Fennouri, S. Nanivida, M. Acharjee, J. Li; A. Hall, M. Mayer, Estimation of Shape, Volume, and Dipole Moment of Individual Proteins Freely Transiting a Synthetic Nanopore, ACS Nano 13, 5231-5242 (2019)
L. de Vreede, C. Ying, J. Houghtaling, J. Figueiredo Da Silva, A. Hall, A. Lovera, M. Mayer, Wafer Scale Fabrication of Fused Silica Chips for Low-Noise Recording of Resistive Pulses Through Nanopores, Nanotechnology 30, 26 (2019)
O.M. Eggenberger, C Ying, M. Mayer, Surface Coatings for Solid-State Nanopores, Nanoscale 11, 19636-19657 (2019)
M. Eggenberger, G. Leriche, T. Koyanagi, C. Ying, J. Houghtaling, T. BH Schroeder, J. Yang, J. Li, A. Hall, M. Mayer, Fluid Surface Coatings for Solid-State Nanopores: Comparison of Phospholipid Bilayers and Archaea-Inspired Lipid Monolayers Nanotechnology 30, 325504 (2019)
C. Ying, J. Houghtaling, O. M. Eggenberger, A. Guha, P. Nirmalraj, S. Awasthi, J. Tian, M. Mayer, Formation of Single Nanopores with Diameters of 20-50 nm in Silicon Nitride Membranes Using Laser-Assisted Controlled Breakdown, ACS Nano 12, 11458 (2018)
Y. Wang, C. Ying*, W. Zhou, L. de Vreede, Z. Liu, J. Tian Fabrication of Multiple Nanopores in a SiNx Membrane via Controlled Breakdown, Sci. Rep., 8, 1234 (2018)
S. Jin, W. Hui, Y. Wang, K. Huang, Q. Shi, C Ying, D Liu, Q Ye, W Zhou, J. Tian, Hyperspectral Imaging Using the Single-Pixel Fourier Transform Technique Sci. Rep., 7, 45209 (2017)
Y. Zhao, C. Ying, Q. Huang, Y. Deng, D. Zhou, D. Wang, W. Lu, H. Cui, Fabrication of Controllable Mesh Layers Above SiNx Micro Pores with ZnO Nanostructures Microelectron Eng. 169, 43 (2017)
C. Ying, Y. Zhang, Y. Feng, D. Zhou, Y. Chen, C. Du, D. Wang, J. Tian, 3D Nanopore Shape Control by Current-Stimulus Dielectric Breakdown, Appl. Phys. Lett. 109, 063105 (2016)
Y. Zhang, Y. Chen, Y. Fu, C. Ying, Y. Feng, Q. Huang, C. Wang, D. Pei, D. Wang, Monitoring Tetracycline through a Solid-State Nanopore Sensor Sci. Rep. 6, 27959 (2016)
S. Jin, W. Hui, B. Liu, C. Ying, D. Liu, Q. Ye, W. Zhou, J. Tian. Extended-Field Coverage Hyperspectral Camera Based on a Single-Pixel Technique. Applied Optics 55, 18 (2016)
Y. Deng, Q. Huang, Y. Zhao, D. Zhou, C. Ying, D. Wang, Precise Fabrication of a 5 nm Graphene Nanopore with a Helium Ion Microscope for Biomolecule Detection, Nanotechnology 28, 045302 (2016)
Zhou, Y. Deng, C. Ying, Y. Zhang, Y. Feng, Q. Huang, L. Liang, D. Wang, Rectification of Ion Current Determined by the Nanopore Geometry: Experiments and Modelling, Chin. Phys. Lett., 33, 108501 (2016)
Y. Xiang, W. Luo, W. Cai, C. Ying, X. Yu, X. Zhang, H. Liu, J. Xu, Ultra-Strong Enhancement of Electromagnetic Fields in an L-shaped Plasmonic Nanocavity Opt. Exp. 24 (4), 3849-3857 (2016)
Y. Feng, Y. Zhang, C. Ying, D. Wang, C. Du, Nanopore Based Fourth-Generation Sequencing Techniques, Genomics, proteomics & bioinformatics, 13, 4-16. (2015)
Y. Li, W. Zhou, C. Ying, N. Yang, S. Chen, Q. Ye, J. Tian, Novel Cone Lasing Emission in a Non-Uniform One-Dimensional Photonic Crystal J. Opt. 17 (6), 065403 (2015)
Y. Xiang, P. Wang, W. Cai, C. Ying, X. Zhang, and J. Xu, Plasmonic Tamm States: Dual Enhancement of Light inside the Plasmonic Waveguide, J. Opt. Soc. Am. B 31, 2769-2772 (2014)
Z. Li, Y. Liu, M. Yan, W. Zhou, C. Ying, Q. Ye, J. Tian, A Simplified Hollow-Core Microstructured Optical Fibre Laser with Microring Resonators and Strong Radial Emission Appl. Phy. Lett. 105, 071902 (2014)
Y. Xiang, W. Cai, L. Wang, C. Ying, X. Zhang, J. Xu, Design Methodology for All-Optical Bistable Switches Based on a Plasmonic Resonator Sandwiched Between Dielectric Waveguides J. Opt. 16, 025003 (2014)
C. Ying, W. Zhou, Y. Li, Q. Ye, N. Yang and J. Tian, Miniband Lasing in a 1D Dual-Periodic Photonic Crystal, Laser Phys. Lett. 10, 056001 (2013)
C. Ying, W. Zhou, Y., Q. Ye, N. Yang and J. Tian, Multiple and Colorful Cone-Shaped Lasing Induced by Band-Coupling in a 1D Dual-Periodic Photonic Crystal, AIP Advances 3, 022125 (2013)
W. Yan, C. Ying, X. Kong, Z. Li, J. Tian, Fabrication and Optical Properties of Inclined Au Nanocup Arrays, Plasmonic 8, 4: 1607-1611 (2013)
X. Zhang, X. Chen, X. Li, C. Ying, Z. Liu, J. Tian, Enhanced Reverse Saturable Absorption and Optical Limiting Properties in a Protonated Water-Soluble Porphyrin J. Opt. 15 (5), 055206 (2013)
Y. Xiang, X., Zhang, W. Cai, L. Wang, C. Ying, J. Xu, Optical Bistability Based on Bragg Grating Resonators in Metal-Insulator-Metal Plasmonic Waveguides AIP Advances 3 (1), 012106 (2013)
C. Ying, W. Zhou, Q. Ye, Z. Li, J. Tian, Mini-Band States in Graded Periodic 1D Photonic Superstructure Appl. Phys. B 107 (2), 369-374 (2012)
C. Ying, W. Zhou, Q. Ye, X. Zhang, and J. Tian, Band-Edge Lasing in Rh6G-Doped Dichromated Gelatin at Different Excitations, J. Opt. 12, 115101 (2010)
NuoDB 2.6, Part 3: Scale-Out with Table Partitioning and Storage Groups
Feb 14 2017
This week we’re returning to our “What’s New in NuoDB 2.6” series with a discussion of the table partitioning and storage group capabilities of this latest release.
Table partitions can be used to improve data management and query performance. Storage Groups enable scale out and improved input/output performance at the storage layer. Both of these features together allow the application to perform actions such as data aging or archiving more efficiently, assign specific workloads to slower or higher performance disks, take advantage of parallel computing power, and process higher throughput volumes of reads and writes.
Let’s dig into this a bit. First we’ll go over what table partitioning is, why it’s useful, and what we support in NuoDB 2.6. Then, we’ll explain Storage Groups and why they are an important part of NuoDB architecture, unique to NuoDB. Lastly, we’ll explain how NuoDB uses these two features together to boost performance and obtain effective storage layer scale out.
SQL-Standard Table Partitioning
One benefit of table partitioning is partition pruning: because an individual table is separated into sections, queries can quickly process just the parts of the table they need, skipping the rest. One example where you might want to do this is an inventory table.
Let’s say you run a large e-commerce website and have a large number of items for sale. When customers visit, explore, and shop on the site, they constantly perform actions that query the inventory table. Depending on customer traffic, the table could easily run into performance issues when trying to handle all the transaction requests. Table partitioning allows you to separate the inventory data to optimize for access, processing, and performance.
Previously offered in technical preview, NuoDB 2.6 fully supports table partitioning. You can use standard SQL commands to create a partitioned table, specify partition by range or by list, and alter a partitioned database. Once you create your partitioned table, you can map the partitions to individual storage groups.
A Symbolic Storage Unit: The NuoDB Storage Group
In a traditional partitioning approach, the table partitions are typically mapped directly to physical storage on a single server. With NuoDB, each table partition is assigned to exactly one (and not more than one!) symbolic Storage Group. A Storage Group includes one or more table partitions and symbolizes a unit of storage. It is identified with its own, unique name.
Let’s use a graphical example to dig into how this works:
In the example shown above, you can see that the table is partitioned into TP1, TP2, TP3, etc…
- TP1 and TP2 are assigned to a Storage Group named SG1
- TP3 and TP4 are assigned to Storage Group SG2
- TP5 is assigned to Storage Group SG3
Storage Groups are then mapped to different Storage Managers, each of which runs on its own server, effectively allowing your table’s data to span multiple servers. This approach provides flexibility for scale-out and performance optimization beyond what you typically achieve in traditional systems.
What’s a Storage Manager? For those new to this blog and NuoDB’s distributed architecture, NuoDB appears as a single, logical, database to the application. Under the hood, it has a peer-to-peer, two-layer architecture that can be deployed within or across data centers and which includes an in-memory layer of Transaction Engines (TEs) and a storage layer of Storage Managers (SMs). The in-memory layer allows the application to naturally build up its own caches of frequently accessed data, and the storage layer provides ACID guarantees, data redundancy, and data persistence. Each layer has the ability to elastically scale out (and back), simply by adding and removing TEs and SMs.
So, a Storage Manager is the process node that provides durability for your data: it both writes data to disk and manages data on disk. By assigning a Storage Group to one or more Storage Managers, you control the physical location of your data, and you control how many copies of that data are stored for redundancy, continuous availability, and separate processing purposes. The diagram below shows one approach for accomplishing this:
We previously divided up our five table partitions among three Storage Groups. And now we’ve assigned the Storage Groups (symbolic storage units) to two Storage Managers, each. This particular configuration provides full data redundancy for each partition at the storage level in our NuoDB system.
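The mapping just described is easy to model as plain data; a toy sketch in Python (the partition and group names come from the example above, while the particular Storage Manager assignments are hypothetical):

```python
# Toy model of the mapping described above: table partitions map to
# symbolic Storage Groups, which in turn map to Storage Managers.
# Partition and group names come from the blog's example; the specific
# Storage Manager assignments below are hypothetical.
PARTITION_TO_SG = {
    "TP1": "SG1", "TP2": "SG1",
    "TP3": "SG2", "TP4": "SG2",
    "TP5": "SG3",
}

# Each Storage Group is served by two Storage Managers for redundancy.
SG_TO_SMS = {
    "SG1": ["SM1", "SM2"],
    "SG2": ["SM2", "SM3"],
    "SG3": ["SM3", "SM1"],
}

def storage_managers_for(partition: str) -> list[str]:
    """Return the Storage Managers that hold a given table partition."""
    return SG_TO_SMS[PARTITION_TO_SG[partition]]
```

Because every Storage Group appears on two Storage Managers, losing any single manager still leaves a copy of each partition available.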
Read more about table partitioning and storage groups by visiting the NuoDB Documentation.
Separation of States: Table Partitioning, Storage Groups, and Storage Managers
One of NuoDB’s key advantages is its peer-to-peer, distributed architecture. The technology presents itself as a single, logical database to the application. However, invisible to the application, the database architecture provides a great amount of flexibility to optimize for performance and availability requirements. Both within and across data centers, you can dynamically scale out and back in to optimize for database performance and operational characteristics such as data redundancy, availability, specialized processing, read/write throughput, and overall storage capacity, without interrupting service to the application or adjusting application code to make the application “aware” of the underlying database changes.
Our implementation of table partitioning and Storage Groups continues to take advantage of this unique architecture. Table partitioning occurs at the application level, while Storage Groups are handled by the database operator separate from - and invisible to - the application. Being a logical storage unit, Storage Groups themselves are an entity independent of the physical storage location. This “separation of states” gives the database operator a large amount of flexibility and control over where data is stored, how much data redundancy to provide (for high availability), and how hardware capabilities are aligned with data processing needs.
Example Use Cases
There are many use cases where table partitioning and storage groups can come in handy.
Let’s first take an e-commerce example. Below, we’ve partitioned our inventory table by product type into five different partitions:
The divisions fulfilling orders are Fasteners, Hand Tools, and Power Tools. Each of these divisions has an application responsible for processing transactions and fulfilling orders, but each division is only interested in fulfilling requests for specific types of products. Instead of querying the entire table each time they need to fulfill a request, with table partitioning, each division’s application will query only the partitions that include the types of products they care about.
In addition, since each Storage Group is available on two Storage Managers, if one Storage Manager becomes unavailable, the second Storage Manager will be available to provide uninterrupted data service to the application.
Here’s another example to think about. Two weeks ago, we explained how NuoDB can be deployed as a single logical database across multiple data centers, providing active-active benefits with none of the complexities or costs typically associated with this type of capability. When you use Storage Groups in a multi-data-center situation, you can precisely control levels of redundancy and availability for your data without needing a storage area network (SAN) setup or other expensive technologies. Let’s say the application in Availability Zone #1 reads and writes to TP1 and TP2 (assigned to SG1), while the application in Availability Zone #2 reads and writes to TP3 and TP4 (assigned to SG2):
You can set up your Storage Groups so that you maximize availability for SG1 in Availability Zone #1, while storing SG2 as backup, and maximize availability for SG2 in Availability Zone #2, while storing SG1 as backup.
There are plenty of other ways you can optimize performance and availability using table partitioning and storage groups! The main point, however, is that when you combine NuoDB’s architectural approach with this type of functionality, you receive a great amount of flexibility in what you implement. The choice of what to optimize for is yours to make, according to the needs of your organization.
This is Part 3 in a multi-part series getting in-depth with NuoDB 2.6! Check out Part 1, Introduction to NuoDB 2.6, to learn everything we released in NuoDB 2.6 and Part 2, Continuous Availability with AWS Active-Active. Look out for our last blog in this series about SQL Enhancements coming soon.
The whole idea of effective altruism is getting the biggest bang for your charitable buck. If the evidence about how to do this were simple and incontrovertible, we wouldn't need advanced rationality skills to do so. In the real world, choosing the best cause requires weighing up subtle balances of evidence on everything from whether animals are suffering in ways we would care about, to how likely a superintelligent AI is.
On the other hand, effective altruism is only persuasive if you have various skills and patterns of thought: the ability to think quantitatively, avoidance of scope insensitivity, the idea of expected utility maximization, and rejection of the absurdity heuristic. It is conceptually possible for a mind to be a brilliant rationalist with the sole goal of paperclip maximization; however, all humans share the same basic emotional architecture, with emotions like empathy and caring. When this is combined with rigorous structured thought, the end result often looks at least somewhat utilitarianish.
Here are the kinds of thought patterns that a stereotypical rationalist and a stereotypical non-rationalist would engage in when evaluating two charities. One charity is a donkey sanctuary; the other is trying to genetically modify chickens so that they don't feel pain.
The leaflet has a beautiful picture of a cute fluffy donkey in a field of sunshine and flowers. Aww, don't you just want to stroke him? Donkeys in meadows seem an unambiguous pure good. Who could argue with donkeys? Thinking about donkeys makes me feel happy. Look, this one with the brown ears is called Buttercup. I'll put this nice poster up and send them some money.
Genetically modifying? Don't like the sound of that. To not feel pain? Weird. Why would you want to do that? (Imagines the chicken crushed into a tiny cage, looking miserable.) "It's not really suffering" doesn't cut it. Wouldn't that encourage people to abuse them? We should be letting them live in the wild as nature intended.
The main component of this decision comes from adding up the little "good" or "bad" labels that they attach to each word. There is also a sense in which a donkey sanctuary is a typical charity (the robin of birds), while GM chickens is an atypical charity (the ostrich).
The rationalist starts off with questions like: How much do I value a year of happy donkey life versus a year of happy chicken life? How much money is needed to modify chickens and get them used in farms? What's the relative utility gain of a "non-suffering" chicken in a tiny cage, or a chicken in chicken paradise, relative to a factory-farm chicken that is suffering? What is the size of the world chicken industry?
The rationalist ends up finding that the world chicken industry is huge, and so most sensible values for the other parameters lead to the GM chicken charity being better. They trust utilitarian logic more than any intuitions they might have.
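The scale argument can be made concrete with a toy expected-value calculation; every number below is a made-up placeholder, not a real estimate:

```python
# Back-of-envelope expected-value comparison in the spirit of the
# "rationalist" evaluation above. Every number is a made-up placeholder.
CHICKENS_FARMED_PER_YEAR = 50e9        # rough scale of the industry (placeholder)
SUFFERING_AVERTED_PER_CHICKEN = 0.5    # utility units per modified chicken
ADOPTION_PROBABILITY = 0.01            # chance the modification reaches farms
GM_CHARITY_COST = 10e6                 # dollars

DONKEYS_HELPED = 100
UTILITY_PER_DONKEY_YEAR = 1.0
SANCTUARY_COST = 10e6                  # dollars

gm_value_per_dollar = (
    CHICKENS_FARMED_PER_YEAR
    * SUFFERING_AVERTED_PER_CHICKEN
    * ADOPTION_PROBABILITY
) / GM_CHARITY_COST

donkey_value_per_dollar = DONKEYS_HELPED * UTILITY_PER_DONKEY_YEAR / SANCTUARY_COST
```

The point is not the particular figures but that the sheer size of the chicken industry dominates the comparison for most plausible parameter choices.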
Insofar as your answer makes predictions about how actual “rationalists” behave, it would seem to be at least partly falsified: empirically, it turns out that many rationalists do not respond well to that particular suggestion (“modify chickens to not feel pain”).
(The important thing to note, about the above link, isn’t so much that there are disagreements with the proposal, but that the reasons given for those disagreements are fairly terrible—they are mostly non sequiturs, with a dash of bad logic thrown in. This would seem to more closely resemble the way you describe a “stereotypical non-rationalist” behaving than a “stereotypical rationalist”.)
In our argument in the comments to my post on zetetic explanations, I was a bit worried about pushing back too hard socially. I had a vague sense that there was something real and bad going on that your behavior was a legitimate immune response to, and that even though I thought and continue to think that I was a false positive, it seemed pretty bad to contribute to marginalization of one of the only people visibly upset about some sort of hard-to-put-my-finger-on shoddiness going on. It's very important to the success of an epistemic community to have people sensing things like this, and promote that sort of alarm.
I've continued to try to track this, and I can now see somewhat more clearly a really sketchy pattern, which you're one of the few people to consistently call out when it happens. This comment is a good example. It seems like there's a tendency to conflate the stated ambitions and actual behavior of ingroups like Rationalists and EAs, when we wouldn't extend this courtesy to the outgroup, in a way that subtly shades corrective objections as failures to get with the program.
This kind of thing is insidious, and can be done by well-meaning people.
Thank you for the encouragement, and I’m glad you’ve found value in my commentary.
I agree with this as an object-level policy / approach, but I think not quite for the same reason as yours.
It seems to me that the line between “motivated error” and “mere mistake” is thin, and hard to locate, and possibly not actually existent. We humans are very good at self-deception, after all. Operating on the assumption that something can be identified as clearly being a “mere mistake” (or, conversely, as clearly being a “motivated error”) is dangerous.
That said, I think that there is clearly a spectrum, and I do endorse tracking at least roughly in which region of the spectrum any given case lies, because doing so creates some good incentives (i.e., it avoids disincentivizing post-hoc honesty). On the other hand, it also creates some bad incentives, e.g. the incentive for the sort of self-deception described above. Truthfully, I don’t know what the optimal approach is, here. Constant vigilance against any failures in this whole class is, however, warranted in any case.
The Oxford English Dictionary defines provenance as (i) the fact of coming from some particular source or quarter; origin, derivation. (ii) the history or pedigree of a work of art, manuscript, rare book, etc.; concr., a record of the ultimate derivation and passage of an item through its various owners. In art, knowing the provenance of an artwork lends weight and authority to it while providing a context for curators and the public to understand and appreciate the work’s value. Without such a documented history, the work may be misunderstood, unappreciated, or undervalued. In computer systems, knowing the provenance of digital objects would provide them with greater weight, authority, and context just as it does for works of art. Specifically, if the provenance of digital objects could be determined, then users could understand how documents were produced, how simulation results were generated, and why decisions were made. Provenance is of particular importance in science, where experimental results are reused, reproduced, and verified. However, science is increasingly being done through large-scale collaborations that span multiple institutions, which makes the problem of determining the provenance of scientific results significantly harder. Current approaches to this problem are not designed specifically for multi-institutional scientific systems and their evolution towards greater dynamic and peer-to-peer topologies. Therefore, this thesis advocates a new approach, namely, that through the autonomous creation, scalable recording, and principled organisation of documentation of systems’ processes, the determination of the provenance of results produced by complex multi-institutional scientific systems is enabled. The dissertation makes four contributions to the state of the art. First is the idea that provenance is a query performed over documentation of a system’s past process.
Thus, the problem is one of how to collect and collate documentation from multiple distributed sources and organise it in a manner that enables the provenance of a digital object to be determined. Second is an open, generic, shared, principled data model for documentation of processes, which enables its collation so that it provides high-quality evidence that a system’s processes occurred. Once documentation has been created, it is recorded into specialised repositories called provenance stores using a formally specified protocol, which ensures documentation has high-quality characteristics. Furthermore, patterns and techniques are given to permit the distributed deployment of provenance stores. The protocol and patterns are the third contribution. The fourth contribution is a characterisation of the use of documentation of process to answer questions related to the provenance of digital objects and the impact recording has on application performance. Specifically, in the context of a bioinformatics case study, it is shown that six different provenance use cases are answered given an overhead of 13% on experiment runtime. Beyond the case study, the solution has been applied to other applications including fault tolerance in service-oriented systems, aerospace engineering, and organ transplant management.
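The central idea, that provenance is a query performed over documentation of a system's past process, can be illustrated with a toy sketch (the record format and process names here are invented for illustration and are not the thesis's data model):

```python
# Toy illustration of "provenance is a query performed over documentation
# of a system's past process": each record documents one process execution;
# the provenance of a digital object is the chain of records behind it.
# The record format and the names below are invented for illustration.
records = [
    {"output": "aligned.fasta", "process": "align", "inputs": ["raw.fasta"]},
    {"output": "tree.nwk", "process": "build_tree", "inputs": ["aligned.fasta"]},
]

produced_by = {r["output"]: r for r in records}

def provenance(item: str) -> list[dict]:
    """Return the process records that ultimately led to `item`."""
    rec = produced_by.get(item)
    if rec is None:  # a primary input with no recorded history
        return []
    chain = [rec]
    for inp in rec["inputs"]:
        chain.extend(provenance(inp))
    return chain
```

Querying `provenance("tree.nwk")` walks back through the documentation to every process that contributed to the result, which is exactly the collation problem the thesis addresses at multi-institutional scale.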
/**
* Spies property usage of a given object.
*
 * Works in modern browsers only, i.e. those that support:
 * - Object.defineProperty
 * - Object.keys
 * - Array.prototype.forEach
 *
 * Be careful with IE8. Even though it supports Object.defineProperty, its implementation is not valid.
 *
* # Sample Usage
*
* // Object to be spied:
* var user = { name: 'Maciej', surname: 'Smolinski', fullName: 'Maciej Smolinski' };
*
 * // Spy its properties, debug info namespaced with 'user' label:
* spyProperties('user', user);
*
* // Access object's properties as usual:
* console.log('Full Name is: ' + user.name + ' ' + user.surname);
*
* // Watch debug info in the console:
* [Property Usage] user.name
* [Property Usage] user.surname
*
*
* @param {String} debugNamespace Debugging label (namespace)
* @param {Object} objectReference Object which properties you want to spy
* @return {void}
*/
function spyProperties (debugNamespace, objectReference) {
// @TODO: Add JSHint rules, use Strict Mode
// @TODO: Spy unless already spied
// @TODO: Detect Object.defineProperty, Object.keys and Array.prototype.forEach
// @TODO: Maybe add an execution time to each debug message so the developer gets an understanding of how long each part of the code takes to execute
Object.keys(objectReference).forEach(function (property) {
var __property = '__' + property;
try {
// Store original value as __<propertyName>
objectReference[__property] = objectReference[property];
// Reset property to undefined
objectReference[property] = undefined;
// Define spy (will write debug info into console and return original value)
Object.defineProperty(objectReference, property, {
get: function () {
// Write debug info into console
console.debug('[Property Usage] %debugNamespace.%property'
.replace('%debugNamespace', debugNamespace)
.replace('%property', property)
);
// Return original value
return objectReference[__property];
        },
        // Keep the spied property enumerable and configurable (the
        // defaults would silently make it non-enumerable and
        // non-configurable, changing the object's observable behaviour)
        enumerable: true,
        configurable: true
      });
} catch (error) {
// The only workaround for Object.defineProperty problems in IE8
      // Check with `in` so falsy stored values are still restored
      if (__property in objectReference) {
// Restore original value
objectReference[property] = objectReference[__property];
// Reset __<property> to undefined
objectReference[__property] = undefined;
}
}
});
}
import { TestkitRequest, TestkitResponse } from "./domain.ts";
import { iterateReader } from "./deps.ts";
export interface TestkitClient {
id: number;
requests: () => AsyncIterable<TestkitRequest>;
reply: (response: TestkitResponse) => Promise<void>;
}
export async function* listen(port: number): AsyncIterable<TestkitClient> {
let clientId = 0;
const listener = Deno.listen({ port });
for await (const conn of listener) {
const id = clientId++;
const { requests, reply } = setupRequestsAndReply(conn)
yield { id, requests, reply };
}
}
interface State {
finishedReading: boolean
}
function setupRequestsAndReply (conn: Deno.Conn) {
const state = { finishedReading: false }
const requests = () => readRequests(conn, state);
const reply = createReply(conn, state);
return { requests, reply }
}
// Parse newline-framed requests from the socket: each request's JSON is
// wrapped between "#request begin" and "#request end" marker lines.
async function* readRequests(conn: Deno.Conn, state: State): AsyncIterable<TestkitRequest> {
let inRequest = false;
let requestString = "";
try {
for await (const message of iterateReader(conn)) {
const rawTxtMessage = new TextDecoder().decode(message);
const lines = rawTxtMessage.split("\n");
for (const line of lines) {
switch (line) {
case "#request begin":
if (inRequest) {
throw new Error("Already in request");
}
inRequest = true;
break;
case "#request end":
if (!inRequest) {
throw new Error("Not in request");
}
yield JSON.parse(requestString);
inRequest = false;
requestString = "";
break;
case "":
// ignore empty lines
break;
default:
if (!inRequest) {
throw new Error("Not in request");
}
requestString += line;
break;
}
}
}
} finally {
state.finishedReading = true
}
}
function createReply(conn: Deno.Conn, state: State) {
const textEncoder = new TextEncoder()
return async function (response: TestkitResponse): Promise<void> {
if (state.finishedReading) {
console.warn('Discarded response:', response)
return
}
    // Serialise the response; bigint is not valid JSON, so encode
    // bigints as strings with an "n" suffix (e.g. "42n")
    const responseStr = JSON.stringify(
      response,
      (_, value) => typeof value === "bigint" ? `${value}n` : value,
    );
const responseArr =
["#response begin", responseStr, "#response end"].join("\n") + "\n";
const buffer = textEncoder.encode(responseArr)
async function writeBuffer(buff: Uint8Array, size: number) {
try {
let bytesWritten = 0;
while (bytesWritten < size) {
          // subarray avoids copying the unwritten tail on each partial write
          const written = await conn.write(buff.subarray(bytesWritten));
          bytesWritten += written;
}
} catch (error) {
console.error(error);
}
}
await writeBuffer(buffer, buffer.byteLength);
};
}
export default {
listen,
};
In this part, we will illustrate how the concomitant information can be useful for estimating $\boldsymbol{\alpha}$ under the Bohn-Wolfe (BW) model. There are two different ways to estimate $\boldsymbol{\alpha}$, introduced by Ozturk (2008, 2010). Since the second method is computationally intensive, only the first will be considered here, as in Sgambellone (2013).
The BW model can be expressed as
$$F_{[r]}(t) = \sum_{s=1}^{H} \alpha_{rs}\, F_{(s)}(t),$$
where $F_{(s)}$ is the population in-stratum CDF of the $s$-th order statistic and $\alpha_{rs}$, the $(r,s)$ element of $\boldsymbol{\alpha}$, indicates the probability that order statistic $s$ is assigned rank $r$. An interesting characteristic of the BW model is that $\boldsymbol{\alpha}$ reflects the quality of the ranking mechanism of the RSS: if the rankings are done perfectly, $\boldsymbol{\alpha}$ equals the identity matrix, whereas if the ranking process is completely imperfect, all the elements of $\boldsymbol{\alpha}$ equal $1/H$. Ozturk (2008) showed that $\boldsymbol{\alpha}$ can be useful for diverse statistical applications such as constructing nonparametric confidence intervals and computing the Mann-Whitney-Wilcoxon test.
Here, we will also add a new use for $\boldsymbol{\alpha}$, as illustrated in the next section. Ozturk (2008) estimated $\boldsymbol{\alpha}$ by minimizing the difference between the sampling and the expected in-stratum CDF under the BW model, taking $\hat{F}_{[r]}$ as the sampling in-stratum CDF. He noted that the minimization must be carried out subject to the following constraints: (1) each $\alpha_{rs}$ must lie in the interval $[0,1]$; (2) the sum of each row and each column of $\boldsymbol{\alpha}$ must equal one;
(3) $\alpha_{rs} = \alpha_{sr}$ for $r \neq s$. The first constraint guarantees that $\boldsymbol{\alpha}$ is a probability matrix, the second implies that $\boldsymbol{\alpha}$ satisfies the doubly stochastic condition, and the third imposes, for simplicity, the symmetry of $\boldsymbol{\alpha}$, reducing the number of unknowns from $H^2$ to $H(H+1)/2$. Accordingly, the optimization problem can be formulated as
$$\hat{\boldsymbol{\alpha}} = \arg\min_{\boldsymbol{\alpha}} \sum_{r=1}^{H} \sum_{t} \Big( \hat{F}_{[r]}(t) - \sum_{s=1}^{H} \alpha_{rs}\, F_{(s)}(t) \Big)^{2}. \qquad (2)$$
Ozturk (2008) used the function solve.QP in the R library quadprog to obtain the solution of the optimization expressed in (2), denoted by $\hat{\boldsymbol{\alpha}}$. Computational details for obtaining $\hat{\boldsymbol{\alpha}}$ can be found in Sgambellone (2013).
Generally speaking, it is reasonable to think that if one would like to obtain an estimate of $\boldsymbol{\alpha}$ better than $\hat{\boldsymbol{\alpha}}$, it is sufficient to replace $\hat{F}_{[r]}$ in (2) with a more efficient estimator. Since $\hat{F}_{[r]}$ suffers from the serious disadvantage that it does not always agree with the SO constraint, it is a good choice to interchange $\hat{F}_{[r]}$ in (2) with the alternative in-stratum CDF estimators considered earlier, leading to new estimators of $\boldsymbol{\alpha}$. Notably, the estimators that incorporate the concomitant information are most likely to be more accurate than those obtained without it. To examine the effect of using different in-stratum CDF estimators on estimating $\boldsymbol{\alpha}$, we carried out a small simulation study. Since the BW model cannot be used to generate concomitant-based RSS, we again used the Dell and Clutter (1972) model to generate concomitant-based RSS under perfect ($\rho = 1$) and imperfect ($\rho < 1$) ranking for several combinations of the design parameters.
For each combination of the design parameters, a number of iterations was performed, and in each iteration the candidate estimators of $\boldsymbol{\alpha}$ were computed.
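Ozturk's estimator is obtained by solving a quadratic program (solve.QP in the R package quadprog). As an illustrative substitute, not the method used here, a rough, constraint-violating estimate of the mixing matrix can be repaired by projecting it onto the doubly stochastic matrices with Sinkhorn normalisation and then symmetrising:

```python
import numpy as np

def sinkhorn_project(m: np.ndarray, iters: int = 200) -> np.ndarray:
    """Approximately project a nonnegative matrix onto the doubly
    stochastic matrices by alternately normalising rows and columns."""
    a = np.clip(m, 1e-9, None)
    for _ in range(iters):
        a = a / a.sum(axis=1, keepdims=True)  # rows sum to one
        a = a / a.sum(axis=0, keepdims=True)  # columns sum to one
    return a

# A rough (constraint-violating) estimate of alpha, e.g. from plain
# least squares; the values here are arbitrary, for illustration only.
rough = np.array([[0.8, 0.3, 0.1],
                  [0.2, 0.5, 0.2],
                  [0.1, 0.2, 0.9]])

alpha = sinkhorn_project(rough)
alpha = (alpha + alpha.T) / 2  # symmetrise; row/column sums stay at one
```

Averaging a doubly stochastic matrix with its transpose preserves the unit row and column sums, so the final matrix satisfies all three constraints approximately.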
VMware NSX has been a driving force for many customers on their network transformation journey over the past few years. Enabling our customers to use a more programmatic approach to the network has put them in a position to not only respond to demand quickly but also operate these software-based networks more efficiently. The evolution of NSX-T has been gradual, and with this “milestone release” you will see the true power behind its innovation. Everyone who can and will consume these resources (DevOps teams, developers, IT administrators) can continue to evolve and meet the needs of multi-hypervisor data centers, bare-metal workloads, CNA, public clouds and, most importantly, multiple clouds.
What I am going to cover in this blog article is the Day 0 Installation procedures, more specifically involving NSX Manager. Part 2 of this blog article will cover the procedures for setting up the NSX Edge, Transport Zones and the Host Transport Nodes. The recommended order of procedures for installation can be found on Page 11 of the NSX-T Data Center Installation Guide for NSX-T Data Center 2.4. I do not have KVM setup in my lab so I will be skipping those steps (for now).
This blog will cover…
- Review of the NSX Manager Installation Requirements
- Review of the necessary ports and protocols.
- Installation of NSX Manager.
- Log into the NSX Manager deployment and deploy the additional NSX Manager nodes to form our cluster.
This is a rather easy blog and procedure to follow. It’ll get more complicated in Part 2 when we deploy/configure other NSX-T components.
NSX Manager Installation Requirements
The NSX Manager is a virtual appliance (OVA) that I am going to deploy into my vSphere 6.7 lab environment; the specific version is NSX 2.4.1 – Build 13716579. The system requirements for this VM depend on the size that you wish to deploy. Disk space and hardware version are the same across each size: 200 GB of disk space (3.8 GB if thin provisioned) and hardware version 10 or later.
- Extra Small VM – 2 vCPU, 8 GB memory
- Small VM – 4 vCPU, 16 GB memory
- Medium VM – 6 vCPU, 24 GB memory
- Large VM – 12 vCPU, 48 GB memory
One thing to be aware of is that the Extra Small VM applies only if you are deploying the Cloud Service Manager. The Small VM is intended for POC or lab deployments, not production. If you have fewer than 64 hypervisors for NSX-T, deploy the Medium-sized VM; if you have more than 64 hypervisors for your target deployment, deploy the Large VM.
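The sizing guidance above fits in a few lines of code; a sketch (the function name and the handling of exactly 64 hosts are our own choices, and the Small size, being POC/lab-only, is deliberately omitted):

```python
def choose_nsx_manager_size(hypervisors: int, cloud_service_manager: bool = False) -> str:
    """Pick an NSX Manager appliance size following the guidance above.
    The thresholds come from the text (Medium for fewer than 64
    hypervisors, Large for more); how to treat exactly 64 hosts is an
    assumption made here, not stated in the text."""
    if cloud_service_manager:
        return "Extra Small"  # valid only for the Cloud Service Manager role
    return "Medium" if hypervisors < 64 else "Large"
```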
VMware Tools is installed with the appliance and should not be removed or upgraded. Make sure the passwords for the root, admin, and audit accounts meet the complexity requirements. If they do not, you will need to change them after deployment before proceeding (see pages 26-27 for more info).
NSX Manager Network Requirements
- Static IP Address (cannot be changed after installation)
- Max latency between NSX Managers in the NSX Manager Cluster is 10ms.
- Max latency between NSX Managers and the transport nodes is 150ms.
- Hostname that does not have any invalid characters or underscore.
- Verify the forward and reverse lookup in DNS. If by any chance you deploy the appliance with an invalid character, the appliance will default to ‘nsx-manager’ for the hostname.
- The NSX Managers can also be accessed via static IP. If you wish to configure the appliance for access via DNS you can read the additional details summarized on page 27-28 of the documentation.
- Decide ahead of time which VM network port group the appliance will connect to. I recommend deploying the appliance on the same network segment where vCenter Server resides. If for some reason you have multiple management networks, make sure they are accessible (static routes can be added to the NSX Manager appliance if needed). Plan your IPv4 or IPv6 address scheme ahead of time.
- The table of TCP and UDP ports used by NSX Manager can be found on pages 21-22, but I recommend being aware of all the ports required for the NSX Edges, ESXi, KVM, and/or bare-metal servers, because at some point you will be deploying these other components, so it’s best to start working with your network team now and get ahead of it.
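The DNS check in the list above can be scripted before deployment; a minimal sketch using Python’s standard library (the hostname argument is whatever name you plan to assign the appliance):

```python
import socket

def dns_roundtrip_ok(hostname: str) -> bool:
    """Verify forward resolution of `hostname` and a reverse lookup of
    the resulting address, as recommended before deploying the appliance."""
    try:
        ip = socket.gethostbyname(hostname)  # forward lookup
        socket.gethostbyaddr(ip)             # reverse lookup
        return True
    except OSError:                          # gaierror/herror both subclass OSError
        return False
```

Running this against the planned appliance hostname before starting the OVA wizard catches stale or missing DNS records early.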
NSX Manager Storage Requirements
- Deploy the appliance on highly available shared storage to avoid any possible storage outage. A storage outage would result in the NSX Manager file systems being placed into read-only mode, so make sure the storage technology that backs your environment is designed to be highly available. Consult your storage vendor prior to deployment if necessary.
- 200 GB of disk capacity (thick provisioned); max disk latency access should be under 10ms.
That is a quick summary of the requirements. Make sure you review everything in the documentation on pages 14-31. Don't skip past anything and assume it doesn't apply to you.
NSX Manager Installation Procedure
Let’s begin the step-by-step installation procedure for NSX Manager from the vSphere Client.
- Initiate the ‘Deploy OVF’ wizard from the vSphere Client. Select the downloaded NSX-T ‘unified appliance’ OVA file and click Next.
- Enter the name of the appliance as you want it to appear in the VM inventory and click Next.
- Select the target compute resource, vSphere cluster if DRS is enabled or specific ESXi host, and click Next.
- Review the details of the appliance and click Next.
- Select the Configuration size and click Next. (Notice that if you select ‘ExtraSmall’, the Description states it is only supported for the ‘nsx-cloud-service-manager’ role.)
- Select the target datastore for the appliance and click Next.
- Select the virtual network (port group) and then choose the protocol and click Next.
- Scroll through the various settings for customizing the template. This is where you will enter the complex passwords that meet the password requirements, the hostname, role, static IP information, DNS, NTP, SSH and so on. Do not enter anything in the ‘internal properties’ section. Click Next.
- Once the NSX Manager appliance boots, you can either open the VMRC console or SSH (Putty) to the appliance to run a few commands. I like Putty because I can scroll through the extended output that is generated much easier than the console. Execute the command ‘get services’.
- After reviewing the output of the ‘get services’ command, you will start a few services and set the SNMP community. Start the services shown in the screenshot below. This part is also covered in Step 20 on page 34 of the Installation Guide.
- Next from a supported browser connect to the newly deployed NSX Manager and log in with the ‘admin’ account.
- Upon login accept the EULA and choose whether or not to join CEIP.
- Next I am going to add a ‘Compute Manager’, in this case my vCenter Server. Navigate by clicking ‘System -> expand Fabric -> Compute Managers’ and then click Add.
- Enter the information for your vCenter Server and click Add. The SHA-256 thumbprint will automatically be detected, click Add when the warning appears.
- Here is where we are going to expand our NSX Manager cluster to 3-nodes. Select Add Nodes. (Notice from here you can see the NSX version deployed.)
- In the ‘Add Nodes’ window select the compute manager, enable SSH (or root access), your node credentials, DNS and NTP info and the form factors. I select ‘Small’ and click Next.
- Notice the warning at the top of the window regarding the number of recommended nodes (A). I then provide the information for my 3rd node (B) and click Finish.
- The two additional NSX Manager nodes will begin to deploy.
- Monitor the deployment of the nodes from the Management Cluster view. When a node is complete and online, it will show as UP and the Repository Status will read ‘Sync Completed’ or ‘Sync in Progress’, as you can see with the 2nd and 3rd nodes below.
- You can see additional group status information by selecting ‘Degraded’ at the top and you will see which elements of the cluster are Up, Stable, Degraded or Down.
- Once everything is online and stable you should see the Management Cluster as STABLE. Repeat Steps 10 and 11 above on the two new nodes (connect via SSH and start services).
- Next I am going to set the Virtual IP (VIP) for my management cluster. Select EDIT and the ‘Change Virtual IP’ window will appear. Enter the VIP address and click Save. You can now connect to NSX Manager using the Cluster VIP address. I also have a DNS host record created for this VIP; you can connect to it from your browser using the IP or the FQDN, whichever you choose.
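The CLI portion of Steps 10 and 11 (run over SSH on each node) looks roughly like the sketch below. This is based on Step 20 of the Installation Guide; the exact set of services to start, and the prompt shown, may differ in your release, so treat the annotations as illustrative:

```
nsx-manager> get services                                # review the state of all appliance services
nsx-manager> start service install-upgrade               # example of a service to be started
nsx-manager> start service liagent
nsx-manager> set snmp community <your-community-string>  # placeholder; use your own value
```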
I wanted to separate the NSX Edge and other components from the NSX Manager because the process differs considerably from what you may have done in the past when deploying NSX-V. The logic is similar, but there are many differences; in particular, the NSX Edge in NSX-T is very different from the NSX Edge Services Gateway (ESG) you may have worked with in NSX-V. We will get into more detail on this later.
Aside from the NSX-T 2.4 Installation Guide (PDF) link at the beginning of this blog you should also review the following information.
- VMware NSX-T Reference Design Guide 2.0 (Released January 2018) – 117 page PDF document that you should read in its entirety multiple times.
- VMware NSX-T Data Center 2.4 Release Notes – a rule of thumb for any admin, regardless of what you are working on, is to read the release notes: not only to understand what is new in the release, but also to understand compatibility, review the revision history, and go through the Resolved and Known Issues. Always very important!
- NSX-T Data Center Installation Guide (online version) – everything in the PDF can also be found online here.
- If you are looking to deploy NSX-T across multiple sites (2 or more) then you should start with the NSX Manager Cluster Requirements for Single, Dual and Multiple Sites web page. Quick Spoiler Alert…
- Dual Site deployment would require a stretched vSphere cluster (management cluster) and properly configured anti-affinity rules. You design this so the three (3) NSX Manager nodes all reside in Site-A and failover to Site-B upon site failure only; recovered by vSphere HA.
- In a deployment of 3 or more sites there would be one vSphere management cluster per site with one NSX Manager per site. With a single site failure, the two remaining NSX Managers continue operating in a degraded state. The recommendation here is to manually deploy a replacement 3rd node for the lost cluster member.
- A two-site failure would be a loss of quorum and have an impact to all NSX-T operations.
|
OPCFW_CODE
|
Stellate relies on Fastly infrastructure for our offerings
Fastly experienced a partial outage of their KV Store offering on June 17th and June 18th, which affected Stellate. They provide a summary of this incident on their status page at https://www.fastlystatus.com/incident/376022
August 17th 10:46 UTC - A customer reported their Stellate endpoint failing in the FRA (Frankfurt) point of presence (POP), as well as in several other edge locations. This was due to them pushing an update to their configuration, specifically the originUrl.
10:50 - We identified the issue as being a stale KV value in the FRA POP, as well as several others.
10:55 - We created an incident on our status page for degraded KV in the FRA POP and several others.
13:08 - We realized that Rate Limiting and Developer Portals were affected by this outage as well.
13:30 - We reported this incident to Fastly.
August 18th 4:00 UTC - Fastly was not yet able to provide us with a satisfactory response on what was causing this and didn’t acknowledge the ongoing outage.
6:23 - A large e-commerce customer reported their website was unavailable. This was due to a KV key disappearing in the FRA POP, as well as several others.
7:09 - Additional reports started to come in via Intercom about services not responding properly.
7:15 - We escalated the incident with Fastly as from our view more regions seemed to be affected and becoming unavailable.
7:16 - We deployed a partial fix that disabled our new infrastructure. This fixed edge caching for users who didn’t recently push configuration changes (the majority of services). Rate Limiting, JWT-based scopes, and the Developer Portal were still affected by the KV outage.
8:01 - Fastly was able to reproduce the bug based on a reproduction that we provided earlier and started working on a fix.
10:19 - Fastly communicated to us that the cause was an issue with surrogate keys in their C@E caching layer.
August 22nd - Fastly shared their confidential Fastly Service Advisory with us providing additional information about this incident and how they want to prevent this from happening again.
We have had several calls with Fastly over the last couple of days, working with them to analyze what went wrong, why it took them so long to escalate this internally, and how we can improve communication and collaboration going forward.
As a direct outcome of this, we have re-connected with our European contacts at Fastly and designated a direct contact to involve in conversations and escalations going forward.
We are going to investigate a fallback option for Fastly KV.
Additionally, we will review all possible failure points that could make Stellate core services inaccessible (in the event of a third-party outage) and investigate options for additional redundancies for those services.
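As a minimal sketch of what such a fallback could look like (hypothetical names, not Stellate's actual code): read a configuration value from a primary KV store, falling back to a secondary replica when the primary errors or returns no data.

```typescript
// Hypothetical sketch of a KV fallback: `readWithFallback` and the store
// signatures are illustrative, not an actual Stellate or Fastly API.
type KvRead = (key: string) => Promise<string | null>;

async function readWithFallback(
  key: string,
  primary: KvRead,
  fallback: KvRead
): Promise<string | null> {
  try {
    const value = await primary(key);
    if (value !== null) {
      return value; // primary store answered; use it
    }
  } catch {
    // primary unavailable or returned an error; fall through to the replica
  }
  return fallback(key);
}
```

The design trade-off is consistency: a replica may serve a slightly older configuration during a primary outage, which is usually preferable to serving no configuration at all.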
Posted Sep 11, 2023 - 10:53 UTC
This issue has been resolved. We have temporarily switched all services back to our "old infrastructure" and are running additional tests as well as working with Fastly before we reopen the "new infrastructure".
We will also publish additional details once we conclude our internal post mortem process.
Posted Aug 18, 2023 - 12:17 UTC
Fastly has implemented a fix for the issue, all services are working as expected again.
We have temporarily disabled switching over to the new infrastructure and are working with Fastly to better understand what happened on their end, why it took so long to identify and rectify, and how we can better monitor for and prevent this in the future. We will re-enable the new infrastructure once we are confident in the services we rely on.
We are continuing to work on a fix for this issue.
Posted Aug 18, 2023 - 07:29 UTC
The incident with KV stores, which are used for service configuration, is now spreading to additional edge locations and affecting overall service availability for services on the new infrastructure. We have disabled the new infrastructure to provide our partner more time to identify and resolve the issue on their end.
Posted Aug 18, 2023 - 07:15 UTC
Our infrastructure partner has identified the issue and is working on fixing it.
Posted Aug 17, 2023 - 16:03 UTC
We are continuing to investigate this issue together with our infrastructure providers.
If you haven't made configuration changes to your service recently, you are not affected by this issue.
Posted Aug 17, 2023 - 13:56 UTC
We are investigating an issue with configuration updates propagating to the respective services. If you didn't make configuration changes recently, your services are not impacted by this incident.
Posted Aug 17, 2023 - 11:55 UTC
This incident affected: GraphQL Edge Caching and GraphQL Rate Limiting.
|
OPCFW_CODE
|
If you are not sure whether you have obtained such permission before, we recommend that you reach out to us anyway so that we can minimize any possible downtime of the license server while the authorization is being granted. Product-Specific Control Panel Permanent Tickets To revoke a permanent ticket issued to a specific client, click the revoke link in the third table column next to this client's credentials. We can use shortcuts with some obscure techniques and tricks to build more code with less writing in less time. Until that version is released I'm ok, but if there is a new version I'd love to update now. The answer of chris-betti is correct.
You can subsequently log out by clicking the Logout link in the top right corner of any License Server page. Therefore, it includes a pair of built-in countenance and tools for combining advanced tools development and web development. License Server Settings After you have successfully logged in, the JetBrains License Server home page displays. They are released manually by a client application or server administrator. As a student who has been using JetBrains products for just about a year now, it's hard to think of not being able to use one of their tools ever again. Generating end-user licenses, supporting the evaluation licenses, centralizing license administration and management via , providing an on-premises license server, license protection, an in-product licensing module, and more are all taken care of. You can specify that period in seconds in this field.
You can view the entire log by opening it from the Tomcat root directory. How to Provide Custom Verification To apply a custom verification procedure, you should create a JavaBean implementing ClientVerifier public interface: If isAuthorized method in any ClientVerifier implementation returns false, the requesting client is considered unauthorized and is not granted a license ticket. The License Server allows for single-user keys to be used as concurrent licenses. Do not message moderators for help with your issues. The Login page Enter the credentials that you specified during server setup in the E-mail and Password fields. Thus, it supports you to build faster, better, and cheaper apps. Redundant questions that have been previously answered will be removed.
I bet they've started using a new one and phasing out the old one. If the activation code is successfully validated, this will be confirmed by the Permanent Ticket Received dialog box. When a client obtains a permanent ticket, its floating ticket is released. Log This tab displays the contents of the log file maintained by License Server. Therefore, it is probable to get quite a notepad with memory.
Should you have any questions or requests about old license servers, please contact us by completing for quick assistance or contact License Server support via. Keep in mind that the Permanent Tickets tab remains hidden unless you select this check box. Click it to open a pop-up window , and paste the entire body of the e-mail message with license keys provided to you by a JetBrains representative. Instead of looking at the search results that may or may not be real and work, I went straight to the bottom of the page: A-ha! License Server receives requests for license tickets from client applications and issues tickets to them upon verification, eliminating the need to configure clients individually. Make sure to set necessary database connection settings using the url property.
Billing and Sales Infrastructure With the marketplace, third-party plugin vendors can entirely outsource billing and sales operations to JetBrains, and we will take care of the check out and payment processing, quotes, invoices, refunds, community programs, discounts, and even sales support for your plugins. That said, each license key provides one ticket. We will assist you with positioning, promotion, and various other marketing and sales activities beneficial to our end-users and plugin vendors. When the Remove Permanent Ticket? License Server serves as a central point for distribution of licenses among multiple users and client machines in a network environment. To start Apache Tomcat distribution bundled with License Server and deploy licenseServer. BasicDataSource bean properties, comment out driverClassName property referencing the embedded database, and uncomment driverClassName property corresponding to the external database of your choice.
Should you have any questions, you can reach out to us via. The License Server will allow for the exact number of concurrent instances as purchased commercial licenses imported into the License Server. In addition, we will utilize our to make the plugins available for purchase via the JetBrains distribution network which is very important for some markets where direct sales possibilities are limited. Product-Specific Control Panel Report The Total Max row displays the maximum daily number of tickets issued to individual versions as well as to all versions of the product within the specified period of time. License Server doesn't cache verification results, meaning that a client is verified for each request it sends. Can you link to the blog you're talking about? For every implementation of ClientVerifier, you should create a separate Spring bean. License Keys This tab contains the Add Keys From Purchase E-mail link.
Read below to look into setting up your personal server. A single ticket grants permission to use a single copy of a product. This limit does not apply to. Permanent Tickets This tab displays only if the Enable Permanent Tickets check box is selected in a product-specific Settings tab. The second link here is all we need for now. You can open other web resources or close the browser window in the meantime.
Second, assuming you choose to use init. The message that displays after processing and saving license keys If no keys were processed and saved after you've copied the message into the pop-up window, make sure you've pasted the entire message body. At this time, we are ready to start technical testing for plugins that will be ready to switch to the Marketplace in the next 3-9 months. Floating licenses are supported starting from dotTrace 3. If the controls in Add License Keys From Purchase E-mail are grayed out, try upgrading your browser to Internet Explorer 7. Going to the complaint page, sure enough, has links to where there are probably real cracks.
|
OPCFW_CODE
|
I think i was just rambling after that "btw" and lets just forget it altogether for now haha
So, I pm'd you the 3 required details that default-downloaded bioses do not have, and I'd like version 1101. https://dlcdnets.asus.com/pub/ASUS/m...-ASUS-1101.zip
should work as a direct d/l link, otherwise https://www.asus.com/us/Motherboards...HelpDesk_BIOS/
definitely should have it provided.
I will also upload my 2202 version that I am using atm on my motherboard bios. It does include the 3 codes I pm'd you... but just in case it's not readable or whatever (i'm not sure at this point haha), I included them in a PM to you - since you might not want to do anything with this file i've uploaded, altogether...
So, I essentially want 1101 bios version. But I need the HPET enable/disable option
, and also the TCO timer
if you could somehow include it? (btw, ever since I started using the 2202 version - High Precision Event Timer
has completely been greyed out
inside the OS, unlike with the older versions of bioses that i've used before... disabling them in bios did not grey them out
, just left them there in the system devices under Device Manager. - I also use devmgr_show_nonpresent_devices 1 in the user environment variables to see every possible thing inside Device Manager when ‘show hidden devices’ is checked.
Also I would like the updated ME and CPU codes
, if those would help stability? But, I noticed that I was barely able to hold a 4.4 ghz overclock on my i7-6700k with 1.32vcore voltage with 3801 or whatever later bioses I was using, and my CPU VCCIO/ CPU System Agent/ VPPDDR and PLL voltages
had to be a lot higher for the system to be stable... Yet, on this 2202 (currently), 1.285vcore voltage is more than enough to run a stable system, and all the other voltages i just mentioned here besides vcore had to be on auto or very low / 1.1v and such set manually or else my OS would freeze!! If tried on the older voltages, system would freeze within 4-6 seconds of booting! I even thought my M.2 drive died when I was trying to install windows earlier, but then I tried default settings and everything worked flawlessly. So, if updating the ME/CPU codes does mess with the voltages... i'd rather just have a cooler system (not sure if you know this, but if you don't, i'd rather you upload two versions one with and one without the cpu/me codes or perhaps 4 files with all the possibilities? (sorry i am asking for so much sir - may god bless you my goood good sir!!))
Also, I would very much like the Spread Spectrum options to be available for turning off and such - BCLK Spread Spectrum, VRM Spread Spectrum, PCIe Spread Spectrum. There was also an option called PCIe PLL SSC, which was essentially the same as PCIe Spread Spectrum (SSC), so I don't know if that'll help... Sorry I'm a noob at this stuff.
I'll also upload the current 1101 I'm working on - which I think has some pretty cool options enabled (surprisingly, 1101 has a lot more practical options you can mess with that are NOT available in the older versions I've seen; even 1302 doesn't have them!). It does not include the 3 codes that are native to my motherboard, though, so you can add them or I can add them myself later... not an issue, FD44Editor_0.9.2_win easily lets me do that. And please do look over the options I've enabled inside 1101, as I have not had the chance to test if they're of any actual help once the bios is loaded on the motherboard, but they seemed eerily "good" for lack of a better word. haha.
Please take your time with this, I don't mean to rush you sir, and again you are awesome, may God bless you!
here's a list of more settings I never found or whatever, if you could somehow get them to be in there as well while i'm asking for the greatest gift of you
-1394 controller - enable/disable option
-Execute Disable Bit - enable/disable
-On-board Audio AND on-board video off (I think I did this with the 1101 I was working on, but i am honestly not sure! I saw stuff like GPIO and the frequency of the Onboard, and stuff like Standby Rendering but I'm not sure if those disable on-board video honestly.
-CPU Spread Spectrum (not sure if it's same as BCLK spread spectrum or not?) (Maybe it's the same as Spread Spectrum clock chip?)
- Internal PLL overvoltage = off
- EPU Power Saving Mode = Disabled (I only have auto/ TP1-air cooling and TP2-water cooling atm)
- LLC and PLL Overvoltage - both off, or PLL off; specifically "Off" not "Auto"
- BCLK Recovery - Disabled
- Intel Adaptive Thermal Monitor - Disabled (not sure if same as Thermal Monitor under Advanced-> CPU tab)
- Intel Rapid Start - disable
- Intel Smart Connect - Disable
- initiate graphic adapter = PCIE
- SMART Status Check = Disable (I believe I got this in the 1101 editing so far, not 100% though)
- PCIEx16_1 Link Speed = manually set to Gen3 (I think i got this in 1101 as well, not 100% again)
- Bluetooth - Disabled
- Wifi controller - Disabled
- Marvel Storage - Disabled
- ASM1061 Storage Controller - Disabled
- execute disable bit
I believe that's all!
And please leave all the default options intact as well, I'm not including them here so to not be TOO nooby/redundant/obnoxious but don't get rid of em LOL
and please do believe me when i say take your sweeeeeet time cuz the bios and everything atm is working pretty fantastic
... i just want it sweeter cuz u know that "cancerous perfectionism" thingy ma jig inside some of us... heh heh
|
OPCFW_CODE
|
TORONTO, ON, July 12, 2000 — MSN.CA today announced an unprecedented milestone in Canadian e-mail usage: 5 million active accounts for MSN Hotmail, the world’s leading globally accessible free-web-based electronic mail service. To mark this occasion, MSN.CA is celebrating in the streets around COMDEX/Canada 2000, Canada’s largest technology industry trade show currently taking place in Toronto. MSN.CA also has a number of activities planned both around the city and across Canada, including a new advertising campaign for MSN Hotmail. In Canada, Hotmail is accessible through MSN Canada’s portal at http://www.msn.ca .
“Thank you Canada for choosing MSN Hotmail for your free web-based e-mail!”
said Judy Elder, General Manager of Microsoft Canada Co.’s Consumer Group.
“Canadians are clearly embracing the communications possibilities of the Internet, and the anywhere, anytime access of free-web-based e-mail through MSN Hotmail. What a great reason for a celebration!”
In the spirit of transferring e-mail from one place to another for free, MSN.CA will be offering Torontonians free rickshaw rides tomorrow on 60 rickshaws that will be running from 8:15 AM to 3:30 PM, July 13th, between Front Street and Queen Street, and from Yonge Street to Peter Street. The rickshaws and their drivers will be easily identified by their MSN Hotmail signage, T-shirts and hats. MSN.CA is also supporting the Toronto Millennium Moose in the City Campaign with the “Hotmail Moose, in celebration of 5 million Canadian Hotmail accounts,” that will reside in front of the Metro Toronto Convention Centre, the home of Comdex 2000.
Across Canada, Much Music and MSN.CA are teaming up to let Hotmail users vote on a series of ‘what’s-hot-what’s-not’ categories, one each week for four consecutive weeks starting July 17th. Voters will automatically be entered into Much Music’s What’s Hot What’s Not Contest. This contest will enable people who have Hotmail accounts (or wish to sign up for one at http://www.msn.ca ) to enter a Canada-wide contest that will ultimately determine what’s cool in hip categories. Each week a winner will be announced during Much Music’s “Go With the Flow” program. Winners will receive a Pocket PC with a wireless modem. An advertising campaign will also coincide with the Much Music/Hotmail contest to bring awareness to Canadians across the country.
About MSN Hotmail
There are 66 million active Hotmail accounts worldwide, with an average of 270,000 accounts opened daily around the world. Hotmail is now available in seven languages, including English, French, German, Spanish, Brazilian Portuguese, Italian and Japanese. Microsoft continues to build on the popular Hotmail messaging platform, adding services to enhance the capabilities and benefits of the free service to its customers. These include MSN Messenger Service, the easy-to-use Internet messaging service that provides consumers with instant and private contact with their friends, family and colleagues; and Microsoft Passport, a suite of e-commerce services that features single sign-in for all Passport-enabled sites, including MSN Hotmail. Microsoft’s Hotmail Team is also a leader in consumer advocacy and is widely recognized for its anti-spam measures and is committed to providing a safe e-mail environment for its members.
About MSN.CA
MSN.CA makes available to Canadians, on one Web page, compelling Internet services, information, entertainment, and a variety of communication options including the MSN Hotmail web-based e-mail service – the world’s leading globally accessible free web-based electronic mail service. MSN.CA also provides such high-quality interactive services as MSN Gaming Zone, MSN Web Communities, MSN Messenger Service, and many more for Microsoft Windows operating system and Apple Macintosh users. Please visit the Web site at http://www.msn.ca .
|
OPCFW_CODE
|
NSFGEO-NERC: Understanding the Response to Ocean Melting for Two of East Antarctica's Most Vulnerable Glaciers: Totten and Denman
Understanding Totten and Denman Glaciers
The snow that falls on Antarctica compresses to ice that flows toward the coast as a large sheet, returning it to the ocean over periods of centuries to millennia. In many places around Antarctica, the ice sheet extends from the land to over the ocean, forming floating ice shelves on the periphery. If this cycle is in balance, the ice sheets help maintain a stable sea level. When the climate cools or warms, however, sea level falls or rises as the ice sheet gains or loses ice. The peripheral ice shelves are important for regulating sea level because they help hold back the flow of ice to the ocean. Warming ocean waters thin ice shelves by melting their undersides, allowing ice to flow faster to the ocean, and raising sea level globally. Thus, an important question is how much sea level will rise in response to warming ocean temperatures over the next century(s) that further thin Antarctica's ice shelves. Currently, West Antarctica produces the majority of the continent's contribution to sea level. Albeit with large uncertainty, ice-sheet models indicate that Totten and Denman glaciers in East Antarctica could also produce substantial sea-level rise in the next century(s). This international study will focus on improving understanding of how much these glaciers will contribute to sea level under various warming scenarios. The project will use numerical models constrained by oceanographic and remote sensing observations to determine how Totten and Denman glaciers will respond to increased melting. Remote sensing data will provide updated and improved estimates of the melt rate for each ice shelf. Two float profilers will be deployed from aircraft by British and Australian partners in front of each ice shelf to repeatedly measure the temperature and salinity of the water column, with the results telemetered back via satellite link.
The melt and oceanographic data will be used to constrain parameterized transfer functions for ice-shelf cavity melting in response to ocean temperature, improving on current parameterizations based on limited data. These melt functions will be used with ocean temperatures from climate models to force an open-source ice-flow numerical model for each glacier to determine the century-scale response for a variety of scenarios, helping to reduce uncertainty in sea level contributions from this part of Antarctica. Processes other than melt that might further alter the contribution to sea level over the next few centuries will also be examined. On the observational side, the demonstrated deployment of float profilers from a sonobuoy launch tube in polar settings would help raise the technology readiness of operational in-situ monitoring of the rapidly changing polar shelf seas, paving the way for an expansion of observations of ocean hydrographic properties from remote areas that currently are poorly understood. In addition to being of scientific value, reduced uncertainty in sea-level rise projections has strong societal benefit to coastal communities struggling with long-range planning to mitigate the effects of sea-level rise over the coming decades to centuries. Outreach activities by team members will help raise public awareness of Antarctica's dramatic changes and the resulting consequences. This is a project jointly funded by the National Science Foundation's Directorate for Geosciences (NSF/GEO) and the National Environment Research Council (NERC) of the United Kingdom (UK) via the NSF/GEO-NERC Lead Agency Agreement. This Agreement allows a single joint US/UK proposal to be submitted and peer-reviewed by the Agency whose investigator has the largest proportion of the budget. Upon successful joint determination of an award recommendation, each Agency funds the proportion of the budget that supports scientists at institutions in their respective countries.
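For context, ice-shelf models often represent basal melt with a local transfer function of ocean thermal forcing; one widely used quadratic form (illustrative here, not necessarily the parameterization this project will adopt) is:

```latex
% \dot{m} : basal melt rate
% T_o     : ambient ocean temperature near the ice base
% T_f     : local in-situ freezing temperature (depends on salinity and depth)
% \gamma  : empirically calibrated exchange coefficient
\dot{m} = \gamma \left(T_o - T_f\right)\left|T_o - T_f\right|
```

Calibrating the exchange coefficient against observed melt rates and the profiler temperature data is exactly the kind of constraint the abstract describes.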
|
OPCFW_CODE
|
norm fit producing incorrect fit
Why does the following produce an incorrect output for muf and stdf?
import numpy as np
from scipy.stats import norm
x=np.linspace(-50,50,100)
sig=10
mu=0
y=1/np.sqrt(2*sig*sig*np.pi)*np.exp(-(x-mu)*(x-mu)/(2*sig*sig))
muf, stdf = norm.fit(y)
print(muf, stdf)
This prints
0.00989999568634 0.0134634293279
Thanks.
How do you know it's incorrect? What would you expect instead?
I would check the definition of norm and compare this to your equation for y. Do they give the same values for the same input?
The documentation of scipy.stats.norm says for its fit function
fit(data, loc=0, scale=1) Parameter estimates for generic data.
To me this is highly unclear, and I'm pretty sure that one cannot expect this function to return a fit in the usual sense.
However, to fit a gaussian is rather straight forward:
from __future__ import division
import numpy as np
x=np.linspace(-50,50,100)
sig=10
mu=0
y=1/np.sqrt(2*sig*sig*np.pi)*np.exp(-(x-mu)*(x-mu)/(2*sig*sig))
def gaussian_fit(xdata,ydata):
mu = np.sum(xdata*ydata)/np.sum(ydata)
sigma = np.sqrt(np.abs(np.sum((xdata-mu)**2*ydata)/np.sum(ydata)))
return mu, sigma
print(gaussian_fit(x, y))
This prints
(-7.474196315587989e-16, 9.9999422983567516) which is sufficiently close to the expected values of (0, 10).
You misunderstood the purpose of norm.fit. It does not fit a Gaussian to a curve but fits a normal distribution to data:
np.random.seed(42)
y = np.random.randn(10000) * sig + mu
muf, stdf = norm.fit(y)
print(muf, stdf)
# -0.0213598336843 10.0341220613
You can use curve_fit to match the Normal distribution's parameters to a given curve, as it has been attempted originally in the question:
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit
x=np.linspace(-50,50,100)
sig=10
mu=0
y=1/np.sqrt(2*sig*sig*np.pi)*np.exp(-(x-mu)*(x-mu)/(2*sig*sig))
(muf, stdf), covf = curve_fit(norm.pdf, x, y, p0=[0, 1])
print(muf, stdf)
# 2.4842347485e-08 10.0000000004
Very nice concise answer. Thank you.
|
STACK_EXCHANGE
|
What is global state management
In layman's terms, global state management allows data to be passed/manipulated among multiple components easily (breaking the chain of passing-props-forever).
Comparing the options
Context API
Released in React 16.3, the Context API creates global data that can easily be passed down through a tree of components. It is used as an alternative to "prop drilling", where you have to traverse a component tree with props to pass data down.
- Simpler to use.
- No need for third-party libraries. Does not increase your bundle size. Baked into React by default.
- It's slow. When any value in the context changes, every component that consumes the context rerenders, whether or not it uses the value that changed. High-frequency updates, or sharing the whole application state through context, would therefore cause excessive render cycles and be very inefficient, making context best suited to low-frequency updates like locale, theme changes, user authentication, etc.
Redux
A state management library used to store and manage the state of a React app in a centralized place. Redux abstracts all state from the app into one globalized state object.
- When parts of the state object get updated, only the components that use these states will rerender. Redux is more efficient when you have an app that is updating frequently.
- When the store gets updated, it gets updated immutably: the previous store is cloned with the new state values applied. This makes it possible to track previous updates and enables things such as time-traveling along the update history to help debugging. It also makes Redux easier to test, maintain and scale.
- Great debugging tools: Redux DevTools
- Out of state management libraries, Redux has the biggest community support
- Lots of boilerplate. Complex structure
- Third-party library that we have to install. Increases the bundle size
- A downside of the immutable store is that it can quickly grow into an enormous JSON object
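The immutable update contract behind Redux's time-travel debugging is language-agnostic; here it is sketched in Python purely for illustration (Redux itself is JavaScript, and this toy reducer is not from any real codebase). Every action produces a new state object, so the whole history stays inspectable:

```python
# Toy sketch of Redux's reducer contract: (previous_state, action) -> new_state.
# The old state is never mutated, so every past state survives for "time travel".
def reducer(state, action):
    if action["type"] == "INCREMENT":
        return {**state, "count": state["count"] + 1}  # clone with the change
    return state

history = [{"count": 0}]                                # every past state kept
history.append(reducer(history[-1], {"type": "INCREMENT"}))
history.append(reducer(history[-1], {"type": "INCREMENT"}))

print(history)  # [{'count': 0}, {'count': 1}, {'count': 2}]
```

Because `history[0]` is untouched by later actions, a debugger can rewind to any earlier state, which is exactly what Redux DevTools exploits.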
Redux Toolkit
- Simplified structure: removes the boilerplate of defining and using actions, reducers, and stores. Thus, it is quick to implement and easy to read.
- Makes immutable updates easier using the Immer library
- Takes care of common Redux configuration settings
- Redux is a third-party library that we have to install. Increases the bundle size (compared to Context API)
MobX
MobX is a state management library that applies functional reactive programming (e.g., @observable) to make state management simple.
- MobX uses @observable to automatically track changes through subscriptions. This removes the overhead Redux places on the developer of cloning and immutably updating data in reducers.
- Less boilerplate compared to Redux. Easier to learn
- MobX supports multiple stores, whereas Redux allows a single store. This enables us to have a separate store for UI state and for domain state (server API data). Since the UI state is kept separate, we can keep the domain state matched to the server data and make connecting to the server simple.
- MobX states are overwritten during updates. While it is easy to implement, testing and maintaining can become difficult due to the store being a lot less predictable.
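The subscription mechanism MobX automates can be sketched in a few lines of Python (illustrative only — MobX's real @observable does dependency tracking far beyond this): a write to the value notifies every subscriber, so nothing clones state by hand.

```python
# Minimal observable sketch: assigning to .value notifies all subscribers
# automatically, mirroring the idea behind MobX's @observable.
class Observable:
    def __init__(self, value):
        self._value = value
        self._subs = []

    def subscribe(self, fn):
        self._subs.append(fn)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new          # state is overwritten in place...
        for fn in self._subs:      # ...and every subscriber reacts
            fn(new)

count = Observable(0)
seen = []
count.subscribe(seen.append)
count.value = 1
count.value = 2
print(seen)  # [1, 2]
```

Note how the old value is gone after each write — the convenience that makes MobX easy to use is the same property that makes its stores harder to test than Redux's append-only history.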
Redux vs MobX
| Redux | MobX |
| ------------------------------------- | ------------------------------------------ |
| Manually tracks updates | Automatically tracks updates |
| Explicit | Implicit (a lot is handled under the hood) |
| Passive | Reactive |
| Read-only state | Can read and write to state |
| Pure. (prevState, action) => newState | Impure. state => state |
|
OPCFW_CODE
|
Re: Can you hold down the power button
I think that you mean "Access Standby" or "Mode Execute Ready".
I used to program in Coral66 which institutionalized something akin to the wonderful #defines in the article. In Coral66 keywords are in primes, e.g. 'IF', 'THEN', 'ELSE', 'BEGIN', 'END' and so on.
This was fine, except that it meant you could use spaces in variable names, while the spaces were ignored by the compiler. These all refer to the same thing:
my variable, myvariable, m y v a r i a b l e, my variable
As you can imagine, it became a nightmare trying to find a variable in a big program where developers had used spaces in their own unique way. We tried to ban the use of spaces inside variable names, but there were always the mavericks that insisted on using them, plus legacy code and the careless Friday-afternoon code written after a pub lunch.
Sorry but I won't be grieving over the damn things. Uncomfortable, slow and noisy with the turning circle of a small country. Prone to water leaking in to the cabin even on brand new ones. If you have all the bells and whistles they break. If you don't then you either freeze, boil or can't see out of the windows for condensation depending on the season. Iconic but awful.
FORTRAN - check
ALGOL - check
Assembler - lots and lots of weird CPUs (so check)
US Citizen - nope
Damn - guess NASA won't be calling me then...
Was contacted last year to see if I wanted to work on some CORAL code (no thanks). It's amazing how much of this old stuff is still kicking around.
"Joking aside, its always interesting to read this sort of research into the science of human developement and the way it gives the lie to guff like intelligent design. If we aren't evolved from ancestors who used to live in trees and walk on all fours, but were 'designed' to be the way we are today,"
That's easy to explain. God really doesn't like us; that's why he intelligently designed in all the flaws.
"bigger hammer" a.k.a. "universal persuader"...
I remember going to an RAF Finningley open day around 1975/76 and seeing a Vulcan scramble followed by a low(ish) level flypast. As cdilla says - the whole-body feel of the power of the Vulcans was incredible.
If you are like me and don't appreciate being spammed by Microsoft then don't install patch KB 3035583 (marked as "recommended"). If you did install it then uninstall this update to get rid of the MS adware for Windows 10. Unfortunately Windows Update will keep trying to reinstall this update as it is marked as "important".
Microsoft - here's a clue - installing software that is going to nag me about Windows 10 updates isn't "important", it's bloody annoying.
Microsoft - here's another clue - I will decide if and when I upgrade to Windows 10, not you. I certainly won't be installing anything until at least SP1 fixes all the broken shite in the original release. I also won't be installing an OS that breaks existing applications without bothering to tell me what it is going to break.
Epic fail Microsoft.
We need a proper numeric scale of uselessness here - following on from the article can I suggest the "Admiral Adama Index"
AAI 10 = truly and utterly and completely useless
AAI 5 = really useless
AAI 1 = almost not quite useless but you are never getting any of these things in my home thanks all the same
Even though on the old register I always asked to be removed from the edited register, Derbyshire Dales are automatically opting people in to the open register. I received a letter from DD saying I was already registered on the electoral roll and the open register, and that it was up to me to contact them to get removed. I just phoned them to be removed, and the officer agreed with me that the default should be opted-out, not opted-in, but that was the policy blah blah blah. All hail big corporations and the sale of contact information...
Question? Do councils make money from selling the Open Register or is it central guvmint that is making the money?
Been a professional user of Photoshop since 4.0. Normally upgraded every major upgrade (not the .5 versions). Effectively this pretty much worked out the same cost as CC is now. However for my needs there is just no compelling reason to upgrade to CC and many disadvantages:
- I can (and do) use CS6 completely offline for landscape photography trips to outer nowhere. I am never worried that I am dependent on Adobe staying up, and there is no threat that I can't open files because I'm away from the webs or Adobe is titsup or ...
- The much vaunted smaller incremental updates haven't happened; we still had the big bang for CC 18 months after the initial release.
- I don't really see anything in CC that is a compelling feature.
- Eventually the updates to camera raw may become an issue but there are plenty of excellent 3rd party tools that are way cheaper than CC.
- I don't want/need the additional software in CC and I ain't going to pay for stuff I won't use - that's why I only bought Photoshop and not one of the suites
Sorry Adobe but you are still not selling CC to me. The recent complete titsup episode hasn't helped your case.
Is that near Warminster-on-sea?
Don't tell 'em Pike. Don't Panic! etc.. Reckon Corporal Jones would have made mincemeat out of the aliens in Battleship. They don't like it up 'em you know.
No fuel, launch them with a solar sail powered by lasers based at home. That way its definitely a one-way trip.
<misty eyed nostalgia>
I remember the teletext modem well. My first job was to write an emulator of the Teletext for Schools service (Wordsworth) for use in Sheffield schools that had BBC micros but not Teletext modem. My version (Keats) ran standalone and provided a page editor - I remember scratching my head for days trying to work out how to do a line-drawing algorithm with Teletext graphics characters. Happy days.
</misty eyed nostalgia>
> Don't mess around with vampire bats.
Don't mess with *any* bat unless you are wearing protection. Bats in this country carry rabies. A bat worker in Scotland died from rabies a few years ago; he was bitten by an infected Daubenton's Bat while he was handling it. The advice these days is a) don't touch a bat if you don't know what you are doing and b) if you must touch a bat, wear gloves.
The equivalent Nikon "1" FT1 adapter to allow DSLR Nikkors lenses to be used on the J1 or V1 compacts is £229 quids. Ouch.
A Sheffield teaching hospital (I'm looking at you Northern General) pays £125 for log books for the wards. These very hi-tech and very expensive beasties are ruled hard-back notebooks with the pages numbered. Err. That's it. For a mere £125. Each.
The company I'm working for pays over the odds for similar notebooks at £10 each. That's less than 10% of the cost to the Northern General and still massively overpriced.
I'm gonna get me a contract supplying to the NHS...
|
OPCFW_CODE
|
Following inconsistent selection practices, such as allowing certain candidates who do not meet the passing criteria of an assessment to move forward in the process while all other candidates cannot, limits the legal defensibility and fairness of the hiring practice. However, this does not stop certain assessment end users from requesting exceptions, especially when candidate pools are limited or time is tight. In some cases, an exception is even out of a hiring expert’s control – but, as a hiring expert, it is important to understand how to best advise those who request exceptions during selection processes and how we can prevent exceptions from becoming prevalent.
In cases where an exception must be made, or an exception is out of our control, the organization's legal counsel should be consulted first. Legal counsel may approve of or strongly advise against the exception, or at a minimum, impress upon the end user the legal consequences of following inconsistent practices (e.g., the potential for being sued, fined, etc.). In any case, whether counsel supports or disapproves of the exception, they can provide best practices for compliance and process going forward.
If and when exceptions are approved, the leaders and stakeholders of those requesting the exception should be heavily involved. These individuals should take accountability for requesting the exception, along with any repercussions that may result. They should also be educated on the errors that inconsistent selection processes let in – including the potential for more exceptions to be requested. This alone could deter a hiring manager or recruiter, for example, from going forward with the exception process.
Another best practice is to put as much consistency as possible into the exception policy process. This can be done by establishing a series of rules that ultimately allows the exception. For example, let's say that your company is in need of Sales Managers, and an exception process has been approved. You have 10 candidates in the hiring process, and only 4 of them have passed the assessment. For this particular job, candidates X, Y, and Z, all of whom failed the assessment, are eligible to move forward to the next step in the process (a face-to-face interview) because they meet an established set of criteria. These criteria are: 1) they have 10 or more years of sales management experience, 2) the candidate was referred internally, and 3) an exception request was signed by the hiring manager and/or the hiring manager's leader. This example is not provided to suggest that this is exactly how one should handle an approved exception; rather, it is to reinforce that rules and criteria should govern who actually receives the exception.
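Criteria like those in the example are mechanical enough to encode directly, which is the point of a consistent exception policy: the same rule applies to every candidate. A hypothetical sketch (the field names are invented for illustration, not from any real HR system):

```python
# Hypothetical encoding of the example's three exception criteria.
# A candidate qualifies only if ALL criteria hold, for everyone, every time.
def qualifies_for_exception(candidate):
    return (
        candidate["years_sales_mgmt"] >= 10      # 1) 10+ years experience
        and candidate["internal_referral"]        # 2) referred internally
        and candidate["exception_request_signed"] # 3) signed exception request
    )

pool = [
    {"name": "X", "years_sales_mgmt": 12, "internal_referral": True,
     "exception_request_signed": True},
    {"name": "Q", "years_sales_mgmt": 3, "internal_referral": False,
     "exception_request_signed": False},
]
print([c["name"] for c in pool if qualifies_for_exception(c)])  # ['X']
```

Because the rule is explicit, it is also auditable — anyone reviewing the process later can see exactly why a given candidate did or did not receive the exception.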
Exception policies can be extremely tricky to navigate. Allowing exceptions in hiring processes goes against the core principle of a fair hiring process: consistency. When needed, consult with your legal experts and make sure all those involved in the request understand the limitations. Education is key to limiting exceptions and ensuring exceptions are being handled as consistently as possible.
|
OPCFW_CODE
|
Proxy Error Document Contains No Data
Are you new to LinuxQuestions.org? This may be because the site does not accept connections from your computer, the service may be down, or the site does not support the service or port that you tried You can also use an alternative service such as OpenDNS (discussed here), Google Public DNS, or TreeWalk. With yahoo.com, I don't get the alert until I go to the sports section.
I googled for posts about the problem at Mozilla.zine. Dave -----Original Message----- From: Rene Pijlman [mailto:[hidden email]] Sent: Tuesday, November 15, 2005 1:40 PM To: Rossignol, Dave Cc: [hidden email] Subject: Re: [Plone-Users] Re: 502 Proxy Error - Document Contains I'm using KIS beta 126.96.36.199, but the problem has also been in the past few builds. Cache settings should not do harm. http://kb.mozillazine.org/Document_contains_no_data
Examples of problematic extensions: The Skype Extension for Firefox may cause pages to continually reload or stop loading The DownThemAll extension can cause random pages or all pages to stop In this case, the error message is: "the connection timed out when trying to reach
Search the bug by DuckDuckGo or Google also. How can I ensure the timeout won't happen again ? If you really think that this is not happening in your sceneario, tell us, if doesn't invade your privacy, and we will test it. I can reproduce the error (in any browser) with a Perl script that doesn't return any information (or doesn't return it quickly enough) after a submission.
Its the standard 80 Gig hosting package from Lycos. Free forum by Nabble Edit this page Not endorsed by or affiliated with SAP Register| Login Want to sponsor BOB? (Opens a new window) Index|About Sponsors| Contact Us| Calendar| If python has an ldap module with one version, then where is the other ldap module coming from? Options:Reply•Quote Re: Document contains no data ??
Site policy | Privacy | Contact FutureQuest Community > General Site Owner Support (All may read/respond) > PHP, Perl, Python and/or MySQL "Document contains no data" error User Name Remember maybe it couldnt update some of packets. Last edit at 03/15/2014 03:01AM by guenter. Regards Dom Average of ratings: - Permalink | Show parent | ReplyRe: Proxy Error: (LOADS OF THEM) Server receieved an invalid response from an upstream server...???Matt MolloyWednesday, 4 October 2006, 12:49
- Please enter a title.
- Edited 1 time(s).
- For anyone looking in, check here for the other discussion.Average of ratings: - Permalink | Show parent | Reply Import tool not workingCannot run a simple batch file backup for mysql
- It looks like zope started okay but then as soon as I made a call to the plone site in my browser, I got the "Segmentation fault" line you see at
Maybe you have a mismatch between low-level library versions / product versions. Note You need to log in before you can comment on or make changes to this bug. Average of ratings: - Permalink | Show parent | ReplyRe: Proxy Error: (LOADS OF THEM) Server receieved an invalid response from an upstream server...???Dom LondonWednesday, 4 October 2006, 12:15 AMHi Kyle Then when I went to make more edits later in the day it wasn't working.
The registry entry is officially documented here (Windows 2000) and here (Windows XP). useful reference Try updating all extensions to the latest version first, then disable your extensions or try Safe Mode. And use a value of 300 or more MB. Format For Printing -XML -JSON - Clone This Bug -Top of page Home | New | Browse | Search | [help] | Reports | Product Dashboard Privacy Notice | Legal Terms
I have restarted Plone (2.0.3 running on redhat) but that doesn't solve any problems. Then several areas of the page has the text “Net Reset Error - The document contains no data The link to the site was dropped unexpectedly while negotiating a connection or Thanks, Dippy Like Show 0 Likes(0) Actions Go to original post Actions About Oracle Technology Network (OTN)My Oracle Support Community (MOSC)MOS Support PortalAboutModern Marketing BlogRSS FeedPowered byOracle Technology NetworkOracle Communities DirectoryFAQAbout my review here root 1634 1 0 Nov14 ? 00:00:00 /usr/bin/python2.3 /usr/lib/plone2/lib/python/zdaemon/zdrun.py -S /usr/lib/plone2/lib/python/Zope/Startup/zopeschema.xml -b 10 -d -s /var/lib/plone2/main/var/zopectlsock -x 0,2 -z /var/lib/plone2/main /var/lib/plone2/main/bin/runzope plone
The proxy server could not handle the request GET /wi/bin/apachewi/cookie/wiqt/queryTechnique/ExtractHtml. Any help would be appreciated. interestingly enough, i was able to repro the problem using a debug CVS build under linux.
If the address is known to be valid, or if the problem occurs for many sites, it may be an issue with your proxy service (if you use one) or the
Options:Reply•Quote Re: Document contains no data ?? SHould I change my hosting company or my webbased Courseware (Moodle?) It is unreliable 50% of the time on almost all links... Right-click -> Toggle. If you want, you can permanently disable negative caching by editing the registry, as shown here.
auction = $auction
$aucfile = 'http://pages.ebay.com/aw/newitemquick.html';
if (!($myfile = fopen("$aucfile", "r")))
Still the 502 Proxy Error.
The Chrome browser (CoolNovo) version of this error has this English wording. "No data received Unable to load the webpage because the server sent no data. A related DNS server issue that can result in failure of repeated attempts to open certain websites is that Windows 2000 and XP cache unsuccessful DNS lookup attempts. There isn't any issue in K-meleon as it is how web works and the way the browsers gives information to the user about an error. Options:Reply•Quote Most Document contains no data Posted by: guenter Date: March 15, 2014 03:00AM QuoteZero3K I think I managed to fix the issue by changing the maximum Disk Cache size to
WFM means Works For Me. When entering a Norwegian newspaper I get numerous alerts with “The document contains no data” as mentioned by mvdu (and the page never loads completely; “waiting for…”)I configured my Firefox 1.0.7 www.netscape.com), followed by a path to the content (or just "/").
|
OPCFW_CODE
|
Multiprocessing inside subprocess in python
Let's say I have a Python script A, which communicates with some proprietary software using its API, and this API does not allow multiprocessing/multithreading. In script A, there is a function (func) that could use multiprocessing for better efficiency. Since I cannot use multiprocessing in script A for func, I define another Python script B containing func, and then I use subprocess.run() to spawn a subprocess that runs Script B.
Script A looks as below
import subprocess
def main():
py2output=subprocess.run(["python.exe","ScriptB.py"],capture_output=True).stdout
print(py2output.decode('UTF-8'))
if __name__ == '__main__':
main()
and Script B looks as below
import multiprocessing
import time
def do_something():
print("Inside Do Something")
print("Sleeping 2 sec...")
time.sleep(2)
print("Done Sleeping")
if __name__ == '__main__':
print("In main")
p1=multiprocessing.Process(target=do_something)
p2=multiprocessing.Process(target=do_something)
p1.start()
p2.start()
p1.join()
p2.join()
The output I expected was
In main
Inside Do Something
Sleeping 2 sec...
Done Sleeping
Inside Do Something
Sleeping 2 sec...
Done Sleeping
The output I got was
Inside Do Something
Sleeping 2 sec...
Done Sleeping
Inside Do Something
Sleeping 2 sec...
Done Sleeping
In main
In the output, the main function (Script B) seems to get executed at the end?? Could anybody explain what is happening in the above scripts?
EDIT How I really communicate with the proprietary software:
So the proprietary software provides me with an .exe (soft.exe) file. The way I actually run it is to open cmd and type the following command:
path_of_the_proprietary_soft_exe\soft.exe path_to_the_script_A\ScriptA.py -args _additional_argurments_here
So, in a nutshell, you could say that soft.exe runs my ScriptA.py, which then, in turn, uses ScriptB.py to create multiple processes. Since soft.exe controls the main process (ScriptA.py), it does not let me spawn additional processes. I hope I was clear with this information.
ScriptA is not valid code; there are some mistakes with brackets. Could you edit your post so the code is valid?
I don't quite get the constraint. If the API cannot be used concurrently, that shouldn't prevent the script from doing other tasks using concurrency (multiprocessing, threading, …). Are you sure you aren't trying to solve an XY Problem?
I cannot reproduce the issue with the code shown: I get the output In main, followed by both subprocesses pre-sleep, followed by both subprocesses post-sleep, as I would expect. Note that the print("In main") must execute before the subprocesses are even started.
@MisterMiyagi I was able to reproduce the issue when running the script normally. I tried debugging it with VS Code and it produces the expected output. Could it be something with the timing of the threads?
@Tzane Hm, I was able to reproduce each multiprocess having its own output block, but the main print still comes first (and putting one at the end comes last). Adding time.asctime() to the prints shows that they are created at the correct time, but not captured. I assume it's due to buffering, which uses different defaults for tty and non-interactive callers.
@MisterMiyagi I had added additional information on how I actually use the API methods and communicate with the proprietary software. Regarding the issue, I was able to reproduce the issue when I tried with the above Scripts in my VS Code. Hope this helps.
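As the buffering comment suggests, the reordering is a stdio effect: with capture_output=True the child's stdout is a pipe, so Python block-buffers it. "In main" sits in Script B's buffer (flushed only at exit, after the joins), while the worker processes flush their own buffers when they exit earlier. Flushing each print, or running the child with python -u, restores the expected order. A self-contained sketch, with a stand-in for ScriptB.py written to a temporary file (the file name is generated here, not the real ScriptB.py):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Stand-in for ScriptB.py: same shape as the question's script, but with
# flush=True so the parent's line reaches the pipe before the workers run.
script_b = textwrap.dedent("""\
    import multiprocessing

    def do_something():
        print("Inside Do Something", flush=True)

    if __name__ == "__main__":
        print("In main", flush=True)   # flushed immediately, not at exit
        p = multiprocessing.Process(target=do_something)
        p.start()
        p.join()
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script_b)
    path = f.name

# -u additionally makes the whole child unbuffered, belt-and-braces.
out = subprocess.run([sys.executable, "-u", path],
                     capture_output=True, text=True).stdout
os.remove(path)
print(out)
```

With the flushes in place, "In main" appears before the worker output in the captured stream, matching what the question's author expected.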
|
STACK_EXCHANGE
|
Add diagnostic headers to Akamai'd websites
Akamai debug headers make it much easier to figure out what's happening with websites fronted by Akamai. This simple, lightweight extension adds Akamai debug HTTP headers to your HTTP(S) requests, providing extra information like cache hits/misses, TTLs and cache keys. To see it in action, download it now and test it out on reddit.com, dailymail.co.uk or xbox.com
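What the extension injects can be reproduced outside the browser by setting the Pragma header yourself. A minimal stdlib sketch — the directive names come from Akamai's documented debug Pragma values (several are mentioned in the reviews below), the target URL is just a placeholder, and edge servers only answer these when debug headers are enabled for the property:

```python
# Build a request carrying Akamai debug Pragma directives, as the extension does.
from urllib.request import Request

DEBUG_PRAGMA = ", ".join([
    "akamai-x-cache-on",             # response gains X-Cache (hit/miss, edge node)
    "akamai-x-check-cacheable",      # response gains X-Check-Cacheable (YES/NO)
    "akamai-x-get-cache-key",        # response gains X-Cache-Key
    "akamai-x-get-extracted-values",
])

req = Request("https://www.example.com/", headers={"Pragma": DEBUG_PRAGMA})
print(req.get_header("Pragma"))
```

Sending the request (e.g. with urllib.request.urlopen) and inspecting the response headers then shows the same X-Cache / X-Cache-Key diagnostics the extension surfaces in DevTools.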
- (2020-09-04) Juan Luis Baptiste: It is not working, no additional headers appear when looking at a request from a site being on Akamai.
- (2019-08-03) Jesse Coddington: Causing ERR_SPDY_PROTOCOL_ERROR errors on a ton of sites. Also since Chrome 76, this extension is causing the following error: Access to XMLHttpRequest at '...' from origin '...' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status. Hopefully this gets fixed
- (2019-04-01) Eddie Yee: Useful but needs to get fixed. The extension definitely still causes ERR_SPDY_PROTOCOL_ERROR errors on many sites that use the SPDY protocol.
- (2018-10-17) Kiran Edem: This plugin is causing sites like Adobe.com and expedia.com not to load. The moment I add this plugin, I am seeing ERR_SPDY and 502 errors on a few websites.
- (2018-07-18) Warren Hohertz: NOTE: This extension may cause intermittent ERR_SPDY_PROTOCOL_ERROR errors in Chrome. The Akamai Engineering team are working on resolving this issue, but in the mean time, the following Solution / Workaround can be used. Solution: Remove the "akamai-x-get-extracted-values" Akamai Pragma header from the HTTP request header when accessing the site
- (2018-02-15) Aaron Silvas: Was great, but on some Akamai routes it's resulting in ERR_SPDY_PROTOCOL_ERROR
- (2017-10-12) Steve Howard: Doesn't appear to work on newer releases.
- (2017-07-18) It does what it should do, no bug sighted. Nice one. But I wish I could have switched this extension on/off, without opening an extensions management panel on Chrome. Awesome as Akamai is, more than 30% of the internet sites are using Akamai these days. That means this extension is basically slowing down Chrome for 30% of the time.
- (2017-05-12) Tommy Lee: [translated from Russian] Good day! We are ready to buy your extension at a good price, or we can help you monetize it. Contact me for details. [email protected] Skype: Tendy.Max
- (2017-01-10) Ernesto José Pérez García: It would be nice if it had a way of enabling/disabling it easily from the button in the top bar. I don't want to send the Pragma header for all sites, so I have to manually enable/disable the plugin.
- (2015-12-29) Jerry Wan: very good
- (2015-10-21) alaa b. amireh wara: good
- (2015-06-15) Evan Jongske Sangma: Excellent tool. Loved it. Good job developers.
- (2015-03-23) Aneesh R: Great plug-in for debugging akamai header values and caching rules. Excellent.
- (2015-02-26) Barney B: Awesome extension.
- (2014-11-06) Josh Deltener: It just works. Easy as pie.
- (2014-08-26) Excellent product for debugging Akamai caching.
- (2020-09-15, v:1.1) Patrick N Van Staveren: Fix or remove this extension from the web store
Hi there, There are a lot of complaints about this extension already, and I ran across the same issue with it today. The extension by default is "on for all sites" but it breaks CORS changes made in Chrome v76. You need to update it or pull it from the store (or I will continue saying bad things about Akamai and move to a better CDN!) https://support.google.com/chrome/thread/11089651?hl=en&msgid=11328844 HTH Patrick van Staveren Global Head of IT Infrastructure Mintel Group Ltd
- (2019-08-09, v:1.1) Francois Boulais: Chrome 76
All CORS requests are blocked by the Chrome, only when this extension is enabled. Websites are broken (ex: google fonts don't load). With the latest Chrome version (76.0.3809.100) See: https://support.google.com/chrome/thread/11089651?hl=en
- (2019-05-13, v:1.1) Oliver Bacher: Same here ERR_SPDY_PROTOCOL_ERROR all over the place
Chrome is 74
- (2018-02-14, v:1.1) Dhanesh Jaipal: Seeing ERR_SPDY_PROTOCOL_ERROR
Hi, When I use this extension and try to access a particular site, it was throwing this error - ERR_SPDY_PROTOCOL_ERROR - and made the webpage unavailable. My Chrome version is 'Version 64.0.3282.168 (Official Build) (64-bit)'. It used to work earlier but for some reason it is making the website unreachable. The site I accessed was jabong.com and ftd.com Thanks, Dhanesh
- (2017-05-10, v:1.1) x-akamized-staging
Hi, For some strange problem, I am not getting x-akamized-staging response header after spoofing the IP to test my staging network. Thanks, Sandesh
- (2017-04-17, v:1.1) Casey Manion: ncaa.com
Repo: Enable plugin Navigate to ncaa.com (or www.ncaa.com) Actual (bug): Location header redirect to error page Expected (fix): No redirect to error page
- (2016-06-28, v:1.1) Amith Hariharan: X-Check-Cacheable always showing NO
X-Check-Cacheable always shows NO even though the request is cacheable, in Ubuntu Chrome (51.0.2704.103 (64-bit))
- (2016-05-18, v:1.1) Naveen Ullattil Parakkunnath: Akamai edgescape header and True client IP
Hi, Do we have Akamai edgescape and true client ip in the pragma? if not, can we add that please? Thanks, Naveen
- (2015-10-12, v:1.1) Gonzalo Servat: How to disable plugin on-the-fly?
Some Akamai sites that have WAF protection will block me as the request headers I send include the Akamai pragma options, and I can't disable this plugin on-the-fly. Is there a way to do this? If not, consider this a suggestion.
- (2014-11-06, v:1.0) Josh Deltener: More Pragma
It seems you are missing: akamai-x-feo-trace akamai-x-get-request-id Having these would be swell.
- (2014-10-02, v:1.0) Sathish Kumar: Not working when chrome remote debugging with mobile device
Akamai headers are not working when Chrome remote debugging with a mobile device: https://developer.chrome.com/devtools/docs/remote-debugging
|
OPCFW_CODE
|
There is a lot of misinformation and there are even outright lies being perpetuated by the media about the economy. Similar observations even show up in the liberal blogosphere. In this article I will offer a different, uncomplicated perspective. My purpose is to make a deliberately abstruse topic more easily understandable. I will try to avoid value judgments, and simply report on the way I have observed the economy to behave. It is not meant to be an economic treatise, nor to advance any particular school of thought, such as Neoclassicism, Keynesianism, the Chicago School, the Austrian School or Marxism. I hope my observations will be of use to some, though I am sure many will take exception to my comments.
Because of capitalist hegemony, I will restrict my comments to capitalism; specifically laissez-faire capitalism since the U.S. economy is headed in that direction.
There are really only two things that most people need to know about laissez-faire capitalism (in the future, when I mention capitalism, I will mean laissez-faire capitalism).
First, under capitalism, only money has value.
- Other items have value only to the extent they can be converted to money or can generate money. This includes things such as labor, commodities and property.
- What cannot be converted to money has no value and is often eliminated. This can include people.
- Profits are more valuable than the ecosystem or worker safety.
Second, the purpose of capitalism is to move as much money as possible to the top 0.1% of society, from those who are not (and will never be) at or near the top. Corollaries:
- Wealthy individuals, with few exceptions, do not come by their fortune through their own productive labor. Instead, they appropriate as much as possible from other people's productive labor. Capitalists themselves believe that they are entitled to this wealth, even if they did little to earn it.
- Illegality for the elites is inconsequential. Even if something is technically illegal, if it is not prosecuted it becomes de facto legal.
- Governments work either for their people or for the rich; they cannot work for both. In virtually all western societies, the ultra rich (individuals and corporations) have captured their governments to a greater or lesser degree. For the U.S. federal government, this capture is virtually complete. Once the privileged class has control of the government, they can have whatever laws they want passed, including those that make their crimes retroactively legal.
There, in 241 words (Principles and Corollaries), is the essence of capitalism as actually practiced. Once these points are understood, the machinations behind current events in the areas of economics, politics and foreign affairs become evident. This article was deliberately presented in black and white. Those who want grey can get it from the mainstream media.
|
OPCFW_CODE
|
Comparing Linux and Windows when selecting a server operating system is like reaching a stalemate in a chess game: the outcome is unpredictable. Plenty of versions of Microsoft Windows and of Linux-based operating systems are available today. But deciding on the "best" option is the harder task; finding the right solution that fits the organization's requirements is easier.
Stability is one of the most important considerations when choosing a server system. Linux systems are widely renowned for their ability to run without failure for several years. Crashes are rare, which means far fewer opportunities for downtime, and Linux handles a large number of processes running at once.
Unlike other systems, Linux servers rarely require a reboot. Almost all Linux configuration changes can be made while the system is running, without affecting unrelated services. Defragmentation is also never required on a Linux system.
Looking at the security aspects, Linux is innately more secure than its competitors: being based on UNIX, it was designed from the start to be a multiuser operating system. Only the administrator, or root user, has administrative privileges, and fewer users and applications have permission to access the kernel or each other. That keeps everything modular and protected. On Linux, the system admin always has a clear view of the file system and is always in control.
Another advantage of Linux is that the system is slim, trim, flexible, and scalable. It performs admirably on just about any computer, regardless of processor or machine architecture. A Linux server operating system can be reconfigured to run only the applications the organization requires, further reducing memory requirements, improving performance and keeping things even simpler. Moreover, the total cost of ownership is low, as the OS itself is free of charge as well as open source.
The Windows operating system from Microsoft, with over 75 percent of the OS market share, is widely used among organizations due to its standardized features and user friendliness. Regardless of the cost and licensing factors, many admins across organizations prefer the Windows server OS. The Microsoft server operating system additionally provides the ability to run applications over the Internet with Remote Desktop Services, enabling end users to run software without installing it on their PCs. But with Microsoft, users running physical servers will likely keep the hardware for around five years before a refresh, which can mean Microsoft's support for the product ends before the hardware is upgraded. Using an operating system beyond its lifecycle then opens up potential security issues as well.
Another consideration is that Windows Server is well equipped to support the majority of Microsoft products. It provides seamless support for Active Server Pages (ASP), a widely used technology for developing database-driven, dynamic web pages.
When considering Microsoft Windows, cost is also an important factor, as the license fees are expensive. The more employees, the more expensive it becomes.
It cannot be denied that Linux is better suited than Windows for use as a server. Linux is generally free while Windows comes at a price, though Windows offers seamless support options and features. In addition, with Linux there is no commercial vendor trying to lock users in, allowing full freedom to mix and match applications depending on the user's requirements.
|
OPCFW_CODE
|
New intelligent search capabilities and experiences powered by the Microsoft Graph help you find content and people faster and discover relevant knowledge from across your organization. At Microsoft, our vision of search in the enterprise centers on bringing personalized search results to wherever you may be working, from across your intranet, and even the internet. By infusing artificial intelligence (AI) into our familiar experiences in Office, search and discovery become part of your everyday work, rather than a separate destination.
Routine tasks such as finding the right version of a document, getting back to a presentation you were editing, or learning about a topic you are not familiar with by locating experts and content, are now simpler and faster. Harness your organization's collective knowledge by fully leveraging who you know and what they know to improve decision making and unlock creativity.
In May at the SharePoint Virtual Summit, we announced that personalized search was coming to the SharePoint home in Office 365. Today we're delighted to announce that search in SharePoint is becoming smarter and faster to use, and will surface more relevant results from across Office 365. Search results will be personalized for you by the Microsoft Graph.
Microsoft Graph gleans insight from the people, sites and documents you work with, and ranks search results relevant to your needs. You'll still be able to see all the results that satisfy your query, but personalized search will prioritize the results that are most likely to achieve your objective.
The search experience has been redesigned and streamlined to make it easier for you to find and filter results. Results will now include list items everywhere, not just in the enterprise search center, so all content in a SharePoint site is now included.
Find what you need, faster, with personalized search results and a streamlined search results page in SharePoint.
You'll also find that search itself is faster than ever in SharePoint. Recent performance improvements and a smoother, faster interface free up your time by helping you re-use what has been created before.
Personalized search results in SharePoint home will be available later this year, across all geographies, and the new search interface will roll out at the same time.
Within the search experience, you can live preview files to quickly ascertain if this is the content you are looking for.
Whether it's apps, documents, email messages, company resources, or people, across local devices and Office 365, that same personalized search experience is now in Windows, giving your employees a place to search for and quickly surface what they're seeking, right from the taskbar. Find documents locally and in Office 365; you can even query content inside a document. If you don't remember whether you put a document in a folder on your PC, in OneDrive, or in a group in SharePoint, searching via Windows is the fastest way to find it. Searching for people also demonstrates intelligent discovery: based on who you work with most, you'll see their contact options to connect instantly.
Find colleagues straight from Windows; even searching by just a first name quickly surfaces the people you work with most.
Search is built into so many of our activities during the work day, and to support you further, we are announcing the private preview of a new modern workplace capability.
Bing for business is a new intelligent search experience for Microsoft 365, which uses AI and the Microsoft Graph to deliver more relevant search results based on your organizational context. This new experience from Bing for your enterprise, school, or organization helps users save time by intelligently and securely retrieving information from enterprise resources such as company data, people, documents, sites and locations as well as public web results, displaying them in a single experience. Bing for business can be used with a browser on any device, transforming the way employees search for information at work, ultimately making them more productive.
Using your browser or Windows search, you can securely surface work related results in addition to web results as part of one simple interface.
Bing serves as a great entry point when you don’t know where to start your search – easily discovering content from your intranet and the internet. Activities such as a quick look up of a colleague you haven’t met before, surfaces their profile info right at the top of the browser search results, linking to their Delve profile and a helpful map to their office location. And as you work through your day, you might take a couple of minutes during your lunchbreak to look up ‘how to pay my healthcare provider’ where Bing for business will surface a best bet directly to the relevant intranet page.
Shielding queries sent to the internet is also important for employees. Bing for business offers enhanced protection for your Bing web searches and treats your enterprise data in a compliant way. Searching with Bing for business requires Azure Active Directory authentication to access results, and the results returned are ones the authenticated user has access to, coming directly from the trusted cloud. Search queries are anonymized, aggregated across all companies and separated from public Bing search traffic. Additionally, these queries are not used to display targeted ads based on your work or company identity, and company-specific queries are not viewable by advertisers. This provides a level of protection unavailable anywhere else in the industry.
Bing for business is available for private preview starting today and will be available as part of existing subscriptions to Office 365 Enterprise E1, E3, E5, F1, Business Essentials, Business Premium, and Education E5 subscriptions. Get more information about Bing for business and if you are interested in receiving an invitation to participate in the private preview, visit http://aka.ms/b4bprivatepreview.
Over the next year as Bing increases its connectivity scope within organizations, you will see more than simple search results and a move to actionable insights. Bing for business uses Machine Reading Comprehension and Deep Learning to understand the intent of the question across all documents in your enterprise. And since it knows who you are based on your authentication session, it can synthesize the best answer for your specific query across all the documents you can access - from the public internet to your private intranet. Using both the AI of Bing and the intelligence of the Microsoft Graph, a search will extract and surface up tasks and approvals directly in the browser for action. For example, expense reporting sourced from ERP system data will be available for approval. Asking natural language questions with answers that have been automatically extracted from existing intranet resources will also be possible. Combining the best of your intranet and internet experiences in a careful, considered way helps you find what you need to know fast, powered by a consistent set of results from the Microsoft Graph from across Office 365, of course including SharePoint.
Over the next year, you'll see this same question and answer capability with the ease of natural language conversation available across other Microsoft 365 experiences, including SharePoint.
The new personalized intelligent search and discovery features are extending to wherever you get work done. We're also pleased to have recently announced the evolution of your starting point for all Office 365 services, Office.com. When you log in to Office.com, you'll see a beautiful new experience, optimized to help you discover relevant documents in your intranet, surface your most recently used content, sites and folders, and learn about the other capabilities of Office that help you unlock your creativity. It offers personalized search across Office 365, your web apps and content, people and SharePoint sites, right from Office.com, and a recommended section surfaces the recent activity that's most relevant to you.
Visual content is a rich source of information and the SharePoint and OneDrive teams have been working hard to tap that source to make them an even better experience for you.
This summer, we announced the ability to search for photos using the objects that are in them. When you upload an image into OneDrive or SharePoint, whether a snap of a whiteboard, a business card, a receipt, a screen shot, a vector graphic, line drawing or even x-ray film, we'll automatically detect it, and make it available in search, without you having to do anything other than upload the image.
Augmenting that, we're pleased to announce the ability to extract text out of images, whether they were originally printed on paper or born digital. We're using intelligent services behind the scenes, so your search experience doesn't change in OneDrive or SharePoint. Connect to people, events, projects and meetings using both detected objects and image metadata, such as date and location.
Upload your images through your iOS or Android phone or PC, and they'll become searchable soon after. This feature will be made available within a couple of weeks after Ignite.
Extract text and objects for easy recall. Use them as triggers to classify in SharePoint and OneDrive and we'll have more to share early next year on integrating even deeper into your workflow.
Our commitment to your privacy and control over your own data has never been stronger. Today we are announcing support for Multi-Geo Search, whether your data is stored in many data regions around the world or just a couple. The index of the data will be stored locally in the region with the data, and the search results will span and unify those indices based on your search and location.
Multi-geo search capabilities for Microsoft 365 will surface in the enterprise search center, OneDrive, SharePoint home and Delve, and as a customer you will be able to add a query parameter to enable multi-geo search anywhere.
We're excited to share these new capabilities to help you harness collective knowledge and unlock creativity in your organization. Happy finding, less searching!
|
OPCFW_CODE
|
M: The Swedish Model - olalonde
http://online.wsj.com/article/SB10001424052748704698004576104023432243468.html
R: EgeBamyasi
What Johnny fails to mention is that since 2006 (a major shift in the
parliament from socialism (no, not Soviet socialism, rather "I don't care how
much money you have in your pocket, you are no different from a poor man, NOW
GET BACK IN LINE!") to the right wing (Well hello there Mr. Blue Blood, you see
that poor man right at the front of the line? Yeah, him. Well, go take a piss on
him and take his place. You deserve it, you are after all rich!)), more and
more of the good things about Sweden have disappeared.
The leader of Alliansen, Moderaterna, has in the past voted against universal
suffrage (1904-1918), 8-hour workdays (1919, 1923), the right for women to
vote (1919), two weeks' paid vacation (1938), free lunches at schools (1946),
three weeks' paid vacation (1951), and civil unions (1994). That's not even the
tip of the iceberg; that's just a snowflake about to land on top of it.
Sweden used to be about social security, the needs of the many outweighing the
needs of the few, the arts, and giving anybody a fighting chance. That was the
Swedish model.
It's getting colder, and it's getting cold really fast. The Swedish model 2011:
getting rid of the expensive social security and lowering the taxes for the
wealthy.
R: olalonde
Actually, what brought prosperity to Sweden was capitalism, not socialism.
Those good things you mention are good things in intention, not necessarily in
results.
[http://www.youtube.com/watch?v=ENDE8ve35f0&feature=autof...](http://www.youtube.com/watch?v=ENDE8ve35f0&feature=autofb)
R: EgeBamyasi
Of course capitalism is a must for building up a welfare, it provides the
technical and economical foundation for a working socialist state, I'm not
going to argue about that.
However, as a Swede I have seen the direct changes in the community due to the
parliament shift, not all bad of course. But when a country that used to take
care of its own all of a sudden points the middle finger at those who are
in need, I feel angry, disappointed and somewhat alienated from my own
country. I was brought up being told that tax money was going to hospitals,
schools, unemployment benefits etc., because everyone has the right to
these things, and everyone has the duty to finance them. Now what is happening
is that we have the same tax rate as before, except if you are wealthy, in which
case the taxes have dropped or even disappeared, and more and more of the things
that were a right in Sweden are becoming a privilege for the people who don't
need it, financed by those lower on the social ladder.
Oh, and now we have something called RUT-Avdrag. It's basically a scheme where
the state pays 50% if you employ a maid or call in someone to clean your
house. It was meant to "create more work" and "help ordinary stressed career
people". As it turned out, it hasn't created many new jobs, and the average
middle-class stressed career person would save more money by taking 4 hours off
work to clean by him- or herself than by hiring a cleaner for 4 hours. This
was one of the first things Alliansen did for Sweden after they took over the
parliament. Oh yeah, and the wealth tax was quickly gone too.
Sure, there are some things that should be privatised, the railroad to
name one (it doesn't work, at all), but you don't fuck with the health care,
the school system and the unemployment benefits, to name a few (the things that
made Sweden the awesome country it was), to finance stuff such as RUT-Avdrag
and getting rid of the wealth tax, and then tell the world "Hey! Look at us and
our marvelous economic model".
Also, the person who wrote the blog post is a party member of Moderaterna, which
is responsible for all these changes, so it's his view.
R: willvarfar
It annoys me that he keeps saying 'Stockholm' when he means parliament.
The general gist, whilst he makes his bias clear, is fairly well put and
supported.
What worries me is the trajectory he wants to launch on:
"Stockholm has also introduced a law that empowers Swedes to chose their
providers for health care and other public services. This has led to a robust
surge in entrepreneurship within the health-care sector, where more
competition is bound to improve services."
Hmm. Right. That works really well elsewhere in the world, right?
We need to get more for our money in healthcare, but I think that comes from
effectiveness at the regional level and not from inviting private companies to
run healthcare for profit.
The school system with 'free schools' - schools run by companies and for a
profit - has been, in my direct experience, despicable.
By all means privatise the various misc things that the country still has a
stake in. But run healthcare and education on a non-profit basis.
I would go off on a thing about pensions, but that's a big Ponzi scheme
whichever way you look at it. It's not like I or anyone else has an honest
answer.
/Swede.
|
HACKER_NEWS
|
21-06-2023 MUA Project Cycle committee note: this is the winning project of the MUA project cycle January - July 2023.
THIS IS AN MUA PROJECT PROPOSAL THAT WILL BE OR HAS BEEN SUBMITTED FOR POSSIBLE MUA FUNDING. ANY EXISTING TRACKER ITEMS THAT THIS MIGHT DUPLICATE SHOULD BE LINKED TO BELOW.
This proposal is for an MUA project to incorporate the Ordering question type (qtype_ordering) into the core Moodle code. This is the official Moodle Association of Japan proposal for the fall project cycle of 2021.
Ordering is a standard e-learning activity applicable in many, if not most, fields of learning. However, it has never been possible using Moodle’s standard question types. To address this need, the Ordering question type was created and released for Moodle 2.x in 2013. Since then, it has undergone many years of development and bug-fixing, and has incorporated many suggestions from the Moodle community, particularly with regard to scoring methods and accessibility on different devices and browsers. It is now a stable and popular plugin that enjoys widespread use across the globe. It is currently the 6th most downloaded question type, as listed on Moodle's official download statistics page (see the link below for the full list).
The ordering question type displays several items in a random order which the user then drags into the correct sequential order. An ordering question can display all items or a subset of items.
- items can be plain text or formatted HTML, including text, images, audio and video.
- items can be listed vertically or horizontally
- several grading methods are available, ranging from a simple all-or-nothing grade, to more complex partial grades that consider the placement of each item relative to other items
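The plugin's actual scoring code is PHP and lives in the repositories linked below; purely as an illustration of the grading styles just described, a relative-placement partial grade can be sketched like this (a simplified Python sketch, not the plugin's API; function names are hypothetical):

```python
def relative_order_score(response, correct):
    """Fraction of item pairs whose relative order in the student's
    response matches their relative order in the correct sequence."""
    pos = {item: i for i, item in enumerate(response)}
    pairs = in_order = 0
    for i in range(len(correct)):
        for j in range(i + 1, len(correct)):
            pairs += 1
            if pos[correct[i]] < pos[correct[j]]:
                in_order += 1
    return in_order / pairs if pairs else 1.0

def all_or_nothing_score(response, correct):
    """The simplest grading method: full marks only for a perfect order."""
    return 1.0 if list(response) == list(correct) else 0.0
```

For example, with the correct order a-b-c-d, a response that only swaps the last two items still gets 5 of the 6 item pairs right, so the relative-placement grade is 5/6 rather than zero.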
The code for this plugin is publicly available via the Moodle repository (since 2013):
- Moodle repository:
- Developer’s repository on Github.com
The Ordering question type is maintained and developed by Gordon Bateson, who is an experienced Moodle developer, with recent updates and code reviews by the Open University of the UK, led by Tim Hunt, the lead maintainer of the Quiz module. The code adheres to Moodle coding guidelines and would require minimal effort to incorporate it into Moodle core.
- Project size: small
- Audience: all schools, universities, and workplaces
- Target users: teachers who wish to create Ordering questions on a Moodle site, students who will interact with such questions, and administrators who maintain Moodle sites that include Ordering questions.
The goal of this project is to add the Ordering question type to Moodle core, to make it available to sites that cannot install 3rd-party plugins, and to ensure that it continues to be maintained into the future.
As an admin, I should be able to install the Ordering question type as part of a standard Moodle installation and update the Ordering question type as part of the standard Moodle core code.
As a teacher, I should be able to add new Ordering questions to the Moodle question bank, insert those questions into Quizzes, Lessons, and other activity modules that use questions, and then view the responses that students make to those questions. When adding or editing an Ordering question, I should be able to specify not only standard question settings, such as the question prompt, the default score, and standard feedback but also the text and multimedia to be displayed in each ordering item. I should also be able to adjust settings that control the layout, appearance, and scoring of the question.
As a student, I should be able to view an Ordering question in a Quiz and see the items listed horizontally or vertically in some random order. I should be able to drag the items into a different order and then submit that new order as my response to the question. Later, if my teacher allows it, I should be able to see a score for my response, as well as automatically generated feedback regarding whether my response was correct, partially correct, or wrong.
... Include mockups, screenshots from similar products, links to demo sites ...
|
OPCFW_CODE
|
Giaosucan’s blog – sharing knowledge in a way that is awesome
The story of DK
DK is the world’s fifth-largest electronic component distributor. They have built an ERP Distribution Management System to serve retail distribution since 2002 on the MFC platform, written in C/C++ with over 2 million lines of code. It has been almost 20 years now.
The current system is experiencing the following problems
- Many technologies developed more than 20 years ago such as MFC are outdated, especially when .NET, .NET core was born
- The original design was a monolithic design that became cumbersome due to too many added functions, difficult to maintain, and the build and deploy source code took a long time. Deploying to expand the system is really complicated
- It is difficult to change the framework: the system is written in C/C++ and MFC, and converting to .NET Core and C# would mean rewriting almost everything. They decided to transition the architecture from monolithic to microservices, aiming to take advantage of microservice benefits such as easy deployment, the freedom to use many technology stacks, and scaling. However, that is easier said than done; although the microservice architecture has many advantages over a monolithic one, the DK engineering team encountered the following problems.
- The DKE database has more than 500 tables in Oracle DB, and the relationships between tables are extremely complex. Each table has on average several hundred thousand records. How do you break this database out into multiple databases in a microservice architecture, and how do you migrate the data?
- Migration from monolith to microservices does not happen overnight; it takes years. As such, it is not possible to take the monolithic system down for the transfer; it must be done as a gradual roll-out. So how do these 2 systems run in parallel?
- During the migration process, the old system is still being updated continuously; how do you reflect those changes in the new system in time? In addition, there are countless other challenges.

Solution
For nearly two years, DK engineers have been testing a variety of methods to perform the migration. This article shares some solutions at an overview level. DK's business is a distribution management system, in short, retail distribution. The domain-driven design (DDD) method was applied to analyze the business and break the system out into individual modules such as Order, Payment, and Sale. To understand what DDD is, readers can refer to the Understanding Domains series on giaosucan’s blog.
From the results of the DDD business analysis, the engineers were able to estimate the modules and the number of microservices to create. The microservices are separated into internal and external services. The internal services, written in NodeJS, interact directly with the database and exchange asynchronous data with the external services (which expose the API to the client) via a message queue. Later the internal microservices were converted to .NET Core. The API layer was written in .NET Core and used the Swagger library to generate API specs.
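As a rough illustration of that internal/external split (a Python sketch with an in-process queue standing in for the real message broker; DK's actual services are NodeJS and .NET Core, and the names here are invented):

```python
import json
import queue

broker = queue.Queue()  # stands in for a real message broker (RabbitMQ, Kafka, ...)

# External service: exposes the API and publishes commands to the queue.
def handle_create_order_request(order):
    broker.put(json.dumps({"type": "order.create", "payload": order}))
    return {"status": "accepted"}  # responds before the database write happens

# Internal service: consumes messages and talks to the database.
def internal_worker(db):
    while not broker.empty():
        msg = json.loads(broker.get())
        if msg["type"] == "order.create":
            db.append(msg["payload"])  # the list stands in for the real Oracle write

db = []
resp = handle_create_order_request({"id": 1, "item": "resistor"})
internal_worker(db)
```

The point of the split is that the external service can acknowledge a request immediately while the internal service applies it to the database asynchronously, which is what makes the two sides loosely coupled.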
According to microservice best practice, to ensure loose coupling, the database is split into many small databases, each owned by a microservice. However, theory is one thing; putting it into practice with an interrelated, mutually dependent database holding millions of records means breaking it out is not a simple matter. The team used break-out and data-migration methods to create a database owned by the sales service and a database owned by the payment service. Oracle GoldenGate was tested to migrate data between the new databases and the old database.
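GoldenGate replicates changes at the redo-log level; the underlying idea, forwarding change events from the legacy store to the new service-owned store, can be sketched in application code (a toy Python sketch of change-data-capture, not how GoldenGate itself works):

```python
# Toy change-data-capture: every write to the legacy store emits a change
# event, and a replicator applies those events to the new per-service store.
changelog = []

def legacy_write(legacy_db, key, value):
    legacy_db[key] = value
    changelog.append(("upsert", key, value))  # capture the change

def replicate(new_db):
    """Apply captured changes, in order, to the new service-owned database."""
    while changelog:
        op, key, value = changelog.pop(0)
        if op == "upsert":
            new_db[key] = value

legacy, payments_db = {}, {}
legacy_write(legacy, "payment:42", {"amount": 100})
replicate(payments_db)
```

Running the replicator continuously is what lets the old and new systems stay in sync while both are live, which is the parallel-run problem described above.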
Although testing on individual modules produced promising results, breaking out the whole database is still a complex problem due to the conversion from strong data consistency (ACID) to the BASE model of microservices. The migration to microservices is still at the level of the code logic; the microservices still point to a single database. To learn how DK's engineers handled this, read the next article.
|
OPCFW_CODE
|
I’ve been looking into how people comment on data and visualization recently and one aspect of that has been studying the Guardian’s Datablog. The Datablog publishes stories of and about data, oftentimes including visualizations such as charts, graphs, or maps. It also has a fairly vibrant commenting community.
So I set out to gather some of my own data. I scraped 803 articles from the Datablog including all of their comments. Of this data I wanted to know if articles which contained embedded data tables or embedded visualizations produced more of a social media response. That is, do people talk more about the article if it contains data and/or visualization? The answer is yes, and the details are below.
While the number of comments could be scraped off of the Datablog site itself I turned to Mechanical Turk to crowdsource some other elements of metadata collection: (1) the number of tweets per article, (2) whether the article has an embedded data table, and (3) whether the article has an embedded visualization. I did a spot check on 3% of the results from Turk in order to assess the Turkers’ accuracy on collecting these other pieces of metadata: it was about 96% overall, which I thought was clean enough to start doing some further analysis.
So next I wanted to look at how the “has visualization” and “has table” features affect (1) tweet volume, and (2) comment volume. There are four possibilities: the article has (1) a visualization and a table, (2) a visualization and no table, (3) no visualization and a table, (4) no visualization and no table. Since both the tweet volume and comment volume are not normally distributed variables I log transformed them to get them to be normal (this is an assumption of the following statistical tests). Moreover, there were a few outliers in the data and so anything beyond 3 standard deviations from the mean of the log transformed variables was not considered.
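That preprocessing step can be sketched as follows (a minimal Python sketch; the original analysis tool isn't stated, and `log1p` is used here as an assumption to cope with possible zero counts):

```python
import math

def log_transform_and_filter(counts, sd_cutoff=3.0):
    """Log-transform skewed count data, then drop values more than
    sd_cutoff standard deviations from the mean of the transformed data."""
    logged = [math.log1p(c) for c in counts]  # log(1 + x) handles zero counts
    n = len(logged)
    mean = sum(logged) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logged) / n)
    if sd == 0:
        return logged  # nothing to filter when all values are identical
    return [x for x in logged if abs(x - mean) <= sd_cutoff * sd]
```

For instance, twenty articles with 10 tweets each plus one with 100,000 tweets would keep the twenty typical values and drop the extreme outlier before running the tests below.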
For number of tweets per article:
- Articles with both a visualization and a table produced the largest response with an average of 46 tweets per article (N=212, SD=103.24);
- Articles with a visualization and no table produced an average of 23.6 tweets per article (N=143, SD=85.05);
- Articles with no visualization and a table produced an average of 13.82 tweets per article (N=213, SD=42.7);
- And finally articles with neither visualization nor table produced an average of 19.56 tweets per article (N=117, SD=86.19).
I ran an ANOVA with post-hoc Bonferroni tests to see whether the differences between these means were significant. Articles with both a visualization and a table (case 1) have a significantly higher number of tweets than cases 3 (p < .01) and 4 (p < .05). Articles with just the visualization and no data table have a higher average number of tweets per article, but this difference was not statistically significant. The take-away is that the combination of a visualization and a data table drives a significantly higher Twitter response.
Results for number of comments per article are similar:
- Articles with both a visualization and a table produced the largest response with an average of 17.40 comments per article (SD=24.10);
- Articles with a visualization and no table produced an average of 12.58 comments per article (SD=17.08);
- Articles with no visualization and a table produced an average of 13.78 comments per article (SD=26.15);
- And finally articles with neither visualization nor table produced an average of 11.62 comments per article (SD=17.52)
Again I ran an ANOVA with post-hoc Bonferroni tests to assess statistically significant differences between means. This time there was only one statistically significant difference: articles with both a visualization and a table (case 1) have a higher number of comments than articles with neither a visualization nor a table (case 4), with p = 0.04. Again, the combination of visualization and data table drove more of an audience response in terms of commenting behavior.
The overall take-away here is that people like to talk about articles (at least in the context of the audience of the Guardian Datablog) when both data and visualization are used to tell the story. Articles which used both had more than twice the number of tweets and about 1.5 times the number of comments versus articles which had neither. If getting people talking about your reporting is your goal, use more data and visualization, which, in retrospect, I probably also should have done for this blog post.
As a final thought I should note there are potential confounds in these results. For one, articles with data in them may stay “green” for longer, slowly accreting a larger and larger social media response. One area to look at would be the acceleration of commenting in addition to volume. Another thing that I had no control over is whether some stories are promoted more than others: if the editors at the Guardian had a bias to promote articles with both visualizations and data, then this would drive the audience response numbers up on those stories too. In any case, it's still interesting and worthwhile to consider various explanations for these results.
|
OPCFW_CODE
|
Brazilian rainbow boas are commonly kept as pets. They are beautiful reptiles that take on a rainbow-colored iridescent sheen when the light hits them. This is where they get the name, by the way. These snakes come from parts of South America with a high annual rainfall. As a result, they are accustomed to a certain level of humidity. If you keep a Brazilian rainbow boa as a pet, you will probably have to modify the cage environment in some way to increase the humidity.
Of course, this will depend on where you live. If you live somewhere like Southern Florida, the room where you keep your snake’s cage may already have a sufficient level of humidity. But most keepers will have to provide supplemental moisture in some way. And that’s what we are going to talk about in this rainbow boa care article.
By the way, this information applies to the other members of the Epicrates cenchria complex as well. Depending on who you ask, there are eight or nine different types of rainbow boas in South America. This care sheet applies to all of them.
Use a Substrate That Will Increase Humidity Level, Not Decrease It
Ideally, you should create a habitat for your Brazilian rainbow boa with a humidity level of around 75 – 80 percent. But I realize this may not be possible for all keepers, depending on the climate where you live. So let’s talk about some of the creative things you can do to keep your snake healthy.
Substrate is one of your first considerations. This is the material you use to line the bottom of your pet snake’s cage. The product you choose will have a direct bearing on the humidity level inside the habitat. So you need to carefully consider your options.
Many care sheets recommend using aspen bedding. This is basically shredded wood. I personally do not recommend this for a Brazilian rainbow boa cage, because it tends to be very drying. It sucks the moisture right out of the air. So it will probably reduce the overall humidity level inside your snake’s habitat — the exact opposite of what you are trying to accomplish.
Aspen bedding works great for other types of snakes that are commonly kept as pets. It’s a good choice for most of the North American species, such as corn snakes and kingsnakes. But it’s not well suited for a Brazilian rainbow boa cage. It will only make your job harder, as a keeper.
I recommend using either newspaper, cypress mulch, or shredded coconut husk as a substrate for these snakes.
- Newspaper is neutral — it doesn’t add or subtract moisture from the cage. And it’s quick and easy to replace, when it gets soiled. You can combine this with the “humidity hide” mentioned below.
- Cypress mulch is naturally moist when you pull it out of the bag. And you can spray with a misting bottle once or twice a day to keep it moist. This substrate is well suited for a Brazilian rainbow boa cage. Of course, it’s a lot harder to clean up when compared to newspaper. But it’s more naturalistic and nicer to look at. It’s a trade-off.
- Coconut fiber is another attractive substrate that also helps you keep the humidity up. Your pet snake will certainly be comfortable on this soft and natural material. But it’s even harder to clean up after than cypress mulch. Again, it’s a trade-off between snake cage maintenance and appearance. This material can also be misted on a daily basis.
I keep seven snakes as pets, so the naturalistic substrates are just not an option for me; they make the cleanup process take too long. If I only had one snake, I would make the cage environment much more attractive. Ease of maintenance is my top priority (along with keeping my snakes healthy), so I use newspaper in my Brazilian rainbow boa’s habitat. At the same time, I realize this species needs a certain level of humidity. So I use the moisture box / humidity hide strategy explained below.
Give Your Brazilian Rainbow Boa a ‘Moisture Box’
If you prefer to use a moisture-neutral substrate like newspaper (for maintenance reasons), you’ll need to provide a moisture retreat of some kind. This is an area of higher humidity, where your pet snake can go whenever it feels the need.
Here’s the good news. It’s easy to create a moisture box for your Brazilian rainbow boa’s cage, and it doesn’t cost much either. Start with a Rubbermaid or Sterilite box that’s large enough for your snake to fit inside. Fill it with a damp material. Cut a hole in the lid so the snake can crawl in and out, as needed. Voila! You’re done.
You can use a variety of materials to fill the box. We’ve talked about two of them already — cypress mulch and coconut fiber. You can also use “frog moss,” which stays moist for a long time. Most of the time, I just use dampened “shop towels,” the blue ones made by Scott. They’re affordable, they last a long time, and they can be misted with a spray bottle.
I keep the box on the warmer side of the cage, or in the middle where the temperature begins to cool toward the cooler side of the cage. I try to avoid placing the box fully on the cooler side. I don’t want my rainbow boa to be cool and damp for a prolonged period of time, as this could lead to illness. Warm and moist is fine.
Getting Creative With Your Pet Snake’s Habitat
These are certainly not the only ways to elevate and control humidity in a Brazilian rainbow boa cage. Use your imagination. Go to Home Depot or some other hardware store and look around. You might be able to come up with some creative ideas of your own.
|
OPCFW_CODE
|
Terminal commands not working
I moved all files starting with lib from /../ to some folder as,
mv /../lib* /to/some/folder
after which I can't move them back. It fails with:
-bash: /bin/mv: /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory
Even ls fails the same way: -bash: /bin/ls: /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory
That path is my home folder, which I didn't want to mention.
If you want to be able to fix this without a helper boot medium, you need to know exactly what you did, and if you want help with that, you need to share this information.
Is busybox installed on your system?
the thing is I moved the files starting with lib* to some other location, after which I couldn't move it back as mv was not working.
yes busybox is installed
Can you start busybox from the shell?
how do I do that?
If yes, you can try busybox mv /to/some/folder/lib* /. This would then move the files back. But to give you the correct command, you should write the EXACT command you used to move the files in the first place
Can you use utilities like ls again? Maybe start a new shell
Yeah I can use it. Everything works fine now. Thanks to you :)
The restoration using busybox worked in your case because you had busybox installed.
busybox is a statically linked binary, and this helped in your case, but there is no need to use statically linked binaries to repair this kind of defect.
You could do this as well:
LD_LIBRARY_PATH=/some/path/where/the/libs/are mv ....
If you moved the dynamic runtime linker as well, you also need to call the runtime linker manually. To understand how this works, call:
man ld.so.1
or on Linux
man ld.so
This typically results in a command line like:
LD_LIBRARY_PATH=/path/to/libs /path/to/libs/ld.so.1 mv ....
On Linux replace /path/to/libs/ld.so.1 by /path/to/libs/ld-linux-x86-64.so.2 or what is actually used on your system.
In general, you need to know whether the binary you want to call is a 32-bit or a 64-bit binary, and call the matching dynamic runtime linker.
BTW: This is the method that is documented for Solaris since 2004 and since that year, there are no statically linked binaries on Solaris anymore.
|
STACK_EXCHANGE
|
In this lesson, we'll talk about a suggested structure for storing case data
and why having such a structure is important.
Should all members of a security team store case data in the same way?
The answer is, of course, they should. Standards are the best way of keeping things organized, so that cases run smoothly and have positive outcomes.
So in my experience, where a lot of security teams tend to see a lot of issues is the storage of data related to their cases.
Different team members have developed different ways of storing their case data over the years, and so there are a multitude of different methods being used,
Or, what's even worse, everyone just saves their data to the desktop.
Ideally, we want to manage case data in such a way that every team member is able to quickly and easily locate the data they need, when they need it.
Having a standard folder structure and storage location for case-related data should form part of your company policies related to security case management.
Case data should never, ever, ever be saved to the desktop or the system drive.
Configure an internal drive or partition specifically for case data.
Cases and analysis data should be separated from case evidence, so evidence like forensic images should be stored on a drive separate from the cases and analysis data.
The main reason for this is speed
Reading the evidence from one disk or partition and simultaneously writing the case and analysis data to a separate disk is faster than attempting to read and write simultaneously to the same disk.
Obviously, it will be up to you and your team organization to determine the best folder structure to use. But here is a simple example to get you started
At the top, we have the root case folder, named with the case name and potentially a reverse date representing the date when the case was initiated.
This provides everyone with a very straightforward, unambiguous way to name their case folders and find what they need quickly
Within this root folder, we have a number of subfolders.
First of all, we have the cases folder.
This is where the literal case files from whichever tools are being used (i.e. EnCase cases, X-Ways cases, FTK cases, etcetera) will be stored.
So I suggest having a further child folder
for each of the tools used as well.
Dumping case files from multiple tools into a single location results in a mess and causes problems which are really easily avoided.
Next, we have the evidence folder,
Now, I say this next part for the sake of simplicity: cases and evidence should be stored together, at least at the beginning and end of a case,
while an investigation is ongoing and processing is being performed,
cases and evidence should be on separate drives.
Then we have the analysis folder.
This is where I would suggest putting any data used for the investigation:
files which have been recovered from a case, spreadsheets containing tool output, that kind of thing.
So keeping your case data organized like this will make your life so much easier, especially when multiple people are working on the same security case.
If someone is working on a case and goes on holiday for four weeks,
clearly managed case data will allow the remaining team members to easily continue the investigation in their absence.
you might even write a batch script to create this folder structure automatically at the beginning of a case to make it even easier for people to follow. The same process
Should case data and evidence be stored in the same location?
The standard computer science answer applies: it depends.
In most cases it's better to store these materials separately;
particularly during processing, storing them in the same location will incur a speed penalty.
In this lesson, we covered how best to store case data and case evidence
and why it's important that everyone in a security team follows the same process.
|
OPCFW_CODE
|
|I cast Fog Cloud centered on the island|
However, another thing happened this week: I started playing my first 5th edition character! Well, not technically, since I've played characters in one-shots before, but this will be the first character I play over the course of a campaign. It's literally the first time I've gotten to level up a character and then play with the newly leveled-up character.
So, I'm going to talk about that a bit. This won't really be a recap, since I'm not really planning to write articles on this campaign. I think that's really the domain of the Dungeon Master, because they have great insight into what was or wasn't important in a game. If a player wrote it, they'd probably forget/misremember some important details.
Perhaps my DM will write some recaps. Perhaps not. Either way, we had a fun session and I'm going to ramble a bit on it.
Adventures on Jeonju
Cast of Characters
Megan: Dungeon Master
Jon: Sa Konu, wood elf monk, enjoys maps and herbalism, unsure about his place in the world
That's right, it's a solo campaign! I've never run one, so this is a new experience for both of us. I tend to really like player-to-player interactions in my own games, but from a player perspective, it was nice to just imagine the world and move at my own pace.
Also, I had to be very careful to just let things happen. Players who are also DMs can be the worst players, so I did my best to avoid telling Megan to do things one way or another. It was difficult, and I slipped up a bit, but overall I think I did well in-game. We did talk afterwards, and I gave her some advice. It's hard to say if it was overbearing, but she seemed receptive.
The game took place in my homebrewed world of Ahneria, on a distant continent entirely populated by Elves (homebrew stereotype number one). They were heavily influenced by Asian cultures (homebrew stereotype number two) and there had been a terrible disaster in recent memory that shaped the current culture on the island (homebrew stereotype number three, hat trick!).
Now, using stereotypical ideas isn't a bad thing. I could write an entire article on originality and how it doesn't really exist, but many have already done so. The key is that, through development, cliches are expanded into cool, unique ideas. The idea of lengthy seasons isn't original to Game of Thrones, but through story development, it became a powerful force in Martin's world.
Megan has been working on building this area of the world for a while now, and some of the cliches are already showing signs of emerging novelty. There are quite a few races interacting, the culture has expanded from simply being Asian-influenced, and the disaster is starting to formulate into an interesting world-building piece.
I guess I should go into the disaster part, as I understand it. Basically, the culture has forgotten a lot of its history, not due to time, but due to this Gray Fog that has blanketed the land. It's still very mysterious, but Elves who go into the fog can't remember what they did within, or emerge scarred, mad, or worse.
It's a very interesting thing to happen to Elves in particular, with their long racial memory. What will the culture be like now that people have forgotten the ways that the previous generations did things? Will they revert back to the time they do remember? Is there anyone even alive from before the Fog?
Basically, we started out with a nice mystery and a very cool sense of wonder about the world. I think my games sometimes lack that, since I like to tell the players everything they need to know. I do my fair share of foreshadowing, but there is usually a reasonable explanation behind things.
|Beachside property is quite expensive in fantasy real estate|
It's interesting to point out that I'm not sure if Konu was one of those wanderers, or if he was born outside the Fog. There is a city and a few villages outside the Fog that he could have come from. Emergent background details - an exciting part of being a PC!
The monastery had five elder Elves, each with their own quirks. My favorite was Master Kim, a relatively young elder (still in the 500s) who had joined the monastery after wandering out of the fog. She had a very sympathetic worldview, and believed that each person would find their destiny by following their feelings. The conversation I had with her eventually convinced Konu that he was right in leaving his monk life and following the path on a mysterious map he found.
Once he left the monastery, we switched over to a modified version of the travel rules I enjoy so much. It was a simpler system (and I think my d12 was a bit finicky), but I ended up with two encounters that were pretty interesting.
First, Konu met a dryad named Willow, who was quite friendly. After they figured out how to communicate (Konu didn't know Sylvan and Willow spoke broken Elvish), they became traveling companions. A common issue for new DMs is not giving enough life to the NPCs, but Megan did a good job of making Willow have her own personality and goals.
The other encounter was formative for Konu: a black bear entered his camp during a night on his travels. After attempting to distract the bear with food, Konu hid in a tree rather than fight the creature. It ended up costing him half his rations, but Konu is definitely not about taking a life if he can help it.
The map led to a long-abandoned temple, which Willow didn't want to enter. Konu decided to go in alone, and found a stone child on the altar. When he touched it, it came to life and cried, and when he had calmed it, it turned into a small heart made of gold.
|Sa Konu: mildly uncertain and confused about a lot of things|
The classic example is a skeleton: creepy set dressing or ambushing enemy? Making sure your players know what you're conveying is vital to making your game run smoothly. Of course, if you want a fantasy setting where people go around bashing corpses just in case, then good for you. But building trust between the DM and the players usually leads to better games.
Anyway, we stopped at the strange baby transformation. Also, there was some weird writing that I could read even though I didn't know how I knew the language, which makes me think that Konu might have been a wanderer in the Fog... it's fun to theorize about these sort of things.
As I said before, I'm not planning on keeping up on these recaps. But since I didn't end up running a game this weekend, I figured it'd be fun to talk about.
Next week, I should have a recap for our next session of Maze of the Blue Medusa! The hiatus is over!
Thanks for reading!
|
OPCFW_CODE
|
Glen is a simple command line tool that, when run within a local GitLab project, will call the GitLab API to get all environment variables from your project's CI/CD pipeline and print them locally, ready for exporting.
With the default flags you can run eval $(glen -r) to export your project's variables and the variables of every parent group.
The easiest way to install glen is with homebrew
brew install lingrino/tap/glen
Glen can also be installed by downloading the latest binary from the releases page and adding it to your path.
Alternatively you can install glen using go get, assuming you have $GOPATH/bin in your path.
go get -u github.com/lingrino/glen
By default glen assumes that you have a GitLab API key exported as GITLAB_TOKEN and that you are calling glen from within a git repo where the GitLab remote is named origin (check with git remote -v).
You can override all of these settings, specifying the API key, git directory, or GitLab remote name as flags on the command line (see glen --help).
By default glen will only get the variables from the current GitLab project. If you would also like glen to merge in variables from all of the project's parent groups, then you can use the -r/--recurse flag.
Lastly, the default output format for glen is called export, meaning that the output is ready to be read into your shell and will export all variables. This lets you call glen as eval $(glen) as a one-line command to export all variables locally. You can also specify json or table output for more machine- or human-friendly formats.
$ glen --help
Glen is a simple command line tool that, when run within a GitLab project, will
call the GitLab API to get all environment variables from your project's CI/CD
pipeline and print them locally, ready for exporting.

With the default flags you can run 'eval $(glen -r)' to export your project's
variables and the variables of every parent group.

Usage:
  glen [flags]
  glen [command]

Available Commands:
  help        Help about any command
  version     Returns the current glen version

Flags:
  -k, --api-key string       Your GitLab API key. NOTE - It's preferrable to specify your key as a GITLAB_TOKEN environment variable (default "GITLAB_TOKEN")
  -d, --directory string     The directory where you're git repo lives. Defaults to your current working directory (default ".")
  -h, --help                 help for glen
  -o, --output string        The output format. One of 'export', 'json', 'table'. Defaults to 'export', which can be executed to export all variables. (default "export")
  -r, --recurse              Set recurse to true if you want to include the variables of the parent groups
  -n, --remote-name string   The name of the GitLab remote in your git repo. Defaults to 'origin'. Check with 'git remote -v' (default "origin")

Use "glen [command] --help" for more information about a command.
Glen does one thing (reads variables from GitLab projects) and should do that one thing well. If you notice a bug with glen please file an issue or submit a PR.
Also, all contributions and ideas are welcome! Please submit an issue or a PR with anything that you think could be improved.
In particular, this project could benefit from the following:
- Tests that mock git repos
|
OPCFW_CODE
|
I love toolbars. I would never use the Start menu if I could help it, so some of the shortcuts that Microsoft Windows 7 has put into the interface are very welcome. Right-clickable context menus are a great time-saver for those who do repetitive actions with their computer. So how do they work?
Every time you open a program, it puts a shortcut to that running program or app in the taskbar. In previous versions of Windows, there wasn’t a whole lot you could do with that shortcut but maximize, minimize, close, etc. Well, now there are quite a few other things you can do. If you are simply looking for some of those old commands, you can find out where that menu is in this short blog entry.
Windows 7 Context Menus
For any application running in your taskbar, simply right-click the icon, and a new “context-menu” will pop up. The image below is the menu that popped up when I did so on Trillian. The options available with Trillian are:
- What account to log on with
- Various modes of status (Online, Away, etc.)
- Exit (the equivalent of closing the application by using File → Exit)
- Pin this program to taskbar (Explained later)
- Close (The equivalent of clicking the Red X on the window)
While this may not seem very impressive, there are some other apps that make better use of the context menu. The next example is context menu for a Microsoft Office Word document I have open.
Notice that it shows the most recent documents I have used in Word. This works with any Microsoft Office documents (Excel, PowerPoint, Project, etc.). Also notice that I can get another context menu by right-clicking on the individual items, and get additional options from those items.
Now let’s look at trusty old Internet Explorer.
This time, it shows the most frequented web sites you visit, and again has additional options to include opening a new tab, or starting “InPrivate Browsing” (that will be another blog some day).
Microsoft Terminal Services Client (or RDP)
Lastly, I want to show you my favorite feature. Being a systems engineer, I use the Microsoft Terminal Services Client (MSTSC.EXE, aka RDP) a lot. When you open it via the command line, like I often do, look at the context menu you get.
Notice that it shows all the recent terminal servers I have connected to. Very useful, but what if you do this a lot and there are also certain servers you connect to all the time? Well, that’s where “Pin this program to taskbar” comes in handy. I can simply click the small “pushpin” icon next to any of the servers I’ve connected to in the past, and they show up at the top of the window, and stay there (see below).
If you ever want to remove one from the “pinned” list, just click the small blue pushpin icon next to it, and it drops back down into the normal list.
|
OPCFW_CODE
|
Counterfactuals operationalised through algorithmic recourse have become a powerful tool to make artificial intelligence systems explainable. Conceptually, given an individual classified as y -- the factual -- we seek actions such that their prediction becomes the desired class y' -- the counterfactual. This process offers algorithmic recourse that is (1) easy to customise and interpret, and (2) directly aligned with the goals of each individual. However, the properties of a "good" counterfactual are still largely debated; it remains an open challenge to effectively locate a counterfactual along with its corresponding recourse. Some strategies use gradient-driven methods, but these offer no guarantees on the feasibility of the recourse and are open to adversarial attacks on carefully created manifolds. This can lead to unfairness and lack of robustness. Other methods are data-driven, which mostly addresses the feasibility problem at the expense of privacy, security and secrecy as they require access to the entire training data set. Here, we introduce LocalFACE, a model-agnostic technique that composes feasible and actionable counterfactual explanations using locally-acquired information at each step of the algorithmic recourse. Our explainer preserves the privacy of users by only leveraging data that it specifically requires to construct actionable algorithmic recourse, and protects the model by offering transparency solely in the regions deemed necessary for the intervention.
Counterfactual explanations are the de facto standard when tasked with interpreting decisions of (opaque) predictive models. Their generation is often subject to algorithmic and domain-specific constraints -- such as density-based feasibility for the former and attribute (im)mutability or directionality of change for the latter -- that aim to maximise their real-life utility. In addition to desiderata with respect to the counterfactual instance itself, the existence of a viable path connecting it with the factual data point, known as algorithmic recourse, has become an important technical consideration. While both of these requirements ensure that the steps of the journey as well as its destination are admissible, current literature neglects the multiplicity of such counterfactual paths. To address this shortcoming we introduce the novel concept of explanatory multiverse that encompasses all the possible counterfactual journeys and shows how to navigate, reason about and compare the geometry of these paths -- their affinity, branching, divergence and possible future convergence -- with two methods: vector spaces and graphs. Implementing this (interactive) explanatory process grants explainees more agency by allowing them to select counterfactuals based on the properties of the journey leading to them in addition to their absolute differences.
Ante-hoc interpretability has become the holy grail of explainable machine learning for high-stakes domains such as healthcare; however, this notion is elusive, lacks a widely-accepted definition and depends on the deployment context. It can refer to predictive models whose structure adheres to domain-specific constraints, or ones that are inherently transparent. The latter notion assumes observers who judge this quality, whereas the former presupposes them to have technical and domain expertise, in certain cases rendering such models unintelligible. Additionally, its distinction from the less desirable post-hoc explainability, which refers to methods that construct a separate explanatory model, is vague given that transparent predictors may still require (post-)processing to yield satisfactory explanatory insights. Ante-hoc interpretability is thus an overloaded concept that comprises a range of implicit properties, which we unpack in this paper to better understand what is needed for its safe deployment across high-stakes domains. To this end, we outline model- and explainer-specific desiderata that allow us to navigate its distinct realisations in view of the envisaged application and audience.
Group fairness is achieved by equalising prediction distributions between protected sub-populations; individual fairness requires treating similar individuals alike. These two objectives, however, are incompatible when a scoring model is calibrated through discontinuous probability functions, where individuals can be randomly assigned an outcome determined by a fixed probability. This procedure may provide two similar individuals from the same protected group with classification odds that are disparately different -- a clear violation of individual fairness. Assigning unique odds to each protected sub-population may also prevent members of one sub-population from ever receiving equal chances of a positive outcome to another, which we argue is another type of unfairness called individual odds. We reconcile all this by constructing continuous probability functions between group thresholds that are constrained by their Lipschitz constant. Our solution preserves the model's predictive power, individual fairness and robustness while ensuring group fairness.
Users of recommender systems tend to differ in their level of interaction with these algorithms, which may affect the quality of recommendations they receive and lead to undesirable performance disparity. In this paper we investigate under what conditions the performance for data-rich and data-poor users diverges for a collection of popular evaluation metrics applied to ten benchmark datasets. We find that Precision is consistently higher for data-rich users across all the datasets; Mean Average Precision is comparable across user groups but its variance is large; Recall yields a counter-intuitive result where the algorithm performs better for data-poor than for data-rich users, which bias is further exacerbated when negative item sampling is employed during evaluation. The final observation suggests that as users interact more with recommender systems, the quality of recommendations they receive degrades (when measured by Recall). Our insights clearly show the importance of an evaluation protocol and its influence on the reported results when studying recommender systems.
Explainable artificial intelligence techniques are evolving at breakneck speed, but suitable evaluation approaches currently lag behind. With explainers becoming increasingly complex and a lack of consensus on how to assess their utility, it is challenging to judge the benefit and effectiveness of different explanations. To address this gap, we take a step back from complex predictive algorithms and instead look into explainability of simple mathematical models. In this setting, we aim to assess how people perceive comprehensibility of different model representations such as mathematical formulation, graphical representation and textual summarisation (of varying scope). This allows diverse stakeholders -- engineers, researchers, consumers, regulators and the like -- to judge intelligibility of fundamental concepts that more complex artificial intelligence explanations are built from. This position paper charts our approach to establishing appropriate evaluation methodology as well as a conceptual and practical framework to facilitate setting up and executing relevant user studies.
Over the past decade explainable artificial intelligence has evolved from a predominantly technical discipline into a field that is deeply intertwined with social sciences. Insights such as human preference for contrastive -- more precisely, counterfactual -- explanations have played a major role in this transition, inspiring and guiding the research in computer science. Other observations, while equally important, have received much less attention. The desire of human explainees to communicate with artificial intelligence explainers through a dialogue-like interaction has been mostly neglected by the community. This poses many challenges for the effectiveness and widespread adoption of such technologies as delivering a single explanation optimised according to some predefined objectives may fail to engender understanding in its recipients and satisfy their unique needs given the diversity of human knowledge and intention. Using insights elaborated by Niklas Luhmann and, more recently, Elena Esposito we apply social systems theory to highlight challenges in explainable artificial intelligence and offer a path forward, striving to reinvigorate the technical research in this direction. This paper aims to demonstrate the potential of systems theoretical approaches to communication in understanding problems and limitations of explainable artificial intelligence.
Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and help to hold them accountable. New transparency approaches are developed at breakneck speed, enabling us to peek inside these black boxes and interpret their decisions. Many of these techniques are introduced as monolithic tools, giving the impression of one-size-fits-all and end-to-end algorithms with limited customisability. Nevertheless, such approaches are often composed of multiple interchangeable modules that need to be tuned to the problem at hand to produce meaningful explanations. This paper introduces a collection of hands-on training materials -- slides, video recordings and Jupyter Notebooks -- that provide guidance through the process of building and evaluating bespoke modular surrogate explainers for tabular data. These resources cover the three core building blocks of this technique: interpretable representation composition, data sampling and explanation generation.
Predictive systems, in particular machine learning algorithms, can take important, and sometimes legally binding, decisions about our everyday life. In most cases, however, these systems and decisions are neither regulated nor certified. Given the potential harm that these algorithms can cause, their qualities such as fairness, accountability and transparency (FAT) are of paramount importance. To ensure high-quality, fair, transparent and reliable predictive systems, we developed an open source Python package called FAT Forensics. It can inspect important fairness, accountability and transparency aspects of predictive algorithms to automatically and objectively report them back to engineers and users of such systems. Our toolbox can evaluate all elements of a predictive pipeline: data (and their features), models and predictions. Published under the BSD 3-Clause open source licence, FAT Forensics is opened up for personal and commercial usage.
* Journal of Open Source Software, 5(49), 1904 (2020)
"Simply Logical -- Intelligent Reasoning by Example" by Peter Flach was first published by John Wiley in 1994. It could be purchased as book-only or with a 3.5 inch diskette containing the SWI-Prolog programmes printed in the book (for various operating systems). In 2007 the copyright reverted back to the author at which point the book and programmes were made freely available online; the print version is no longer distributed through John Wiley publishers. In 2015, as a pilot, we ported most of the original book into an online, interactive website using SWI-Prolog's SWISH platform. Since then, we launched the Simply Logical open source organisation committed to maintaining a suite of freely available interactive online educational resources about Artificial Intelligence and Logic Programming with Prolog. With the advent of new educational technologies we were inspired to rebuild the book from the ground up using the Jupyter Book platform enhanced with a collection of bespoke plugins that implement, among other things, interactive SWI-Prolog code blocks that can be executed directly in a web browser. This new version is more modular, easier to maintain, and can be split into custom teaching modules, in addition to being modern-looking, visually appealing, and compatible with a range of (mobile) devices of varying screen sizes.
|
OPCFW_CODE
|
Facing simcard/network connection issues after flashing AOSP 14 onto Pixel 7a
I have been facing an issue after flashing AOSP 14 (Lynx) onto Pixel 7a devices. As soon as I insert a SIM card into the phone, the phone starts rebooting randomly and never connects to the network. The issue is very frequent with the AT&T and Verizon SIM cards, and a little less frequent with the T-Mobile SIM card, but it still happens.
I am wondering if I am missing something or doing something wrong! Here are the steps I am following:
I have set up a virtual machine (Ubuntu)
I downloaded the source following the steps listed in: https://source.android.com/docs/setup/download/downloading
Downloaded and extracted the binaries (from: https://developers.google.com/android/drivers)
Built AOSP following the steps listed in: https://source.android.com/docs/setup/build/building
After a successful build, I download the entire 'lynx' folder from WORKING_DIRECTORY/out/target/product/
Connect a Pixel 7a in bootloader mode and perform the OEM unlock
Flash AOSP using the command 'ANDROID_PRODUCT_OUT="./" fastboot flashall -w' from the downloaded lynx folder
Phone starts with AOSP 14
Insert a sim card
The phone keeps rebooting and doesn't connect to the telephone network (It stops being a phone)
I have tried with all the Android 14 (lynx) releases from Oct 2023 to Jan 2024 and no luck! Any help would be greatly appreciated!
Update
Log reports:
434-434 init pid-434 E Unable to set property 'persist.vendor.radio.call_waiting_for_sync_0' from uid:10082 gid:10082 pid:27559: SELinux permission check failed
1391-3611 ActivityManager system_server E Service ServiceRecord{96c9046 u0 com.shannon.imsservice/.ShannonImsService} in process ProcessRecord{2bda7ab 9540:com.shannon.imsservice/u0a82} not same as in map: ServiceRecord{eed0ddb u0 com.shannon.imsservice/.ShannonImsService}
16911-16928 SHANNON_IMS com.shannon.imsservice E 0057 [CONF] External configuration xml file not exist in vendor/etc. Name: sim_operator_list.xml (BaseConfigurationReader%getInputStreamVendor:293)
I would encourage you to share the crash log via ADB logcat and update the question accordingly. Also have you added the proprietary binaries?
@RajatGupta Yes, I have added the binaries (Pixel 7a - lynx) from Google before building the AOSP. Crash log details are updated in the question. Thanks!
You should try to see whether this problem is reproducible on an SELinux-permissive build as well; if it does not reproduce there, it seems you need to write sepolicy rules for this issue.
Add the following policies under the directory device/google/gs201-sepolicy/:
File: system_ext/private/platform_app.te
set_prop(platform_app, shannon_ims_service_prop);
File: system_ext/private/property_contexts
persist.vendor.radio.call_waiting_for_sync_0 u:object_r:shannon_ims_service_prop:s0 exact int
persist.vendor.radio.call_waiting_for_sync_1 u:object_r:shannon_ims_service_prop:s0 exact int
File: system_ext/public/property.te
system_public_prop(shannon_ims_service_prop)
Source: here
|
STACK_EXCHANGE
|
I have two sites, A and B, connected by a site-to-site VPN, and they are working OK. When I try to add a remote-access VPN for site A so that users at home can use both site A's and site B's services and also connect to the Internet through site A, I can't get it to work. I have tried doing this both with PDM and the command line. I have quite a lot of experience with routers, but PIXes are still somewhat of a mystery to me. Does anyone have any similar working configurations to share with me?
Here is what I used to set up a remote-access VPN with the Cisco VPN client.
access-list nonat permit ip 172.16.0.0 255.255.0.0 192.168.10.0 255.255.255.0 (Access-list defining what traffic to not use NAT on)
access-list 102 permit ip 172.16.0.0 255.255.0.0 192.168.10.0 255.255.255.0 (Access-list defining which traffic to use split-tunneling on)
nat (interface) 0 access-list nonat (Command issued to not use NAT translation through whichever interface the VPN traffic will flow.)
sysopt connection permit-ipsec (Permits IPSEC communication through the PIX)
crypto ipsec transform-set vpnsei esp-3des esp-md5-hmac (Setting up what type of encryption to use; there are many choices)
crypto dynamic-map dynmapsei 10 set transform-set vpnsei
isakmp client configuration address-pool local sei-1 internet
vpngroup misvpn address-pool (The vpngroup command sets up your configuration for the VPN. This first line tells which IP pool to use)
vpngroup misvpn dns-server (DNS server IP)
vpngroup misvpn wins-server (WINS server IP)
vpngroup misvpn default-domain (your internal domain name)
vpngroup misvpn split-tunnel (This command allows your VPN users to surf the web through their ISP and only use the VPN to connect to your internal servers or services)
vpngroup misvpn split-dns (your internal domain name; used in conjunction with the command above)
vpngroup misvpn idle-time 7200 (time in seconds you want the PIX to allow a connection to sit idle)
vpngroup misvpn password ******** (VPN group password)
ip local pool sei-1 192.168.10.10-192.168.10.25 (These are the IP addresses that are assigned to the VPN clients)
If you have any problems or more questions, send me an email at email@example.com
|
OPCFW_CODE
|
This area is for useful scripts that some of our visitors have
contributed. ASP 101 did not write them and is not responsible for the
operation of these scripts... heck we don't even take responsibility
for our own. ;) We simply found them interesting and or useful and
thought some of you might too!
"This script runs on an IIS web server and is designed to take a large number of files (e.g. pics: gif, jpg, bmp) with non-sequential filenames from a (web) directory, rename them with a sequential number while keeping the same file extension, and move them into a new (web) directory.
For example, you could have these files in a directory named FromDir:
After the script runs, FromDir will contain no files, and TargetDir will contain:
Especially designed for pictures, the script will ignore the thumbs db created by Windows (thumbs.db)."
"I have never found a good (and free) set of asp pages to administer a poll, so I created my own. Here they are, with a sample mdb file. The whole system was based on the poll pages at http://www.4guysfromrolla.com/webtech/071099-1.shtml. I modified it in a few ways: instead of basing polls on a date period (users and servers have different times across the world), I made the poll active/inactive. I also modified it so that each user is only allowed to vote once per poll (instead of once per day in the 4GFR example) by writing a cookie to the user's machine with the PollId of the current poll. Users can also click on the View Results link on the poll page to view the results without voting.
Deleting polls removes both the poll and any poll options that were created. I'm sure the code could be more compact and more error checking could be done, but it works great for me. Put the poll.asp and pollresults.asp in the same directory. All others can be in a hidden admin directory as long as everything points to the db correctly."
"I have recently developed an ASP script that basically replaces a database. It is almost like a .ini file, except it stores data in a [Section]key=key value format. This is meant to help people who cannot, or do not wish to, pay for an actual database for their site. I am actually using this script on a site I am getting ready to release, and I have tested the one I am sending you. If you have any questions on any aspect of the script, please feel free to ask. ASP 101 has been helping me since I started; now maybe I can actually give something back."
"I'm not sure if you are the correct person to send feedback to... but the form on the site would be too small for this. :-) I have been reading the Banner Rotator article on your web site, and I have what I think is a better solution. It was designed by me for my web site (http://www.NanoP3.com/). I'm sure it could be better designed, but I started it as a small banner rotator and have been adding capabilities while trying not to modify the original design too much..."
"I built this example because I kept getting asked by a number of Graphic Designers to use XML. The following is my initial attempt at exporting data from an Access database into an XML document using the FSO. Simply place this in your web directory and it will work immediately.
The index.asp page displays the contents of the database, along with a tick box for each product you wish to see in the XML document.
The build.asp page grabs the info from the form on index.asp and uses the FSO to export an XML page.
Once the build page has performed the build task, you are redirected to the XML page.
Unfortunately, this is as far as I have got with the example."
|
OPCFW_CODE
|
The ideal is "just upgrade everything." That gives by far the most benefits for the shortest total time.
We present a list of rules that you can use if you have no better ideas, but the real aim is consistency, rather than any particular rule set.
It is not possible to recover from all errors. If recovery from an error is impossible, it is important to quickly "get out" in a well-defined way.
An API class and its members can't live in an unnamed namespace; but any "helper" class or function that is defined in an implementation source file should be at unnamed namespace scope.
When you cannot type characters into your string, use the escape sequences to insert nonprintable characters into text strings, char variables, and arrays. Here are common C escape sequences:
The common case for a base class is that it's intended to have publicly derived classes, and so calling code is just about certain to use something like a shared_ptr:
string fn = name + ".txt";
ifstream is {fn};
Record r;
is >> r;
// ... 200 lines of code without intended use of fn or is ...
Only the first of these reasons is fundamental, so whenever possible, use exceptions to implement RAII, or design your RAII objects to never fail.
Flag goto. Better still, flag all gotos that don't jump from a nested loop to the statement immediately after a nest of loops.
Now, there is no explicit mention of the iteration mechanism, and the loop operates with a reference to const elements so that accidental modification cannot happen. If modification is desired, say so:
Using a synchronized_value ensures that the data has a mutex, and the right mutex is locked when the data is accessed.
Derived classes such as D must not expose a public constructor. Otherwise, D's users could make D objects that don't invoke PostInitialize.
For example, the general swap() will copy the elements of two vectors being swapped, whereas a good specific implementation will not copy elements at all.
|
OPCFW_CODE
|
How to write a plugin in typescript
Thanks for this tutorial section @asymmetrik , it helped me a lot.
I integrated the following custom lib
https://github.com/whitequark/Leaflet.Nanoscale
Things I did:
I added the Control.Nanoscale.js to assets/js/Control.Nanoscale.js
I added "import * as L from 'leaflet'" to Control.Nanoscale.js because otherwise you will receive an error ("ReferenceError: L is not defined")
I added
"scripts": [
"../src/assets/js/Leaflet.Nanoscale.js"
],
Added typings
import * as L from 'leaflet';
declare module 'leaflet' {
var nanoscale: any;
}
Added the nanoscale control
import "../../../assets/js/Control.Nanoscale.js";
L.control.nanoscale({
nanometersPerPixel: 1000,
}).addTo(this.map);
But the code of Control.Nanoscale.js is so short I would prefer to write the code in typescript myself.
Is it technically possible to write leaflet plugins in Angular / typescript?
Thank you for your help in advance
You can. It depends on the complexity of the control and what you're trying to do specifically. But, you could use the (leafletMapReady) output binding to get a handle to the map and then you can do whatever you want to (add a custom control, add event handlers, etc.).
If you want to wrap it into a custom component or directive that works with ngx-leaflet, you can use https://github.com/Asymmetrik/ngx-leaflet-draw or https://github.com/Asymmetrik/ngx-leaflet-markercluster as examples of how to inject the LeafletDirective into your custom component/directive. That allows you direct access to the map and other functionality within ngx-leaflet.
Check the README for details about making sure change detection works on changes in event handlers.
Thank you very much @reblace I will try that
I'm trying to write my first native typescript (without js dependencies) leaflet plugin and I am struggling a little bit.
What I tried (I started with a service before trying a directive):
I tried to create a typescript implementation of this plugin https://github.com/azavea/Leaflet.zoomdisplay/blob/master/src/L.Control.ZoomDisplay.js
/*
* L.Control.ZoomDisplay shows the current map zoom level
*/
import { Injectable } from '@angular/core';
import * as L from 'leaflet';
@Injectable()
export class LeafletControlPower {
zoomDisplay = L.Control.extend({
options: {
position: 'topleft'
},
onAdd: function (map) {
this._map = map;
this._container = L.DomUtil.create('div', 'leaflet-control-zoom-display leaflet-bar-part leaflet-bar');
this.updateMapZoom(map.getZoom());
map.on('zoomend', this.onMapZoomEnd, this);
return this._container;
},
onRemove: function (map) {
map.off('zoomend', this.onMapZoomEnd, this);
},
onMapZoomEnd: function (e) {
this.updateMapZoom(this._map.getZoom());
},
updateMapZoom: function (zoom) {
if(typeof(zoom) === "undefined"){zoom = ""}
this._container.innerHTML = zoom;
}
});
mergeOptions = L.Map.mergeOptions({
zoomDisplayControl: true
});
addInitHook = L.Map.addInitHook(function () {
if (this.options.zoomDisplayControl) {
this.zoomDisplayControl = new this.zoomDisplay();
this.addControl(this.zoomDisplayControl);
}
});
constructor() {
}
public ZoomDisplay(options: any)
{
return new this.zoomDisplay(options);
}
}
Add it to the map
this.powerControl.ZoomDisplay({}).addTo(this.map);
An error comes up:
TypeError: this.zoomDisplay is not a constructor
I am pretty sure that I did a lot wrong here because I don't understand how to transform the JS concept of this class into the directive/service concept of Angular.
I am also wondering if my idea (writing native TypeScript plugins) makes sense. The idea came to me only because I can debug with VS Code.
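One likely cause of the `TypeError` above is easy to miss: `L.Map.addInitHook` runs its callback with the *map* as `this`, not the Angular service, so `this.zoomDisplay` is undefined inside the hook. A simplified stand-in reproduces both behaviours (the `extend` helper below is a hypothetical sketch of the constructor-factory pattern, not Leaflet's real implementation):

```javascript
// Hypothetical stand-in for Leaflet's Class.extend: returns a constructor
// whose prototype carries the supplied options.
function extend(proto) {
  function Ctor() {}
  Ctor.prototype = proto;
  return Ctor;
}

class LeafletControlPower {
  constructor() {
    // Mirrors `zoomDisplay = L.Control.extend({...})` in the service above.
    this.zoomDisplay = extend({ position: 'topleft' });
  }
}

const service = new LeafletControlPower();
const ok = new service.zoomDisplay(); // works: `this` is the service instance

// Inside L.Map.addInitHook the callback runs with the map as `this`,
// and a map instance has no `zoomDisplay` property:
const fakeMap = {};
let failure = null;
try {
  new fakeMap.zoomDisplay(); // TypeError: ... is not a constructor
} catch (e) {
  failure = e;
}
```

A common fix is to define the control class once at module scope, outside the service, so the init hook closes over it directly instead of reaching for it through `this`.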
|
GITHUB_ARCHIVE
|
- By Matt McInnes
Recently, I attended a CIO conference along with hundreds of other senior IT leaders. Filling the whitespace between presentations, the MC posed a question to the room: "Who founded the World Wide Web (WWW)?". Much to my surprise, one of the attendees responded with an answer which pointed to some confusion regarding the difference between the Internet and the World Wide Web (WWW). While this may appear to be splitting hairs, I believe the deeper implications can negatively impact an organisation's capacity to innovate and disrupt, which ultimately impacts its capacity to compete.
There is no question that for the masses of users, the Internet and the WWW are essentially the same thing. From the perspective of a business information systems manager, the devil is in the detail, and the difference is critically important to appreciate. In summary, the Internet is the network of connected computing devices, and the WWW is a distributed application that runs primarily across the Internet which allows content to be presented and connected (hyperlinked). When the WWW application is made available within a corporate network, it is referred to as an Intranet.
Coming back to our confused friend at the CIO conference, they were adamant and emotionally connected to the idea that the US Government created the WWW and Tim Berners-Lee simply made their technology more user friendly.
Understanding the difference between the Internet and the WWW makes it easier to explain that the US Government is credited with creating the first distributed network (ARPANET), which became the Internet, and Tim Berners-Lee created an application running across this network to collect and connect content (the WWW). For reference, some other examples of applications that run over the Internet are e-mail and, more recently, VoIP (Voice over Internet Protocol).
While I absolutely believe it's important for business managers to understand observations from the past (Vilfredo Pareto, Henry Ford, Frederick Taylor, Michael Porter and Peter Drucker as a few examples), I also believe information systems managers need to understand the fundamentals of computing theory and the observations of leaders that have come before us. Gordon Moore and Bob Metcalfe in particular documented observations that have held true since the 1960s, and continue to impact the sheer existence and behaviour of organisations today and in some cases the existence of entire industries.
To remain competitive (Michael Porter), you have to analyse and consider market forces on your business. Rest assured, competitors are looking to rapidly disrupt your industry. Today, this disruption is becoming more achievable as networks become more available and accessible (Bob Metcalfe) and the cost of computing capacity reduces (Gordon Moore).
In the same way that sound business fundamentals provide the foundation to establish, maintain and grow a business, I believe that as we continue into the future, sound information systems foundations can provide a platform for business innovation and disruption.
Don’t become our confused conference friend. Understand the fundamentals of both business and information systems, and use these as a foundation for innovation and disruption.
|
OPCFW_CODE
|
DragonFly kernel List (threaded) for 2009-01
Re: C++ in the kernel
On Sun, Jan 04, 2009 at 07:26:15PM +0100, Pieter Dumon wrote:
> It's just political, there's pros and cons for everything.
> Its not because LT says something that it's true.
I agree. I don't worship at that altar, so it was more the content and
relevance for which I was going.
> Some people have demonstrated nice work in C++ (e.g. some L4 variants).
> Whatever language you use, it all comes down to using it properly.
> But if your whole kernel is written in C, better to leave it at that :-)
> The worst thing you can do is mix and match C and C++ I think - that
> would be really crappy.
Again, I completely agree. I have no dog in this fight...other than
the fact that I don't know C++ very well :).
> On Sun, Jan 4, 2009 at 5:25 PM, B. Estrade <email@example.com> wrote:
> > On Sun, Jan 04, 2009 at 05:06:13PM +0100, Michael Neumann wrote:
> >> This question bugs me since a quite long time so I write it down...
> >> FreeBSD had a long thread about pros and cons of using C++
> >> in the kernel here .
> >> I'm undecided whether it would be good to use C++ in the DragonFly kernel.
> >> At first, most importantly, there is the question about the quality of
> >> the C++ compiler (bug-freeness) and the quality of the generated machine
> >> code.
> >> I can't answer this for sure, just did a small test compiling
> >> the same C code with both a C and a C++ compiler. Both produce the same
> >> machine code.. Using C++ classes without all the more advanced stuff
> >> (like exception, RTTI...) shouldn't make too much a difference in the
> >> produced code. So I don't think this will be much of a problem.
> >> Next thing to consider is the possible abuse of C++'s features
> >> (exceptions, RTTI etc.). I don't think this is a problem either,
> >> especially in a small project like DragonFly, as there is only a handful
> >> of developers. The solution to this problem is as simple as just don't
> >> use those features.
> >> Now to the advantages of C++ that IMO would make sense:
> >> * Think about the kobj and the driver architecture. All this comes
> >> for free when using C++. No .m files anymore. Everything in
> >> one language.
> >> * Think about macro-driven datastructures (e.g. rbtree).
> >> They are IMHO quite unreadable and very hackish.
> >> C++ templates on the other hand are a lot cleaner
> >> (they are sometimes ugly as well :).
> >> Of course templates doesn't help when using internal
> >> datastructures like sys/queue.h.
> >> Maybe I spent too much time using OO languages (like Ruby or C++).
> >> What I am missing most in C is the ability to subclass structures,
> >> methods and templates. All this IMHO can improve expressability
> >> and code quality.
> > I can't pretend to know what this implies for DfBSD, but I think Linus has
> > addressed this before wrt Linux:
> > http://lwn.net/Articles/249460/
> > I don't know enough C++ to share his opinion, nor do I contribute to
> > any projects, but I think this might be some good background on the
> > matter.
> > Cheers,
> > Brett
> >> Regards,
> >> Michael
> >> http://lists.freebsd.org/pipermail/freebsd-arch/2007-October/006969.html
Louisiana Optical Network Initiative
+1.225.578.1920 aim: bz743
|
OPCFW_CODE
|
To a vast majority of web surfers, an opensource application simply means free software that they can integrate into their web sites and enjoy the benefits of. However, there are burning issues with using opensource web applications without knowing them, issues which may incur huge losses or leave you with lower returns.
Problems with most opensource applications
It's not always easy to get opensource software to work with other applications. Often, that standard is the office suite most often used, Microsoft Office, which isn't compatible with most opensource programs. Additionally, if your organization already has existing computers and a network, it might be better to have all your applications compatible than to use some opensource products in isolation.
Difficult to understand support. Enthusiastic and varied support was previously listed as one of the benefits of opensource software. However, it should be added that the support can sometimes be difficult to understand because it is frequently aimed at developers and not end users.
Open source is NOT plug and play. The loading and installation of the software can be a major hurdle for many users.
Difficult to use. Open source packages tend to be written by engineers for other engineers, and for many of them it is accepted that ordinary function will involve creation of configuration files, writing scripts, or actually editing the source code and recompiling.
Fewer features. Open software packages tend to have far fewer features and capabilities than commercial equivalents.
Intellectual Property Rights issues. If you buy an opensource product you have no assurance whatsoever that you are not buying intellectual property that has been stolen from its rightful owners, or has been created illegally by people who are violating a nondisclosure contract.
No Warranty. If you use opensource you are on your own. There is no single company backing the product.
Then the question arises: why use opensource software at all if it has so many disadvantages? Well, opensource web applications also have various benefits, provided they are used effectively.
The opensource Advantage
It's free to implement, and extensive documentation is available to aid developers.
Quick fixes are available for any bugs found, and personal issues can be dealt with. Try to get a small "feature" fixed in any major software if you are not Fortune 500.
Free updates and upgrades.
Encourages software re-use. Open source software development allows programmers to cooperate freely with other programmers across time and distance with a minimum of legal friction. As a result, opensource software development encourages software re-use. Rather than endlessly reinventing wheels, a programmer can just use someone else's code.
Can increase code quality and security. With closed source software, it's often difficult to evaluate the quality and security of the code. In addition, closed source software companies have an incentive to delay announcing security flaws or bugs in their product. Often this means that their customers don't learn of security flaws until weeks or months after the security exploit was known internally.
Decreases vendor lock-in. Businesses no longer have to be locked in to the whims of a sole-source vendor.
Every business has a unique need or desire to present itself on the internet -- rather than live with a clumsy interface provided by their opensource software. The opensource customization crew lives by the motto that there is no point in reinventing the wheel. We are enthusiastic about technology and how wonderfully it can help your business, but at the same time we focus on the fact that our customers should not pay more for solutions that are already available and can be readily deployed.
The true benefit of opensource can only be leveraged when an off-the-shelf opensource application can be taken and customized to your business needs. This ensures that while the soul of the opensource software is preserved, it works with your business like a 100% custom web application.
Contact us to learn how we add more value to Opensource Web applications.
|
OPCFW_CODE
|
/**
 * Helper function to start monitoring scroll events and convert them into a
 * PDF.js-friendly form, with scroll debouncing and scroll direction tracking.
*/
export default function watchScroll(viewAreaElement, callback) {
var debounceScroll = function debounceScroll(evt) {
if (rAF) {
return;
}
// schedule an invocation of scroll for next animation frame.
rAF = window.requestAnimationFrame(function viewAreaElementScrolled() {
rAF = null;
var currentY = viewAreaElement.scrollTop;
var lastY = state.lastY;
if (currentY !== lastY) {
state.down = currentY > lastY;
}
state.lastY = currentY;
callback(state);
});
};
var state = {
down: true,
lastY: viewAreaElement.scrollTop,
_eventHandler: debounceScroll
};
var rAF = null;
viewAreaElement.addEventListener('scroll', debounceScroll, true);
return state;
};
|
STACK_EDU
|
Google's proposal to truncate words in order to save memory
In a video uploaded to YouTube ("Google Developers Day US – Theorizing from Data", http://youtube.com/watch?v=nU8DcBF-qo4), Peter Norvig from Google presents, in one part of his talk (31:17-33:00), results from tests that aim to find the shortest length to which any word can be truncated while preserving its uniqueness and not confusing it with other words. The need to cut words comes from the need to save memory by ignoring the lexical form and keeping only the semantics (as much as possible); many times when you search with Google you see bolded words like "robots" and "robotical" while you have typed "robot" in the search query. This is useful when we want to decide whether two strings of characters are one and the same word in different forms, or completely different things.
Google's Research Director proposes cutting words down to a length of four letters.
Here I want to bring to your attention a passage written by people from Cambridge. Read it.
THE PAOMNNEHAL PWEOR OF THE HMUAN MNID
Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.
Let's apply the experience from this example to the question of the natural genesis of words and the form in which they are memorized in the mind. Psychologists say that we remember the beginning, the end, and the rest of the content, but without the letter order. If this is the way the human mind works, applying it in machines that operate with natural human words will not cause problems. Nevertheless, this has to remain true not only for English texts, but for any language that uses arrays of characters to represent words.
I think that in order to understand the nature of a phenomenon you should look analytically back at its history and find the circumstances that created (caused) it, its behavior, and its characteristics. Maybe it is wiser to apply the above algorithm to the first and last sounds, instead of the first and last letters, because of the verbal genesis of language, which precedes its written representation.
We have to believe that by reordering all the letters except the first and the last, we will not lose the identity (uniqueness) of the word and we will not deceive ourselves about its equality with another, different word.
Let's apply this Cambridge approach as a solution for saving memory when manipulating large amounts of text. I think the application will look like this:
Aoccdrnig // first, keep the first and the last letters in place
A ccdinor g // then sort the remaining letters into alphabetical order
A c2dinor g // encode repeated characters by keeping only one letter from each group and placing the count of identical letters next to it
The resulting array of characters, which may even contain some digits, may be hard for a human to read, but if the Cambridge study is right about how people memorize words in English, the above transformations will preserve the uniqueness of the words. (This can be used by a spell checker that generates suggestions with this algorithm whenever a word is written with the proper first and last letters.)
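A minimal Python sketch of the three-step transformation above (the function name `canonical_form` is my own invention, not from any existing library):

```python
def canonical_form(word: str) -> str:
    """Keep the first and last letters, sort the interior letters
    alphabetically, then run-length encode repeated letters."""
    if len(word) <= 2:
        return word
    interior = sorted(word[1:-1].lower())
    encoded = []
    i = 0
    while i < len(interior):
        j = i
        while j < len(interior) and interior[j] == interior[i]:
            j += 1  # advance past the run of identical letters
        count = j - i
        # keep one letter per group; append the count only when > 1
        encoded.append(interior[i] + (str(count) if count > 1 else ""))
        i = j
    return word[0] + "".join(encoded) + word[-1]

print(canonical_form("Aoccdrnig"))  # → Ac2dinorg
```

The output matches the worked example above: "Aoccdrnig" becomes "Ac2dinorg".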
Applying these ideas to international words, such as the names of geographical places, I offer one observation of my own that I find useful: international words keep at least their first two groups of consonants in every language they are translated into.
Google and Peter Norvig are completely right about data.
I share Google's view that by collecting input data we, as algorithm architects, understand the problem better than when we have no data. It is a very deep insight that collecting data can help you when you are making decisions.
"Mathematics was founded on the analogy between two apples and two pears, not by collecting two tons of apples. In this way Mathematics teaches us that data is not the only way to model a problem and build a solution that deals with every case of the problem reflected in the input data. Statistics is not the only general algorithm by which people build solutions. We all know that there is data which cannot be collected yet which is relevant (important) to the final results; in these cases the brain works by analogy, finding problems that are close in some aspect (crucial for the concrete problem) and that have solutions, or better still, solutions with proved (understood) semantics, and in this way finding solutions which, in the concrete situation, are equivalent and can be applied to the problem at hand. Data contains unsorted information about many problem-solving possibilities; it is important both to understand what the data says (analogy) and to collect it (statistics)." [part of AI-research page]
We have to recognize Google's deep understanding of statistical problems and of how and where to apply statistical methods. They are very good at this.
|
OPCFW_CODE
|
Leverage monitoring in Kyma runtime to better operate and troubleshoot your business extensions
With release 1.17, we have enabled access to the monitoring capability in Kyma via the Grafana UI. This will provide additional support to developers to troubleshoot any issues with their business extensions or event triggers.
NOTE: At the moment, Grafana provides Kyma runtime users only with read-only access. This means it is not possible to define and save custom dashboards or provide custom metrics from the applications.
To access the Grafana UI, navigate to Diagnostics –> Metrics. It uses the same authentication as the Kyma Console UI.
Kyma comes bundled with a lot of out-of-the-box dashboards. You can start accessing them right away to monitor and troubleshoot your extensions, microservices, functions, and Event triggers.
The best way to start is by exploring the available dashboards.
Let’s look at some of the available dashboards and see what metrics they offer.
In the Kyma / Functions dashboard, you can get a holistic view of how a function is behaving. These are some of the available basic metrics:
- Request duration or latency
- Resource usage
- Error rate
What about my microservices?
The best way to start is to check out the Istio / Service dashboard to access all the deployed microservices.
Simply select the microservice from the dropdown list.
Standard metrics are provided out of the box to the developer thanks to the Istio service mesh.
If you have set up a deployment with multiple replicas, you can have a look at the Kubernetes / Compute Resources / Workload dashboard to see how the various replicas are behaving resource-wise. Select the namespace and deployment you want to monitor.
Further on, you can drill down and inspect each and every Pod of a given ReplicaSet to get more details.
There are plenty of Kubernetes metrics available so that you can constantly keep an eye on your cluster.
In general, the Kyma runtime is managed and operated by SAP. Still, the curious folks can explore various dashboards on their own to see how their clusters behave.
One out of many available dashboards is Kubernetes / Compute Resources / Cluster.
Keep an eye on those events
To troubleshoot any event delivery- or latency-related issues, Kyma runtime provides a whole set of useful dashboards.
The most interesting one for developers is Kyma / Event Mesh / Broker-Trigger. It contains details about how events are delivered to your microservice or function. You can select a specific Namespace in which the event trigger has been configured.
To dig in further, you can refer to the Kyma / Event Mesh / Latency dashboard for any latency issues.
When you want to confirm if the events are coming from a particular connected system or not, you can check out the Kyma / Event Mesh / Delivery dashboard.
Let’s see some logs
Loki (the Kyma logging component) is also integrated into Grafana. Thanks to it, you can access the logs for your microservice or function directly from the Grafana UI.
It also provides nice search capabilities with autocompletion for various fields such as Namespaces, labels, container/function names, and so on.
Run your own queries
That’s not all!
That was just a quick overview, but there are many more dashboards shipped with Kyma runtime out of the box. They can help you to observe, analyze, and act on various Kyma components and Kubernetes resources. Enjoy!
|
OPCFW_CODE
|
Replication package for "Simulated Power Analyses for Observational Studies: An Application to the Affordable Care Act Medicaid Expansion"
# Replication package for "Simulated Power Analyses for Observational Studies: An Application to the Affordable Care Act Medicaid Expansion"
Bernard Black, Alex Hollingsworth, Leticia Nunes, and Kosali Simon
Journal of Public Economics
You can cite this replication package using zenodo, where an archival version of this repository is stored.
The code and data in this replication package will replicate all tables and figures from raw data using Stata, R, and a few unix shell commands.
The entire project can be replicated by opening the stata project file `power-analysis.stpr` and then by running the `scripts/0_run_all.do` do file. This file will also run the necessary R code. The paper can be rebuilt using the latex file `latex/manuscript.tex`.
Using nodes on Indiana University's supercomputer Carbonate (https://kb.iu.edu/d/aolp), the entire replication takes around three days. Each requested node had 16 GB of RAM and 4 cores from a 12-core Intel Xeon E5-2680 v3 CPU. If line 51 is set to 1 (`global carbonate 1`), then the power analysis will be set to run on Carbonate. If line 51 is set to zero (`global carbonate 0`), then the power analysis will be done in serial rather than as jobs submitted to Carbonate and run in parallel.
The majority of this time is spent on the many power simulations reported in Table 3. In `scripts/0_run_all.do`, the user can choose not to run the power analysis code by altering line 47 to be `global slow_code 0`. If this is done, the entire replication will take around 3 hours.
**Note**: The github version of this replication package only contains the code and output. The zenodo version of this replication package contains all publicly available raw data and data used in analysis (as well as the code).
The zenodo replication package is available here: https://doi.org/10.5281/zenodo.6653213
The github version (only code and output) is available here: https://github.com/hollina/power-analysis
## Data availability statement
**Note**: The mortality data are available from the Centers for Disease Control but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. More information about these data can be found here, https://www.cdc.gov/nchs/nvss/dvs_data_release.htm. Email Alex if you have any questions about accessing the data.
All other data are contained in the zenodo repository.
## Software Requirements
- Stata (code was last run with version 15.1 MP)
- the program `scripts/0_run_all.do` will install all stata dependencies locally if line 102 is set to `local install_stata_packages 1`
- All user-written stata programs used in this project can be found in `stata_packages` directory
- R 3.6
+ We use the package `renv` for this project
+ The `renv.lock` file has the version of each R package used in this project.
- Portions of the code use shell commands, which may require Unix (the analysis was done on a unix machine).
## Instructions to Replicators
- Edit line 35 of `scripts/0_run_all.do` to point to your R executable path
- e.g., `global r_path "/usr/local/bin/R"`
- Edit line 102 of `scripts/0_run_all.do` if you'd like to install stata code again rather than running code using `stata_packages` folder
- Edit line 41 of `scripts/0_run_all.do` if you want to build analytic data from raw data
- Edit line 47 of `scripts/0_run_all.do` if you want to run slow code (power analysis)
- Edit line 51 of `scripts/0_run_all.do` if you want to run code that submits power analysis as jobs to IU's carbonate (unlikely you'll want to select this!)
+ The shell program `scripts/3_power/3.13.pbs_scripts/create_pbs_scripts.sh` was used to create the files `scripts/3_power/3.13.pbs_scripts/power_*.txt`. This will need to be edited for your own machine. Each file `power_0.txt`-`power_73.txt` was submitted as a job to Indiana University's supercomputer Carbonate, requesting 16 GB of virtual memory and one node, and was run on Stata MP 15.1.
+ The shell program `scripts/3_power/3.13.pbs_scripts/submit_all_files.sh` was used to submit all of the jobs via line 236 of the file `scripts/0_run_all.do`
+ If you set line 51 to zero, `global carbonate 0`, then each power analysis code will run sequentially
- Compile `latex/manuscript.tex` to recreate paper.
The provided code reproduces all tables, figures, and results in the paper.
This code is based on AEA data editor's readme guidelines. (https://aeadataeditor.github.io/posts/2020-12-08-template-readme). Some content on this page was copied from [Hindawi](https://www.hindawi.com/research.data/#statement.templates). Other content was adapted from [Fort (2016)](https://doi.org/10.1093/restud/rdw057), Supplementary data, with the author's permission.
|
OPCFW_CODE
|
Localization vs Navigation
I am learning ROS for the first time, and I am confused about the difference between localization and navigation. As I understand it, amcl is the package used for localization, but the navigation stack is a separate package. I am having a hard time seeing the difference between the two, because it seems the nav_stack performs localization tasks itself, so I do not see the need for a separate localization package. Is it possible to run a robot using only the nav_stack with sensors and actuators, without any sort of localization?
Originally posted by Raisintoe on ROS Answers with karma: 51 on 2016-07-20
Post score: 0
Broadly speaking, the Navigation Stack of ROS involves the following:
Localization
Collision Avoidance
Trajectory Planning
Localization answers one question: "Where is the robot now?" or "Where am I?", keeping in mind that "here" is relative to some landmark.
Originally posted by cagatay with karma: 1850 on 2016-07-21
Post score: 2
Comment by gvdhoorn on 2016-07-21:
And then navigation is how do I get somewhere else (preferably without hitting things along the way).
Comment by cagatay on 2016-07-21:
also exploring and mapping the environment that the robot is operating in
Comment by gvdhoorn on 2016-07-21:
I guess this could be subjective, but I'm not sure navigation includes all that. I can navigate quite well without making a map at the same time, provided I already have one, which is certainly possible.
Comment by cagatay on 2016-07-21:
I am referring to nav_stack of ros
Comment by gvdhoorn on 2016-07-21:
Ah ok, I understood your comment to be about the concept of navigation in general.
Comment by cagatay on 2016-07-21:
I edited my response with regard to ROS; yeah, it may be subjective, but generally speaking the navigation concept can be extended to involve mapping and exploration
Comment by Raisintoe on 2016-07-22:
Cagatay, thank you for the response. So when talking about navigation, there are a number of packages I need to consider. What are they? Or what are the main parts that make up navigation? So far I have been reading about amcl for mapping, and the nav_stack. Also tf is helpful.
Comment by cagatay on 2016-07-25:
you can start with reading the tutorials here http://wiki.ros.org/navigation/Tutorials
|
STACK_EXCHANGE
|
Can a piece of duct tape bring down a plane today (Flight 603, 1996, Perú)?
According to Wikipedia, the reason for the crash was that:
adhesive tape had been accidentally left over some or all of the static ports (on the underside of the fuselage) after the aircraft was cleaned and polished
Can this still happen with modern aircraft?
Mentour Pilot has an accident reconstruction, based on the NTSB (or equivalent agency) findings, for Malaysian Airlines Flight 134, which took off with its pitot tubes covered. In it he gives a pretty nice explanation of the whole system for laymen: https://www.youtube.com/watch?v=f80WwpNuaxg (the titles and intros are a bit over the top, but the content is top notch). It is not precisely about static ports, but very related.
OP, one way to understand this: the duct tape was literally covering the EYES (!!!!!) of the robot that is the overall aircraft. The aircraft had utterly no clue WTF was going on; most of its senses had been blindfolded.
Yes, this can happen on all aircraft.
The static ports on the aircraft were covered with tape for cleaning, and then not removed. This resulted in contradictory flight data (mainly airspeed and altitude) being fed to the pilots, including that relayed from ATC. Similar actions in modern aircraft would have a similar effect. More modern aircraft have specially designed covers that are used instead of tape, making it less likely they will be left on.
In theory the presence of a measure of speed that did not rely on pressure sensors (such as INS or GPS) might provide some degree of protection, as might an altitude measure that did not rely on the plane's sensors, but it is not guaranteed even with those that the pilots could have figured out how to get reliable airspeed or altitude figures in the circumstances.
Though even the 757 in the accident had a radar altimeter, the crew didn't know which instrument to trust. I agree that even with GPS it would take some fast thinking on the part of the pilots to figure out the situation. Fortunately, modern maintenance procedures use bright-colored protective caps instead of pieces of tape :)
@jpa The maintenance procedures at the time required bright coloured tape. It just wasn't used. The maintenance worker responsible was found guilty of negligent homicide.
The risk has been mitigated, but the potential will always remain... consider the "Swiss Cheese Model".
If:
The cleaning crew working on the aircraft "bent the rules" and chose to use duct tape perhaps because the proper covers were not readily available, with the reasoning that the worker who applied it will remember to remove it before returning the aircraft to service.
There is a shift change before the work is completed, so the incoming crew is unaware of the duct tape and does not need to do any work in the affected area of the aircraft.
The aircraft is returned to service after nightfall and there is poor lighting where it is parked; the flight crew happens to be running late and so is in a rush to complete the pre-flight.
Then, there will be a repeat of the thing that should never happen again.
At each step, there is an opportunity to prevent or catch and correct an issue so it won't propagate, and only small things at each step can make the difference between non-event and disaster.
I'd perhaps make this a comment but it's too long.
This is a (hopefully) useful addition to the other answers.
This is an example of Anthony X's "Swiss Cheese Model".
A large proportion of major disasters require 3 to 5 "impossible" things to happen to allow them to occur. (Sometimes just bizarre, unusual or wholly unexpected).
A classic example is the Air NZ Flight TE901 loss of a DC-10 on Mt Erebus in Antarctica
(on November 28, 1979).
The original route WAS over the mountain.
A transposition error in data entry took all flights up McMurdo Sound
so this became the "known route".
On the day before the flight someone did a routine course recalculation and noted an insignificant error.
They entered the updated data - maybe moving the course a tiny fraction of a degree off where it "should have been".
But, this change overwrote the original transposition error and put the aircraft back over, and so into, the mountain.
There was a "whiteout" (this is Antarctica!)
And the captain assumed the course was low-level safe, as it always had been previously.
It wasn't.
|
STACK_EXCHANGE
|
Stream bufferSize
Hi, I'm using Ratchet to create a server socket that receives messages from a client.
The problem occurs when I try to send a message that's longer than 4096 bytes. The only way I could make it work was by manually changing the bufferSize in the Stream.php class, which I don't like. Is there a way to do that by calling some accessor method? I couldn't find one.
Just to give more insight, my server implements MessageComponentInterface and my onMessage is receiving only the first 4096 bytes of my message.
Thanks in advance.
bufferSize is a public variable on Stream you should just be able to change that.
I don't think this is the problem you're having though. TCP is streamed and has its own chunking. 4096 is just the soft limit more than likely that's not even being reached.
This test case shows Ratchet receiving and sending out WebSocket messages in chunks of 4MB. Internally, React is chunking that at 4096 bytes but it's being reassembled via the messaging protocol to the original 4MB frame.
I don't think I have access to Stream from where I stand.
here's the snippet of my code:

```php
$appServer = ...
$server = IoServer::factory($appServer, $port);
$appServer->setLoop($server->loop);
$server->run();
```
And in my onMessage, I just get the message with only 4096 bytes.
The $connection variable implements ConnectionInterface; I don't have access to the stream.
I see your test case, but I also see my message arriving partially. What's wrong?
Thanks in advance.
Does the connection drop on this check? I got the same on a POST request with multipart/form-data content.
React/Stream/Buffer line 120:

```php
if (0 === strlen($this->data)) {
    $this->loop->removeWriteStream($this->stream);
    $this->listening = false;
    $this->emit('full-drain', [$this]);
}
```

If I comment this out, the server receives the full content; if not, it's chunked at 8k (the stored image size is about 7k).
I seem to have the same problem and can't find a solution to it.
I have two reactphp processes running and I'm using a Socket to have them communicate with each other. I'm gzcompressing and base64_encoding any message transmitted between them. The problem is that any message of more than 4096 bytes is being cut into pieces and sent to the 'data' listener that way.
Part of my code:

```php
$connector->create('<IP_ADDRESS>', $socketPortNumber)->then(function (Stream $stream) use ($self, $tableId) {
    $self->outputInfo("Connected to table worker".$tableId, DEBUG);
    $stream->bufferSize = 32768;
    $stream->on('data', function ($data, $read) use ($self) {
        $self->handleTableWorkerOutput($data);
    });
});
```
Even by setting the bufferSize to a higher value, I can't seem to get bigger data in one piece. I could come up with a data split thingy and just reassemble on the other side, but I would just prefer to have this working as expected.
Any ideas on this?
@Ethorsen we were having the same problem and also tried to increase the bufferSize but it doesn't seem to make a difference. We solved it by concatenating the data packages before returning it to the calling function to handle. Not as clean as getting all the data in one package, but it gets the job done.
```php
$connector = new Connector($this->loop, $this->dns);
$response = "";
$connector->create($this->address, $this->port)->then(function ($stream) use (&$response) {
    $stream->on('data', function ($data) use (&$response) {
        if (!empty($data)) {
            $response .= $data;
        }
    });
});
$this->loop->run();
return $response;
```
Possibly fixed by reactphp/stream#20
Try with react/stream read_buffer branch.
@cboden Great that works, when I added that line in I was able to set the bufferSize higher and get the data in one package. Thanks.
Thx @dancingshell and @cboden. I did come up with my own split process where I would tag each part with a UID and a part # / total part. Controller would wait to get all parts of a message before sending it for processing.
But anyhow I will try cboden fix later on.
This test case shows Ratchet receiving and sending out WebSocket messages in chunks of 4MB.
Possibly fixed by reactphp/stream#20
I feel this ticket has been answered and can be closed :+1: If we have new details, update this ticket and we can look into this again :+1:
|
GITHUB_ARCHIVE
|
Solar panel I-V characteristics are highly non-linear; this results in a Power-Voltage plot featuring a maximum at a given Voltage Vmpp across the panel.
As you pointed out in your question, it happens that the IV curve changes over time according to the light irradiance and temperature, and Vmpp changes, too. That's the reason why methods to track Vmpp are sought: squeezing as much power as possible from the source, i.e., the panel.
Between your panel and the storage element (battery, supercapacitor) there is a harvesting circuit, based on a (switched) DC-DC converter topology (e.g., boost); MPPT techniques are implemented inside this circuit to keep the input voltage of the harvester (i.e., the output voltage of the panel) as close as possible to Vmpp.
Therefore, when you target MPPT, the focus is on optimal power transfer from the source to the harvester (which -in turn- will actually introduce some loss itself, yep!). As RoyC puts it, optimal battery charge is another story.
Maybe the schematic below will help: the photovoltaic panel is modelled as a current source in parallel with a diode (representing the PN junction); the goal of MPPT is to keep the voltage V as close as possible to Vmpp.
simulate this circuit – Schematic created using CircuitLab
For clarity, I have drawn one possible implementation of a Boost-based solar harvester.
The IC I put in the schematic is a Schmitt trigger comparator whose task is that of keeping the voltage at its non-inverting terminal as close as possible to Vref. One can set Vref = Vmpp in order to achieve our goal.
simulate this circuit
Now: how can we generate Vref = Vmpp?
Even in this case there are different possibilities: for example, additional timing circuitry can be designed to periodically disconnect the solar panel's load, so that a peak holder can 'capture' the panel's open-circuit voltage Voc. It can be seen that Vmpp is usually an approximately constant fraction of Voc, irrespective of the environmental conditions. Knowing the ratio Vmpp/Voc, a voltage divider can be used to obtain Vmpp from the stored value of Voc.
Considerations about the schematic above:
- this is just an example of implementation: it should be noted that an external control logic is not required to switch the MOSFET on and off; instead, the comparator output accomplishes this task, which is very useful in applications where power-draining microcontrollers cannot be afforded;
- the low-power comparator draws its supply from the harvester input terminal; since this has some fluctuation (mostly depending on the inductor value and on the comparator's time delay) an RC filter can be used to smooth it.
Other possible harvesting solutions include the use of microcontrollers implementing some sort of 'Perturb & Observe' algorithm: as shown in another answer, in this case the operating conditions are changed a little bit while monitoring the response of the input power.
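As a numeric illustration of the fractional open-circuit-voltage idea described earlier, here is a minimal sketch. The ratio value 0.76 is an illustrative assumption (roughly typical for silicon cells); the exact Vmpp/Voc ratio is panel-dependent and must be characterized for a real design:

```python
# Assumed ratio K = Vmpp / Voc; ~0.7-0.8 is common for silicon panels,
# but 0.76 here is an illustrative guess, not a datasheet value.
K_RATIO = 0.76

def vref_from_voc(voc: float, k: float = K_RATIO) -> float:
    """Estimate the maximum-power-point voltage from a sampled
    open-circuit voltage, as in the fractional-Voc method."""
    return k * voc

# If the peak holder captures Voc = 21.0 V while the load is
# periodically disconnected, the comparator reference Vref becomes:
print(round(vref_from_voc(21.0), 2))  # → 15.96
```

In hardware, the same multiplication by K is what the voltage divider on the stored Voc value performs.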
|
OPCFW_CODE
|
First, if you aren't familiar with the term "recursion" please allow me to allow you a moment to google it. Go ahead, and when Google asks you if you meant "recursion" and you click on that, you'll be in the right frame of mind for this post..........Ohhhhhhmmmmmmm.
Article Contributed by Scott Yates CEO of Blogmutt
I saw Memento with a woman I had met recently, and she really liked it. One thing led to another, and now she's my wife, and my wife picked out the dog that we now own.
Why do I mention these facts, and how does it relate to crowdsourcing and blogging?
In the style of Inception, we'll have to go a few layers deeper to figure that out.
The top layer is easy enough to understand, especially for readers of Daily Crowdsource.
The idea of a crowd of workers is now well established. We know that many people are simply better than one person for complicated tasks, or for tasks where a bunch of ideas can help improve all ideas and where work can be done in parallel to produce massive amounts of content.
I understood that well-enough when I started out to form a crowd-based blog-writing company. I'd seen how well the crowd worked in companies like Trada, 99designs, and GoSpotCheck. Lots of regular businesses understood that power, too, and were happy to switch away from the old "expert" model.
I'd started two companies before, and I knew I wanted to do something that used crowdsourcing for my third company; but I started off with some really bad ideas. I kept reading blogs and more blogs and more blogs trying to figure it out, and that's when I went into the next level of this dream-like crowdsourcing world.
In that level, with my partner Wade Green, we figured out that the one thing most businesses needed, but couldn't find, was all-original blog content. We decided we would unleash a crowd onto the problem, and created a structured community for bloggers and businesses.
That was an idea, a simple idea that was so powerful that we were able to go to the next level of the dream.
In that level, we had to come up with a name. We played around with every kind of name involving 'blog' we could think of. Until, one day, when I was talking about our naming challenge with my wife, she came up with a name after looking at our lovable mutt (of unknown lineage).
Now we must ask...Would we have a name as good as "Blogmutt" if it hadn't been for Christopher Nolan's film Memento?
Going one level deeper, we come to the notion of this very blog post. This is a blog post about crowdsourcing and blog-writing, and the post is written by a crowdsourced blog-writing service especially for a blog that's all about crowdsourcing. (Say that ten times fast...)
This is as deep as the levels go. This is the point where the top is spinning and we don't know if it's going to stop.
In other words, allow me to allow you a moment to go and google "recursion." I think if you do you'll be in the right frame of mind for this post.
Now it's time to get back to my own company blog where sometimes we blog about our bloggers who blog for other blogs, sometimes even about blogging.
Let me know if you've enjoyed this blogpost. Ohhhhhhmmmm......
In all seriousness also, let me know if you see any recursion in your own work.
Amazon uses microtasking to clean its own data... Have we seen a crowdfunded crowdfunding platform yet? Daily Crowdsource crowdfunded its own crowdfunding report, so why not? Meanwhile, Stephen Shapiro will help you innovate innovation at Crowdopolis.
Why not you? What do you recursively do? Let us know in the comments.
|
OPCFW_CODE
|
A few years ago, I struggled to improve my land cover classification results. It was a time when I was obsessed with trying the latest shiny or advanced machine learning models to improve land cover classification. I invested a lot of time and effort in learning how to run new and advanced machine learning models in Python or R. Yet, time and time again, the results were disappointing, despite my reading about successful use cases in the primary remote sensing and GIS journals. Yes, I was caught up in the machine learning hype and the promise that it would transform geospatial data analysis.
So what was I doing wrong? Why was I struggling? Eventually, I realized that my mindset was fixated on applying the shiny and advanced machine learning models in fashion those days. So I took a step back. I started performing land cover classification using a simple machine learning model such as k-nearest neighbors (KNN). And guess what? The land cover classification results were similar to, or even better than, those of the advanced machine learning models. From that moment, I realized that there was more to it than just tuning or optimizing advanced machine learning models. While it sounds like common sense, some researchers and students focus only on the new machine or deep learning models.
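As a rough sketch of that back-to-basics baseline (not code from the book; the spectral bands, class names, and values below are all synthetic stand-ins), a KNN classifier can be fit on pixel features in a few lines:

```python
# Minimal sketch: a KNN baseline for land cover classification.
# The "pixels" are synthetic: 4 spectral bands, 3 invented classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

n_per_class = 200
centers = np.array([[0.1, 0.2, 0.1, 0.05],   # "water"
                    [0.3, 0.6, 0.4, 0.30],   # "forest"
                    [0.5, 0.4, 0.5, 0.45]])  # "urban"
X = np.vstack([c + 0.05 * rng.standard_normal((n_per_class, 4)) for c in centers])
y = np.repeat([0, 1, 2], n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```

Starting from a simple baseline like this makes it easy to tell whether a fancier model actually adds anything.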
Why Explainable Machine Learning?
I had to go back to the basics. I started by understanding the classification problem at hand and the geography of the study area. I looked at land cover class definitions at the appropriate scale of analysis and selected relevant reference data, satellite imagery, and ancillary data. I began to focus on compiling reliable training sample data. Finally, I could build simple and advanced machine learning models, tune model parameters, perform cross-validation, and evaluate the models. I also got interested in land cover classification uncertainty and errors, leading to explainable machine learning. I focused on understanding a machine learning model’s underlying mechanisms, biases, and errors.
Recently, researchers have developed methods to address the complexity and explainability of machine learning models. But what is explainable machine learning? How does it help to improve land cover mapping results? Explainable machine learning refers to how analysts can explain the underlying mechanism of a machine learning model. That is, explainable machine learning models allow us (humans) to explain what the model learned and how it made predictions (post-hoc). For example, explainable machine learning can give us insight into the algorithms used for land cover classification. This insight can enable us to understand how the algorithm works and assigns classes to pixels.
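For instance, one simple post-hoc check is permutation importance: shuffle one input band at a time and see how much the model's score drops. This sketch uses made-up data in which only the first band carries the class signal (the data and band layout are invented for illustration):

```python
# Sketch of a post-hoc explanation: permutation importance shows which
# input bands a fitted model actually relies on.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 300
# Band 0 carries the class signal; band 1 is pure noise.
y = rng.integers(0, 2, n)
X = np.column_stack([y + 0.1 * rng.standard_normal(n),  # informative band
                     rng.standard_normal(n)])           # noise band

model = KNeighborsClassifier(n_neighbors=5).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("importances:", result.importances_mean)  # band 0 should dominate
```

A band with near-zero importance is a candidate for dropping, while an unexpectedly dominant band can reveal leakage or bias in the reference data.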
Data-centric Explainable Machine Learning
Over the years, many remote sensing researchers and analysts have focused on improving or tuning the machine learning algorithms. In most cases, remote sensing researchers applied the machine learning algorithms in developed countries with quality reference data sets (e.g., field data, aerial photographs, and high-resolution satellite imagery). However, there is a lack of reference data sets in most developing countries. Therefore, remote sensing researchers and analysts often need to spend more time and effort creating quality reference (training and validation) data and looking for appropriate satellite imagery and ancillary data. This data-centric explainable machine learning approach will systematically improve reference data quality to enhance machine learning models' accuracy, generalizability, and accountability.
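One concrete data-centric step (a sketch with invented data and an invented 10% corruption rate, not a method taken from the book) is to screen the reference labels themselves: flag training samples whose label disagrees with a cross-validated prediction, and send those back for manual re-inspection:

```python
# Sketch of a data-centric step: flag training samples whose label disagrees
# with a cross-validated prediction, as candidates for manual review.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n = 400
y_true = rng.integers(0, 2, n)
X = y_true[:, None] + 0.1 * rng.standard_normal((n, 3))

# Corrupt 10% of the labels to simulate noisy reference data.
y_noisy = y_true.copy()
flip = rng.choice(n, size=n // 10, replace=False)
y_noisy[flip] = 1 - y_noisy[flip]

pred = cross_val_predict(KNeighborsClassifier(5), X, y_noisy, cv=5)
suspects = np.flatnonzero(pred != y_noisy)
print(f"{len(suspects)} samples flagged for review")
```

Most of the flagged samples here are exactly the corrupted ones, which is the point: cleaning labels this way often buys more accuracy than swapping in a fancier model.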
Land cover classification remains challenging. However, more very high-resolution images are available to create quality reference data. In addition, efforts to improve transparency and accountability in the machine learning model are becoming an important research topic. I have published a book on ‘Data-centric Explainable Machine Learning for Land Cover Classification: A Practical Guide in R.’ The book is for those interested in improving land cover classification using a data-centric explainable machine learning approach. If you want to learn more about the book, please check the information at:
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using ImgDiff.Constants;
using ImgDiff.Exceptions;
using ImgDiff.Models;
using ImgDiff.Monads;
namespace ImgDiff.Builders
{
public class ComparisonOptionsBuilder
{
Option<SearchOption> searchOption = new None<SearchOption>();
Option<Strictness> colorStrictness = new None<Strictness>();
Option<double> biasPercent = new None<double>();
public ComparisonOptionsBuilder WithSearchOption(SearchOption option)
{
searchOption = new Some<SearchOption>(option);
return this;
}
public ComparisonOptionsBuilder AsStrictAs(Strictness strictness)
{
colorStrictness = new Some<Strictness>(strictness);
return this;
}
public ComparisonOptionsBuilder ShouldSucceedWithPercentage(double bias)
{
biasPercent = new Some<double>(bias);
return this;
}
/// <summary>
/// Builds the <see cref="ComparisonOptions"/> object, using the values that
/// the builder has been given.
/// </summary>
/// <returns>The newly created options.</returns>
/// <exception cref="IncompleteComparisonOptionsException"></exception>
public ComparisonOptions Build()
{
return BuildInternal();
}
/// <summary>
/// Here, we build the <see cref="ComparisonOptions"/> object, which will hold all of the options
/// we want the program to run with. We first take the values from the parsed flags,
/// falling back to any currently defined options, and finally to the defaults.
/// </summary>
/// <param name="flags">The name/value store of flags that was returned by the command parser.</param>
/// <param name="currentOptions">Any options already in effect, used when no flags are given.</param>
/// <exception cref="BiasOutOfBoundsException">Thrown if the parsed bias factor
/// is greater than 100, or less than 0.</exception>
public ComparisonOptions BuildFromFlags(Dictionary<string, string> flags, Option<ComparisonOptions> currentOptions)
{
// If there are no flags, use the default values. Or, if we have some
// options already defined, use those.
if (!flags.Any())
{
if (currentOptions.IsSome)
{
searchOption = new Some<SearchOption>(currentOptions.Value.DirectorySearchOption);
colorStrictness = new Some<Strictness>(currentOptions.Value.ColorStrictness);
biasPercent = new Some<double>(currentOptions.Value.BiasPercent);
}
return BuildInternal();
}
// Use TryGetValue so a missing flag doesn't throw a KeyNotFoundException.
if (flags.TryGetValue(CommandFlagProperties.SearchOptionFlag.Name, out var directoryLevel)
&& !string.IsNullOrEmpty(directoryLevel))
{
if (!Enum.TryParse<SearchOption>(directoryLevel, out var searchIn))
searchIn = directoryLevel.Contains("top")
? SearchOption.TopDirectoryOnly
: SearchOption.AllDirectories;
searchOption = new Some<SearchOption>(searchIn);
}
if (flags.TryGetValue(CommandFlagProperties.StrictnessFlag.Name, out var strictness)
&& !string.IsNullOrEmpty(strictness))
colorStrictness = new Some<Strictness>(Enum.Parse<Strictness>(strictness));
if (flags.TryGetValue(CommandFlagProperties.BiasFactorFlag.Name, out var biasFactor)
&& !string.IsNullOrEmpty(biasFactor))
biasPercent = new Some<double>(Convert.ToDouble(biasFactor));
return BuildInternal();
}
/// <summary>
/// Centralize the value checks in this internal method. We don't want to expose
/// this to outside methods, since they don't need to know, or care, about how
/// these values are validated; they only need to know that they are.
/// </summary>
/// <returns></returns>
/// <exception cref="IncompleteComparisonOptionsException"></exception>
ComparisonOptions BuildInternal()
{
if (searchOption.IsNone)
searchOption = new Some<SearchOption>(SearchOption.TopDirectoryOnly);
if (colorStrictness.IsNone)
colorStrictness = new Some<Strictness>(Strictness.Fuzzy);
if (biasPercent.IsNone)
biasPercent = new Some<double>(1/(double)8);
return new ComparisonOptions(
searchOption.Value,
colorStrictness.Value,
biasPercent.Value);
}
}
}
|
STACK_EDU
|
JS autocomplete doesn't work for object literal shorthands
TS Template added by @mjbvz
TypeScript Version: 4.1.0-dev.20201026
Search Terms
suggest / suggestion
intellisense
object short hand
VSCode Version: 1.51.0-insider (7a3bdf4ee9588755d447aa1c3b5db4a123fc11a9)
OS Version: Arch Linux
Steps to Reproduce:
Suppose there are variables in the current context (be it local or global)
Create an object literal and try to use a variable in a shorthand style
No semantic autocompletion is offered, and textual autocomplete is only offered after 3 characters
GIF file to better demonstrate:
Does this issue occur when all extensions are disabled?: Yes
If you don't mind, I can work with this issue. FYI @andrewbranch
Sounds good @orange4glace, let me know if you get stuck. You have about a month and a half before we need to get a fix in.
@andrewbranch Thanks for the reply! I have one question for this.
I'm planning to change the code like this and with this code,
all tests pass except completionListAtIdentifierDefinitionLocations_destructuring, which is:
/// <reference path='fourslash.ts' />
//// var [x/*variable1*/
//// var [x, y/*variable2*/
//// var [./*variable3*/
//// var [x, ...z/*variable4*/
//// var {x/*variable5*/
//// var {x, y/*variable6*/
//// function func1({ a/*parameter1*/
//// function func2({ a, b/*parameter2*/
(screenshot: global autocompletion being offered)
verify.completions({ marker: test.markers(), exact: undefined });
As I understand it, the brace { on the last line after func2( is recognized as an ObjectLiteral with type any rather than an ObjectBinding, since the line produces the following errors:
'}' expected.ts(1005)
41259.ts(15, 16): The parser expected to find a '}' to match the '{' token here.
41259.ts(13, 16): The parser expected to find a '}' to match the '{' token here.
I'm curious:
Is the test case a valid one even though there's a syntax error? (Original PR for the testcase: https://github.com/microsoft/TypeScript/pull/1767)
If so, fixing the code to satisfy both the original issue and the testcase seems pretty hard, I think.
I'm very new to this community, and any help would be much appreciated!
We definitely want test cases that have syntax errors because they reflect half-written code as though a user is actively editing, and we want completions to work in those cases. That said, any particular assertion in a test like this might become wrong and need to be updated. What completion is showing up there?
With my modified version, it autocompletes global symbols at:
....
var {x, y/*variable6*/
function func1({ a/*parameter1*/
function func2({ a, b/*parameter2*/
(screenshot: global symbols offered in the completion list)
whereas with TypeScript v4.0.5, it autocompletes nothing.
Currently, the compiler parses the testcase code like this:
/* Statement #0 Starts (VariableStatement) */
var [x
/* Statement #1 Starts (VariableStatement) */
var [x, y
/* Statement #2 Starts (VariableStatement) */
var [.
/* Statement #3 Starts (VariableStatement) */
var [x, ...z
/* Statement #4 Starts (VariableStatement) */
var {x
var {x, y
function func1/* Statement #5 Starts (ExpressionStatement) */({ a
function func2({ a, b
The opening brace { in func2({ a, b is recognized as an ObjectLiteralExpression even though it looks like (or should be) an ObjectBindingPattern, so it ends up autocompleting with global symbols.
I think that’s probably alright. Can you go ahead and open a draft PR so we can make a playground build?
Sure! I've just opened it. :)
|
GITHUB_ARCHIVE
|
November 16, 2016 by LanceShi
List of IDE for Salesforce Programming (Part 1)
- Two other IDEs – HaoIDE and Metaforce.
- Add some explanations about pros/cons on each IDE, especially the ones which I have used.
The IDEs I have used
1. Mavensmate with Sublime
Around 80% of Salesforce developers are using this IDE. It can almost be considered the IDE for Salesforce coding. There is a reason behind it.
Sublime Text is a very effective text editor, especially for large projects. When you want to jump to a certain file, you can simply type ctrl+P and type in a subset of your file name. And it doesn't have to be the full file name. For example, if your file name is TestPageController, you can simply type in tpc and Sublime Text will be smart enough to locate your file. Sublime Text also has a vibrant community providing a lot of useful plugins for developers. If you can't find one, you can also write it yourself.
Aside from that, Mavensmate is lightweight and fast. The test execution support is very nice. The design, which separates the Mavensmate core from the editor plugin code, makes it easier to develop for other editors. It is also an open-source tool, so if you find anything confusing, you can always refer to the source code.
- Sublime is a very effective editor for project programming
- Nice test execution support
- Open source and easy to develop with other text editors
- The configuration of the plugin is intuitive.
- Most developers are using it so you can get best support from the community.
- Mavensmate is supposed to work behind a corporate firewall. However, it is hard to configure for that. Personally, I had issues with it, and that is the primary reason I am not using it.
- You need to run the Mavensmate tool first on your computer and then open your Sublime Text editor. I am kind of tired of doing that every time.
- The response time on open Mavensmate issues is long.
2. HaoIde with Sublime
HaoIde is another Sublime Text plugin for Salesforce development. It is my personal choice, and many Chinese programmers prefer this tool to Mavensmate. It works perfectly well behind a corporate firewall. It supports Lightning component development very well.
- Sublime is a very effective editor for project programming
- Works perfectly behind a corporate firewall
- Lightning component development support
- GitHub open-issue response time is very fast. Hao is doing a great job on this.
- You don’t need to run a separate app in order to use HaoIde.
- The support for running unit tests is not as good as Mavensmate's
- The configuration is less intuitive. You will need to configure the settings file manually. However, I don't find that very hard as a developer.
3. Eclipse with force.com IDE
Force.com IDE tool for Eclipse is supported by Salesforce, and it is now an open-source project. It is effective for project development. I wouldn't say it is bad in any sense. However, Eclipse is a heavyweight IDE environment. And since you won't be able to set breakpoints in your Apex code anyway, does it really make sense to use such a heavy IDE? Well, personally I don't think so.
The good part is that we have an Apex PMD plugin for Eclipse, which does static analysis on our code.
- Eclipse is good for project development.
- Good support for almost everything
- Static code analysis tool available
- Eclipse is a heavy IDE, which makes it slower than Sublime Text. And we still can't use the heavy-IDE features like setting breakpoints
- Test execution support is not as good as Mavensmate's
4. Mavensmate for Atom
Mavensmate also has an Atom version. The github page is here.
So Mavensmate is pretty much the same thing; its pros and cons we have already explained. The Atom editor is very similar to Sublime. You can google Atom vs Sublime Text and find a lot of articles about it. Aside from those, here are my two cents on the pros and cons of the Atom editor compared to Sublime Text:
- The UI is super coooooooooool!
- When writing your own packages for Atom, you have full control over the editor. You don’t have such privilege when writing plugins for Sublime Text
- It is free of charge, while Sublime Text costs 70 dollars (although, like WinRAR, the unregistered version keeps working)
- It is awful when behind a corporate firewall.
- The searching is way slower than Sublime Text's, especially if the stored location is not on your local machine.
- It doesn’t support secondary search – which means search in search results.
5. Developer Console
This is the IDE I have used most in sfdcinpractice.com. It is perfect for handling small tasks. I still use it if the destination org’s code size is not too large. It now supports global search and auto-indentation which makes it a very handy tool as well.
However, I still won’t recommend it for large projects and serious coding for the following reasons:
- The files can only be saved in the cloud. That means if your file has compile errors, you won't be able to save your work. There are times you want to save your code even if there are errors: if you need to answer an urgent phone call; if there is a fire; if a friend is coming over for coffee. You will want to save the code and fix the error later. However, you can't do that with Developer Console.
- If Developer Console gets stuck saving your file – and that happens from time to time – you can potentially lose your changes.
- The global search is definitely less effective than in IDEs based on local storage.
- There is no project file view as a sidebar.
This is the first part of the IDEs for Salesforce development. All the IDEs covered in this post are free. I will cover more IDEs in the next post.
|
OPCFW_CODE
|
Controller, Router, Route Architecture change
Architecture change
This proposal concerns the controller and router resources. I will also be defining two new resources that do not currently exist.
Current architecture
The following section describes my understanding of the intended architecture. I will not be using an "I believe" structure when stating observations about the architecture; I will just use a declarative style.
Atlas is a CLI tool that generates a TypeScript server following a version of the MVC architecture. Atlas encourages a functional programming style.
Model
The model is defined within the database folder, which defines the different components of the model. The models folder defines MongoDB schemas and models that determine how documents are stored in the database. Associated interface types are defined in the interfaces folder. The interactions folder defines modules around the MongoDB models that are used by the controller.
Controller
The controller is defined in the controllers folder. This is where all business logic is located. In similar Express applications, this folder could also be called services. This layer is not coupled to a specific database. It has an aggregation association to the interactions package in the model and a dependency on the interfaces defined in the model.
View
The view is defined in the routes folder. It uses functions in the controller to define routes and handles errors in a controlled way (i.e., maps them to a certain HTTP code).
Suggestions
New folder structure
tests/
routes/
src/
routes/
dummy_namespace/
router.ts
dummy_routes.ts
nested_namespace/
router.ts
dummy_routes.ts
controllers/
dummy_controller.ts
database/
models/
dummy_model.ts
interactions/
dummy_interaction.ts
interfaces/
dummy_model_interface.ts
middleware/
dummy_middleware.ts
app.ts
server.ts
.gitignore
README.md
package.json
tsconfig.json
tslint.json
Adding interfaces folder to database
The idea here is to completely decouple your database from your controllers. Instead of your controllers having direct access to the Document type, the user defines types that expose the attributes the controller is allowed to use. Given that you generate the boilerplate, adding this type of interface would not come at that big of a time cost to the CLI user.
As a result of this, the atlas generate model command would now need to generate the MongoDB model, the associated interactions module, and the associated interface.
Modifying the router
... tired of writing at this point, will reformulate later. The basic idea is...
Router creation
Allow the creation of namespaces in your routes. Examples:
atlas generate router new would create a new file named new.ts and connect it to the namespace's router (in this case the top-level router).
atlas generate router namespace/new would create a new file named new.ts and connect it to the namespace's router.
Routes creation
Allow route creation with a lot of flags.
atlas generate route new/route --verb "get" --errors customError=411[,...] [...] would create a new route on the new router. It would also take care of connecting the route to the server (and generating the swagger yaml).
Notes
This needs more thought; I just wanted to give you an elevator pitch.
Notes
In a lot of other projects, the controller folder has the role that the routes folder has in an atlas project.
A few concerns:
Why move interfaces to database? Interfaces aren't used only within the context of databases, e.g., users can get data from different sources and would need to type it through interfaces.
Why are app and server outside of the src directory? src is where all the source code for the project should be.
Why do we want to generate namespaces for routers?
|
GITHUB_ARCHIVE
|
1.4. Install the SPEX Python interface
The SPEX python interface depends on a quite specific python environment. Using conda, you can create this environment pretty quickly. Please make sure you have (mini)conda installed and initialized on your machine before you continue.
1.4.1. Binary installation
The easiest option to install the Python interface for SPEX is by downloading the SPEX binary version for your platform. See and follow the instructions in How to install SPEX.
Once the standard SPEX environment is set up using the spexdist.(c)sh files, it is time to build the conda environment. This can be done using the spex.yml file provided in the SPEX directory.
Please enter the following command:
(base) unix:~/SPEX> conda env create -f $SPEX90/python/spex.yml
This creates the spex conda environment for you. This step should only be done once.
If you installed SPEX through the Mac package installer, then spex.yml is not located in a writeable directory. Please copy spex.yml first to your home directory (cp /opt/spex/python/spex.yml ~/) and then create the conda environment like: conda env create -f ~/spex.yml.
If successful, you can from now on activate the environment with the command:
(base) unix:~/SPEX> conda activate spex
And from now on, you can use the python interface in SPEX:
(spex) unix:~/SPEX> python
Python 3.5.6 |Anaconda, Inc.| (default, Aug 26 2018, 21:41:56)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from pyspex.spex import Session
>>> s=Session()
 Welcome user to SPEX version 3.06.01

 NEW in this version of SPEX:
 22-07-2020 Bugfix: neij gives line emission while abundance is zero.
 18-08-2020 Changed reference density from electron density to hydrogen density.
 04-09-2020 Added pyroxene back to the amol model.
 23-10-2020 Update of H I collision strengths (IMPORTANT)
 24-11-2020 Added ascdump and par_show functions for the Python interface.

 Currently using SPEXACT version 2.07.00. Type `help var calc` for details.
>>>
If you need other python packages for your project, you can install them through conda in the spex environment.
1.4.2. Compile Python interface for SPEX
If you do not want to use conda, you can also compile the Python interface for SPEX. This can be done either through a compile script called compile.py or manually with CMake by adding the option -DPYTHON=3, where the value depends on the python version. The use of Python 3 is strongly recommended. See Compile SPEX from source.
1.4.3. Integration into iPython and Jupyter Notebook
Next to the dependencies installed in the conda environment spex, the Python interface also depends on the SPEX environment variables set with the spexdist.(c)sh files. So before running iPython or Jupyter notebook, it is necessary to source the SPEX environment (also make sure the conda spex environment is activated):
(spex) unix:~> source /opt/spex/spexdist.sh
(replace /opt/spex in this path with the location of SPEX on your machine, or $SPEX90 if this variable is already set).
It may be that iPython and Jupyter are not yet installed in your conda environment. If not, please install them using the command:
(spex) unix:~> conda install jupyter_client ipykernel
With iPython and Jupyter notebook, it can be helpful to install the spex conda environment explicitly for your project:
(spex) unix:~> ipython kernel install --user --name=spex
In your Jupyter notebook, you can now select spex from the drop-down menu if you are creating a new project. The spex conda environment should now be linked to your Jupyter project.
|
OPCFW_CODE
|
Utility method that allows passing a different comparison function in Java for use in contains
I have a class that represents each row in my database table. So it has some data members that represent the primary keys in the table while others are non-key columns. Say I have a list of rows (listA) that I fetched at some point in time; then after some duration I fetch some more rows (listB), which can contain ones that were fetched earlier but have some updated data in the non-key columns. I want to merge listA and listB, retaining the updated values (i.e. the ones in listB). This comparison requires checking only the key columns. So are there any utility contains / removeAll methods (maybe in Guava or Apache Commons, etc.) that take in a custom comparison function?
For all other purposes I want to check all the data members in the class for equality.
Please note the usual removeAll will use the equals and hashCode that have been overridden to compare every data member in the class; I want to use a different comparator that compares only the data members that are part of the primary key for the contains.
You may want to check my revised answer
If you are using Java 8, you can use streams and predicates to determine whether an element is already in your list. Depending on the size of your results, this might be slow, as you have to scan your list for each new element. Maybe another option is to keep, in parallel to your list of results, a Map of your primary keys to their corresponding elements.
You can use standard-Java Set operations. If your "row" class does not implement equals and hashCode using only the primary key information, you can use some wrapper or utility class that does, for example:
class Row {
Object pk, a, b;
public boolean equals(Object other) {
// Checks for other's type, etc.
Row otherRow = (Row) other;
return pk.equals(otherRow.pk) && a.equals(otherRow.a) && b.equals(otherRow.b);
}
public int hashCode() {
// Keep consistent with equals: hash the same fields it compares.
return 31 * (31 * pk.hashCode() + a.hashCode()) + b.hashCode();
}
}
class PkInfo {
final Row row;
PkInfo(Row row) {
this.row = row;
}
Row getRow() {
return row;
}
public boolean equals(Object other) {
// Checks for other's type, etc.
PkInfo otherPkInfo = (PkInfo) other;
return row.pk.equals(otherPkInfo.row.pk);
}
public int hashCode() {
return row.pk.hashCode();
}
}
So, whenever you want to merge result sets, you can do something like:
// Your result sets
Collection<Row> listA, listB;
// Your primary-key-based sets
Set<PkInfo> pkiA = new HashSet<PkInfo>();
for (Row r : listA)
pkiA.add(new PkInfo(r));
Set<PkInfo> pkiB = new HashSet<PkInfo>();
for (Row r : listB)
pkiB.add(new PkInfo(r));
// Remove from A the rows present in B
pkiA.removeAll(pkiB);
// Modify listA
listA = new ...();
for (PkInfo pki : pkiA)
listA.add(pki.getRow());
// Add all from listB
listA.addAll(listB);
You can place the code above in a utility method that you call whenever you want to update a list with fresh data from another.
removeAll will use the equals and hashcode that compares every data member in the class, I want to use a different comparator that compares only the data members that are part of the primary key
You cannot use a Set with other equals / hashCode semantics than the ones you created it with, you have to do some kind of "translation" to use PK-only semantics. Check my revised answer.
In your class you just need to @Override the public boolean equals(Object obj) method. That is the method used by List when you are testing contains(Object).
|
STACK_EXCHANGE
|
Why do my tests start failing when I add simple validation?
Here is my controller test:
test "should get create" do
sign_in(FactoryGirl.create(:user, admin: true))
assert_difference('Event.count') do
post :create, FactoryGirl.build(:event)
end
assert_not_nil assigns(:event)
assert_response :success
end
and when I add the simplest validation to events.rb
class Event < ActiveRecord::Base
attr_accessible :city, :date, :name, :state, :street
has_many :periods
validates :name, presence: true
end
I get:
1) Failure:
test_should_get_create(EventsControllerTest) [/Users/noahc/Dropbox/mavens/test/functional/events_controller_test.rb:37]:
"Event.count" didn't change by 1.
<2> expected but was
<1>.
But, then I look at events_factory.rb
factory :event do
name 'First Event'
street '123 street'
city 'Chicago'
state 'IL'
date Date.today
end
And there doesn't seem to be an issue with name being required.
update:
When I make my test:
test "should get create" do
sign_in(FactoryGirl.create(:user, admin: true))
assert_not_nil assigns(:event)
assert_response :success
end
I get:
1) Failure:
test_should_get_create(EventsControllerTest) [/Users/noahc/Dropbox/mavens/test/functional/events_controller_test.rb:38]:
<nil> expected to not be nil.
When I remove that line, and leave in the assert_response :success it passes.
update 2:
def create
@event = Event.new(params[:event])
@event.save
end
Try removing the assert_difference() code for now, and just confirming that your create page is returning a success response. Maybe something else on that page is failing.
@girasquid, I've added updates.
Thanks. What does the create action look like?
You need to add the line post :create, FactoryGirl.build(:event) back to your test - you only wanted to remove the assert_difference wrapper. Right now it's not actually sending the post request.
removing the assert, but keeping everything else allows it to pass, no errors.
What about changing this:
post :create, FactoryGirl.build(:event)
to:
post :create, event: FactoryGirl.attributes_for(:event)
Explanation:
post expects a hash with the attributes of the record you're creating. FactoryGirl.build(:event) creates a new unsaved instance of the model (event), which is not what you want. Since you had no validations on your model, this was somehow getting by and being ignored, so in fact the factory was having no influence on the newly-created event (which I assume was being created with blank attributes).
attributes_for, in contrast, returns the attributes of the factory as a hash, so:
attributes_for(:event) #=> { name: 'First Event', street: '123 street', ... }
which is exactly what you want. When you pass this to post, it assigns the attributes from the hash to params, which are then used to create a new event in the line: Event.new(params[:event]).
|
STACK_EXCHANGE
|
02-12-2013 02:10 PM
I have just set up the norton apps on my two devices: Samsung Galaxy S3 and Asus Nexus 7. I might have done it all wrong though! The end result is that while I am logged in and apparently set up on both devices, I cannot access them both from the website dashboard. If I use Firefox, I was previously able to access only the Samsung S3, and was able to send a scream, but now it tells me I have no devices, and if I use Chrome, I cannot access either and instead get this error message:
We're sorry, but we're having a problem accessing the page you are trying to view.
Error ID: 28fd404c-d042-47fc-a4c0-a04c89b32c9e
I bought the Nexus 7 with Norton Tablet Security from PC World in the UK. I downloaded the software from the website and installed it on both the phone and the tablet, registered with the supplied code and all seemed well. On both devices I was pointed to upgrade to the latest versions. It seemed like I might have done this several times, what with the various add ons, and I got a bit confused, but I now have the following installed on the Nexus 7: Norton Anti-Theft and Norton Mobile, and on the Samsung Galaxy S3: Norton Mobile only.
In both cases I have logged in to the apps. In both cases the apps say the anti theft is secure, and the anti malware is secure etc. Security Lock is enabled.
Have I done something wrong? Am I using the wrong apps for my devices. Should I have used two separate codes to register rather than one?
Can you help please, as so far this seems like a big waste of money!
FYI my computer is a mac running 10.6.8.
02-13-2013 01:38 AM
Welcome to the Norton Community Forums.
I am not entirely clear about your set up so please forgive me if I ask some basic questions.
You appear to be trying to use Norton Anti-Theft and Norton Mobile Security (NMS). I am not aware of any reason for trying to use both at the same time and am concerned that trying to might lead to confusion.
Can you please advise what versions of NMS you currently have on your devices? This should hopefully look something like V.188.8.131.522 and can be found under menu/about.
I assume you have logged on to mobilesecurity.norton.com. When you do, what does it show you there about devices?
I look forward to hearing back from you.
03-12-2013 11:22 AM
Sorry to have taken so long to reply.
I have since had to have both devices replaced, so currently I have nothing installed on either.
Can you advise what I should have installed on my Asus Nexus 7 tablet and my Samsung Galaxy S3? I have purchased Norton Tablet Security. Will I need to purchase anything else?
03-13-2013 04:28 AM
I am a little unsure of how best to answer your question, as the upgrade process you went through before means that I am not sure of the starting position. However, if you have purchased two licences for Norton Tablet Security (NTS), which has now been superseded by Norton Mobile Security (NMS), then you should be entitled to full use of NMS for up to a year from the purchase date. So the only issue is how best to get you up and running.... ;-)
What we do not want is a copy of NTS and a copy of NMS both installed at the same time, so first check your apps list to make sure that neither is on the devices. Assuming they are not, I would suggest that we try the following approach:
On just one of your devices (if it works for one it should work for the other, but let us wait and see), go to Google Play and download and install NMS (NOT, repeat NOT, NTS). Then, when you are encouraged to purchase/register the product, click on the "I have already purchased this product" (or words to that effect) option. This should take you through to the mobilesecurity.norton.com website and activate NMS on the device.
If it works, try the same on your second device. If it does not, come back and tell us exactly where you hit problems and what they were (screenshots may be useful).
Good luck and do let us know how you get on. ;-)
03-29-2013 01:28 PM
Hi and thanks.
I followed your advice and seem to be getting there.
It may have been a conflict with having an outdated piece of Norton software. I suggest they remove that software to prevent this.
I now have the latest version of NMS on both my phone and tablet. From the desktop I am able to Scream and locate both, but I cannot run 'Sneak Peek' on either. This makes me worried that I would not be able to run 'lock device' and 'wipe' in either case.
Do you know what might be wrong with Sneak Peek, and how to test the other features?
I'd also prefer it if my subscription could begin from the day I actually get the software working - so far I have lost 40 days of the subscription.
03-29-2013 02:44 PM
First, your last point: as it was PC World who sold you what I would consider to be inappropriate software, you might like to take this up with them. Norton are giving you a free upgrade, so I think you are getting a fairly good deal from them already. And I am not a Norton employee, just an unpaid volunteer.
Not sure about the lack of Sneak Peek. There have been issues with this in the past, given that some devices have rear-facing cameras, some have front-facing, and some have both. Hopefully someone with more experience of the Samsung or Nexus will be able to offer an informed comment.
As for testing the wipe command..... well if you really want to, but I have always left it to trust... ;-)
I do know someone who did try it and it worked!
The lock command is not, however, a big deal. The website gives you the unlock code, so just lock your device using the website and then use the code to unlock it. Hopefully that will give you enough confidence not to feel it necessary to test the wipe command.
Does that help?
|
OPCFW_CODE
|
France visa refusal and Appeal
As mentioned in my previous question, my France long-stay visa was refused in December 2022 (the reason given for refusal was "risk to public order/public security/health"). My immigration consultant filed an appeal to the CRRV commission in February 2023, but we did not receive any response for two months, which typically means a refusal of the appeal.
We lodged a fast-track appeal to the administrative court, but it has been rejected and the court will process it as a standard appeal. However, the standard appeal process may take 8-12 months, or even more; my UK visa will expire in October 2023 and my UK employer would now like to apply for my skilled work visa in May/June. In addition, we have written a letter to the French ministry to check whether they have any record of me in the Schengen Information System (SIS II) - still no response.
My French immigration consultant gave me the idea to apply for a France tourist visa. If it gets approved, then I can write the UK visa application that my refusal was due to "French regulations regarding foreign workers and not particularly to me". If it gets refused again, then denial would be instructive and we’ll know where we stand. My concern is if my France tourist visa gets refused, which I think is very likely due to my long stay visa refusal reason, then I'll have to mention both refusals in the UK visa application.
Please suggest what I should do, as I only have a few months to resolve the French visa refusal before I apply for a UK skilled work visa. It is worth mentioning that I no longer need a France long-stay visa, as the French employer withdrew the job and I now have a job in the UK.
A refusal from France doesn’t automatically harm an application to the UK. If I were you I’d apply for the UK visa and explain the circumstances regarding the French refusal. UK Immigration can likely check such things out with the French authorities.
@Traveller thank you for your comment. I understand a French refusal does not automatically harm a UK visa application, but the French refusal reason is itself not a good one. If it had been any other refusal reason, I would not have bothered to appeal against the decision. The irony is that I do not have any criminal record, not even a traffic violation.
Why does your immigration consultant think that a refusal for "risk to public order/public security/health" can be explained as due to "French regulations regarding foreign workers and not particularly to me"? Have you considered asking a British immigration consultant for advice, given that it is actually the UK visa application you’re concerned about?
@Traveller we have no further details about the refusal, just a tick against this reason on the refusal letter, where there are 11 other reasons. My French immigration consultant had no idea why I was refused on this basis, and he is an experienced person; he tried to reach different ministries to find out if they have anything about me in SIS II, but got no response. I checked with one UK immigration lawyer and he said not to mention many details (especially not the reason for refusal) and to state that the refusal was because the visa officer wasn't satisfied, but I think this may not satisfy UKVI.
"My French immigration consultant had no idea why I have been refused on this basis...." Speeding tickets? Unpaid parking fines or bus/train fines? A loud argument where the police were called - though nobody was arrested? Nothing rings a bell? You might be surprised. I've known of people who were refused a UK visa because they didn't pay their TV license. It happens.
@ouflak thanks for your comment, but nothing rings a bell, and after the refusal I checked my criminal record through a police character certificate: there is no criminal record in the UK or in my home country. I think after Brexit the UK does not share information with French/EU authorities.
Please don't post the same question again. If you feel your original question was not adequately answered, perhaps you can try to clarify it. See also What should I do if no one answers my question?
@tripleee thank you for your comment. Actually my situation keeps changing as the case progresses, and to ask a question about my latest situation and upcoming step I have to refer to my previous question and summarize it. But please do let me know if you think my question is a repost, because I find this platform very useful. Thank you
My French immigration consultant gave me the idea to apply for a France tourist visa. If it gets approved, then I can write the UK visa application that my refusal was due to "French regulations regarding foreign workers and not particularly to me". If it gets refused again, then denial would be instructive and we’ll know where we stand. My concern is if my France tourist visa gets refused, which I think is very likely due to my long stay visa refusal reason, then I'll have to mention both refusals in the UK visa application.
This is a very puzzling suggestion and a very valid concern. Even if you get the short-stay visa, you still have to disclose the earlier long-stay visa refusal and will have very little hard evidence that the decision wasn't about you. And there is a high risk it would be refused again, which I wouldn't expect to be very instructive (mostly they have to check a box and can pick the exact same reason without providing any justification).
None of this seems consistent with the original "risk to public order/public security/health" reason for the refusal.
Please suggest what I should do, as I only have a few months to resolve the French visa refusal before I apply for a UK skilled work visa. It is worth mentioning that I no longer need a France long-stay visa, as the French employer withdrew the job and I now have a job in the UK.
There may not be any easy solution besides hoping the French refusal doesn't override your record in the UK. As far as I can tell, the best you can do is to let the French appeals run their course and get help from a local consultant on the British side of things.
I know very little about British practices in this respect, but I assume a single refusal in another country shouldn't lead to an automatic refusal for someone who is already present in the country and otherwise in good standing? It may also be possible to explain your situation in more detail and point out that appeals are running but could not be processed in time (I expect a consultant or solicitor could advise you on that).
I don't want to give you any false hopes, however. The risk that the French decision weighs negatively is still there. If you really cannot think of any way you could have come to the attention of the French authorities, it's also not inconceivable that this is a case of mistaken identity, maybe confusion with someone having a similar name, and unfortunately those can be very difficult to clear up.
Thank you for your answer. Yes, I am going to discuss with a barrister who is specialized in British immigration. But meanwhile my efforts are on the French side as well, trying to get anything positive for me, like whether they can share any record about me in the Schengen Information System (SIS II). I am also requesting their visa department to mediate outside the court, as I don't need the visa anymore.
Through this platform I am trying to find anything else I should do, as I want to leave no stone unturned.
@Stark I wouldn't assume there is anything in the SIS.
More generally it seems to me that you are already doing everything you could be doing (recours gracieux, recours contentieux first in front of the commission and then the courts, information request regarding any SIS entry) and you already know a lot more about this very specific issue than anybody you can hope to find here. You should still push and might in fact prevail but there is very little you can do to speed things up and match the timeline of your British visa renewal.
Should I request a SAR (detail) from the UK Home Office to see if the UK has anything about me? I got my police character certificate from the UK after the refusal and it is clear. Although my French immigration consultant says that France only checks data within EU countries and not with the UK, as per the EU PNR directive I am not sure whether the UK and EU share data with each other.
|
STACK_EXCHANGE
|
How to center justify text in UITextView?
I have Arabic (right-to-left) text in a textView. When I set myTextView.textAlignment = .justified, the last line of the text goes to the left side. How can I set center-justified alignment for the textView, or how can I move the last line to the center?
just make it .center
i know exactly what you're looking for cause i've been there before; you can only make it .center
can't use center and justify together??
no you can't, otherwise developers would be allowed to use both .right and .left
Interesting question. I would try to count the lines in the textView and cut the last one, then I would try to put the cut text in a new label below the textView ... ;)
after asking how I would do this I added this:
print(self.testLabel.frame.width)
self.testLabel.text = "Hello World"
self.testLabel.sizeToFit()
print(self.testLabel.frame.width)
This is a starting point you can work with. Create a label and set the width to zero. Insert your text and call sizeToFit(). Now you are able to get the length of the label. If it is too long for your view, it will have more lines... (The divider is your length - not tested, maybe an offset is needed too)
The idea now is to keep taking the last word and putting it at the beginning of a temp string, for as long as the line count is the same as it was at the beginning of the calculation and there is more than one line.
When the loop finishes, you have your substring for the last line.
I was interested and started with this one:
@IBOutlet weak var textBox: UITextView!
@IBOutlet weak var testLabel: UILabel!

override func viewWillAppear(_ animated: Bool) {
    print(self.testLabel.frame.width)
    self.testLabel.text = self.textBox.text
    self.testLabel.sizeToFit()
    print("total \(self.testLabel.frame.width)")
    let basicAmount = Int(self.testLabel.frame.width / self.textBox.frame.width) + 1
    print("lines \(basicAmount)")
    var lastLine: String = ""
    while basicAmount == Int(self.testLabel.frame.width / self.textBox.frame.width) + 1 {
        var arr = self.textBox.text.components(separatedBy: " ")
        lastLine = "\(arr.last) \(lastLine)"
        arr.removeLast()
        self.textBox.text = arr.joined(separator: " ")
        self.testLabel.text = self.textBox.text
        self.testLabel.sizeToFit()
        print(lastLine)
    }
}
the interesting output is:
total 1499.5
lines 7
Optional("civiuda.")
Of course you should spend more time on the calculation because of the free space at the end of a line...
use this to count the lines (at the end is swift3): https://stackoverflow.com/questions/5837348/counting-the-number-of-lines-in-a-uitextview-lines-wrapped-by-frame-size
that just returns the last word, not the last line :(
you’re wrong.. lastLine = "\(arr.last) \(lastLine)" extends the string (adds a new word at the beginning)
use this
self.textView.textAlignment = .justified // set justified text alignment
self.textView.makeTextWritingDirectionRightToLeft(true) // if you want to set the writing direction right to left
self.textView.makeTextWritingDirectionLeftToRight(true) // if you want to set the writing direction left to right
|
STACK_EXCHANGE
|
Source code control is an indispensable tool for any developer. In this post, we will see how to configure Bitbucket with Visual Studio.
In today’s times, it is difficult to imagine working without source code control even in personal projects, and practically impossible in team projects.
There is a wide variety of alternatives for source code control, ranging from Visual Studio Team Foundation Server (self-hosted) and Visual Studio Team Services (cloud), the most common in Microsoft environments, to the well-known GitHub, widely used in open-source projects.
In its free version, Team Services allows an unlimited number of private repositories for teams of up to 5 people, while the free version of GitHub only allows public repositories.
In this post, we will look at another widely known alternative, Bitbucket, an online service from the company Atlassian. Like Team Services, the free version allows unlimited private repositories and teams of up to 5 members.
In certain situations it is preferable to use Bitbucket over Team Services; for example, when a team member uses an unsupported IDE, or when the team simply prefers the way Bitbucket works.
In any case, using Bitbucket with Visual Studio as source code control is very simple and the integration is almost perfect, as we will see below.
Install Bitbucket with Visual Studio
The latest versions of Visual Studio include built-in support for Git, so Visual Studio is basically compatible with Bitbucket out of the box. We only need to install an extension that simplifies the process.
So, we search for and install the Visual Studio Bitbucket Extensions extension.
Once installed, we go to the Team Explorer tool window, expand the Bitbucket Extensions section, and click on login.
We enter our Bitbucket login details.
That’s it, we have now configured Visual Studio with Bitbucket. Now we will see how to work with it.
Use Bitbucket with Visual Studio
Let’s try using Bitbucket. We open any existing project, or create one for testing. Right-click on it and choose “Add Solution to Source Control”.
Next, we choose “Sync”.
Since this is the first time we are synchronizing the project, it will ask us to create a repository. The name should contain only lowercase letters, no spaces.
A local Git repository will be created, and it will be synchronized with the newly created remote repository in BitBucket.
Testing Bitbucket with Visual Studio
Let’s test using Bitbucket. For example, we write a comment in any file of our project.
We right-click on the modified file, and select “Commit”.
We add a comment indicating the description of the Commit.
The commit is created in the local Git repository. Now, to synchronize it with the remote Bitbucket repository, we click on “Sync”.
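Behind the scenes, Visual Studio's "Commit" and "Sync" buttons drive ordinary Git commands. The following sketch shows a plausible command-line equivalent, using a local bare repository as a stand-in for the remote Bitbucket repository (the paths, the file name Program.cs, and the commit message are illustrative, not taken from the steps above):

```shell
set -e
tmp=$(mktemp -d)

# A local bare repository plays the role of the remote Bitbucket repository.
git init --bare -q "$tmp/remote.git"

# The working copy, as created by "Add Solution to Source Control".
git init -q "$tmp/work"
cd "$tmp/work"
git remote add origin "$tmp/remote.git"

# The edit made in the IDE (illustrative file and content).
echo "// a test comment" > Program.cs

# "Commit": record the change in the local Git repository.
git add Program.cs
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Add a test comment"

# "Sync": exchange commits with the remote; here only the push direction
# matters because the brand-new remote has nothing to pull.
git push -q origin HEAD

# The current branch now exists on the remote.
git ls-remote --heads origin
```

A real "Sync" also fetches and merges incoming commits (git pull) before pushing, which is how everyone's changes end up shared across the team.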
Now, in each function, we will have all the changes uploaded by all the authors of the project.
By clicking on this header, we can see all the changes made by the users.
Of course, we have all the usual functions in Git repositories (retrieve any version, compare versions, branch, merge).
That’s it. Using Bitbucket with Visual Studio is a good alternative to GitHub or even Team Services as source code control, and we can use it in our own projects or in small teams.
|
OPCFW_CODE
|
Sorry, I know this is terribly basic.
I have a Dell laptop with a corrupted hard disk. I am trying to use testdisk with Ubuntu 12.10 to try to recover at least the recovery partition, if not the entire disk.
I succeeded in recovering the very small [dell-diagnostic] (or something like that) partition. To celebrate my success, I booted the computer off that partition. Now I've inserted my Ubuntu boot CD and the thumb drive on which I put testdisk and restarted the PC in Ubuntu.
Alas, I was unable to get testdisk to work again. So I deleted the directory into which I'd extracted it and re-extracted it. I then renamed the directory into which it had been extracted "tdisk" (without the quotes, obviously). I used the name tdisk in case the problem was from having both the executable file and directory named the same. (I had the problem described below several times in a row, and started from scratch each time - that is, by deleting the files I'd extracted, re-unpacking the archive and then renaming the directory, whose default name was much longer and included the version number.)
Then I typed
(pwd now reporting /media/SANDISK/tdisk for the rest of my terminal session both to save on typing and make sure any errors weren't from leaving out the path)
sudo apt-get install testdisk
Unfortunately, it came back with
E: unable to locate package testdisk
So of course sudo testdisk resulted in an error message.
sudo ls -l testdisk *
shows me that the file is indeed in the current folder
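One plausible explanation for `sudo testdisk` failing even though ls shows the file: on Linux the shell does not search the current directory for commands, so an executable sitting in the working directory has to be invoked with an explicit path such as ./testdisk (apt-get, by contrast, installs the package into a system directory that is on $PATH). A minimal demonstration, with a placeholder script standing in for the real binary:

```shell
set -e
demo=$(mktemp -d)
cd "$demo"

# Create a stand-in executable named "testdisk" in the current directory.
printf '#!/bin/sh\necho "placeholder testdisk ran"\n' > testdisk
chmod +x testdisk

# A bare "testdisk" would only be found if this directory were on $PATH;
# the explicit relative path always works.
./testdisk
```

(If the thumb drive is FAT-formatted, it may also have been mounted without execute permission, in which case even ./testdisk fails with "Permission denied" - another reason the apt-get route could behave differently.)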
I am pretty sure I did it this way the first time when it worked.
So I need to know:
(1) What did I do wrong and how should I be doing this?
(2) Trial and error the first time showed me that 'sudo apt-get install testdisk' is a necessary step to be able to run testdisk. The only part of this command that I understand is the 'sudo' part. A *brief* explanation would be greatly appreciated -- I've had about enough of man pages today to last a lifetime <g>
(3) Note that testdisk is on a thumb drive, as I didn't want to write this to the hard disk, possibly resulting in a lower chance of recovery -- my Ubuntu boot device is a DVD. If I restart the computer after successfully using testdisk, how can I use it again without having to rerun sudo apt-get install testdisk? Not that this is all that hard, but I imagine that the testdisk executable persists if I understood what I was doing.
(4) As a result of reading the instructions and luck, I was able to get back that small [dell-diagnostic] partition. What I'm really interested in is the restore partition. I would like to save that on an external drive, put a new drive in this machine, and then use the saved restore partition data to reinstall Windows and whatever else came with the machine. Is it likely that, if I succeed in recovering the restore partition, I can use it to make a fresh installation on the new drive? I don't necessarily feel compelled to put a restore partition on the new drive, as the user has never bothered to try this route or create restore disks. (I also have a two-month-old Windows 8 laptop here to repair that she dropped while it was running. I can't even make that one go into setup, give me the option to boot off an external drive, or use its built-in optical drive. The current laptop in question is less than a year old.)
I would like to apologize again for the simple question that's been answered 10,000 times already. This is very humbling because I am a Windows and OS X expert; before that I was a DOS expert. That came after CP/M. My unix/linux skills have always been weak at best. I'd really appreciate a good resource for learning basic unix/linux command-line stuff that goes beyond cd, mkdir, rm and chmod, which along with pico and nn are all I know how to do in this environment. Reading man pages doesn't work well for me, and I think a real dead-tree or Kindle book would be the best way for me to learn in an orderly fashion. (Over 20 years ago, when I subscribed to my current mail and website provider, which also provides access to all the common unix shells via ssh, I inadvertently started a months-long flame war by asking if I should learn emacs or vi. When I later asked for a book recommendation, folks still remembered my emacs vs. vi question and the flame war started again, so I never got a good recommendation. So I am still using pico and am generally helpless at a unix command line.)
Finally, I feel compelled to admit that this isn't my PC. A client is paying me to do this; -- obviously I'm not charging by the hour. Otherwise I wouldn't bemoan the loss of Office and Norton.
Thanks for the help and apologies for the unrelated ranting above!
|
OPCFW_CODE
|
Animal Studies Bibliography
Lynch, Michael E. 1988. Sacrifice and the transformation of the animal body into a scientific object: Laboratory culture and ritual practice in the neurosciences. Social Studies of Science 18: 265- 289.
[Note: this analysis applies only to the neurosciences, where the animal's death is a required part of the experiment.] There are two basic kinds of knowledge at work in animal laboratories. First, there is a commonsense knowledge that recognizes the animals as living creatures (the natural animal) with subjectivity, emotions, etc. that the researcher must interact with. This knowledge is not scientifically proven or tested and is not reported in scientific studies. The commonsense view is the perspective on animals that most people take, and is what makes it difficult for most people to understand how scientists can engage in animal research. Our (and scientists') interactions with the naturalistic animal involve anthropomorphism and emotional involvement. Second, there is the scientific knowledge of the animal as specimen or piece of data (the analytic animal). The commonsense knowledge is subjugated in the lab and in science, but it is essential to the production of good scientific findings. Understanding how to interact with animals helps scientists put the animals at ease during procedures, and less struggle means better specimens. The analytic animal is thus a rendering or transformation (269) of the naturalistic animal. In the process of killing and cutting up the (naturalistic) animal, it is grouped with other specimens and becomes a set of data points and numbers to be analyzed. Scientists speak of good animals and bad animals in both senses--good animals are both docile as naturalistic animals and are clear, usable specimens as analytic animals. The process by which the naturalistic animal is made into the analytic animal (a cultural object) includes processes of breeding for particular characteristics. The major moment in the process, however, is the sacrifice.
In usual social scientific use, sacrifice denotes an act for linking the profane and the sacred, and we can see this as parallel to scientific animal sacrifice transforming the naturalistic animal to the analytic one. Further, sacrifice victims are often specially bred/raised for that purpose, and their remains are used in ceremonial rites. Similarly, lab animals are bred and cared for, for their future role as specimens, and their various parts are separated and used for specific purposes. There are three main areas of parallel between this lab transformation and the traditional use of ritual. First, preparing the victim must in both cases involve a specific series of events to make it acceptable for use--if something goes wrong during this part of the procedure (e.g. not enough anesthetic, or the animal dies at the wrong time), the animal cannot be transformed successfully (i.e. it will not work as a specimen). The correct procedure requires adherence to the correct temporal and spatial requirements. Second, destroying the victim in order to transfer it from the naturalistic state to the abstracted and purified set of theoretical relations (279) must be done with minimal shock to the animal in order to keep bodily functions as normal as possible. If this part of the process fails, the animal will not achieve the analytic state. Third, the victim is constituted with meaning. Lab workers identify with it as a source of information about ourselves (hence their usefulness in experiments), and they see it as a type of responsibility to create the analytic animal, including a rite of passage to have new workers get into the work by starting out doing the sacrifices. The term sacrifice in this context is a naturally occurring metaphor, and the meanings described are only partially recognized by the workers. In sum, the analytic animal may be created as a product of the naturalistic animal, but the process may also fail.
The analytic animal is most importantly linked to the naturalistic one because creation of analytic animals would rarely be possible without the commonsense knowledge that allows workers to effectively interact with the animals to ensure good procedures. Although this commonsense knowledge is generally discounted or ignored, it is central to the production of science. One way for scientists to improve their public image might be to talk more about this commonsense-based practice, as well as to focus on their practice of sacrifice as a ritual. This might offer a sense of respect for life that the public does not now see among animal researchers.
|
OPCFW_CODE
|
What kinds of internship in a power plant would allow the intern access to the control room without requiring a STEM background?
In an early chapter in my book, I want to explain some of the inner workings of an experimental power plant to the reader. However, I also want to get a certain object (a smart coffee pot) in the control room of the power plant to get the plot going.
My thought is to have an intern carry the coffee pot to the control room while the 2 people who know how the plant works explain this to the intern, and by extension to the reader. However, I'm concerned that anyone who got an internship at a power plant would already have a good idea of how the plant works.
What internships in a power plant could reasonably exist that would allow an intern with no knowledge about how the power plant works to bring a coffee pot to the control room?
They could just be lazy. That is what interns are for in a lot of places.
They don't have to be in the control room to explain it; they could just as easily do it in the break room. The break room also gives you access to dry-erase boards and props, which will help the explanation. It is also a more conducive environment for such questions and explanations.
Does the coffee pot need to get into the control room? Or can it do its job from outside? The control room in any power plant is only for those with security clearance and that goes doubly for a nuclear plant. If you want to keep to some semblance of realism you will have to factor in all the security checks you will go through - any transmitting device wouldn't be allowed (though I fear this stumps your plot). You can either devise a detailed plan or skip this aspect of the realism all together (no harm in knowing when to suspend reality to get your story told).
This seems very plot-based. It isn't going to be useful to a large group of people. You could try and generalize the question a bit more, but it seems to be more of a writing question than a world building question. You might be able to get it to work at Writers.SE, I don't know. People might also be willing to talk it over with you in chat.
I fail to see how this is off-topic. Yes, it's relevant to the story, but we are not constructing a plot. We're providing information about what is realistic or what would exist in a world. Plus, reality-based questions are on topic. Voting to leave open.
@LioElbammalf The idea I'm tinkering with is that the coffee pot is used as a malware vector. Something like a NFC chip that informs the heater pad how hot the liquid has to be and can provide firmware updates to pots that aren't connected to the Internet. I'd like the coffee pot to be inside the control room so the main character can suddenly make the connection between the coffee pot and a cyberattack.
It’s highly unlikely that any such internship position would exist.
Power plants don’t skimp on security or safety. The employees that work at them are trained professionals who have often undergone background checks. This is generally not an environment suitable for interns (whether they’re educated in STEM fields or not) and the control room would be one of the most sensitive areas.
Consider, for instance, a nuclear power plant. According to this Nuclear Energy Institute document the control room is located in what is considered the “vital area”, which has the highest level of security at the plant. Given that amount of security, particularly considering the major government concern over breaches at nuclear facilities, an average intern would not be allowed anywhere near the control room. It also seems highly questionable that a coffee pot full of spillable hot liquid would be allowed into the control room at all — it would almost certainly be kept in a break area outside of the vital zone.
Non-nuclear power plants are unlikely to have security protocols that stringent, but a new “experimental” power plant, depending on how it’s actually generating power, may very well be at the higher end of the security spectrum.
If you’re choosing to ditch the realism then the internship position really doesn’t matter — interns are often put to menial tasks (though less commonly in engineering) like fetching coffee and doughnuts.
There's no mention of the power plant being a nuclear one in the question, is there?
@dot_Sp0T For some reason my brain jumped to nuclear when I read the question. Since this has continued to get up votes despite that bad assumption, I’ve edited it to be a bit more inclusive.
Even if a plant should be secure and has appropriate procedures to ensure it is secure, that doesn't necessarily mean the procedures are followed and it actually is secure. For example, in this report from a nuclear submarine there would be plenty of opportunities for something similar to what the OP wants - https://wikileaks.org/trident-safety/
Honestly, I can deal with the coffee pot being in something like a break room near the control room. I just hoped I could add a scene of someone realizing that the coffee pot is being used as an attack vector, followed by the person running to the coffee machine, yanking out the coffee pot and using it to hammer the coffee machine into pieces. Moving the coffee pot from the control room to the break room would remove the requirement for the intern to be in the control room. Also, you were kinda right with "nuclear": it's a fusion reactor testing an unorthodox particle generated by tortured cats.
How about the amazingly hot intern who is given a position for which (s)he has no qualifications except that (s)he is jawdroppingly good looking. A hoped-for sleeping-with is the motivation of the person in charge who grants internship, and this is the same motivation of the engineer who understands this person will probably not understand, but nevertheless is explaining things en route to the control room. The other engineer is of the wrong inclination (or no inclination) as regards a hoped-for sleeping-with the intern and sees exactly what is going on. This grumpy one can leaven the explanation with snarky bitter comments.
High SF is well served by the stock characters. It saves the writer a lot of time.
It all depends on the type of power plant. If you're talking about a nuclear plant, then Avernium is probably right. We guard those too heavily for that job to exist. Any intern in such an environment would have a lot of experience.
Other plants, however, may not be so difficult. I've taken a tour of my local power plant. The cube farm where the "normal" workers work and the offices where the "management" work are in the very same room as where the "operators" do their magic. Walking over to the operators there is a trivial task. (Heck, as tour guests, we even got to chat with the operators while they worked... though they had this strange tendency to keep looking at their screens while they talked to us)
I just realized that the type of power plant is never specified beyond "experimental". Not sure why my brain jumped to nuclear. Useful points in here.
Could it be, for example, "summer help?" Many businesses will hire college kids to do routine maintenance tasks like paint, move furniture, etc.
Or the ever-popular "boss' child" who has no particular qualifications other than nepotism. This person might work in any department, really.
And many companies send all new-hires on a plant tour, to explain what goes on, where different things are done, and how the business makes the money. My day-job is Information Technology. Therefore, I always learn things on those tours, since I have no training in the day-to-day work my employers perform.
A law firm intern is sent to control room of power plant to do interviews of employees in connection with an employment related lawsuit (maybe harassment, maybe worker's compensation), but the employees are not allowed to leave their posts, so the intern comes to them.
A law enforcement intern is sent to the control room of the power plant as part of a group of law enforcement officers responding to a fake call about an employee going crazy and attacking people.
A photography intern is sent as part of a PR campaign for the plant to take pictures of the control room.
A Congressman's intern is sent to do advance work for a speech or dog and pony show for the member of Congress.
Reason: They would need to get to know the place they're working at
Why would you need a reason to have a new employee/intern get an explanation of whatever they're going to be working with as a part of a team?
You already state that the technology they're going to have explained to them is of an experimental nature. Thus it is highly unlikely that they would already have a good idea of how it works (as you put it).
Why not the cleaner? Why does it have to be an intern? It could simply be that a well-meaning cleaner happens to be on somewhat friendly terms with someone who works on the theory behind it. They could see that this person is under stress, and offer a listening ear and some freshly brewed coffee.
You could even throw in an Einstein quote, given he was quite close to someone with a mental disability, though I forget its nature.
It is suggested that Einstein himself was dyslexic (due to late development in language areas in general) however it cannot be proven since the process of diagnosing dyslexia wasn't developed/was never applied to him.
The intern can be a software engineer tasked to write a software simulator or some sort of visualization. It's very common for software engineers not to have the science/operational background that the engineers/operators have; they usually learn the relevant information on the job.
This is also the case not only for interns, but for full fledged programmers with many years of experience as well.
It's very common and completely reasonable for the power engineer then to take the software engineer into the control room to show him the ropes.
True; I’ve had tours of places that would use the software, with some explanations of the work, without being part of those industries myself. I've even been sent to a hospital radiology department to figure out what’s wrong with the products. So, a software engineer working on the control panels or displays or whatnot might be in that room to look it over.
|
STACK_EXCHANGE
|
I well remember my introduction to big data. We were at a customer site, being taken on a tour of their facility, when we walked into the ‘sneakernet’. The sneakernet was a wonder to behold. Along the far wall was a gigantic diagram of the dataflow of their enterprise. Between us and the wall was an array of forty cubes, much like in many enterprise offices: until you noticed the Post-It Notes. Not just one or two Post-It notes; every cube had dozens of Post-Its arrayed on boards that had clearly been designed to hold them. The diagram on the wall was also covered in Post-Its. Even more interesting, about half a dozen people were literally walking from cube to cube, depositing Post-It notes and removing them. I was baffled until the sneakernet, literally considered to be the heart of the data factory, was explained to me.
Today when most people discuss ‘Big Data’ they tend to focus upon Volume. Something is a Big Data problem if it can be measured in Petabytes; everything else is not really Big Data. However, this large company that was entirely focused upon data did not have a major volume problem. They did have a few tens of terabytes and back in 2000 that was a bit of a Volume issue – but it wasn’t the major issue. The major issues they had were the other four Vs: Variety, Variability, Velocity and Veracity.
The company was collecting data from thousands of different places; theoretically it was all in the same format, although as one wry analyst observed, “we have several hundred different interpretations of our standard format.” Once you add variety to a data problem, the code complexity increases, the need for QA increases and the chances of requiring code modifications as part of the daily workflow increase.
Variety on its own would be bad; it is even worse when compounded with variability. A process that has been running flawlessly for years can suddenly break because a data source changes. This may be a deliberate change that can be scheduled, a deliberate change they mention after the fact, or they may just have screwed something up. Whatever the reason, variability can mean that a downstream process fails because of un-trapped variability upstream.
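One way to trap that variability at the boundary, sketched here in Python with an invented "standard format" (the field names are purely illustrative), is to validate every incoming record before it enters the dataflow, so a silent source change fails loudly upstream instead of breaking a process downstream:

```python
# Hypothetical layout of the company's "standard format".
EXPECTED_FIELDS = ["id", "name", "dob"]

def validate(record: dict) -> dict:
    """Reject records that no longer match the expected layout."""
    missing = [f for f in EXPECTED_FIELDS if f not in record]
    if missing:
        raise ValueError(f"source format changed; missing fields: {missing}")
    return record
```

A check like this doesn't remove the variability, but it moves the failure to the point of ingestion, where it is cheap to diagnose.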
Working with variety and variability is only painful; once you add velocity the pain is exacerbated. If dozens of files are turning up in any given hour and there are very tight SLAs for the data to be integrated and available, then all the foregoing has to happen and happen fast. In this situation the pain of finding an error downstream is compounded by the need to re-run some of the dataflow to fix the error while re-running as little as possible.
The pain goes from exacerbated to excruciating when you add in veracity: the absolute need for the data to be correct. In a typical ‘web log tracking’ or ‘analytics’ system the cost of an error here or there is not extreme. One has to fix it as soon as possible; but as soon as possible is soon enough. In a situation where the data at a detailed level is crucial to the financial or physical wellbeing of the individual, you simply have to fix any problem now.
The sneakernet was an expensive, if brutally effective, solution to this problem. If code had to be changed then individuals plotted out what processes had to be re-run, and the sneaker people took the flow orders encoded on Post-Its to the job executors that sat in their cubes. If data arrived, or had to be re-worked, or failed some QA test, then again the sneakers would spring into action. The process cost manpower; but it allowed the company to run.
So – what does any of this have to do with ECL? Sneakernet was the driving force behind one of my favorite ECL features: PERSIST. PERSIST was recently described by one of my colleagues as ‘incredibly simple’, which is true but that is a good thing; not bad. PERSIST is a qualifier that can be added to any attribute and it marks a watershed in the data process.
When an attribute with a PERSIST qualifier is executed, the result at that point is saved to disk along with a checksum of all the data that went into the attribute and of all the code used to produce the result. Then when the attribute is used again, the system first checks whether any of the inputs (code or data) have changed: if they have, the attribute is recomputed; if not, the stored value is used.
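As a rough illustration of that mechanism (a toy Python sketch, not the actual ECL/HPCC implementation; the cache location and checksum scheme are invented for the example), PERSIST behaves like disk-backed memoization keyed on both the code and the inputs:

```python
import hashlib, json, os, pickle, tempfile

# Invented cache location for this sketch.
CACHE_DIR = os.path.join(tempfile.gettempdir(), "persist_cache")

def persist(fn):
    """Disk-backed memoization keyed on the function's code AND its inputs.

    A toy model of ECL's PERSIST: if neither the code nor the data feeding
    an attribute have changed, the stored result is reused; otherwise it is
    recomputed and stored again.
    """
    os.makedirs(CACHE_DIR, exist_ok=True)

    def wrapper(*args):
        # The checksum covers the compiled code and the (JSON-serializable) inputs.
        material = fn.__code__.co_code.hex() + json.dumps(args)
        key = hashlib.sha256(material.encode()).hexdigest()
        path = os.path.join(CACHE_DIR, key)
        if os.path.exists(path):           # nothing changed: reuse stored value
            with open(path, "rb") as f:
                return pickle.load(f)
        result = fn(*args)                 # something changed: recompute and store
        with open(path, "wb") as f:
            pickle.dump(result, f)
        return result

    return wrapper

@persist
def clean_names(names):
    return sorted(n.strip().lower() for n in names)
```

The second call with the same inputs reads the stored result; editing the function body would change the checksum and force a recompute, which is the essence of the watershed behaviour described above.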
Some of our large processes, which contain hundreds of graphs, each of which contains the equivalent of dozens of map-reduces, will have dozens of persists within them. Whenever we want the result of the graph, the system checks all the PERSISTs and only recomputes the bare minimum of the graph required to get the result. Further, PERSISTed attributes can be shared between multiple graphs. Therefore if one output builds the PERSIST from one source, then when the next job needs the input from that source it will already exist.
In short, in our system the ECL Agent wears the sneakers; it interacts with our metadata systems and the compiled code. If your only data problem is Volume then this may be overkill. But if any or all of the other four Vs begin to bite - check out PERSIST.
|
OPCFW_CODE
|
Lazy Getters
The way graphql_flutter implements NormalizedCache would require denormalizing data on every build call, making allocations for everything under that type, even for fields that may not be relevant.
Take a list of friends and their respective chat histories: it would require denormalizing the chat history and allocating it even if I just asked for the friends' names.
I would suggest using lazy getters that use their public Map<String,dynamic> interface to interact with data.
That means type generation becomes "cost free" as the compiler would remove most of the indirection it generates and wouldn't require any allocation, dereferencing, etc.
The way I'm currently dealing with it is like this:
# addressAutosuggest: [AddressAutosuggestResponse!]!
addressAutosuggest {
locationName # String!
latitude # Float!
longitude # Float!
}
// generated
class AddressAutosuggestAddressAutosuggestResponse {
AddressAutosuggestAddressAutosuggestResponse(this._d);
final Map<String, dynamic> _d;
String get locationName => this._d["locationName"];
double get latitude => this._d["latitude"].toDouble();
double get longitude => this._d["longitude"].toDouble();
}
class AddressAutosuggest {
AddressAutosuggest(this._d);
final Map<String, dynamic> _d;
List<AddressAutosuggestAddressAutosuggestResponse> get addressAutosuggest =>
(this._d["addressAutosuggest"] as List)
.map<AddressAutosuggestAddressAutosuggestResponse>(
(dynamic addressAutosuggestResponse) =>
AddressAutosuggestAddressAutosuggestResponse(
addressAutosuggestResponse))
.toList();
}
Hello again @lucasavila00,
That makes sense. It's not a priority, though, given this would require redoing most of the work json_serializable does.
It makes more sense to have json_serializable (or an extension to it, or another generator like it), to support this kind of lazy read, as other projects could benefit from it, instead of recreating it here.
What do you think about requesting this behavior on json_serializable, or even starting another library to do it?
I don't think there is really a need to support more than this specific use case.
Graphql-flutter may change their implementation details and then I wouldn't see a reason to maintain this bigger-scoped project.
Graphql-flutter is being used in production and works fine and I don't see why Artemis couldn't support it first class.
@comigor As I believe you like Apollo's API design and graphql-flutter tries to imitate it as closely as possible, there isn't a conflict here.
Personally I wouldn't be interested in maintaining another package just so Artemis works for me, but I would be interested in helping Artemis work with my current setup.
Is there a reason not to support graphql_flutter first class?
Sure, but first-class support is different from coupling everything in one library to make another one work. Lazy fields shouldn't be Artemis' responsibility, but rather something it could get from json_serializable out of the box. Also, keep in mind the goal of Artemis is to be agnostic of environment (it should work with pure Dart, without `graphql_flutter`).
This lazy feature is clearly some enhancement of the way json_serializable generated classes deal with data and, as I said, other projects could benefit from this laziness. That's why it seems better to have it as an option of json_serializable or as another library/generator.
If we implement this behavior on Artemis, we'd need to basically stop using json_serializable and write the custom class with lazy fields, is that so? Or do you have another way of doing this in mind?
In my mind I'd put the lazy getters behind a flag and almost "fork" json_serializable for our specific needs. This would allow faster development under a smaller test suite for this specific use case.
What I hadn't considered is upstreaming the feature to json_serializable.
I'll investigate if this is possible and if the project is willing to accept this addition.
|
GITHUB_ARCHIVE
|
Exotic pets are, by their very definition, uncommon animals, and so it’s to be expected that many pet shops do not stock them. If you are not already convinced of the reasons to own an exotic pet, please let me know, and I’ll provide you with another dozen or so reasons. Reaseheath’s animal management department, which already trains RSPCA officers on the care of exotic animals, is planning to hold a seminar on the subject next spring. On the whole, exotic pets which eat whole vertebrates are less of a worry, as they get all of their dietary needs from their prey.
Another thing you must find out before buying an exotic animal is whether there is a veterinarian in your area who can treat it. Apart from the initial cost of buying an exotic pet and all the necessary equipment, there will be ongoing costs such as food and vet bills. In many states, you don’t need any formal qualification or license to own exotic animals.
Finally, before selecting an exotic pet, think about the pet’s compatibility with kids, its longevity, and other tendencies the animal may have. If you are concerned about the welfare of an exotic animal in your neighborhood, contact your local humane society. Online searches and adverts in reptile magazines (both commercial magazines and those published by exotic pet charities) are good places to begin your hunt for exotic pet breeders, who will often have a very good reputation; many keepers will happily “refer” you to another breeder if they can’t help. Please only buy from reputable sellers of exotic pets, as sadly thousands of illegally imported exotics die on the journey into the country.
Lastly, do not be afraid to get in contact with other exotic pet lovers online or in person to ask for advice. Landlords are reluctant to lease to exotic pet owners or vicious dog owners, because of the liabilities involved. One of the primary decisions that you must make is which type of exotic pet to choose. Eventually the potential buyer’s ideal exotic pet is revealed, accompanied by some information about the animal.
People buy turtles mostly from pet shops and online sellers, where living conditions run the gamut from individual enclosures to plastic tubs packed and crawling with animals. Before you give the green light on buying an exotic pet for your family, think of the requirements that have to be met to give these animals a good life in your home. Commercial trade in exotic reptiles and amphibians, on the other hand, has been banned since 1999. For some exotic pets you must simulate the habitat they would naturally encounter in the wild. However, all the zoos and accredited institutions couldn’t possibly accommodate the number of unwanted exotic animals. In our social-media-saturated, celebrity-infatuated culture, these spectacles can fuel the demand for wild pets.
|
OPCFW_CODE
|
How to interpret regression results when the data have been detrended?
I am planning to build a linear regression model where I explain flight ticket demand with airfares, lagged airfares, GDP etc. based on monthly data from the past 15 years. This is my first time working with time series and - if I got it right - some of my variables are trend-stationary. So my plan is to use a Hodrick-Prescott-Filter to take the trend out and just use the remaining, detrended data in my regression.
My question is: Does this change the interpretation of the results I get?
To clarify: I already took the natural logarithms of all my variables, so my regression model will look something like:
ln(ticketdemand_t) = β0 + β1 ln(price_t) + β2 ln(price_{t-k}) + β3 ln(GDP_t) + ... + u_t
So, if I found that β1 was equal to -0.8, this would mean that for an increase in current prices of 1 %, ticket demand in the same period would decrease by 0.8 %. But what if I used detrended data for the price instead of the original data? Would the interpretation of β1 then stay the same or would it somehow be different?
The same question arises if I need to take the first difference (or higher differences) of data to make them stationary. How does that change the interpretation of the resulting betas?
Many thanks in advance!
Hi: If you are going to use Hodrick-Prescott or differencing, and you want the actual predictions to reflect the model, then you are going to have to "un-do" those transformations. By "undo", I mean account for the trend or the difference when you actually predict. I don't think it's a trivial process. Also, James Hamilton has a paper on the dangers of using Hodrick-Prescott, so I would check that out before you use it. I say that because, although I'm sure Hodrick and Prescott know what they're doing, James Hamilton does also, so I would see what he has to say about it.
The title might be somewhat extreme but it's probably worth reading. https://www.nber.org/papers/w23429
Dear @mlofton thanks for your quick answer! I was afraid undoing would be the solution. However, I really can't imagine how to do that. (Adding a trend to the betas I get?)
I couldn't find any information on that yet. Anyhow, I imagine that this must be a rather common problem that almost everybody who wants to run a regression with non-stationary data has to come across, no? Or is there a more common way to solve such an issue?
I would check out the Hamilton paper first. But, as far as "undoing" goes, it shouldn't be that horrible. You just have to keep track of what you took out by differencing or de-trending and then put it back in at the end. Not trivial, but mainly book-keeping, programming-wise.
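The book-keeping can be sketched in a few lines of Python (synthetic data with a made-up true elasticity, not the OP's actual model): estimate the coefficient in first differences of the logs, where it keeps its elasticity reading, then "undo" the differencing by accumulating predicted changes from an observed starting level.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ln_price = np.cumsum(rng.normal(0.01, 0.05, n))            # trending log price
ln_demand = 2.0 - 0.8 * ln_price + rng.normal(0, 0.02, n)  # true elasticity -0.8

# Regress in first differences: d ln(demand) = a + beta * d ln(price) + e.
# beta keeps its elasticity reading: a 1% price rise ~ beta% demand change.
dy, dx = np.diff(ln_demand), np.diff(ln_price)
beta, intercept = np.polyfit(dx, dy, 1)   # slope first, then intercept

# "Undo" the differencing when predicting levels: start from an observed
# level and accumulate the predicted period-to-period changes.
ln_demand_hat = ln_demand[0] + np.cumsum(intercept + beta * dx)
```

With the seed above, `beta` lands close to the true elasticity of -0.8, and the accumulated predictions track the observed log-demand levels.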
Yeah, thanks for the Hamilton tip. I had actually come across that. The reason why I was planning to use HP is that I am replicating another paper where it was also used. But I'll still consider using an alternative!
Maybe Hamilton suggests one in that paper. I think there's also one by Christiano et al.
|
STACK_EXCHANGE
|
OpenVPN: UDP or TCP
In the ExpressVPN app, OpenVPN is actually referred to as “UDP” or “TCP,” two internet protocols that can greatly affect performance. Both TCP and UDP OpenVPN connections will offer excellent security and privacy when using your VPN service. The choice between the two really depends on your own speed requirements and whether you're connecting from your work or home network. TorGuard Anonymous VPN Service offers 200+ TCP & UDP OpenVPN connections in over 13 countries.
How to install and configure OpenVPN on FreeBSD 10.2
Follow-up on using Port 443/TCP for OpenVPN.
FAQs - Windscribe
Unlike UDP, TCP performs error correction. The TCP tunnel transport support in the OpenVPN protocol offers you many benefits, including seamless online gaming, video conferencing and audio conferencing. Hi, I'm setting up an OpenVPN server for my company and I'm wondering what the "better practice" is. Should I leave it at the default 1194 UDP, or change to a more common port, for example 443 TCP? When I use UDP for the OpenVPN connection, download from my laptop is slow and not very stable (speed fluctuation). However, this slow-up or slow-down behaviour switches with the OpenVPN server protocol setting, TCP or UDP. Has anyone actually tried this? OpenVPN can run over User Datagram Protocol (UDP) or Transmission Control Protocol (TCP) transports, multiplexing created SSL tunnels on a single TCP/UDP port (RFC 3948 for UDP).
Qué es OpenVPN.docx - What is OpenVPN
## pf firewall rules:
match out on egress inet from !(egress:network) to any nat-to (egress:0)
pass out quick $log_opt on $ext_if
## OpenVPN client config:
remote 1194 # set proper address of VPN server
proto udp # tcp or udp; make sure the server side also supports the protocol
OpenVPN is a VPN protocol that is one of the most common in the community. It is popular because it combines the right mix of speed and security.
Secure VPNs with MikroTik RouterOS - Prozcenter
IANA. OpenVPN (Official). WIKI. Port: 1194/UDP. TCP port 1194 uses the Transmission Control Protocol. PPTP (Point-to-Point Tunneling Protocol) is an obsolete communications protocol that allows implementing virtual private networks, or VPNs. A VPN (Virtual Private Network) is a private network of computers that uses the Internet, employing a TCP/IP network.
How to switch from UDP to TCP when using OpenVPN
For each packet sent over TCP, a confirmation is sent back. What I mean is, our server is able to accept OpenVPN connections on either a UDP or a TCP port, but every connection only uses one port, TCP or UDP. If you already have a server.conf, you can delete it or rename it, depending on the configuration in server.conf. I have a pretty standard OpenVPN install on my home server's Ubuntu. However I face a bizarre problem where downloads from Internet browsing (speedtest.net and fast.com) are also much faster on TCP, though UDP isn't as unbearably terrible as with file downloads. Both TCP and UDP OpenVPN connections will offer excellent security and privacy when using your VPN service. The choice between the two really depends on your own speed requirements and whether you're connecting from your work or home network. Use proto tcp instead of proto udp. (If you want OpenVPN to listen on both a UDP and a TCP port, you must run two separate OpenVPN processes.) I think you can run two OpenVPN servers (one for TCP, one for UDP), bridge each of them with a TUN, and then connect the TUNs. TCP means Transmission Control Protocol and UDP means User Datagram Protocol.
1.- Network ports of the different s.- Mind map
There are two types of internet protocols: the first one is TCP, which is an abbreviation of Transmission Control Protocol, and the other one is UDP, which is an abbreviation of User Datagram Protocol. OpenVPN offers both types of internet protocols and you are allowed to choose either one. The same openvpn process can't listen on UDP and TCP sockets at the same time.
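For reference, the transport choice comes down to a single `proto` directive in the OpenVPN server configuration, one instance per transport (the port numbers below are just the conventional choices, not requirements):

```
# server.conf -- one transport per OpenVPN instance
proto udp        # faster; avoids TCP-over-TCP retransmission overhead
port 1194        # IANA-registered OpenVPN port

# A second instance could listen on TCP instead, e.g. to traverse
# restrictive firewalls by blending in with HTTPS traffic:
# proto tcp
# port 443
```

Running both means two separate processes (or instances), each with its own configuration file.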
|
OPCFW_CODE
|
Main article: Advanced Hybrid RBAC
MidPoint uses the Role-Based Access Control (RBAC) principle to make provisioning more manageable. However, the use of simple RBAC leads to the role explosion problem. A much more sophisticated approach is needed to make RBAC really useful. And that's exactly what midPoint has:
Assignment is a basic structural element of midPoint policy. Assignments define what should be (as opposed to links, which describe what is). Assignments associate a user with roles and resource objects such as accounts. Traditional IDM systems have a very simple structure instead of assignments. It is mostly binary: either a user has a role or he does not. This can work for very simple cases but it fails miserably in more complex scenarios. The assignment in midPoint, however, is much more powerful. It contains activation and deactivation dates. It can be conditional. It can contain parameters for parametric roles. The consequence is that several assignments can be used to assign the same role to the same user at the same time.
But the assignment is even more powerful than that. Only a very naive IDM project expects that everything and everyone will comply with the rules (e.g. RBAC) and that there will be no exceptions. There are always exceptions. Assignments are a way to support such exceptions in a systematic and elegant way. The assignments can make sure that the exceptions are recorded and properly managed. The assignment can specify attributes that are specific to a particular user. As the assignment is part of the policy, specifying attributes in an assignment effectively legalizes such an exception. Think of the assignment as a personal role that every user has for himself. Adding assignments to a user can be subject to approvals exactly like adding a role (because even roles are added in assignments). Therefore processes can be created to make sure that the exceptions do not get out of control.
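As an illustrative sketch in midPoint's XML style (the oid, user name, and dates here are invented for the example, and the schema is simplified), an assignment that grants a role only for a limited period might look like:

```xml
<user>
    <name>jdoe</name>
    <assignment>
        <!-- which role is being assigned -->
        <targetRef oid="aaaaaaaa-0000-0000-0000-000000000001" type="RoleType"/>
        <!-- the assignment is only effective inside this window -->
        <activation>
            <validFrom>2024-01-01T00:00:00Z</validFrom>
            <validTo>2024-12-31T23:59:59Z</validTo>
        </activation>
    </assignment>
</user>
```

The activation dates are part of the assignment itself, which is what allows two assignments of the same role to the same user to coexist with different validity windows.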
There are many examples of IDM systems that have fallen into technological traps and bound themselves to bad standards or data formats. The way out of such a trap is very difficult and in many cases it is a death sentence. We have been lucky enough to foresee the situation and we have successfully avoided such pitfalls.
Pragmatic SOA and REST
Service-Oriented Architecture (SOA) is a great idea. However, the same usually cannot be said of the implementations of SOA concepts. The deployers of SOA solutions too often forget about the basic principles of software architecture, which should be the crucial part of Service-Oriented Architecture. The widespread idea is that the first and essential part of SOA is an Enterprise Service Bus (ESB) and that this single component is a solution to all the problems. It isn't. We can tell for sure. We have been there already. Several times.
We fully support SOA principles such as publishing independent services which can be composed into larger solutions. We just know first-hand that the techniques which are currently used to implement them are more than questionable. The way in which we support Service-Oriented Architecture is what we call pragmatic SOA. It is basically this:
- MidPoint exposes the vast majority of its functionality in the form of a network service which follows a well-defined interface.
- The service is exposed in several forms:
- Java API (local only)
- SOAP-based web service with WSDL definition
- HTTP-based RESTful service
- The specific interface definition is adapted to the form which is appropriate for each technology, e.g. we have Java classes for the Java API and WSDL for SOAP. Regrettably, REST has no formal way of defining an interface, therefore we at least provide a textual description and examples.
- The functionality of all the interface forms is roughly equivalent - considering limitations of each technology.
- We try to follow standards (Java, WSDL) and existing conventions (REST) as much as practically possible.
- The interfaces follow a proper software engineering practice: none of them is designed especially for a specific case or architecture. They are generic. Universal.
- MidPoint can be used as a service in traditional ESB-driven Service-Oriented Architectures by the means of midPoint SOAP web service.
- MidPoint can be used as a service in Resource-Oriented Architectures (ROA) by the means of midPoint RESTful service.
- MidPoint can be used as an orchestrator for the purposes of identity integration.
Which in fact means that midPoint can be used in almost all the currently fashionable architectures as a first-class citizen. However midPoint is not bound to any particular integration architecture. It just follows the practical, pragmatic way, good engineering practices and common sense. That's the reason it works so well.
|
OPCFW_CODE
|
1 year of experience
Proficient in Generative AI, the LangChain framework, building GenAI-based chatbots, fine-tuning LLMs, and machine learning and deep learning techniques, with good hands-on programming skills in tools like Python and SQL. Possess in-depth knowledge of various mathematical and statistical concepts like sampling, distributions, correlation and regression, and descriptive and inferential statistics used in data analytics, and hold a good grasp of various Python libraries for data analysis and visualisation like NumPy, Pandas, Matplotlib etc. I am eagerly looking for an opportunity to leverage my skills and knowledge in a Data Science role.
PropLlama.ai is a real estate platform designed to transform the daunting and intricate task of navigating the property landscape into a seamless and enjoyable experience. Our platform features an intelligent chatbot that understands, remembers, and responds to your queries like a trusted friend, making it feel like you are having a meaningful conversation with a property confidant rather than just another chatbot. Additionally, our platform offers 24/7 chat support with a team of seasoned experts, personalized property searches, insightful decision-making support, comprehensive assistance throughout the property acquisition process, and meticulous legal compliance support. This includes everything from document verification to crafting customized lease agreements. With PropLlama.ai, you get convenience, speed, and accuracy at your fingertips, making it your ultimate real estate companion and doubt resolver. Say goodbye to data overload, questionable broker integrity, and the stress of legalities. PropLlama is here to guide you through it all. The core of PropLlama.ai is powered by the Llama-13b-alternative chat model (tested with Llama-70b too) and Clarifai. It enables the chatbot to understand the context of a conversation, remember previous interactions, and generate human-like responses. It can also process text data, such as property descriptions, legal documents, and news articles, to extract relevant information and provide personalized recommendations to users. LangChain agents and LLMs: various tools, built using Llama and Clarifai for different classes of queries, were integrated using LangChain agents. LangChain is a framework designed to simplify the creation of applications using large language models (LLMs). In summary, PropLlama.ai leverages advanced technologies such as the Llama-13b-alternative chat model, the Clarifai platform, the Llama-70b model for testing, and LangChain agents and LLMs.
Our platform revolutionizes recruitment with personalized experiences for candidates and streamlined processes for employers.

Challenges with traditional recruitment systems:
1. Time consuming
2. Screening hassles
3. Inconsistent results
4. Ineffective methods

Solution: AutoRecruit AI is a comprehensive and cutting-edge answer to these problems. It puts the candidate at the center of its process and applies a breakthrough Llama-based algorithmic approach to achieve unprecedented accuracy at speeds that haven't been seen before!

Features:
1. Candidate sourcing
2. Resume parsing
3. Candidate scoring and summary
4. Personalized engagement

Benefits:
1. Time saving
2. Cost-effective
3. Efficiency optimization
4. Better candidate fit
🔧 Three days of creating your own startup! 🤜 Team up with others or build solo during the Hackathon. 🤖 Free access to educational materials on Generative AI models, AI Agents and more. 🚀 Discover startup opportunities for Hackathon teams. 🌍 Join and build with a global AI community!
⌚ 3-day AI Hackathon 🚀 Compete to build an AI app powered by Llama 2 💡 Learn how to extend your capabilities with Clarifai! 🎓 Mentors are available to support you during your creative journey
Fine-tuning is a technique that empowers pre-trained models to perform specific tasks or behaviors, opening up a world of possibilities for customization and specialization. Dive into the details of this cutting-edge technology and showcase your skills in this fast-paced hackathon!
Method cannot access the sorted array
The following code sorts through a dictionary of words in lexicographical order. The problem is that I cannot access the sorted array outside of the for loop. Well I think this is the problem. I have declared the array words in another method and returned it to a private variable within the same class. But after I sort the array and try to print it out, it just prints out the original array. I also want to mention that this sorting method does work as I have tested it in another class where everything was in the main method. So again I believe my problem has to do with returning the new sorted array. I also need to write a few lines of code within the method after the array is sorted and when I tried to return the array, using "return words;" I was unable to write this code.
public void insertionSort() {
    int in, out;
    for (out = 1; out < nElems; out++) {
        String temp = words[out];                 // remove marked item
        in = out;                                 // start shifting at out
        while (in > 0 && words[in - 1].compareTo(temp) > 0) { // until temp is lexicographically after the word it is compared to
            words[in] = words[in - 1];            // shift word to next position in lexicographical order
            --in;
        }
        words[in] = temp;                         // insert marked item
    }
    for (int i = 0; i < words.length; i++)
        System.out.println(words[i]);
}
How is nElems defined?
I don't see a problem with the sort. There's probably something wrong with the way you're invoking it or using it in the other method, and we can't see that part.
Woops! I forgot to define nElems. Thank you.
I have declared the array words in another method
It sounds like you have declared two arrays: one inside the other method and an entirely separate one at class level that is being sorted. Make sure both methods are using the same array.
If you want them both to use the array declared at class level, don't declare a separate array in the other method. E.g., if you have String[] words = new String[10]; in the other method, change it to words = new String[10];. If you really need the nElems variable (i.e., if the sort method can't simply use words.length), make sure that variable is also not being redeclared and is being set to the correct value.
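A minimal sketch of that fix (the class name, loadWords method, and sample data here are illustrative, not from the question): the other method assigns to the class-level words field instead of declaring a new local array, so insertionSort() sees the same data.

```java
public class WordSorter {
    private String[] words;   // class-level array that insertionSort() uses
    private int nElems;

    // BUG to avoid: writing "String[] words = input;" here would declare a
    // new local array that shadows the field, leaving the field untouched.
    public void loadWords(String[] input) {
        words = input;            // assign to the field, no re-declaration
        nElems = input.length;
    }

    public void insertionSort() {
        for (int out = 1; out < nElems; out++) {
            String temp = words[out];
            int in = out;
            while (in > 0 && words[in - 1].compareTo(temp) > 0) {
                words[in] = words[in - 1];
                --in;
            }
            words[in] = temp;
        }
    }

    public String[] getWords() { return words; }

    public static void main(String[] args) {
        WordSorter s = new WordSorter();
        s.loadWords(new String[] {"banana", "apple", "cherry"});
        s.insertionSort();
        System.out.println(String.join(",", s.getWords())); // apple,banana,cherry
    }
}
```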
Alternatively, instead of depending on putting the data to be sorted in a particular array, your sort method would be more generally useful if it accepted any array to be sorted as a parameter:
public static void insertionSort(String[] words, int nElems) {
/* ... rest of method as before ... */
}
I've made that method static too, since it no longer depends on any instance fields.
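Putting that suggestion together, a self-contained sketch of the parameterized version might look like this (the SortDemo class name and sample data are illustrative):

```java
public class SortDemo {
    // Sorts the first nElems entries of any array passed in, rather than
    // depending on a class-level field.
    public static void insertionSort(String[] words, int nElems) {
        for (int out = 1; out < nElems; out++) {
            String temp = words[out];
            int in = out;
            while (in > 0 && words[in - 1].compareTo(temp) > 0) {
                words[in] = words[in - 1];
                --in;
            }
            words[in] = temp;
        }
    }

    public static void main(String[] args) {
        String[] data = {"pear", "apple", "orange", "grape"};
        insertionSort(data, data.length);
        System.out.println(String.join(" ", data)); // apple grape orange pear
    }
}
```

Because the array is passed in and sorted in place, the caller's reference sees the sorted result directly, which sidesteps the original question's problem of returning the sorted array.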
Well, as I understand what you are saying, you are using a private variable at class level and that variable receives the words[] array items from a method... in that case:
Make sure the private variable at class level is an array named words.
Make the private variable (now named words) the same type as the words array received from the other method.
Because insertionSort() reads from words, it needs to access the private variable's values; it can't get the words[] array values from the other method, because that array is local to that method. So you must return its values into some array variable (the private variable, which you must name words), and it is that variable that you must access.
JWST MIRI is already revolutionising mid-infrared astronomy. MIRI observes at wavelengths of 4.8-29 microns, and is designed to distinguish ultra-high-redshift galaxies from more nearby systems, to track the growth of heavily dust-enshrouded supermassive black holes at cosmic noon, and to trace the birth of stars. However, the very small projected areas on the sky of the MIRI detectors mean that only very narrow, deep fields can be imaged. Much wider infrared survey data exists, but at a coarser angular resolution.
In the last half-decade, Generative Adversarial Networks, denoising auto-encoders and other technologies have been used to attempt deconvolution of optical data. We have developed an auto-encoder with a novel loss function to overcome this problem in the submillimeter wavelength range (Lauritsen+21 MNRAS, 507, 1546). This approach was successfully demonstrated on European Space Agency Herschel observatory SPIRE 500 micron COSMOS data, with the superresolving target being the James Clerk Maxwell Telescope SCUBA-2 450 micron observations of the same field. We reproduce the SCUBA-2 images with high fidelity using this auto-encoder.
This technique has been very successful in blank-field extragalactic surveys, in which the distant galaxies appear as unresolved point sources. However, we have found that the deconvolution struggles when there is extended structure in the images, due to interstellar cirrus from our Galaxy. This is not unexpected, because the deconvolution training sets did not incorporate this extended emission.
This project will therefore extend the deconvolution work to include interstellar cirrus, and also adapt the network to be trained and to operate at mid-infrared wavelengths. The project will then deconvolve lower-resolution data from the Spitzer and WISE space telescopes to attempt to approach the angular resolution of JWST. The principal astronomical science goal from developing these techniques will be to search for rare objects in the deconvolved data sets with the help of cross-correlation with existing imaging and catalogue data sets, particularly in identifying dust-obscured galaxies undergoing violent phases of supermassive black hole growth and/or star formation, in order to illuminate the processes driving the peak of galaxy stellar mass assembly at cosmic noon.
Interested students could explore where else and to what other data, including non-astronomical data, this new technique could be creatively applied. A useful outcome would be guidelines about data characteristics and how to apply the technology.
1. Lauritsen, L., et al., 2021, Superresolving Herschel imaging: a proof of concept using Deep Neural Networks, MNRAS 507, 1546 https://ui.adsabs.harvard.edu/abs/2021MNRAS.507.1546L/abstract
· 2:1 or above, MPhys or other integrated science masters in physics, astronomy, computer science or related disciplines
· First class honours BSc in physics, astronomy, computer science, or related disciplines
· 2:1 BSc in physics, astronomy, computer science, or related disciplines, plus a Masters-level qualification in a relevant area
For more information about the project, please contact the following academics:
The pandemic has exposed the massive digital divide between the rural and urban population. With schools switching to online learning, it was quite evident that the rural children were getting left out. At the same time the rapid digitization is also creating an entirely new set of jobs for which there are not enough skilled youth. And finally, more and more of the public services are being delivered digitally and it is very important that the beneficiaries are aware as to how to consume these services in a safe and secure manner.
Create a library of digital devices in each Gram Panchayat, along with a broadband internet facility, for beneficiaries to borrow devices and consume online services. For individual use, Android devices are provided so that users will have access to various approved Apps as required. Also, an Android TV will be set up to enable group training and learning activities by allowing easy Chromecast of various apps and resources directly from the internet.
With the devices preloaded with school content the students in these GPs can continue the learning as we navigate the pandemic. Content developed and curated by DSERT will be pre-loaded on these devices covering classes 4 thru 10 and can be easily accessed using GPSTEP App. Provision has been made to update the content with newer materials from within the App.
Vocational education is becoming extremely important in the context of the current economy, where labour with highly specialized skillsets is needed. Training of the workforce needs to meet current demand, and hence training should be provided based on demand/supply data for specific skills and their potential for adoption. For this, an online platform called GP Market place has been developed by Sikshana Foundation and, based on need, domain experts and other NGOs will be brought in to cater to the training needs.
With more and more social programs by the Govt. being delivered as e-services, it is imperative that beneficiaries not only have devices to access them but also understand how to consume these services in a safe and secure manner. To address this, Sikshana Foundation has developed a Digital Literacy program, which has been piloted in a few GPs. In addition, functional Math and English have become extremely important in dealing with everyday situations, and the training program covers these skills as well. Lastly, as part of this, the youth will also be provided soft-skills training covering 21st-century skills like Communication, Collaboration, Critical Thinking and Creativity.
Sikshana Foundation which pioneered Learning Maps in schools will help youth with navigating their career options using Career Maps. Career counselling camps will be run periodically at the GP library to help youth understand the various career choices that are available today. Aptitude tests will be provided to help them get a better understanding of their own fit with the various career options they may be interested in. Content from partner NGOs who specialize in these programs will be offered by the Sikshana Foundation team positioned in each of the ZPs.
5 youth who show the potential for higher learning will be selected from the GPs of Zilla Parishats across the State and given special training for various engineering entrance exams. Additionally, to help develop logical and analytical skills, training in programming languages such as Python and VPython, as well as Android app development, will be provided for these youth.
Sikshana Foundation will take total ownership of the project and will implement the project in a phased manner. The foundation will partner with CSRs to raise the necessary funds for the Internet, Digital Devices, Monitoring Staff and the necessary software.
Govt. of Karnataka will provide the necessary logistics support at each of the selected GPs to enable the implementation of the project. This includes promoting the availability of the digital resources in the villages of the GP, assigning additional duties to the current dedicated librarian in each of the GP to manage these digital resources and providing the necessary space/infrastructure to run the Digital Device Library.
Each of the 1,000 “Digital Transformation Centers” has been or will be upgraded and equipped with the following digital infrastructure and tools, in phases, by Mar 2023: