eve: log more IKEv2 fields

At this moment Suricata detects IKEv2 traffic, but the traffic analysis is a little bit complicated. Here is a small illustrated guide for IKEv2. I added my experimental IKEv2 Suricata rules to this task too. But Moloch shows (IKEv2_Moloch_Screenshot_20190504_175220.png), in the Suricata section, only the signatures which detect this traffic. My proposal is to enhance the Suricata/Moloch plugins to show these parameters of the IKEv2 handshake (IKEv2-EventsList_Screenshot_20190504_175956.png): ikev2.exchange_type (at this time only a numerical string; maybe a standard description would be better, like the other parameters).

Updated by Andreas Herz over 4 years ago
- Status changed from New to Assigned
- Assignee changed from Community Ticket to Michal Vymazal

The necessary steps are explained in https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Contributing and https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricata_Developers_Guide; feel free to ask if you have any specific questions. You can also look at our github page https://github.com/OISF/suricata and see how we work with PRs.

Updated by Michal Vymazal over 4 years ago

Suricata code location - Moloch, Suricata plugins

I will be glad to cooperate on this project. But I can't locate the right part of the code in the repository (meaning the Moloch and Suricata plugins). Can you give me a contact to a responsible person who will help me find the right part of the Suricata plugin and Moloch code? Thank you very much.

Updated by Michal Vymazal almost 4 years ago
- File IKEv2_Moloch_Screenshot_20190504_175220-2.png added
- File Screenshot_20191123_094316.png added
- File IKEv2-EventsList_Screenshot_20190504_175956.png added

The code should be located in the Moloch-Suricata plugins.
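For concreteness, an enhanced eve record for an IKEv2 exchange might look roughly like the sketch below. The field names are assumptions based on what this ticket requests and on Suricata's existing eve conventions, and the SPI/address values are made up; check the schema of your Suricata version before relying on any of them.

```json
{
  "timestamp": "2019-05-04T17:52:20.000000+0200",
  "event_type": "ikev2",
  "src_ip": "192.0.2.1",
  "dest_ip": "192.0.2.2",
  "proto": "UDP",
  "ikev2": {
    "version_major": 2,
    "version_minor": 0,
    "exchange_type": 34,
    "message_id": 0,
    "init_spi": "8511617bfea2f172",
    "resp_spi": "0000000000000000",
    "role": "initiator",
    "payload": ["SecurityAssociation", "KeyExchange", "Nonce"],
    "notify": []
  }
}
```

Here exchange_type 34 is IKE_SA_INIT per RFC 7296; the ticket proposes logging the standard description (e.g. "IKE_SA_INIT") alongside or instead of the bare number.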
OPCFW_CODE
do you need to remove mean and divide by std for testing normality?

I am running a kstest in MATLAB. When I take the data directly, i.e. kstest(data), the result says that my data is non-normal. However, if I use kstest((data-mean(data))/std(data)) it comes out as a normal distribution. What is the correct way of testing normality?

If you subtract the mean and divide by the standard deviation then you can no longer use that particular test. We have discussed this several times here. The reason is that the kstest function in MATLAB is for a known distribution, meaning that you know its mean and variance. You're estimating them from the sample, which makes this test invalid as implemented in MATLAB. Look up the Lilliefors test for your purpose. Other software packages such as SPSS may have implemented it in their KS test functions; check the docs.

... or Shapiro-Wilk's.

This is correct, though when someone asks about a stats package, that package may call the Lilliefors test "Kolmogorov-Smirnov" (like SPSS does).

@Glen_b, thanks, I clarified my answer. In fact when I ran into this issue with MATLAB, I assumed that they'd take care of the unknown-parameters case. It's great that SPSS does this.

If SPSS is a problem because it is costly, you could try the GNU version, PSPP: https://www.gnu.org/software/pspp/manual/html_node/KOLMOGOROV_002dSMIRNOV.html#KOLMOGOROV_002dSMIRNOV

Many thanks for the explanation; great for me, who knows only the fundamentals of statistics.

By subtracting the mean and dividing by the standard deviation, you're making your data normal. If you want to test for normality, I would say the most intuitive way would be to plot a histogram and see how close it is to a normal distribution. For a more analytical approach, you can always run a Shapiro-Wilk test.

You might reasonably say you make normal data standard normal by subtracting the mean and dividing by the standard deviation.

Ah, I guess I should have been more clear.
However, I think it's still fair to say that doing so would still be making your data normal. The standard normal distribution is still a normal distribution after all :) Why would you want to say you make normal data normal? Anyway, what your first sentence appears to be saying is that you make non-normal data normal by subtracting the mean & dividing by the standard deviation! If the original data are not normal, then subtracting the mean and dividing by the SD will not make it normal. Conversely, if the data were normal to begin with, then scaling them will not make it "more" normal. I fail to see what you are trying to say. Shapiro-Wilk is a very good point.
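A quick Monte Carlo sketch (not from the thread, stdlib Python only) makes the Lilliefors point concrete: the KS statistic computed against parameters estimated from the same sample is systematically smaller than the statistic computed against the true parameters, so comparing it to the ordinary KS critical values makes the test far too lenient.

```python
import math, random, statistics

def norm_cdf(x, mu=0.0, sd=1.0):
    # Normal CDF via the error function (no scipy needed)
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2))))

def ks_stat(sample, mu, sd):
    # Kolmogorov-Smirnov statistic D against N(mu, sd^2)
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = norm_cdf(x, mu, sd)
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d

random.seed(1)
d_known, d_est = [], []
for _ in range(300):
    data = [random.gauss(5, 2) for _ in range(30)]
    d_known.append(ks_stat(data, 5, 2))                # true parameters
    m, s = statistics.mean(data), statistics.stdev(data)
    d_est.append(ks_stat(data, m, s))                  # estimated (Lilliefors setting)

print(statistics.mean(d_known), statistics.mean(d_est))
```

The estimated-parameter statistic comes out smaller on average, which is exactly why Lilliefors had to tabulate his own, smaller critical values.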
STACK_EXCHANGE
- 04/11/2020 at 22:08 #19227

On Unity 2019.3.6, Pico SDK 2.3-2.8.6, and Tobii SDK 22.214.171.124. Using the HighlightAtGaze script to test out Tobii on the Pico Neo 2 Eye, it works well before SceneManager.LoadScene is called, but after that Tobii and HighlightAtGaze do not work. Weirdly, if I exit out of the application on the Pico device and jump back in, Tobii is active and HighlightAtGaze works. This is confusing to me and I have tried calling TobiiXR.Start() in many places to try to work around it. Any help is greatly appreciated, thanks!

05/11/2020 at 12:26 #19233

Hi @leppster13, and thanks for your query. There are a number of ways to enable eye tracking between scenes. One way is to have the TobiiXR_Initializer prefab in the start scene and make it DontDestroyOnLoad; another way is to have one initializer per scene. You can also make either of these setups work dynamically in code. If you could kindly clarify what you are trying to achieve we will be better placed to assist you further. Best Wishes

05/11/2020 at 20:19 #19235

Hey Grant, thanks for the help. I am just trying to reload the start scene in my case. I now have the TobiiXR_Initializer awake function looking like `private void Awake()` but I am still running into the same issue. Is it possible that I need to do something with the objects with the HighlightAtGaze script attached to them? Thanks again.

10/11/2020 at 12:53 #19270

Hi @leppster13, for basic eye-tracking usage in one scene reloading in the next scene, you can just add the "TobiiXR_Initializer" prefab in the Prefabs folder to the scene and it will work. No need to add anything in code. Please try this out and let us know how you get on. Best Wishes.

10/11/2020 at 19:44 #19273

Hello, I have actually solved this now and it seems that SceneManager.LoadScene(SceneManager.GetActiveScene().name); does not play nicely with what I was doing. A switch to SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex); fixes the issue. Thank you for the help.

11/11/2020 at 10:54 #19279

Great! Glad to hear you got it all up and running, happy to help. Please don't hesitate to get in touch again should you require any further assistance. Best Wishes.
OPCFW_CODE
XYF is a data directory which contains examples of XYF files, a simple format for recording points and faces in 2D space.

An XYF file assumes first the existence of an XY file, which lists the (X,Y) coordinates of points. Then the XYF file is used to define faces by listing the indices of a sequence of points in the XY file that form the boundary of the face. It is not necessary to repeat the first point at the end.

The XYF format does not require a particular order for each face (as the TRIANGULATION_ORDER3 or QUAD_MESH formats do), nor does it require that each face have the same order. However, it does require that all the node indices for a face be listed on a single text line.

Since the XYF file includes point indices, this raises the issue of whether to use 0-based or 1-based indexing. For now, the issue is left open. A program reading an XYF file presumably also has access to the XY file. The index base can be inferred from the presence or absence of an index with value 0, or the presence or absence of an index value equal to the number of points.

The XY file for a simple example:

#  Four points.
#
0.0 0.0
0.0 1.0
1.0 1.0
1.0 0.0

The corresponding XYF file:

#  Indices of vertices to connect for the faces of two triangles.
#
1 4 2
3 2 4

The computer code and data files described and made available on this web page are distributed under the GNU LGPL license.

TRIANGULATION_ORDER3, a data directory which contains examples of TRIANGULATION_ORDER3 files, a description of a linear triangulation of a set of 2D points, using a pair of files to list the node coordinates and the 3 nodes that make up each triangle;

TRIANGULATION_ORDER6, a data directory which contains examples of TRIANGULATION_ORDER6 files, a description of a quadratic triangulation of a set of 2D points, using a pair of files to list the node coordinates and the 6 nodes that make up each triangle.

XY, a data directory which contains examples of XY files, a simple 2D graphics point format;

XY_IO, a C++ library which reads and writes files in the XY, XYL and XYF formats.
XYF_DISPLAY_OPENGL, a C++ program which reads XYF information defining points and faces in 2D, and displays an image using OpenGL. XYL, a data directory which contains examples of XYL files, a simple 2D graphics point and line format; XYZF, a data directory which contains examples of XYZF files, a simple 3D graphics point and face format; ANNULUS is a set of 65 points and 48 quadrilateral faces. HEXAGONAL is a set of 105 points and 14 hexagons. LAKE is a set of 621 points and 974 triangles. POINTS is a set of 4 points and 2 triangles. SQUARES is a set of 30 points and 20 squares. You can go up one level to the DATA page.
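The index-base inference described above is simple to implement. The helper below is an assumed sketch (not part of the XYF distribution): it reads XYF face lines, infers 0-based vs. 1-based indexing from the point count of the companion XY file, and normalizes to 0-based indices.

```python
def read_xyf(lines, num_points):
    """Parse XYF face lines; return faces as 0-based index lists."""
    faces = []
    for line in lines:
        line = line.split('#')[0].strip()   # '#' begins a comment
        if line:
            faces.append([int(tok) for tok in line.split()])
    used = {i for face in faces for i in face}
    if 0 in used:
        base = 0                 # index 0 present: must be 0-based
    elif num_points in used:
        base = 1                 # index == point count: must be 1-based
    else:
        base = 1                 # ambiguous; assume 1-based
    return [[i - base for i in face] for face in faces]

# The two-triangle example from this page, with 4 points:
xyf = ["# faces", "1 4 2", "3 2 4"]
print(read_xyf(xyf, 4))   # [[0, 3, 1], [2, 1, 3]]
```

Note the ambiguous case (no index is 0 and none equals the point count) really can occur, e.g. a mesh that never references its last point, which is why the page calls the issue "left open".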
OPCFW_CODE
We're working with a custom board based on the TI CC2640R2, and while we are able to fit the 8k config file into flash with the rest of the application, RAM is very limited, and we cannot integrate the BMI270 API, so we are basically driving the BMI270 at the register level through TI's SPI API. We have managed to get the config file to load (INTERNAL_STATUS reports init_ok). This is with code running in the main firmware application. This product makes use of the CC26xx family's Sensor Controller, which is a low-power auxiliary sub-system intended for simple sensor interfaces. Due to its limitations, we have to download the 8k config file using the main micro; then, after the config file has been downloaded, actual gyro data acquisition is driven by the Sensor Controller. While init completes, we are not getting any data from the gyro. After init completes, the main firmware has to release control of the SPI port so the Sensor Controller can take it over. The config file download uses a data rate of 10 MHz so it can be completed as quickly as possible, but the SPI data rate on the Sensor Controller is limited to 1.1 MHz. Would you expect this change in the SPI data rate after config file download to cause issues? I found while working on the config file download that the timing appears to be very tight on the init sequence. If I code all the SPI transfers one immediately after the other, I can get the download to succeed (INTERNAL_STATUS reports init_ok). If I put the SPI transfers into a generic transfer function, introducing function call overhead between each step, the download fails, and INTERNAL_ERROR reports "long processing time, processing halted". The CC26xx Sensor Controller has its own set of APIs to drive a SPI interface, and I'm wondering if it's too slow for the BMI270. Once the config file is downloaded, can the BMI270 then be driven at a lower clock rate, with more latency between successive commands for gyro setup and data reads?
1. For your description "The config file download uses a data rate of 10 MHz so it can be completed as quickly as possible, but the SPI data rate on the Sensor Controller is limited to 1.1 MHz. Would you expect this change in the SPI data rate after config file download to cause issues?": the BMI270 supports a 10 MHz SPI data rate, and it will work well as long as the SPI master data rate is not greater than 10 MHz.

2. The configuration file needs to be loaded in segments. The length of the segments depends on the maximum length supported by the SPI host. Initialization takes a while, but it only needs to be done once.
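To make the segmented download concrete, here is a rough sketch of the sequence. The register addresses are taken from the BMI270 datasheet (verify them against your revision), and `spi_write` is a stand-in for the real SPI driver, mocked here so the sketch is runnable; the 450 µs power-configuration delay and the INTERNAL_STATUS poll are noted in comments rather than implemented.

```python
# BMI270 init registers per the datasheet (verify against your copy):
PWR_CONF, INIT_CTRL = 0x7C, 0x59
INIT_ADDR_0, INIT_ADDR_1, INIT_DATA = 0x5B, 0x5C, 0x5E

log = []
def spi_write(reg, data):                      # mock: records writes instead
    log.append((reg, bytes(data)))             # of driving a real bus

def load_config(config, chunk=64):
    spi_write(PWR_CONF, [0x00])                # disable advanced power save
    # (wait ~450 us here before touching INIT_CTRL)
    spi_write(INIT_CTRL, [0x00])               # prepare for config load
    for off in range(0, len(config), chunk):
        word = off // 2                        # INIT_ADDR counts 16-bit words
        spi_write(INIT_ADDR_0, [word & 0x0F])
        spi_write(INIT_ADDR_1, [(word >> 4) & 0xFF])
        spi_write(INIT_DATA, config[off:off + chunk])
    spi_write(INIT_CTRL, [0x01])               # done; triggers initialization
    # then poll INTERNAL_STATUS (0x21) until the message bits read 0b0001

load_config(bytes(8192))                       # stand-in for the 8k file
print(sum(1 for reg, _ in log if reg == INIT_DATA))
```

The chunk length can be whatever the SPI master supports, since only INIT_ADDR has to be updated between bursts; this is consistent with the observation above that timing is only tight during the INIT_CTRL=0 to INIT_CTRL=1 window.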
OPCFW_CODE
Unity's Future In High-Definition
February 29, 2012
Page 4 of 4

Why open an office in Stockholm rather than go join the team in Copenhagen? Is it a strategic thing, or just because it's where you are?

RZ: There are different reasons for that. If you look at Unity, that's how it happens. We have quite a lot of offices around the place. Opening yet another office is a bit of a headache for our financial guy, but it's kind of natural for us.

RZ: There's [also] a strategic reason for that. [laughs]

EK: There are many successful studios here, naturally, so it's a good surrounding to be in, I think. There's a lot of things happening here.

RZ: It helps to get focus, as well. If everyone sits in one place, I think you can get... Because one office is working on mobile, and the headquarters is more core, and here we can concentrate on certain things as well. Maybe sometimes it's even easier to concentrate on certain things when you have a smaller group while being in the same company; you have this advantage of being a small group. And traveling, it's like a one-hour flight.

EK: I think having these separate offices -- I found it a bit different when I began working here, but it's actually beneficial in many, many ways. I think once you reach the other studios, you're much more respectful with other people's time than patting people on the back. You set up your communication, and you prepare it a bit more than you would do naturally. I think it's, in some ways, much more effective to have this spread out among the studios.

RZ: In certain cases, it can be more helpful to structure your time. Of course, one big thing is it's easier to hire people this way because -- it's probably a bit different in Europe than in the States -- people actually don't want to travel. They don't want to move.
RZ: Yeah, because they have family here, they have a different language here, and it's a bit of a problem to move to Denmark, maybe, just because there's a different language, for instance. People do understand each other, but... But we have people from Germany, from Holland, and many places, so we don't really want to move them all the time, if we can keep them where they mostly have family and are enjoying themselves, because they're being more productive as well, instead of shuffling people from one place to another place. And it helps to be able to build better ties with the local game development community.

The other day you visited Might & Delight, and some small companies here in Stockholm using Unity.

So we can go and talk to them, and sometimes even help them.

EK: Schools, it's the same thing. Game development universities in this area are growing. There's a lot of respectable game developers in the Stockholm area, and from that, it's become a very likely career choice for people in the Stockholm region, and a lot of talent is growing here. Personally, I think it's a surprising amount of developers compared with other Scandinavian countries, just in Stockholm. It's sort of a hub that has been created here, for game development in this region. Not only for finding talent for Unity, it's perfect being around here.

Massive Black's Mothhead

Unity is doing something that I don't think many companies are doing: building an engine that's like this, and building an engine this way, both. Neither one of these is really being attempted, I don't think.

EK: There's many core differences in Unity, I think, from the culture of the company to the actual talent that's working there, that just differs from any other company that I've been working at, at least. I think that philosophy that Unity has is something you can actually feel.

The Unity philosophy is the democratization of game development. That's the succinct way of putting it, right?
EK: It's very exciting being part of it, I must say -- how Unity is progressing the mantra of democratizing game development. I think not only can we provide good tools for developers, but I feel that I can be part of evolving game design, which has almost stagnated over the last couple of years. You can almost see it on the console market, these sort of repetitive games that are being developed in genres. I think looking at the games that have been made using Unity, it sort of differs. It's fresh. It's more playful. You see all these people getting their hands dirty and you see all the weird shit they're doing, and you realize there's more out there than the tunnel vision you might have, if you're in a major publisher or a major studio and you're just trying to ship.

RZ: There's much more tunneling in the bigger industry, especially from a technical point of view. For three years, you know the direction of that tunnel and you dig in it for three years, maybe turning a little along the way. You get feedback once in three or five years, so that changes quite a lot of the process. Much less playfulness when you are digging that tunnel for three years, or five years. You can't really step aside. You can't really experiment that much. There's always a very important feature.

EK: The goals are very clear from the start, when you're working on those bigger projects. With us, I think even for these bigger developers, Unity has played some part there. Some of them, I heard, are using Unity as a prototyping tool while working on their games.

RZ: That is something we hear as well: even though the studios aren't using Unity for the main production, because they have their own tools already and they're invested in that, they grab Unity just for prototyping. [laughs]

EK: I think that's very flattering.

I guess you guys are hoping at some point they'll stop just prototyping in Unity, and move into production in Unity.

RZ: To achieve that? Yes.
That's why we want to have this "triple-A" whatever -- I call it "internal" focus. So we can actually find out, if people prototype, the reason why they would not continue working. And either implement the missing features they need, or fix whatever is broken. But we don't really want to do just that. We want to be more creative along the way. Not only just taking an EA triple-A studio team and working for them -- no. We want something a bit different. Taking varied input and putting that into the initiative.

EK: I think it's fantastic also to see these teams, how they've exploited the tools to such a degree that it surprises us, almost, the way that they're using Unity. You'll see when you visit Might & Delight as well, the way they're constructing the game [Pid] and how they're using Unity to do it. This is exactly how I see that people are using Unity. It's so flexible, and they've taken quite a leap to use Unity in a different way, to make their game come true. But it also shows how flexible it is, and how, if you really tame it, you can use it any way you like. There's no real route or any specific flow that you need to use in Unity. You can pick your own path.
OPCFW_CODE
How to impute data with mlr3 and predict with NA values?

I followed the documentation of mlr3 regarding the imputation of data with pipelines. However, the model that I have trained does not allow predictions if one column is NA. Do you have any idea why it doesn't work?

train step:

library(mlr3)
library(mlr3learners)
library(mlr3pipelines)
data("mtcars", package = "datasets")
data = mtcars[, 1:3]
str(data)
task_mtcars = TaskRegr$new(id = "cars", backend = data, target = "mpg")
imp_missind = po("missind")
imp_num = po("imputehist", param_vals = list(affect_columns = selector_type("numeric")))
scale = po("scale")
learner = lrn('regr.ranger')
graph = po("copy", 2) %>>%
  gunion(list(imp_num %>>% scale, imp_missind)) %>>%
  po("featureunion") %>>%
  po(learner)
graph$plot()
graphlearner = GraphLearner$new(graph)

predict step:

data = task_mtcars$data()[12:12,]
data[1:1, cyl := NA]
predict(graphlearner, data)

The error is: Error: Missing data in columns: cyl.

The example in the mlr3gallery seems to work for your case, so you basically have to switch the order of imputehist and missind. Another approach would be to set missind's `which` hyperparameter to "all" in order to enforce the creation of an indicator for every column. This is actually a bug, where missind returns the full task if trained on data with no missings (which in turn then overwrites the imputed values). Thanks a lot for spotting it. I am trying to fix it here: PR

Ok, I'll try your solution. However I have a question about the possibility of processing only a few variables. Let's say I have a dataset with 2 categorical variables and 3 numeric variables. I would like to preprocess the categorical and numeric variables separately, all in parallel fashion. Is it possible to have a 'po' that allows me to select a few variables for a specific processing?

Yes, have a look at the affect_columns hyperparameter in PipeOpImpute and use an appropriate Selector. This hyperparameter is supported by many PipeOps.
The following, for example, selects only a single column:

po("imputehist", param_vals = list(affect_columns = selector_name("Sepal.Length")))

How can I drop the original factor columns after encoding them with po('encode')?

po("select") allows for selecting / de-selecting columns. But po('encode') should drop the original factor columns by itself.
STACK_EXCHANGE
Show that the two estimators are unbiased for $\theta$

$X_1$ and $X_2$, one more accurate than the other, have standard deviations $\sigma$ and $1.25\sigma$ respectively. $X_1$ was observed 6 independent times, giving a mean of $\bar{x}_1$, while $X_2$ was observed 10 independent times with a mean of $\bar{x}_2$. Suppose the two samples are drawn from a population with mean $\theta$ and variance $\sigma^2$. How is it shown that the two estimators are unbiased for $\theta$, and also which estimator will be preferred if I have to choose between the two?

To see if an estimator $\hat{\theta}$ is unbiased for $\theta$ you need to calculate the bias: $$b = \operatorname{bias}(\hat{\theta}) = E(\hat{\theta}) - \theta.$$ If $b=0$ then the estimator is unbiased. If the bias is not zero then the estimator is biased. The bias assesses how close an estimate of $\theta$ is to $\theta$ on average. If two estimators are unbiased then the one with the smaller variance is preferred, because if it has less variance then it should, on average, be closer to $\theta$. Now the variance of a mean $\bar{x}$ based on a sample of size $n$ is $$Var(\bar{X}) = Var(X)/n$$ when each $X$ is independent. Therefore, $$Var(\bar{X}_1) = \frac{\sigma^2}{6} \approx 0.167\sigma^2$$ $$Var(\bar{X}_2) = \frac{1.25^2 \sigma^2}{10} \approx 0.156\sigma^2$$ So the second estimator has the smaller variance and is thus preferred.

@BruceET On the contrary, often in the design of experiments or monitoring programs one has the option of taking more cheaper samples or fewer expensive samples for a given budget and, typically, the cheaper ones are less precise. This textbook question captures the essence of the tradeoff between cost (that is, numerousness) and precision (that is, the reciprocal of the variance).

@whuber. Granted that. But to answer helpfully, it would be nice to have an explicit statement whether the 'estimators' being compared are $X_1$ vs. $X_2$ or $\bar X_1$ based on $n=6$ vs.
$\bar X_2$ based on $n=10.$

@Bruce In the context, the bars look clear enough to me. Indeed, apart from the issue of whether we choose to use $\bar x_1$ or $\bar X_1,$ etc., everything is quite well defined within the question itself.

$$ E[\bar{X}_1] = E\left[\frac{1}{6}\sum_j X_1^j\right] = \frac{1}{6} \cdot 6 \cdot E[X_1] = E[X_1], $$ and in the same way, $$ E[\bar{X}_2] = E\left[\frac{1}{10}\sum_j X^j_2\right] = \frac{1}{10} \cdot 10 \cdot E[X_2] = E[X_2]. $$ Since the expectations of the estimators are equal to the population mean $\theta$, they are unbiased. Further, $$ V\left[\frac{1}{6}\sum_j X^j_1\right] = \frac{\sigma^2}{6} > V\left[\frac{1}{10}\sum_j X^j_2\right] = \frac{1.25^2\sigma^2}{10}, $$ so the second estimator has lower variance and is preferred.

One way to establish unbiasedness. But it does not address the issue of whether estimator $X_1$ is preferable to estimator $X_2.$

If you have two unbiased estimators $\hat \theta$ and $\tilde \theta$ and they have different variances, then it is ordinarily preferable to use the one with the smaller variance. However, if one estimate is cheaper to obtain or easier to use, you might use the one with the larger variance, as in @whuber's comment. Practical example: Two unbiased estimators for the mean $\mu$ of a normal population are the sample mean $A$ and the sample median $H.$ (See here for unbiasedness of the sample median of normal data.) That is, $E(A) = E(H) = \mu.$ But in any one situation $Var(A) < Var(H),$ so the sample mean is the preferable estimator. If we are trying to estimate $\mu$ with $n = 10$ observations from a normal population with $\sigma=1,$ then $Var(A_{10}) = 0.1$ and $Var(H_{10}) \approx 0.138.$ So if we were to insist on using the median rather than the mean we would have to use more than ten observations to get the same degree of precision of estimation we could get from the mean.
set.seed(2020) h = replicate(10^6, median(rnorm(10))) mean(h); var(h) [1] 0.000159509 # aprx E(H) = 0 [1] 0.1384345 # aprx Var(H) > 0.1 In particular, if we had an important practical reason to prefer using the median, then we could do about as well using the median with $n = 16$ observations as using the mean with 10 observations. h = replicate(10^6, median(rnorm(16))) mean(h); var(h) [1] -9.955106e-05 [1] 0.09042668
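Since the preference rests entirely on the two variances, a quick Monte Carlo cross-check is easy. The simulations above use R; this is an equivalent stdlib-Python sketch of the original comparison, with the population mean set to 0 for convenience:

```python
import random, statistics

random.seed(0)
sigma = 1.0
# 100,000 replications of each sample mean
m1 = [statistics.mean(random.gauss(0, sigma) for _ in range(6))
      for _ in range(100000)]
m2 = [statistics.mean(random.gauss(0, 1.25 * sigma) for _ in range(10))
      for _ in range(100000)]

print(statistics.variance(m1))   # close to sigma^2/6          = 0.1667
print(statistics.variance(m2))   # close to 1.25^2 sigma^2/10  = 0.1563
```

The simulated variances land near the two theoretical values, confirming that the second estimator wins, though only narrowly.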
STACK_EXCHANGE
I’d like to start with a quick apology. Sorry that both the abstract algebra and the new game theory posts have been moving so slowly. I’ve been a bit overwhelmed lately with things that need doing right away, and by the time I’m done with all that, I’m too tired to write anything that requires a lot of care. I’ve known the probability theory stuff for so long, and the parts I’m writing about so far are so simple, that it really doesn’t take nearly as much effort.

With that out of the way: today, I’m going to write about probability distributions. Take a probabilistic event. That’s an event where we don’t know how it’s going to turn out, but we know enough to describe the set of possible outcomes. We can formulate a random variable for the basic idea of the event, or we can consider the event in isolation, and use our knowledge of it to describe the possible outcomes.

A probability distribution describes the set of outcomes of some probabilistic event – and how likely each one is. A probability distribution always has the property that if S is the set of possible outcomes, and for each s∈S, P(s) is the probability of outcome s, then Σ_{s∈S} P(s) = 1 – which is really just another way of saying that the probability distribution covers all possible outcomes.

In the last post, about random variables, I described a formal abstract version of what we mean by “how likely” a given outcome is. From here on, we’ll mainly use the informal version, where an outcome O with a probability P(O)=1/N means that if we watched the event N times, we’d expect to see O occur once, on average.

Probability distributions are incredibly important for understanding statistics. If we know the type of distribution, we can make much stronger statements about what a given statistic means. We can also do all sorts of interesting things if we understand what the distributions look like. For example, when people do work in computer networks, some of the key concerns are called bandwidth and utilization.
One of the really interesting things about networks is that they’re not generally as fast as we think they are. The bandwidth of a channel is the maximum amount of information that can be transmitted through that channel in a given time. The utilization is the percentage of bandwidth actually being used at a particular moment. The utilization virtually never reaches 100%. In fact, on most networks, it’s much lower. The higher utilization gets, the harder it is for anyone else to use what’s left. For different network protocols, the peak utilization – the amount of the bandwidth that can be used before performance starts to drop off – varies. There’s often a complex tradeoff. To give two extreme cases, you can build a protocol in which there is a “token” associated with the right to transmit on the wire, and the token is passed around the machines in the network. This guarantees that everyone will get a turn to transmit, and prevents more than one machine from trying to send at the same time. On the other hand, there’s a family of protocols (including ethernet) called “Aloha” protocols, where you just transmit whenever you want to – but if someone else happens to be transmitting at the same time, both of your messages will be corrupted, and will need to be resent. In the token-ring case, you’re increasing the amount of time it takes to get a message onto the wire during low utilization, and you’re eating up a slice of the bandwidth with all of those token-maintenance messages. But as traffic increases, in theory, you can scale smoothly up to the point where all of the bandwidth is being used by either the token maintenance or by real traffic. In the case of Aloha protocols, there’s no delay, and no token maintenance overhead – but when the utilization goes up, so does the chance of two machines transmitting simultaneously.
So in an Aloha network, you really can’t effectively use all of the bandwidth, because as utilization goes up, the amount of bandwidth dedicated to actual messages decreases, because so much is being used by messages that had to be discarded due to collisions. From that brief overview, you can see why it’s important to be able to accurately assess what the effective bandwidth of a network is. To do that, you get into something called queueing theory, which uses probability distributions to determine how much of your bandwidth will likely be taken up by collisions/token maintenance under various utilization scenarios. Without the probability distribution, you can’t do it; and if the probability distribution doesn’t match the real pattern of usage, it will give you results that won’t match reality.

There is a collection of basic distributions that we like to work with, and it’s worth taking a few moments to run through and describe them.

- Uniform: the simplest distribution. In a uniform distribution, there are N outcomes, o1,…,oN, and the probability of any one of them is P(oi)=1/N. Rolling a fair die, picking a card from a well-shuffled deck, or picking a ticket in a raffle are all events with uniform probability distributions.

- Bernoulli: the Bernoulli distribution covers a binary event – that is, an event that has exactly two possible outcomes, “1” and “0”, with a fixed probability “p” of an outcome of “1” – so P(1)=p, P(0)=1-p.

- Binomial: the binomial distribution describes the probability of a particular number of “1” outcomes in a limited series of independent trials of an event with a Bernoulli distribution. So, for example, a binomial probability distribution will say something like “Given 100 trials, there is a 1% probability of one success, a 3% probability of 2 successes, …”, etc. The binomial distribution is one of the sources of the perfect bell curve.
As the number of trials increases, a histogram of the binomial distribution becomes closer and closer to the perfect bell.

- Zipf: as a guy married to a computational linguist, I’m familiar with this one. It’s a power-law distribution: given an ordered set of outcomes o1, …, on, the probabilities work out so that P(oi) is proportional to 1/i – so P(o1)=2×P(o2), P(o1)=3×P(o3), and so on. The most famous example of this is that if you take a huge volume of English text, you’ll find that the most common word is “the”; randomly picking a word from English text, the probability of “the” is about 7%. The number 2 word is “of”, which has a probability of about 3.5%. And so on. If you plot a Zipf distribution on a log-log scale, with descending powers of 10 on the vertical axis and the ordered ois on the horizontal axis, you’ll get a straight descending line.

- Poisson: the Poisson distribution is the one used in computer networks, as I described above. The Poisson distribution describes the probability of a given number of events happening in a particular length of time, given that (a) the events have a fixed average rate of occurrence, and (b) the time to the next event is independent of the time since the previous event. It turns out that this is a pretty good model of network traffic: on a typical network, taken over a long period of time, each computer will generate traffic at a particular average rate. There’s an enormous amount of variation – sometimes it’ll go hours without sending anything, and sometimes it’ll send a thousand messages in a second. But overall, it averages out to something regular. And there’s no way of knowing what pattern it will follow: just because you’ve just seen 1,000 messages per second for 3 seconds, you can’t predict that it will be 1/1000th of a second before the next message.

- Zeta: the zeta distribution is basically the Zipf distribution over an infinite number of possible discrete outcomes.
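Several of the distributions above take only a line or two to compute, and checking that each pmf sums to 1 over its support is a nice sanity test. A stdlib-Python sketch (the Poisson support is truncated, so its sum is only approximately 1):

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    # probability of exactly k successes in n Bernoulli(p) trials
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    # probability of exactly k events at average rate lam
    return exp(-lam) * lam**k / factorial(k)

uniform = [1/6] * 6                                    # fair die
binom = [binomial_pmf(k, 100, 0.03) for k in range(101)]
poisson = [poisson_pmf(k, 3.0) for k in range(50)]     # truncated support

print(sum(uniform), sum(binom), sum(poisson))          # each ~ 1.0
print(binom[2], poisson_pmf(2, 3.0))                   # both near 0.22
```

The last line hints at why the Poisson shows up for "rare event" counts like packet arrivals: a Binomial(n, p) with large n and small p is well approximated by a Poisson with rate n·p.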
XCode Build Phase script exit code 1 error React Native

I am working on a React-Native application. I've inherited it; development had been stopped for a while and some of the dependencies are outdated. I do not believe this is part of the problem, but it is worth mentioning. Whatever I do, both on the command line and in XCode, the build fails with the following error:

/bin/sh -c /Users/ME/Library/Developer/Xcode/DerivedData/AppName-cftpundgcnfgkvchyeqvsgfusuhb/Build/Intermediates.noindex/AppName.build/Debug-iphonesimulator/AppName.build/Script-948B6EB41D047ADB00584768.sh
Error loading build config, ignoring
Command /bin/sh failed with exit code 1

What I've done
- npm install
- In the ios directory, "sudo gem install cocoapods" and "pod install"
- react-native run-ios

One way or another, this never works and the error is always the one above.

What I've tried
- Locking and unlocking the keychain
- Deleting the derived data folder, cleaning and re-building
- Ticking the "only run for install" checkbox in XCode in the RN bundle section
- Using yarn instead of npm
- pod deintegrate and install once again
- Upgrading react-native

Once more, no luck. I'm fairly new to this and don't fully understand what's going on - or why this is failing. Any ideas as to how to solve it?

Version info
- react-native-cli: 2.0.1
- react-native: 0.61.5
- xcode: Version 12.3 (12C33)
- simulator: iphone 11 (ios 14.3)

Try to delete the content of the DerivedData folder; see https://programmingwithswift.com/delete-derived-data-xcode/ and https://stackoverflow.com/questions/38016143/how-can-i-delete-derived-data-in-xcode-8 for more info

Thanks for taking the time to answer - but as the question states, I've already tried to delete the derived data folder (and also cleaned the project) to no avail

This isn't really the error. Try running the app via Xcode and check the error again

Hi, again, as the question states, this error happens both in the command line and in Xcode.
Ha, I missed that part, you are right. In this case, go to the Build Phases of your Xcode project and work out which scripts are running during the build... try removing them one by one and see if the problem is resolved; maybe, like you said, one of your dependencies is running some old script...
sed -i with quotes as first arg

Using sed to fix whitespace in a git pre-commit hook, I ran into problems with the line:

sed -i '' [the regex] [the file]

Similar comments by others mention this inconsistency, and that taking out the quotes fixes the problem. That confuses me because I thought sed -i should take a filename as the parameter immediately following the flag. It may be a shell inconsistency; the gist I linked to is a .sh script and I'm using bash/centos. What is the magic/issue behind the sed -i '' usage, and how does stripping out the '' correct the problem?

sed on a Mac needs sed -i ''

sed -i '' filename does nothing to the file. -e is used for giving more than one pattern operation on a file, i.e. sed -e 'pattern1' -e 'pattern2' filename, OR you have to use "|" pipe (creating sub-shells).

BTW, you'd probably be better off using ex instead of sed -i: ex is POSIX-specified for use as an in-place editor, whereas the -i vendor extension to sed, not being standardized, isn't guaranteed to be supported at all, and when it is supported, isn't guaranteed to work in any particular way. ...or, of course, using sed without the -i and implementing the logic to overwrite the original after a successful operation yourself.

As @avinashraj mentioned in their comment, BSD sed requires an argument to -i whereas GNU sed makes that argument optional (and doesn't allow a space, in a quick test on CentOS 5). Because of that GNU sed behaviour, using sed -i '' 'script' file is going to fail because it will assume -i has no argument, use '' as the script to run, and use 'script' as the first file to operate on. Using sed -i '' -e 'script' file will solve the script-as-filename problem for GNU sed but then leaves it interpreting '' as a filename. Using sed -i'' -e 'script' file solves that problem. I have no idea whether that is legal for BSD sed though. I expect -e 'script' is fine. The question is whether it accepts the suffix smushed up against the -i flag.
Edit: As @CharlesDuffy accurately points out BSD sed cannot possibly accept -i'' or it would accept -i as well since it cannot tell the difference between those in its argument processing (the empty string has already dropped away at that point). Which means, unless I've missed something or GNU sed can actually accept -i '' correctly that I don't see a way to do this portably. sed -i'' -e 'script' file is precisely the same as sed -i -e 'script' file; in the argv array passed to sed from the shell, it's not possible to tell the difference. @CharlesDuffy Good point. Wasn't thinking about that clearly. That makes this rather unfortunate. The below function defines a sed_inplace command that tries to do the right thing on any platform. (Note that it requires bash as shell). sed_inplace() { local -a args=( "$@" ) local sed_version if sed_version=$(sed --version 2>&1) && [[ $sed_version = *'(GNU sed)'* ]]; then # the GNU version sed -i "$@" elif [[ $sed_version = *"[-i "* ]]; then # assuming this to be the BSD version with -i sed -i '' "$@" else # the generic version local file=${args[${#args[@]} - 1]} local tempfile=$(mktemp "${file}.XXXXXX") if sed "$@" <"$file" >"$tempfile"; then mv "$tempfile" "$file" else local rc=$? rm -f -- "$tempfile" return $rc fi fi } Of course, you'd avoid this whole mess if you didn't try to use sed -i at all, and instead used ex.
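For a script that cannot assume either sed variant, the simplest fallback is the pattern hinted at above: run sed without -i and overwrite the original only if sed succeeded. A minimal sketch (the file name and pattern are just for demonstration):

```shell
# Create a demo file, then edit it "in place" without sed -i.
printf 'hello foo\n' > notes.txt

# Run sed to a temp file; replace the original only on success.
tmp=$(mktemp) &&
  sed 's/foo/bar/' notes.txt > "$tmp" &&
  mv "$tmp" notes.txt

cat notes.txt   # hello bar
```

This works identically with GNU and BSD sed, since -i never enters the picture.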
 using System.IO; using Mono.Cecil.Rocks; namespace Campy.Utils { using Mono.Cecil; using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; public static class MonoInterop { public static Mono.Cecil.TypeReference ToMonoTypeReference(this System.Type type) { String kernel_assembly_file_name = type.Assembly.Location; Mono.Cecil.ModuleDefinition md = Mono.Cecil.ModuleDefinition.ReadModule(kernel_assembly_file_name); var reference = md.ImportReference(type); return reference; } public static System.Type ToSystemType(this TypeReference type) { var to_type = Type.GetType(type.FullName); if (to_type == null) return null; string y = to_type.AssemblyQualifiedName; return Type.GetType(y, true); } public static Mono.Cecil.TypeReference SubstituteMonoTypeReference(this System.Type type, Mono.Cecil.ModuleDefinition md) { var reference = md.ImportReference(type); return reference; } public static TypeReference SubstituteMonoTypeReference(this Mono.Cecil.TypeReference type, Mono.Cecil.ModuleDefinition md) { // ImportReference does not work as expected because the scope of the type found isn't in the module. foreach (var tt in md.Types) { if (type.Name == tt.Name && type.Namespace == tt.Namespace) { if (type as GenericInstanceType != null) { TypeReference[] args = (type as GenericInstanceType).GenericArguments.ToArray(); GenericInstanceType de = tt.MakeGenericInstanceType(args); return de; } return tt; } } return null; } public static System.Reflection.MethodBase ToSystemMethodInfo(this Mono.Cecil.MethodDefinition md) { System.Reflection.MethodInfo result = null; String md_name = Campy.Utils.Utility.NormalizeMonoCecilName(md.FullName); // Get owning type. 
Mono.Cecil.TypeDefinition td = md.DeclaringType; Type t = td.ToSystemType(); foreach (System.Reflection.MethodInfo mi in t.GetMethods(System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.CreateInstance | System.Reflection.BindingFlags.Default)) { String full_name = string.Format("{0} {1}.{2}({3})", mi.ReturnType.FullName, Campy.Utils.Utility.RemoveGenericParameters(mi.ReflectedType), mi.Name, string.Join(",", mi.GetParameters().Select(o => string.Format("{0}", o.ParameterType)).ToArray())); full_name = Campy.Utils.Utility.NormalizeSystemReflectionName(full_name); if (md_name.Contains(full_name)) return mi; } foreach (System.Reflection.ConstructorInfo mi in t.GetConstructors(System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.CreateInstance | System.Reflection.BindingFlags.Default)) { String full_name = string.Format("{0}.{1}({2})", Campy.Utils.Utility.RemoveGenericParameters(mi.ReflectedType), mi.Name, string.Join(",", mi.GetParameters().Select(o => string.Format("{0}", o.ParameterType)).ToArray())); full_name = Campy.Utils.Utility.NormalizeSystemReflectionName(full_name); if (md_name.Contains(full_name)) return mi; } Debug.Assert(result != null); return result; } public static bool IsStruct(this System.Type t) { return t.IsValueType && !t.IsPrimitive && !t.IsEnum; } public static bool IsStruct(this Mono.Cecil.TypeReference t) { return t.IsValueType && !t.IsPrimitive; } } }
Platform: vSphere IPI

1. If the user inputs an invalid value for platform.vsphere.diskType in the install-config.yaml file, there is no validation check for diskType; the installer doesn't exit with an error but continues the installation, which is not the same behavior as in 4.10. After all VMs are provisioned, I checked that the disk provision type is thick.

2. If the user doesn't set platform.vsphere.diskType in the install-config.yaml file, the default disk provision type is thick, not the vSphere default storage policy. On VMC, the default policy is thin, so the description of diskType may also need to be updated.

$ ./openshift-install explain installconfig.platform.vsphere.diskType
Valid Values: "","thin","thick","eagerZeroedThick"
DiskType is the name of the disk provisioning type, valid values are thin, thick, and eagerZeroedThick. When not specified, it will be set according to the default storage policy of vsphere.

What did you expect to happen?
validation for diskType

How to reproduce it (as minimally and precisely as possible)?
set diskType to invalid value in install-config.yaml and install the cluster Verified on 4.11.0-0.nightly-2022-06-21-040754 Set diskType to invalid value in install-config.yaml: $ ./openshift-install create manifests --dir ipi ERROR failed to fetch Master Machines: failed to load asset "Install Config": failed to create install config: invalid "install-config.yaml" file: platform.vsphere.diskType: Invalid value: "test": diskType must be one of [eagerZeroedThick thick thin] $ ./openshift-install create manifests --dir ipi3 ERROR failed to fetch Master Machines: failed to load asset "Install Config": failed to create install config: invalid "install-config.yaml" file: platform.vsphere.diskType: Invalid value: "Thick": diskType must be one of [eagerZeroedThick thick thin] Set diskType to valid value in install-config.yaml: $ ./openshift-install create manifests --dir ipi2 INFO Consuming Install Config from target directory INFO Manifests created in: ipi2/manifests and ipi2/openshift $ yq r ipi2/manifests/cluster-config.yaml 'data.install-config' | yq r - platform.vsphere.diskType If set diskType as "" in install-config.yaml, the result is the same as not setting diskType. The description update will be tracked in BZ https://bugzilla.redhat.com/show_bug.cgi?id=2098072, so move this bug to VERIFIED. Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
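For illustration, the shape of the validation verified above can be sketched in a few lines. This is a hypothetical Python rendering of the check, not the installer's actual Go code; the function name is made up:

```python
# Hypothetical sketch of the diskType validation; the real check lives in
# the openshift-install Go codebase.
VALID_DISK_TYPES = ("", "thin", "thick", "eagerZeroedThick")

def validate_disk_type(disk_type: str) -> None:
    """Raise with an installer-style message on an invalid diskType."""
    if disk_type not in VALID_DISK_TYPES:
        raise ValueError(
            f'platform.vsphere.diskType: Invalid value: "{disk_type}": '
            "diskType must be one of [eagerZeroedThick thick thin]"
        )

validate_disk_type("thin")  # accepted; an unknown value like "test" raises
```

Note the check is case-sensitive, matching the verification above where "Thick" was rejected.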
Enterprise architecture (EA) is a description of an organisation from the integrated business and IT perspective intended to facilitate information systems planning. Popular EA frameworks1,2, books3,4 and even academic articles5 suggest that EA provides comprehensive descriptions of the current and future states of an organisation as well as a transition roadmap between them. Essentially, the mainstream EA literature typically describes EA as a thick ‘book’ full of various diagrams with four distinct ‘chapters’: business architecture, data architecture, application architecture and technology architecture. A relatively well-known EA guru Jaap Schekkerman even argues that EA is a ‘complete expression of the enterprise’6. Accordingly, mainstream approaches to EA, e.g. TOGAF ADM, suggest that EA, as a comprehensive book, is developed step-by-step, chapter by chapter and then used as an instrument for planning. However, all these approaches are based on the false belief widely promoted by John Zachman that organisations can be planned in detail like buildings or any other engineering objects. These approaches represent typical management fads and cannot be successfully implemented7,8, while successful EA practices do not resemble these approaches in any real sense9,10,11. Unsurprisingly, my analysis shows that the EA documentation in successful EA practices neither can be represented as two separate descriptions of the current and future states, nor can be easily split into distinct chapters (e.g. business, data, applications, etc.). Instead, individual EA artifacts, which often describe multiple domains simultaneously, can describe various points in time and can even be essentially timeless. Moreover, EA, in general, cannot be described as a comprehensive ‘book’, which is developed and then used, but rather as a set of diverse EA artifacts with different stakeholders, lifecycles, usage and complex interrelationships. 
Stakeholders, lifecycles and usage of EA artifacts Previously I reported that the notion of EA can be explained by the CSVLOD model, which articulates six general types of EA artifacts used in all mature EA practices: Considerations, standards, visions, landscapes, outlines and designs12,13. Considerations (principles, policies, maxims, etc.) are global conceptual rules and fundamental considerations important for business and relevant for IT. Standards (technology reference models, guidelines, reference architectures, etc.) are global technical rules, standards, patterns and best practices relevant for IT systems. Visions (business capability models, roadmaps, future state architectures, etc.) are high-level conceptual descriptions of an organisation from the business perspective. Landscapes (landscape diagrams, inventories, platform architectures, etc.) are high-level technical descriptions of the organisational IT landscape. Outlines (solution overviews, conceptual architectures, options papers, etc.) are high-level descriptions of specific IT initiatives understandable to business leaders. Designs (solution designs, solution architectures, project-start architectures, etc.) are detailed technical descriptions of specific IT projects actionable for project teams. Considerations represent the overarching organisational context for information systems planning. Their purpose is to help achieve the agreement on basic principles, values, directions and aims. Considerations are permanent EA artifacts which live and evolve together with an organisation. They are developed once, updated according to the ongoing changes in the business environment and used to influence all architectural decisions. For example, a set of ~10-20 architecture principles are established once by senior business leaders and architects and then periodically reviewed and updated, often on a yearly basis. 
These principles drive all EA-related decisions and thereby influence the development of visions, selection of standards and evolution of landscapes, as well as architectures of all IT initiatives described in outlines and designs. Standards represent proven reusable means for IT systems implementation. Their purpose is to help achieve technical consistency, technological homogeneity and regulatory compliance. Standards are permanent EA artifacts which live and evolve together with an organisation. They are developed on an as-necessary basis, updated according to the ongoing technology progress and used to influence architectures of all IT initiatives. For example, technology reference models are developed gradually by architects and periodically updated when new technologies emerge on the market. Technology reference models provide technology selection guidelines to all outlines and designs developed for specific IT initiatives and thereby shape resulting landscapes. Visions represent shared views of an organisation and its future agreed by business and IT. Their purpose is to help achieve the alignment between IT investments and long-term business outcomes. Visions are permanent EA artifacts which live and evolve together with an organisation. They are developed once, updated according to the ongoing changes in the business strategy and used to guide IT investments, prioritise IT initiatives and initiate IT projects. For example, business capability models are developed once by senior business leaders and architects and then periodically ‘re-heatmapped’ based on the changing business priorities. Heatmapped business capabilities initiate outlines for new IT initiatives and guide the selection of standards, evolution of landscapes and development of designs. Landscapes represent a knowledge base of reference materials on the IT landscape. Their purpose is to help understand, analyse and modify the structure of the IT landscape. 
Landscapes are permanent EA artifacts which live and evolve together with an organisation. They are developed on an as-necessary basis, maintained current to reflect the evolution of the IT landscape and used to rationalise the IT landscape, manage the lifecycle of IT assets and plan new IT initiatives. For example, landscape diagrams are developed gradually for different areas by architects and periodically updated when new IT systems are introduced or existing IT systems are decommissioned. Landscape diagrams provide the descriptions of the existing IT environment for planning all outlines and designs of new IT solutions. Outlines represent benefit, time and price tags for proposed IT initiatives. Their purpose is to help estimate the overall business impact and value of proposed IT initiatives. Outlines are temporary EA artifacts, which are short-lived, single-purposed and disposable. They are developed at the early stages of IT initiatives, used to evaluate, approve and fund specific IT initiatives, but then archived after investment decisions are made. For example, solution overviews are developed by architects and business leaders for all new IT initiatives to stipulate key requirements, discuss available implementation options and support investment decisions. After the IT initiatives are approved, solution overviews lose their value, but provide the initial basis for developing more detailed technical designs for these IT initiatives. Designs represent communication interfaces between architects and project teams. Their purpose is to help implement approved IT projects according to business and architectural requirements. Designs are temporary EA artifacts which are short-lived, single-purposed and disposable. They are developed at the later stages of IT initiatives, used to implement IT projects, but then archived after the corresponding projects are implemented. 
For example, solution designs are developed by architects, project teams and business representatives for all new IT projects to stipulate detailed requirements and describe required IT solutions. After the IT projects are implemented, solution designs lose their value and get stored in organisational document repositories.

The relationship between EA artifacts

The relationship between the six general types of EA artifacts of the CSVLOD model12,13 described above is shown in Figure 1.

Figure 1. Relationship Between EA Artifacts

Figure 1 explains the internal structure of EA as a complex set of different interrelated EA artifacts. As evident from Figure 1, EA can hardly be conceptualised as a single ‘book’ providing a ‘complete expression of the enterprise’, but rather as a complex ecosystem of different interacting elements. EA is not ‘developed and then used’, but consists of different types of EA artifacts with different lifecycles, some permanent and some temporary, which co-evolve together in organisations.

The views in this article are the views of the author and are his own personal views that should not be associated with any other individuals or organisations.
Sure, but with Zip, you can’t reliably figure out where the chunks start and stop. In PNG, you can easily enumerate the chunks and skip over any chunks your program does not parse.

When Jean-loup Gailly and Mark Adler started infozip, a lot of discussion was done to try to guess what the future would hold, but it was just that.

I didn’t see this option on the version of Info-ZIP they were using, but that would definitely do what I wanted. I thought I looked at current man pages but I guess I didn’t.

I was looking at the details of the ZIP format just last week when I discovered that a major performance issue was being caused by quirks of a ZIP implementation.

- The True value indicates that when compressing a file larger than 4 GB, it’ll create zip files with the zip64 extension.
- Now that we have Dropbox, the cloud makes file storage and sharing faster, easier, and safer than file compression.
- The JSON files use the .json extension, similar to the XML file format, while saving.
- Since the uncompressed data is the zip archive, the data being checksummed includes the checksum itself.
- This will open the 7-Zip Add to Archive window, where you can customize your zipped file before it’s created.

Set the desired compression level by clicking the down arrow next to “Zip Files”. Ask the magazine company if they have an ftp server or a preferred way to receive the files. Lossy removes detail, among other things, to compress the file.

Creating a .zip file is a great way to condense one or more files or folders into a single one. The compressed .zip file is useful for email or sending multiple files in one package. Windows generally opens a wizard to ask where you want the files extracted. The location of the “extract files” tool varies depending on how Windows Explorer is set up and what version of Windows you are using. Notice the address bar now says that you are in a folder with a .zip extension and shows a zipped file icon as well.
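The point above about PNG's enumerable chunks can be shown in a few lines. This sketch builds a minimal 1x1 PNG header and trailer in memory (no image data, for brevity) and walks its chunk list; the helper names are made up for the example:

```python
import struct
import zlib

def png_chunks(data: bytes):
    """Yield (type, payload) for each chunk in a PNG byte string."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    pos = 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8].decode("ascii")
        yield ctype, data[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4-byte length + 4-byte type + payload + 4-byte CRC

def chunk(ctype: bytes, payload: bytes) -> bytes:
    """Assemble one chunk: length, type, payload, CRC over type+payload."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

# IHDR payload: width, height, bit depth, colour type, compression,
# filter, interlace -- here a 1x1 8-bit greyscale image.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
png = b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr) + chunk(b"IEND", b"")
print([t for t, _ in png_chunks(png)])  # ['IHDR', 'IEND']
```

An unknown chunk type is simply skipped by the length field, which is exactly what makes PNG parsers robust to extensions; ZIP offers no equivalent self-describing walk.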
It can also allow you to deliver files successfully if an email filter is blocking your file extension, since .zip files are typically allowed. Most modern formats for an image or a video such as a JPG file or an MPEG4 file already exist in a highly compressed state. Thus, using a ZIP compression tool for such file formats doesn’t really help. People often think that compressing the video file might take a lot of time or is not even possible. That is not true, you can easily compress your video without losing quality. Compressing the video with an inappropriate method can cause the video to lose quality. Instead, you can read these methods mentioned below properly to solve your problem.
Hello, my name is Mihkel, a 25-year old from Narva, Ida-Virumaa, Estonia. Currently, I am studying software development at kood/Jõhvi while also working as a QA Engineer. In my free time, I enjoy playing and watching football. I approach life with a problem-solving mindset and strive to find solutions rather than dwelling on issues. My background and decision to join kood/Jõhvi After studying informatics at University for two years, I realised that while theoretical materials were interesting, they weren’t enough to sustain my interest in the field. I even began to question whether programming was the right career path for me, and ultimately made the difficult decision to drop out of the program. “Not only will you learn programming here, but you’ll also get an opportunity to create real life projects” However, the whole picture changed when I discovered kood/Jõhvi and was very intrigued by the whole idea of new educational methodology that was suggested. I participated in the Selection Sprint, which made me confident that it is worth it to try and be a part of kood/Jõhvi’s journey. Life and studies in kood/Jõhvi In 2021, I started my studies on-site at the Sillamäe campus. However, I quickly connected with a group of remote students who became my go-to team for all future group projects. This made it possible for me to transition to off-site learning as well, and I have been studying remotely ever since. (Learn about life while studying remotely.) In my perspective, selecting a suitable team to work with is highly influential. I prefer not to be part of a team with overly skilled professionals or complete beginners in programming, as it may either limit my learning opportunities or hinder the team’s overall progress due to my inability to keep up with their pace. 
Therefore, in my experience, it is important to collaborate with individuals who share similar levels of expertise and, surprisingly, similar sleeping patterns and daily routines played a vital role for me as well.

Initiatives, opportunities and life apart from studies

One thing that always stood out to me about the kood/Jõhvi team was their openness to different ideas and requests. As a student, I felt encouraged to share my own ideas for activities and projects outside of the classroom. The first idea I proposed early on was for sport events. A small group of us would gather every week to play football, volleyball, basketball and field hockey. It was a great way to stay active and connect with my fellow students. Switching to studying remotely, which I mentioned above, makes me miss these gatherings.

Another initiative that I took with a friend was to create a podcast called kood/Cast. It started as a joke, but we decided to give it a shot and pitched the idea to the kood/Jõhvi team. It was amazing to see our idea become a reality, and we enjoyed creating content that was relevant to our fellow students.

During my second year, I was given an incredible opportunity to write a room booking system for the school. It was a challenging but rewarding experience, as I was able to apply my programming skills to a real-life project. The kood/Jõhvi team was always supportive, providing guidance and feedback along the way.

In search of employment

Even though I acquired the necessary skills for a Junior Developer position, I noticed that most of the companies were hiring either Senior or Mid-level developers. I didn't succeed in finding the desired position, but still found an opportunity to test my skills as a QA engineer at Evolution Gaming Group, where I now work full-time alongside my studies. I am extremely happy about this opportunity and it added to my confidence in the IT field.

I am grateful to kood/Jõhvi for equipping me with the necessary skills to pursue a career in the IT field, which ultimately led me to my current role. My goal for the near future is to complete a specialisation in Artificial Intelligence, finish the school program and then apply my newly acquired skills in the field of AI to test my abilities. If you want to take on your coding journey at kood/Jõhvi then head over to our “How to apply” page. See you at kood/Jõhvi!
<?php

namespace Tlr\Display\Assets;

use Illuminate\Http\Request;
use Illuminate\Routing\Controller;
use Tlr\Display\Assets\AssetRenderer;
use Tlr\Display\Assets\AssetResolver;

class AssetRenderController extends Controller
{
    /**
     * The asset renderer
     *
     * @var \Tlr\Display\Assets\AssetRenderer
     */
    protected $renderer;

    /**
     * The asset resolver
     *
     * @var \Tlr\Display\Assets\AssetResolver
     */
    protected $assets;

    public function __construct(AssetResolver $assets, AssetRenderer $renderer)
    {
        $this->renderer = $renderer;
        $this->assets = $assets;
    }

    /**
     * Render the given JS assets
     *
     * @Get("assets/scripts.js", as="assets.js")
     *
     * @return \Symfony\Component\HttpFoundation\Response
     */
    public function scripts(Request $request)
    {
        return response(
            $this->renderer->scripts($this->resolve($request->input('sources'))),
            200,
            ['Content-Type' => 'application/javascript; charset=UTF-8']
        );
    }

    /**
     * Render the given CSS assets
     *
     * @Get("assets/styles.css", as="assets.css")
     *
     * @return \Symfony\Component\HttpFoundation\Response
     */
    public function styles(Request $request)
    {
        return response(
            $this->renderer->styles($this->resolve($request->input('sources'))),
            200,
            ['Content-Type' => 'text/css; charset=UTF-8']
        );
    }

    /**
     * Resolve the given assets
     *
     * @param array $sources
     * @return array
     */
    protected function resolve($sources)
    {
        return $this->assets
            ->clear()
            ->resolveArray((array) $sources)
            ->assets();
    }
}
You want some biomes? Well here you go. This is the 5 Odd Biomes Addon. It adds in 5 extra biomes that are very unusual. It is very odd so yea.... This is my first submission on MCPEDL. The 5 Odd Biomes Addon adds in odd biomes that are not supposed to exist in Minecraft. They are the wood and leaves biome, pure snow biome, bedrock biome and etc. This was not easy to make. The biomes are shown below:

This surprised me when it worked.

Wood And Leaves Biome: This biome is like the default lowlands noisetype. But rather than grass and dirt, it uses leaves and wood as replacements! This will be useful for builders cause they'll have an INSANE supply of wood! Get an axe and start CHOPPING! Smaller Flower Forests generate here. Lag may occur depending on your phone due to so many leaves.

Pure Snow Biome: The Pure Snow Biome contains snow blocks that go down to a certain y-level. This photo shows the snow underground. This was blown up with tnt. Contains smaller cold taiga biomes. Every SPLEEF lover wants this biome.

Bedrock Biome: It is what it is. Bedrock on the surface. The negative effect is: when digging back up under the biome, the bedrock will block your way up and that means you need to find a way out! Contains Stone Beaches. You may be able to get out of there. Do not mine straight forward in order to prevent getting stuck below the bedrock.
Here's some hidden text: 01001100 01100101 01100001 01100100 00100000 01110100 01101111 00100000 01101101 01111001 00100000 01101110 01100101 01111000 01110100 00100000 01100001 01100100 01100100 01101111 01101110 00111010 00100000 01001001 01110100 01110011 00100000 01100001 00100000 01101101 01101111 01100010 00101110 00001010 01001000 01100101 01110010 01100101 00100111 01110011 00100000 01101101 01111001 00100000 01011000 01100010 01101111 01111000 00111010 00001010 01010000 01100001 01111001 01101100 01100001 01111001 01100101 01110010 00110001 00110010 00110011 00110100 00001010 01000001 01101110 01111001 01110111 01100001 01111001 00101100 00100000 01100111 01101111 01101111 01100100 00100000 01101010 01101111 01100010 00101110 Its pretty long. But its not. Here are the 3 biomes at once: There are 2 extra biomes but i couldnt find them. If you cant find them i may need to get rid of them. The 2 Extra Biomes: The Decostones Biome adds in a biome containing a lot of stone, granite, diorite and andesite. Good for builders out there. Dead Area Biome: The Dead Area Biome has coarse dirt in the surface. You cannot find trees here. Stone Beaches generate here. This addon is free to use for everyone. Use it on your addons for free! But credit me at least.😁 These biomes use the lowlands noisetype so expect some odd looks. Untested on Windows 10. This only took about an hour thanks to a addon creator app The addon creator app name is Addons Maker For Minecraft PE. Here's the link: Not tested for 1.15 and 1.16 yet. Just in case, use it for 1.14 for now.(or you can use it in the betas if you want.) My first Submission. im a 13 year old who made this. what? Anyways, peace out. Select version for changelog: Forgot to add the zip download.😅 Added it in now. How to import: 1. Download the addon. 2. Click on the file. 3. Select "Minecraft" 1. Unzip the zip file. 2. Move the folders to the following areas. 3. 
Resource Pack goes here: Main Folder > games > com.mojang > resource_packs 4. Behavior Pack goes here: Main Folder > games > com.mojang > behavior_packs
A little word cloud generator in Python. Contribute to amueller/word_cloud development by creating an account on GitHub. Resources on word clouds:
- What are Word Clouds? The Value of Simple Visualizations – A simple visualization such as a word cloud can make an instant impact.
- Word Cloud Generator for Chrome; Word Cloud Python tools; Google Word
- Word Cloud Data (Practice Interview Question) | Interview Cake – You're building a word cloud. Write a function to figure out how many times each word appears so we know how big to make each word in the cloud.
- Word cloud from a Pandas data frame | Bartosz Mikulski (5 Sep 2018) – In this article, we explore how to generate a word cloud in Python in any shape that you desire.
- WordCloud using Python – YouTube (9 Sep 2017) – This video demonstrates how to create a word cloud of any given text corpus/article using the wordcloud module in Python.
- Wordclouds.com – A free online word cloud generator and tag cloud creator, similar to Wordle. Create your own word clouds and tag clouds; paste text or upload a file.
- wordcloud · PyPI – word_cloud, a little word cloud generator in Python. Read more about it on the blog post or the website. The code is tested against Python 2.7, 3.4, 3.5, 3.6 and 3.7. Installation with pip: pip install wordcloud. If you are using conda, you can install from the conda-forge channel: conda install -c conda-forge wordcloud.
- WORDCLOUD – The Python Graph Gallery.
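The Interview Cake exercise quoted above (count how many times each word appears, so the cloud knows how big to draw each word) can be sketched with the standard library alone. The tokenization rule here, lowercased runs of letters and apostrophes, is a simplifying assumption:

```python
import re
from collections import Counter

def word_counts(text):
    """Lowercase the text, pull out runs of letters/apostrophes,
    and count how many times each word appears."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

counts = word_counts("After beating the eggs, Dana read the next step: Beat the eggs again.")
print(counts["the"], counts["eggs"], counts["beat"])  # → 3 2 1
```

In a real word cloud the counts would then be mapped to font sizes; `Counter.most_common()` gives the words already sorted by frequency.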
Cross-Platform & Hybrid VS Native Apps? It’s time to start developing your app! You might be considering whether a hybrid or a native app is best – the short answer is: it depends. Let’s explore the high-level details of each in an effort to point you in the right direction. Although cross-platform and hybrid mobile apps are not exactly the same, for the purposes of this discussion we will group them together. Roughly, cross-platform apps are developed using HTML and CSS and optimized for mobile and desktop devices, whereas hybrid apps are created using a combination of web and native code to achieve an “almost native” look and feel. Native apps are exactly as they sound: applications that are written in the native language and style optimized for a specific platform. Examples of Native app development: - Android (Java/Kotlin) - iOS (Objective-C/Swift) - Windows (C#/Java) - MacOS (Objective-C) Examples of Hybrid & Cross-Platform development: - Flutter (Dart) Notice the programming language trend for cross-platform/hybrid… When to AVOID cross-platform & hybrid If raw performance and fluid rendering are absolutely necessary for your application, you may choose to avoid building your project with cross-platform/hybrid frameworks. In recent years non-native frameworks have been closing the performance gap between themselves and native apps, yet there still exists a significant difference. In my opinion, React Native and Flutter are the highest-performing non-native frameworks. The reason for the lacking performance is simple: non-native apps need to use a type of “bridge” to access functionality specific to the target platform. Additionally, native rendering is really hard to compete with; the built-in mobile WebViews that are used by many non-native frameworks just cannot compete with the native rendering engines. 
In the case that your application or its core features revolve around platform-specific functionality, hybrid and cross-platform approaches can become challenging. Each platform has features that are exclusive; think about iOS: Apple Pay, or the ability to insert a 2FA (2-Factor Authentication) code directly from a text message. Even something as simple as accessing contacts or the camera can be cumbersome, often requiring plugins or dependencies that leverage the “bridge” to access platform functionality. If these types of operations happen frequently, is the performance and development cost really worth it? Single deployment or user-base This one is simple. If you are not going to be publishing your application to users on multiple types of devices such as Android & iOS, then you don’t really need a non-native solution. In this case, a native app might be a better choice. When you SHOULD use cross-platform & hybrid Building prototype applications or working with a smaller budget. Quality software development is never cheap. It becomes even more expensive if you have to hire multiple teams to work on each platform for the SAME app! If you don’t have the desire to pay for multiple teams or learn to code for multiple platforms yourself, then hybrid apps could be the way to go. There is something to be said for having a single, unified codebase for your application. In many cases, this can lead to fewer repeat bugs and a reduction in overall development time. Rapid PoC and Prototyping If your skillset or the skills of your developers are not tuned for native development, a non-native solution can still get you shipping. Using a non-native development solution has opened the floodgates to novice developers and those who prefer web apps and web technology. Website AND an App?! If your application needs to function as a website AND an app, using the same code to serve a web app and mobile app is really fun! The concept of Progressive Web Apps (PWA) is becoming quite popular. 
In short, these are applications that run on the web and seamlessly adapt to mobile devices while implementing a unique content-caching strategy to facilitate the stateful “app-y” feel of the project. In general, if you desire platform flexibility, prefer web technology, or have a “simple” application that doesn’t need blazing-fast performance, then you should begin to investigate further into specific hybrid/cross-platform frameworks. Interested in Flutter? Take a gander at our recent article to become familiar with the basics.
Growing up, I always loved the Atari 2600 version of Asteroids. The colorful graphics, the nifty sound effects, the “Jaws”-esque music ratcheting up the tension–it was a thrilling arcade experience. It was also an incredibly difficult experience. I would always get my butt kicked before I reached even 5,000 points, even sooner if I dared to flip the left difficulty switch to A, enabling the flying saucer enemies. I spent a lot of time playing the “kiddie” mode, game #33, which only has four asteroids per wave, and the big asteroids only turn into one medium asteroid when hit, instead of 2. Even still, I would be lucky to get 10,000 points. So it was quite a surprise to me when I started playing Asteroids on my Atari 2600 emulator while waiting for my spaghetti water to boil, and before I knew it, started exceeding 20,000 points. I was playing it on the default difficulty, game #1. I hadn’t played it in years. But I managed to grasp something that my younger self couldn’t quite get. Unlike the arcade version of Asteroids, which has rocks coming at you from all directions, every wave of Atari 2600 asteroids starts on the left and right side of the screen, going up and down in two orderly rows, and only very gradually moves horizontally towards you. Furthermore, hitting the asteroids with your gun makes them smaller, but doesn’t affect their velocity in any significant way. They just continue on the same trajectory. So if you just stay in the center of the screen and blast whichever of those two rows is closest to you, you can destroy most of the asteroids before they even get close to your ship. This simplistic but effective strategy got me past 40,000 points… To 60,000 points… And beyond. The asteroids didn’t even speed up until about 80,000 points or so, but by then I had so many extra lives (and getting more every 5,000 points) that it didn’t really faze me at all. 
Here’s me just about to roll over the score counter at 99,950 points: As you can see, I still had plenty of lives left. By that point, the novelty of my accomplishment had worn away and I was getting pretty bored. So after the score had flipped back to 0, I flipped the left difficulty switch to A and turned on the flying saucers. This did not increase the difficulty at first. If there were a lot of asteroids left in the level, the saucer would almost invariably crash into one of them, as the saucers always start on either the left or the right and move horizontally to the other side. It was only after most of the asteroids were clear that the saucer would present any sort of challenge. There, my strategy of staying in the center of the screen was pretty much useless, and the saucers finally whittled down my extra lives at 147,970 points… a new Asteroids record for me. No save states or cheats… just me blowing up a whole bunch of space rocks. So, my question to those reading this article is: which games that you struggled with endlessly as a child are easy to you now? As a corollary, are there any games that were easy for you as a kid that you suck at now? I can think of a few: mostly those early, clunky RPGs for the NES and early home computers. As a kid, you have a lot more time on your hands to memorize the spell system in Wizardry, map the towns and dungeons of The Bard’s Tale on graph paper, and trial-and-error yourself through the Marsh Cave in Final Fantasy without just dropping it and being distracted by work, or your spouse, or your social life, or whatever. What I wouldn’t give to have that level of attention to a video game anymore, or for that matter, anything else. So, what do you all think?
"Stratified" sampling of a population, where the subpopulations are based on a "count" instead of a category I have a set of data that is essentially bowls with marbles in them. Each bowl can contain anywhere from 3 to hundreds of thousands of marbles. Many more bowls contain small numbers of marbles than large numbers of marbles--if you arrange the bowls from fewest marbles to most marbles, when you reach the 99th percentile of bowl size you're in the 75th percentile of marbles--1% of the bowls contain about 25% of the marbles. The population is ultimately bowls--we cannot look at a single marble, only a whole bowl of marbles. But we want our sample to represent both bowls and marbles well. Ideally we'd like a sample that represents both marbles and bowls well (although I'm not quite sure what "well" means here). Stated another way, we would like a sample that reflects the full range of bowl sizes. The problem is that these needs are conflicting. If we sample bowls randomly, we end up sampling very few marbles since the vast majority of bowls contain very few marbles. If we randomly select marbles and make a sample by collecting the bowls containing all the marbles we selected (or we weigh each bowl's sampling probability by the number of marbles), we end up with only very large bowls and a sample that doesn't reflect the range of bowl sizes. The superficial solution seems to be to take two samples and combine them--one random sample of the population of bowls, and one random sample of the population of marbles (where we take the entire bowl for each sampled marble). But I can't help but think there's a better way to do this, with some kind of parameter that reflects the relative importance of a random sample of bowls vs. a random sample of marbles. Or at least some way to analyze the tradeoff based on the distribution of marbles across bowls. 
And even with the superficial sample, I'm not quite clear how to balance the sample sizes to represent bowls and marbles "evenly". I think there is a better way to do this than simply stratifying by bowl size (which is the best idea I have right now) since there's likely a way to parameterize the bowl/marble distribution. I'm sure there's a better way to describe this problem, where you have a population whose members have a count value, where the counts are not evenly distributed across the population, and you want a good sample of both random population members and of population members reflecting the diversity of count values. Any clarity on a better way to describe this would be helpful, even if you don't have an answer, so at least I could more deliberately search for a solution. Thank you! It looks like the quite common problem of sampling families or sampling children. Both samples are interesting but they are from different populations, although they are formed by the same people. You need to decide if bowls or marbles are your population. Not sure if this makes any sense so please correct me if I sound absurd. If I understood right, by drawing "evenly", you meant that you want to draw in a jointly uniform fashion, is this right? If that's the case, perhaps you could use the probability integral transform? I was able to seek the advice of a statistician who specializes specifically in sampling. The recommended approach was to stratify extremely aggressively--order the bowls from fewest marbles to most marbles and then take a single example from each stratum. This means you have a number of strata equal to the size of the sample. This preserves the distribution of the data, but ensures that the long tail also gets sampled.
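The statistician's recommendation in the last paragraph can be sketched in a few lines of Python: sort the bowls by marble count, cut the sorted order into as many strata as the desired sample size, and draw one bowl per stratum. The toy bowl sizes below are invented for illustration:

```python
import random

def stratified_by_count(bowl_sizes, sample_size, seed=0):
    """Sort bowls by marble count, split the sorted order into
    `sample_size` strata, and draw one bowl index per stratum."""
    rng = random.Random(seed)
    order = sorted(range(len(bowl_sizes)), key=lambda i: bowl_sizes[i])
    sample = []
    for k in range(sample_size):
        lo = k * len(order) // sample_size        # stratum boundaries in the
        hi = (k + 1) * len(order) // sample_size  # sorted order
        sample.append(rng.choice(order[lo:hi]))
    return sample

# Skewed toy population: many tiny bowls, a few big ones, one huge one.
sizes = [3] * 90 + [50] * 9 + [100000]
picked = stratified_by_count(sizes, sample_size=10)
print(sorted(sizes[i] for i in picked))
```

Because the strata are quantiles of bowl size, the long tail (the single huge bowl) lands in the last stratum and has a real chance of being drawn, while most of the sample still reflects the typical small bowls.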
When you log into Glide you'll arrive at your Dashboard. On the left-hand side, you’ll see projects you’re working on, as well as where you can create templates, review your usage data, and manage your billing and teams. You will also see any teams you belong to, and have the ability to create new teams if needed. Click on an app to start editing it. Glide's user interface can be broken into three areas: - The Data Editor - The Layout Editor You can navigate to these different sections using the top navigation bar, which remains consistent on all screens. The Data Editor has four distinct areas. - The tables area on the left shows each of your individual tables. If you’ve synced data from Google, Airtable, Excel, or a file, you’ll see all of your imported data here. You can add more tables directly in the Data Editor, or hit the + symbol to sync more data from another source. - The sources area on the bottom left is where you see your Data Sources. You can click here to sync or visit your data source. - The main area is the actual data grid itself. Here you can scroll, interact, and edit the data you see. - The Data Editor has its own top bar – which shows the Preview As dropdown, the name of the current table, the Find Column search box, and the Add Column button. The Layout area of Glide can be broken into 5 core areas and is where you do the bulk of your design work. - Device preview - Configuration panel Layout Editor: Tabs Tabs is where you manage the tabs in your app. You can add, delete, duplicate, rearrange, and drag them into the App Menu. Layout Editor: Screen The Screen section is where you manage the Components on that screen. Click on a Component to edit it and click the plus (+) icon to add a new Component. You can also re-order and delete Components here. When you see a dropdown arrow in Glide, very often you can collapse that area to make the interface cleaner, or to give yourself more room. 
👇 Layout Editor: Data Within the layout area of Glide, next to the Screen tab, you have another tab called Data. This is different from the Data Editor. The Data Editor shows you all of your data. The Data Tab shows you the data for the current screen. In the image above, you can see we're viewing a record (row). In the Data Tab, we see that row displayed vertically. This can be confusing at first, but compare it with the highlighted row in the sheet below and it will start to make sense. The Data Tab is editable, so you can actually interact with your data without even visiting the Data Editor. You can also add new columns and change their type. Layout Editor: Device preview The device preview area contains a number of features that let you interact with your app – both as a user and as an app developer. - The device itself - The dropdown within the Layout icon allows you to change the device width & OS. - The preview as dropdown - Play mode & select mode lets you toggle between interacting as a user and as an app developer – allowing you to select and configure Components with the mouse without triggering them like a normal user. - If you right-click or ctrl-click anywhere on your device, you'll bring up simple component editing options such as moving up or down; cut, copy, and paste; duplicate; and delete. Layout Editor: Configuration panel The area on the right of the Layout Editor is all to do with configuration, and it changes depending on your editing context. Knowing what context you are in is important. To help you understand this, there is a breadcrumb menu at the top. In the image below, see how the breadcrumb menu (top right) changes to show you where we currently are in our editing context. Within the configuration panel, many different things will appear, depending on your context. You will usually see 2 or 3 different tabs within the panel. These typically follow the pattern of General, Options, and Form.
""" Domain information groper (Dig) parsers ======================================= Parsers included in this module are: DigDnssec - command ``/usr/bin/dig +dnssec . SOA`` -------------------------------------------------- DigEdns - command ``/usr/bin/dig +edns=0 . SOA`` ------------------------------------------------ DigNoedns - command ``/usr/bin/dig +noedns . SOA`` -------------------------------------------------- """ import re from insights.core import CommandParser from insights.core.exceptions import SkipComponent from insights.core.plugins import parser from insights.specs import Specs HEADER_TEMPLATE = re.compile(r';; ->>HEADER<<-.*status: (\S+),') RRSIG_TEMPLATE = re.compile(r'RRSIG') class Dig(CommandParser): """ Base class for classes using ``dig`` command. Attributes: status (string): Determines if the lookup succeeded. has_signature (bool): True, if signature is present. command (string): Specific ``dig`` command used. Raises: SkipComponent: When content is empty or cannot be parsed. """ def __init__(self, context, command): self.status = None self.has_signature = False self.command = command super(Dig, self).__init__(context) def parse_content(self, content): if not content: raise SkipComponent('No content.') for line in content: match = HEADER_TEMPLATE.search(line) if match: self.status = match.group(1) if RRSIG_TEMPLATE.search(line): self.has_signature = True @parser(Specs.dig_dnssec) class DigDnssec(Dig): """ Class for parsing ``/usr/bin/dig +dnssec . SOA`` command. Sample output of this command is:: ; <<>> DiG 9.11.1-P3-RedHat-9.11.1-2.P3.fc26 <<>> +dnssec nic.cz. SOA ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58794 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; QUESTION SECTION: ;nic.cz. IN SOA ;; ANSWER SECTION: nic.cz. 278 IN SOA a.ns.nic.cz. hostmaster.nic.cz. 1508686803 10800 3600 1209600 7200 nic.cz. 
278 IN RRSIG SOA 13 2 1800 20171105143612 20171022144003 41758 nic.cz. hq3rr8dASRlucMJxu2QZnX6MVaMYsKhmGGxBOwpkeUrGjfo6clzG6MZN 2Jy78fWYC/uwyIsI3nZMUKv573eCWg== ;; Query time: 22 msec ;; SERVER: 10.38.5.26#53(10.38.5.26) ;; WHEN: Tue Oct 24 14:28:56 CEST 2017 ;; MSG SIZE rcvd: 189 Examples: >>> dig_dnssec.status 'NOERROR' >>> dig_dnssec.has_signature True >>> dig_dnssec.command '/usr/bin/dig +dnssec . SOA' """ def __init__(self, context): super(DigDnssec, self).__init__(context, '/usr/bin/dig +dnssec . SOA') @parser(Specs.dig_edns) class DigEdns(Dig): """ Class for parsing ``/usr/bin/dig +edns=0 . SOA`` command. Sample output of this command is:: ; <<>> DiG 9.11.1-P3-RedHat-9.11.1-3.P3.fc26 <<>> +edns=0 . SOA ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11158 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;. IN SOA ;; ANSWER SECTION: . 19766 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2017120600 1800 900 604800 86400 ;; Query time: 22 msec ;; SERVER: 10.38.5.26#53(10.38.5.26) ;; WHEN: Thu Dec 07 09:38:33 CET 2017 ;; MSG SIZE rcvd: 103 Examples: >>> dig_edns.status 'NOERROR' >>> dig_edns.has_signature False >>> dig_edns.command '/usr/bin/dig +edns=0 . SOA' """ def __init__(self, context): super(DigEdns, self).__init__(context, '/usr/bin/dig +edns=0 . SOA') @parser(Specs.dig_noedns) class DigNoedns(Dig): """ Class for parsing ``/usr/bin/dig +noedns . SOA`` command. Sample output of this command is:: ; <<>> DiG 9.11.1-P3-RedHat-9.11.1-3.P3.fc26 <<>> +noedns . SOA ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47135 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;. IN SOA ;; ANSWER SECTION: . 20195 IN SOA a.root-servers.net. nstld.verisign-grs.com. 
2017120600 1800 900 604800 86400 ;; Query time: 22 msec ;; SERVER: 10.38.5.26#53(10.38.5.26) ;; WHEN: Thu Dec 07 09:31:24 CET 2017 ;; MSG SIZE rcvd: 92 Examples: >>> dig_noedns.status 'NOERROR' >>> dig_noedns.has_signature False >>> dig_noedns.command '/usr/bin/dig +noedns . SOA' """ def __init__(self, context): super(DigNoedns, self).__init__(context, '/usr/bin/dig +noedns . SOA')
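As a quick illustration of what these parsers extract, the two module-level regexes can be exercised on a snippet of ``dig`` output on their own, without the insights framework; the sample lines below are abbreviated from the docstrings above:

```python
import re

# The same patterns the parser module defines.
HEADER_TEMPLATE = re.compile(r';; ->>HEADER<<-.*status: (\S+),')
RRSIG_TEMPLATE = re.compile(r'RRSIG')

sample = [
    ';; Got answer:',
    ';; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58794',
    'nic.cz. 278 IN RRSIG SOA 13 2 1800 20171105143612 20171022144003 41758 nic.cz.',
]

status = None
has_signature = False
for line in sample:
    match = HEADER_TEMPLATE.search(line)
    if match:
        status = match.group(1)   # lookup result, e.g. NOERROR or SERVFAIL
    if RRSIG_TEMPLATE.search(line):
        has_signature = True      # a DNSSEC signature record is present

print(status, has_signature)  # → NOERROR True
```

This mirrors ``Dig.parse_content`` exactly: the header line yields ``status`` and any RRSIG record sets ``has_signature``.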
#include "UIList.h" #include "UIScrollBar.h" #define BASE_SCROLLBAR_LENGTH 0.4f UIList::UIList(const char* filename, int displayElementsCount, float listHeight, UIController* uiController) : UIElement(uiController){ //La ruta de la imagen a cargar this->filename = filename; this->displayElementsCount = displayElementsCount; this->selectedElement = NULL; this->selectedElementId = -1; this->items = new vector<UIElement*>(); this->listHeight = listHeight; this->selectedButton = NULL; this->initializeButtonItems(displayElementsCount, listHeight); //Como no es un form, tengo que inicializar a mano this->initiate(); } void UIList::initializeButtonItems(int displayElementsCount, float listHeight){ //Creo y agrego todos los botones para los items, calculando su posicion y escala vec2 scale = vec2(1.0f, 1.0f / (float)displayElementsCount); this->buttonItems = new list<UICheckBox*>(); for(int i=0; i < displayElementsCount; i++){ vec2 pos = vec2(0.0f, i * scale.y * listHeight); UICheckBox* button = UIFactory::createCheckBox(pos, 0.0f, scale, "./Textures/UI/listItemBack.png", this->uiController); button->setParentMergeAllowed(true); this->buttonItems->push_back(button); } } //Incializacion de los estados del elemento void UIList::initiateStates(){ this->basicState = UIFactory::createState(vec2(0.0f), 0.0f, vec2(1.0f, 1.0f), 1.0f, this->filename); this->withScrollBarState = UIFactory::createState(vec2(0.0f), 0.0f, vec2(1.0f, 1.0f), 1.0f, this->filename); } //Inicializacion de los subelementos void UIList::initiateElements(){ vec2 listScale = this->externalState->getActualState()->getScale(); float barLength = this->listHeight / BASE_SCROLLBAR_LENGTH; this->scrollbar = UIFactory::createScrollBar(vec2(0.4f * listScale.x, 0.0f), 90.0f, listScale.x * 0.4f, barLength, "./Textures/UI/textbox1.png", this->uiController); } void UIList::bindElementsToStates(){ //Asigno los botones disponibles a los estados std::list<UICheckBox*>::iterator it = this->buttonItems->begin(); while(it != 
this->buttonItems->end()){ UICheckBox* button = (*it); this->basicState->addElement(button); this->withScrollBarState->addElement(button); ++it; } //Asigno la scrollBar this->withScrollBarState->addElement(this->scrollbar); this->internalState->makeStateTransition(this->basicState); } void UIList::draw(){ //Dibujo el fondo del control this->internalState->draw(); //Calculo el primer elemento a mostrar float factor = this->scrollbar->getScrollFactor(); int startIndex = (int)(factor * (this->items->size() - this->displayElementsCount)); //Me fijo si tengo menos elementos de los que puede mostrar el control int elementsToDraw = this->displayElementsCount; if(elementsToDraw > this->items->size()) elementsToDraw = this->items->size(); //Dibujo los elementos!! int posIndex = 0; for(int i=startIndex; i < elementsToDraw + startIndex; i++){ this->items->at(i)->draw(); posIndex++; } } void UIList::update(UIState* parentState){ //Mergeo los estados UIState finalState = this->mergeParameters(parentState); //Update them all this->internalState->update(&finalState); this->externalState->update(&finalState); //Ahora voy identificar que elementos de la lista mostrar y actualizarlos //Calculo el primer elemento a mostrar float factor = this->scrollbar->getScrollFactor(); int startIndex = (int)(factor * (this->items->size() - this->displayElementsCount)); //Me fijo si tengo menos elementos de los que puede mostrar el control int elementsToDraw = this->displayElementsCount; if(elementsToDraw > this->items->size()) elementsToDraw = this->items->size(); //Hago calculos de posicion de los elementos float scale = 1.0f / (float)this->displayElementsCount; vec2 pos = vec2(0.0f, scale * this->listHeight); //Actualizo los elementos int posIndex = 0; for(int i=startIndex; i < elementsToDraw + startIndex; i++){ UIElement* element = this->items->at(i); element->getExternalState()->getActualState()->setPosition(vec2(posIndex * pos.x, posIndex * pos.y)); element->update(&finalState); posIndex++; 
} } //Maneja los eventos que se producen cuando un elemento consigue el foco void UIList::handleEvents(){ //Empiezo evitando verificar nada si no hay eventos if(this->onFocusEvent.getStateCode() == NOEVENT_UIEVENTCODE) return; if(this->onFocusEvent.getStateCode() == WAITING_UIEVENTCODE) return; //Compruebo de que elemento se trata el evento y lo proceso if((this->onFocusEvent.getStateCode() == ONFOCUSRELEASE_UIEVENTCODE)){ this->findClickedElementEvent(); } //Vacio el evento (No me interesa en este caso mantener estados de hold ni nada de eso) this->onFocusEvent.setStateCode(WAITING_UIEVENTCODE); } void UIList::findClickedElementEvent(){ //Recorro la lista de botones de item tratando de encontrar de que elemento se trata //Es ineficiente, pero nunca van a ser muchos botones y solo se ejecuta al hacer click, no es un gran impacto UIElement* elementToFind = this->onFocusEvent.getAfectedElement(); unsigned int index = 0; for(list<UICheckBox*>::iterator it = this->buttonItems->begin(); it != this->buttonItems->end(); it++){ UIElement* element = (*it); if(element == elementToFind){ //Encontre el boton que se presiono, me traigo el elemento que le corresponde if(index < this->items->size()){ //Descelecciono el boton anterior y selecciono el actual if(this->selectedButton != NULL) this->selectedButton->setChecked(false); this->selectedButton = (*it); this->selectedButton->setChecked(true); //Consigo el item asociado a ese boton float factor = this->scrollbar->getScrollFactor(); int startIndex = (int)(factor * (this->items->size() - this->displayElementsCount)); this->selectedElementId = startIndex + index; this->selectedElement = this->items->at(this->selectedElementId); break; } //Es un click invalido (*it)->setChecked(false); } index++; } } void UIList::clear(){ //Primero borro los elementos en si for(unsigned int i=0; i < this->items->size(); i++){ this->deleteElement(this->items->at(i)); } //Luego los borro de la lista this->items->clear(); this->selectedElement = 
NULL; this->selectedElementId = -1; //Saco la barra de scroll this->internalState->makeStateTransition(this->basicState); } UIList::~UIList(){ //Borro los elementos contenidos this->clear(); //Borro todo el resto if(this->basicState != NULL){ delete this->basicState; this->basicState = NULL; } if(this->withScrollBarState != NULL){ delete this->withScrollBarState; this->withScrollBarState = NULL; } if(this->scrollbar != NULL){ delete this->scrollbar; this->scrollbar = NULL; } //Borro todos los botones de items y la lista std::list<UICheckBox*>::iterator it = this->buttonItems->begin(); while(it != this->buttonItems->end()){ delete (*it); it = this->buttonItems->erase(it); } delete this->buttonItems; } void UIList::addElement(UIElement* element){ //Escalo el elemento float scale = this->listHeight / (float)this->displayElementsCount; UIState* state = element->getExternalState()->getActualState(); state->setScale(vec2(scale) * state->getScale()); //Agrego el elemento this->items->push_back(element); //Verifico si es necesario poner o sacar el scrollBar if(this->items->size() > this->displayElementsCount){ this->internalState->makeStateTransition(this->withScrollBarState); //Modifico la precision del scrollbar this->scrollbar->setScrollFactorIncrement(1.0f/(this->items->size() - this->displayElementsCount)); } else{ this->internalState->makeStateTransition(this->basicState); } } void UIList::setScrollBarPositionX(float value){ this->scrollbar->getExternalState()->getActualState()->setPosition(vec2(value, 0.0f)); } void UIList::setSelectedElementById(int position){ if(position < this->items->size()){ // Me guardo cual es el item seleccionado this->selectedElementId = position; // Deselecciono el anterior item, y selecciono el nuevo item if(this->selectedButton != NULL){ this->selectedButton->setChecked(false); } list<UICheckBox*>::iterator it = this->buttonItems->begin(); unsigned int i = 0; while(i < position && it != this->buttonItems->end()){ ++it; ++i; } if(it != 
this->buttonItems->end()){ this->selectedButton = (*it); this->selectedButton->setChecked(true); } } }
Released in 1983, the Nintendo Entertainment System (NES) home console was a cheap, yet capable machine that went on to achieve tremendous success. Using a custom designed Picture Processing Unit (PPU) for graphics, the system could produce visuals that were quite impressive at the time, and still hold up fairly well if viewed in the proper context. Of utmost importance was memory efficiency, creating graphics using as few bytes as possible. At the same time, however, the NES provided developers with powerful, easy to use features that helped set it apart from older home consoles. Understanding how NES graphics are made creates an appreciation for the technical prowess of the system, and provides contrast against how easy modern day game makers have it with today’s machines. The background graphics of the NES are built from four separate components, that when combined together produce the image you see on screen. Each component handles a separate aspect; color, position, raw pixel art, etc. This may seem overly complex and cumbersome, but it ends up being much more memory efficient, and also enables simple effects with very little code. If you want to understand NES graphics, knowing these four components is key. This document assumes some familiarity with computer math, in particular the fact that 8 bits = 1 byte, 8 bits can represent 256 values, and how hexadecimal notation works. However, even those without a technical background can hopefully find it interesting. Here is an image from the opening scene of Castlevania (1986), of the gates leading to the titular castle. This image is 256×240 pixels, and uses 10 different colors. To represent this image in memory we’d want to take advantage of this limited color palette, and save space by only storing the minimum amount of information. One naive approach could be using an indexed palette, with 4 bits for every pixel, fitting 2 pixels per byte. 
This requires 256*240/2 = 30720 bytes, but as you'll soon see, the NES does a much better job.

Central to the topic of NES graphics are tiles and blocks. A tile is an 8×8 region, while a block is 16×16, and each aligns to a grid of the same size. Once these grids are added, you may begin to see some of the underlying structure in the graphics. Here is the castle entrance with grid at x2 zoom. This grid uses light green for blocks and dark green for tiles. The rulers along the axis have hexadecimal values that can be added together to find position; for example the heart in the status bar is at $15+$60 = $75, which is 117 in decimal. Each screen has 16×15 blocks (240) and 32×30 tiles (960).

Let's dive into how this image is represented, starting with the raw pixel art. CHR represents raw pixel art, without color or position, and is defined in terms of tiles. An entire memory page contains 256 tiles of CHR, and each tile has a 2-bit depth. Here's the heart: And its CHR representation: This representation takes 2 bits per pixel, so at a size of 8×8, that means 8*8*2 = 128 bits = 16 bytes. An entire page then takes 16*256 = 4096 bytes. Here's all the CHR used by the castlevania image. Recall that it takes 960 tiles to fill an image, but CHR only allows for 256. This means most of the tiles are repeated, on average 3.75 times, but more often than not a tiny number are used as the majority (such as a blank background, solid colors, or regular patterns). The castlevania image uses a lot of blank tiles, as well as solid blues.

To see how tiles are assigned, we use nametables. A nametable assigns a CHR tile to each position of the screen, of which there are 960. Each position uses a single byte, so the entire nametable takes up 960 bytes. The order of assignment is each row from left to right, top to bottom, and matches the calculated position found by adding the values from the rulers. So the upper-left-most position is $0, to the right of that is $1, and below it is $20.
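The ruler arithmetic above can be sketched in a few lines (Python here purely for illustration; the NES itself does this math in 6502 assembly):

```python
# A tile's nametable position is row * 32 + column, since each row holds
# 32 tiles ($20 in hex). The rulers give the column ($00-$1F) and the
# row as a multiple of $20.
def tile_position(row, col):
    return row * 0x20 + col

# The heart in the status bar: column $15, row 3 (ruler value $60 = 3 * $20)
pos = tile_position(3, 0x15)
assert pos == 0x75 == 117
```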
The values for the nametable depend upon the order in which the CHR is filled. Here's one possibility: In this instance, the heart (at position $75) has a value of $13.

Next, in order to add color, we need to select a palette. The NES has a system palette of 64 colors, and from that you choose the palettes that are used for rendering. Each palette is 3 unique colors, plus the shared background color. An image has a maximum of 4 palettes, which takes up 16 bytes. Here is the palette for the Castlevania image: Palettes cannot be used with complete abandon. Rather, only a single one may be used per block. This is what typically gives a very "blocky" appearance to NES games, the need to separate each 16×16 region by color palette. Skillfully made graphics, such as this Castlevania intro, avoid this by blending shared colors at block edges, removing the appearance of the grid.

Choosing which palette is used for each block is done using attributes, the final component. Attributes are 2 bits for each block, and specify which of the 4 palettes to use. Here's a picture showing which blocks use which palette via their attributes: As you may notice, the palettes are isolated into sections, but this fact is cleverly hidden by sharing colors between different areas. The reds in the middle part of the gate blend into the surrounding walls, and the black background blurs the line between the castle and the gate. At only 2 bits per block, or 4 blocks per byte, the attributes for an image use 240/4=60 bytes, though due to how they're encoded they waste 4 bytes, using a total of 64. This means the total image, including the CHR, nametable, palette, and attributes, requires 4096+960+16+64 = 5136 bytes, far better than the 30720 discussed above.

Creating these four components for NES graphics is more complicated than with typical bitmap APIs, but tools can help. Original NES developers probably had some sort of toolchains, but whatever they were, they have been lost to history.
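The byte arithmetic above can be checked directly; a quick tally of the four components against the naive bitmap:

```python
# Naive indexed bitmap: 4 bits per pixel over a 256x240 screen
naive = 256 * 240 // 2

# The four NES components for one full screen
chr_page  = 256 * 16          # 256 tiles x 16 bytes of CHR
nametable = 32 * 30           # one byte per tile position (960 tiles)
palette   = 4 * 4             # 4 palettes x 4 color entries
attrs     = 64                # 2 bits per block, padded out to 64 bytes

total = chr_page + nametable + palette + attrs
assert naive == 30720
assert total == 5136          # roughly a sixth of the naive encoding
```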
Nowadays, developers will typically create their own programs for converting graphics to what the NES needs. The images in this post were all created using makechr, a rewrite of the tool used to make Star Versus. It is a command-line tool designed for automated builds, and focuses on speed, good error messages, portability, and clarity. It also creates interesting visualizations such as those shown here. Most of my knowledge of how to program the NES, especially how to create graphics, was acquired by following these guides: So the heart would be stored like this: Each row is one byte. So 01100110 is $66, 01111111 is $7f. In total, the bytes for the heart are: $66 $7f $ff $ff $ff $7e $3c $18 $66 $5f $bf $bf $ff $7e $3c $18 System palette – The NES does not use an RGB palette, and the actual colors it renders may vary from tv to tv. Emulators tend to use completely different RGB palettes. The colors in this document match the hard-coded palette of makechr. Attribute encoding – Attributes are stored in a strange order. Instead of going left to right, up to down, a 2×2 section of blocks will be encoded in a single byte, in a Z shaped ordering. This is the reason why there are 4 wasted bytes; the bottom row takes a full 8 bytes. For example, the block at $308 is stored with $30a, $348, and $34a. Their palette values are 1, 2, 3, and 3, and are stored in low position to high position, or 11 :: 11 :: 10 :: 01 = 11111001. Therefore, the byte value for these attributes is $f9.
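The footnotes above translate directly into code. A sketch that decodes the heart's 16 CHR bytes (two 8-byte bitplanes, low plane first) back into 2-bit pixel values, and packs the attribute example into its byte:

```python
heart = [0x66, 0x7F, 0xFF, 0xFF, 0xFF, 0x7E, 0x3C, 0x18,   # plane 0: low bits
         0x66, 0x5F, 0xBF, 0xBF, 0xFF, 0x7E, 0x3C, 0x18]   # plane 1: high bits

def decode_tile(chr_bytes):
    """Turn 16 CHR bytes into an 8x8 grid of 2-bit pixel values."""
    return [[((chr_bytes[y] >> (7 - x)) & 1) |
             (((chr_bytes[y + 8] >> (7 - x)) & 1) << 1)
             for x in range(8)]
            for y in range(8)]

pixels = decode_tile(heart)
# Top row of the heart: two 2-pixel bumps of color 3 ($66 in both planes)
assert pixels[0] == [0, 3, 3, 0, 0, 3, 3, 0]

def pack_attribute(tl, tr, bl, br):
    """Pack four block palettes ($308, $30a, $348, $34a), low-to-high."""
    return tl | (tr << 2) | (bl << 4) | (br << 6)

# Palette values 1, 2, 3, 3 give 11 :: 11 :: 10 :: 01 = $f9
assert pack_attribute(1, 2, 3, 3) == 0xF9
```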
Unix Time Conversion between Languages

I have this unix timestamp in milliseconds: -62135769600000 I tried to convert the timestamp using java and javascript, but I got two different responses. Look below:

Date d = new Date(-62135769600000L);
System.out.println(d); // output = Sat Jan 01 00:00:00 GMT 1

d = new Date(-62135769600000L)
console.log(d) // output = Fri Dec 29 0000 19:00:00 GMT-0500 (EST)

As you can see, we have two different results. I want to understand why and, if possible, how to fix that.

Have you tried the code with a more reasonable timestamp, like the timestamp of right now instead of some wonky negative value!?

Java Date and LocalDate do not have the concept of timezone. You should consider moving to java LocalDateTime

I recommend that in Java you don't use java.util.Date. That class is poorly designed and long outdated. Instead use Instant from java.time, the modern Java date and time API.

It's probably the difference between Julian and proleptic Gregorian calendars. java.util.Date uses Julian, I don't know about JavaScript.

@OleV.V. yes it is; j.u.Date 'normalizes' to a calendar anytime you do silly things with it (as in, turning an epochmillis into human concepts, which toString itself also does), and that normalizer code will use julian if the timestamp is before a hardcoded cutoff, which OP's near-year-0 timestamp most definitely is. See my answer.

@luk2302 this date is coming from the database. It is a REST endpoint and I consume that in an SPA application. I don't know what it means. But it is a correct date.

No, that is most likely not a valid date, does year 0 make sense in your context? I doubt it - this is probably a placeholder / dummy / invalid marker and should not be parsed / displayed

First, a lesson: java.util.Date is dumb, broken, and should never be used. The right way to do time stuff in java is with the java.time package.
Then, to explain your observed differences: javascript is reporting the time by translating the moment-in-time as represented by using the gregorian calendar, and as if you were in the EST timezone. java's Date is reporting the time as if it was the julian calendar, and at the GMT timezone. The timezone difference explains why javascript prints 19:00 and java prints 00:00, as well as 1 date's worth of difference. When you clap your hands at 19:00 in New York on dec 29th, at that very instant in time, it is exactly midnight, dec 30th, in london. The remaining 48 hours (exactly 2 days) of difference is because of the julian calendar.

The right way, in java:

Instant i = Instant.ofEpochMilli(-62135769600000L);
System.out.println(i.atZone(ZoneOffset.UTC));
> 0000-12-30T00:00Z

The calendar system commonly used today is called the gregorian calendar. It is a modification of what was used before, the julian calendar, which was in use in roman times. The world did not switch from julian to gregorian on the same day. In fact, russia was very late and switched only in 1918: You bought a newspaper in moscow and it read 'January 31st, 1918' (but in russian). If you then hopped on a horse and rode west until you hit prague or berlin or amsterdam, all overnight, and asked: Hey, what date is today? They'd tell you: It's February 13th. Then you ride back to moscow the next day and buy another newspaper. It reads 'February 14th, 1918'. What happened to february 1st - 13? They never existed. That was the time russia switched.

FUN FACT: The 'october revolution', which refers to when the tzars were overthrown, happened in november. Russia, still on the julian, were in october when that happened. The rest of the world using roman-inspired calendars were all on gregorian, and they called it november then.
Even though this is a dumb thing to do, java.util.Date tries to surmise what you really wanted and has an arbitrary cutoff timestamp: when you make a Date object that predates this cutoff, it renders times in the julian calendar. This is silly (the world didn't switch all in one go), so neither javascript nor java.time do this. That explains the 2 days.

That is because Java's Date and JavaScript's Date are two very different implementations. There are actually two differences here: the format of the output text and the actual date represented by the output.

The different date

I'm still investigating why Java's Date would return 1 January 0001. Well, rzwitserloot already found out and mentioned this in his answer, while I was eating. Java's Date uses the calendar which was in use at that time, while both Java's JSR 310 Date and Time API and JavaScript's Date use the Gregorian calendar. This is called a proleptic calendar – a calendar which is applied to dates before its introduction. I do know that JavaScript returns the correct date value. Java's Date is broken, don't use it. Instead, you should use the modern Java Date and Time API, available in the java.time package. The following Java code returns the correct date:

OffsetDateTime odt = Instant.ofEpochMilli(-62135769600000L)
    .atOffset(ZoneOffset.ofHours(-5));
System.out.println(odt);

The different formatting

Both Java's and JavaScript's Date implement toString() in a different way. toString() always returns a textual representation of the object on which the method is called, but how it looks depends entirely on what the author had in mind. You should never rely on toString() to be in a certain format. If you do want to format your string, use DateTimeFormatter in Java.

I figured out the 48 hour gap for you. See my answer.

@rzwitserloot I see. I already thought so, but I decided to have dinner first. ;-)
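The proleptic-Gregorian result can also be cross-checked with plain day arithmetic, with no calendar-switching date library involved. A sketch in Python, where date.toordinal counts proleptic-Gregorian days with 0001-01-01 as day 1:

```python
from datetime import date

millis = -62135769600000
days, rem = divmod(millis, 86_400_000)   # floor division: whole days before the epoch
assert rem == 0                          # the timestamp is exactly midnight UTC

epoch = date(1970, 1, 1).toordinal()     # 719163: 1970-01-01 as a proleptic ordinal
ordinal = epoch + days
# Ordinal 1 is 0001-01-01, so 0 is 0000-12-31 and -1 is 0000-12-30,
# matching the 0000-12-30T00:00Z that java.time and JavaScript report.
assert ordinal == -1
```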
Sid vs Aptosid location: linuxquestions.com - date: April 8, 2011 I like current software, and after reading this I decided to give "unstable" a go. Only thing is, I'm not sure which of the two I should choose, and why. I understand a degree of tinkering is necessary when running experimental software, but I'd still like to get the distro that'll give me fewer headaches.

what does fix.main mean under the aptosid repository? location: linuxquestions.com - date: October 22, 2011 what does fix.main actually do? fixing the main repository package?

LXer: aptosid 201002 Review location: linuxquestions.com - date: September 24, 2010 Published at LXer: A full review of aptosid 2010-02. I recently reviewed Linux Mint Debian, a very user-friendly version of Linux Mint based on Debian. This time I looked at another distro based on Debian, called aptosid. Aptosid, for those who aren't familiar with it, is actually made by the same developers that created the popular distro Sidux. There was apparently some conflict and controversy within the Sidux e.V association that resulted in Sidux morphing into Aptosid.

dns and aptosid location: linuxquestions.com - date: March 6, 2012 I am trying to install aptosid. First I want to run it from the dvd. It runs pppoeconf to connect to the internet, and brings up the connection. I can see an ip with ifconfig. But: the browser can't connect, nor can dpkg. Someone said this may be a dns problem; pppoeconf does prompt for a DNS, but says most providers will automatically assign a dns. So I go for automatic. Question: in what file should I tell aptosid which dns to use? Where is that info stored in Debian??

aptosid, kde and dsl location: linuxquestions.com - date: March 4, 2012 I was thinking of trying aptosid. I have a dvd. I can boot from it if I give the parameter radeon.modeset=0 Of course, it would be nice to have internet. I cannot connect using aptosid. I ran ppp.conf, it says I am connected, and ifconfig ppp0 shows an ip.
But no browser can connect, and the dpkg frontend cannot connect. Is this a permissions thing? I can ping e.g. my inet address. I am used to gnome Network Manager: put in username and password in the dsl tab, done. How do I do this in aptosid/kde? ceni has no tab dsl>username>password.

LXer: Aptosid 201102: is it any good? location: linuxquestions.com - date: August 4, 2011 Published at LXer: July saw the release of a new Aptosid version, this time 2011-02. It was released on 13th of July. Even though it was not Friday, it was still the 13th. Was it a bad day for Aptosid? Let's have a look.

LXer: Aptosid An Overview location: linuxquestions.com - date: July 19, 2011 Published at LXer: aptosid might sound like a package management tool, but it's actually a desktop-orientated (KDE4 or XFCE) Debian derived Linux distro. It's more than a mere respin of Debian, but does it have what it takes to distinguish it from all of the other desktop distros?

What is Aptosid? location: linuxquestions.com - date: September 17, 2010 I'm a dedicated debian linux user and recently I've heard about this Aptosid project. In a few words, how could this be described, and how is it better than Sidux?

Use aptosid's configuration to install Ubuntu? location: ubuntuforums.com - date: January 1, 2011 I have an old Dell Dimension 4100 running at 900 mhz or a bit higher with 512 MBs of RAM, and I am having a miserable time getting a working modern install of Linux with Gnome running on it. I had it set up with Gutsy (Ubuntu 7.10) at one point, whereupon it was put in the closet after my parents' internet plans were put on hold. I now have the unenviable task of trying to get something slightly more modern running on it in preparation for their long delayed internet installation. I tried doing an upgrade (but apparently the recommended path is to install as upgrades every release until you manage to get current. Not. Gonna. Happen.) Then I installed a command only installation of Lucid and proceeded to install xorg, etc.
Hello frozen KMS! Part of the problem seems to be this desktop has an ancient ATI onboard card and a more recent nvidia PCI video card I used with success in Gutsy, and which the bios is supposed to detect the presence of and automatically disable the onboard ATI in f

KDE program to show current print jobs in aptosid location: linuxquestions.com - date: October 4, 2010 I'm searching for a KDE program to show current print jobs. I've installed aptosid (current version). I've searched the repository and only found printer-applet, which does not run in aptosid - I got the following error messages:
This certificate is in 2 parts. The intermediate certificates MUST come after your certificate in the file. Example:

-----BEGIN CERTIFICATE----- <-- this is the site's certificate
MIIE3jCCA8agAwIBAgICAwEwDQYJKoZIhvcNAQEFBQAwYzELMAkGA1UEBhMCVVMx...
qDTMBqLdElrRhjZkAzVvb3du6/KFUJheqwNTrZEjYx8WnM25sgVjOuH0aBsXBTWVU+4=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE----- <-- these are the intermediate certificate(s)
WBsUs5iB0QQeyAfJg594RAoYC5jcdnplDQ1tgMQLARzLrUc+cb53S8wGd9D0Vmsf...

Getting a Certificate Authority (such as Verisign, Thawte or Godaddy) to sign your certificate costs money: you create a Certificate Signing Request, they take your money and sign your CSR, and they give you back a certificate which you put into a CRT file.

Creating a certificate for your program, with yourself as the Certificate Authority, is also possible. Fortunately this only has to be done once, and is simple enough if you follow the directions exactly:

- Go to the \clarion6\3rdparty\bin\MakeCertificates folder.
- Run the batch file CreateCertificateSigningRequest.Bat.
- Optional Company name: put in a company name if you like. Set the name to something useful that isn't product related.
- In the \clarion6\3rdparty\bin\MakeCertificates\YourCARoot\certs folder rename Demo.Crt to Product.Crt, where Product is some name related to your application. For the examples I've renamed mine to Settings.Crt (we won't need the SignedDemo.crt one, so you can leave that alone for now).
- In the \clarion6\3rdparty\bin\MakeCertificates\YourCARoot\private folder rename Demo.Key to Product.Key. For the example I've renamed mine to Settings.Key.
- In your NetTalk web server program you will need to enter the name of your certificate. In the examples this is set to 'certificates\settings' - certificates being the folder part where the certificate is stored, and Settings being the name part.

If you already have a secure site (i.e. one using a certificate) running on IIS 6, then you can extract its certificate using the following steps: Right-click on My Computer, Manage, Services & Applications, IIS, highlight the website.

By binding the server to the LAN card you prevent people outside the LAN from accessing the web server. Specifically, both objects can use the same NetWebHandler procedure for procedures that should only be served via the secure port.

On the "Error Failed To Load OpenSSL DLLs. ssleay32.dll or libeay32.dll" problem:

- There is nothing to register (and you cannot use regsvr32 anyway, as OpenSSL does not implement ActiveX/COM servers, which is what regsvr32 is meant for).
- Copy libeay32.dll and ssleay32.dll from the newly installed OpenSSL-Win32 directory to your FTP client's directory. FlashFXP v3.7.7 is compatible with OpenSSL v1.0.1e files; earlier versions of FlashFXP (v2.1) might produce the error "Failed to load SSL DLLS."
- Use Dependency Walker (depends) in profile mode to get detailed diagnostics of why a DLL cannot be loaded, and to see which DLLs the OpenSSL DLLs themselves use; make sure those are also installed on your clean VMware machine.
- With the selected option (Windows system directory) the libraries will be installed at the C:\windows\SysWOW64 folder.
- To get past SSLv2 problems you will need to find OpenSSL DLLs that do not disable SSLv2.
- Indy updates are not included in Delphi updates, so I suggest using an older version. My tool used to work under Delphi XE6 without problems, and after my upgrade to Delphi XE7 I can see that it fails when it tries to use the IdHTTP component.
- I get my OpenSSL DLLs from http://opendec.wordpress.com/, which is done by a person that also works on Indy.

If you are interested in my application, the source code and binaries can be found here: https://github.com/JordiCorbilla/FlickrPhotoStats
New to Matrix? You're in the right place! This page will help you get started with Matrix and TAMI Matrix server. Matrix is similar to email in that you have a server that you connect to (like Gmail, Yahoo, etc.) and a client app (like Outlook, Thunderbird, etc.) that you use to connect to the server. The difference is that Matrix is optimized for sending instant messages, has end-to-end encryption, and is more modern. It is also open source and federated, meaning that you can connect to any Matrix server in the world and talk to anyone else on any other Matrix server in the world from your server.

There are many different Matrix clients available. The most popular ones are Schildichat, Element, and Nheko. Here's a quick comparison of the most popular clients:

|Platforms |Notes |
|Desktop, Mobile, Web |The most popular client. It's pretty stable and polished, but the UI is a bit cluttered. |
|Desktop, Mobile, Web |A fork of Element (meaning that it has all the features that Element has). It's a bit more lightweight and has a much better UI. |
| |A very lightweight client that is still in development. It's very fast and lightweight. |
|Desktop, Mobile, Web |A client in Flutter that is still in development. Looks very nice, but is still missing some features and stutters when you have a lot of rooms. |
| |A web client that is still in development. Looks pretty, has all the basic features, but is still a bit incomplete. |
| |A CLI for Matrix. It's very lightweight and fast, but it's not very user-friendly. |

You can register at our matrix server via this link. You can also find a list of all the servers at https://joinmatrix.org/servers/. Once you've chosen a client and a server, you can connect to the server by entering the server's address in the client. For example, if you're using Element or Schildichat, enter the server's address in the Custom server field in the Sign in screen.
The server's address is just the domain name of the server. For TAMI server, enter telavivmakers.space in the Custom server field. After that, just enter your password and you're good to go!

In Matrix, every user and room has an address. The address of a user is structured like this: @username:server. For example, if you're using TAMI server, your address will be @username:telavivmakers.space. The address of a room is structured like this: #roomname:server. For example, if you're using TAMI server, the address of the #general room will be #general:telavivmakers.space (which is a real room you can join, by the way).

Matrix supports end-to-end encryption, which means that your messages are encrypted before they are sent to the server. This means that even if the server is compromised, your messages will still be encrypted and unreadable. However, this comes at a cost: you need to sync the encryption keys between your devices (for example, if you want to send a message from your phone and read it from your computer). This is done automatically by most clients; however, the first sync requires you to confirm that the communication between your devices hasn't been tampered with. This is done by either scanning a QR code from one device to the other, or comparing the emojis (which are a representation of the encryption keys) on both devices.

Once you log in to your account on a new device, you will be asked to verify your new session with an old one, usually in the form of a pop-up or a notification. It's very important to do so, as otherwise you won't be able to read encrypted messages from that device. In Element and Schildichat, you can check that all your sessions are verified by going to the Settings screen, clicking on Security & Privacy, and then clicking on Show all sessions. If you see a red exclamation mark next to a session, it means that it's not verified. You can verify it by clicking on it and following the instructions.
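The @username:server and #roomname:server shapes described above are easy to validate mechanically. A sketch; note the regex is a simplification for illustration, not the full Matrix identifier grammar:

```python
import re

# Simplified pattern: a sigil (@ for users, # for rooms), a localpart,
# a colon, then the server name.
ADDRESS = re.compile(r"^([@#])([A-Za-z0-9._=\-/]+):(\S+)$")

def parse_address(addr):
    m = ADDRESS.match(addr)
    if not m:
        raise ValueError(f"not a Matrix address: {addr!r}")
    kind = "user" if m.group(1) == "@" else "room"
    return kind, m.group(2), m.group(3)

assert parse_address("@username:telavivmakers.space") == \
    ("user", "username", "telavivmakers.space")
assert parse_address("#general:telavivmakers.space") == \
    ("room", "general", "telavivmakers.space")
```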
Other clients may have a similar screen. You can also verify other users to make sure that your communication with them is encrypted. To do so, go to the user's profile, click on the “Security & Privacy” tab, and then click on “Verify”. You have to do this in person or over a secure communication channel (like a phone call or a video call) to make sure that the other person is who they say they are. Never accept a verification request from someone you don't know in person.

Matrix also supports key backup, which means that you can back up your encryption keys to the server. This is useful in case you lose your device or if you want to use a new device. To enable key backup, go to the Settings screen, click on Security & Privacy, and then click on Set up secure backup. You will be asked to enter a passphrase or to save a recovery code. After choosing one of those options, you'll be able to recover your keys with them in case you lose your device.

You can also follow these simple rules to make sure that encryption always works:
- Encrypt to verified sessions only option unless you know what you're doing
Getting started with Scaleway IPFS using CLI

Created by Protocol Labs, InterPlanetary File System (IPFS) is a decentralized protocol used to store and share content. Scaleway IPFS Pinning allows you to permanently store a copy of your data from the public IPFS network on a Scaleway-owned node, thus providing you with an added layer of performance in the region of your choice. That way, your data remains available and accessible, even when your local machine is offline. With your content now available on both your local node and our resilient and reliable external nodes, you drastically minimize the risk of disruptions thanks to redundancy. You can also enhance your data storage efficiency by pinning your content to your Scaleway node, and then removing it from your local node.

- Data is public: our IPFS nodes are bootstrapped with public IPFS nodes. This implies that any pinned content will be available on the public IPFS network.
- Data is shared: public IPFS nodes can fetch and host your pinned content. This means your data could be hosted anywhere from America to Asia, and everything in between. Once the content is delivered to external peers, Scaleway cannot delete it.
- Data is not encrypted: we do not apply any encryption algorithm to your data.

The Scaleway Command Line Interface (CLI) allows you to pilot your Scaleway infrastructure directly from your terminal, providing a faster way to administer and monitor your resources. Scaleway CLI is an essential tool for operating efficiently in your cloud environment. It provides many functionalities, including the ability to create and manage volumes and pins. You may need certain IAM permissions to carry out some actions described on this page.

A volume serves as a storage area for a set of pins. Think of it as an Object Storage bucket.
Export the following environment variable to enable the IPFS command in the Scaleway CLI:

export SCW_ENABLE_LABS=true

Enter the following command to create your volume using the CLI. Make sure to replace the example values with your own, using the parameter notes below:

SCW_ENABLE_LABS=true scw ipfs volume create project-id=00000000-0000-0000-0000-000000000000 region=fr-par name=my-volume

You should get a response like the following, providing details about your newly created volume.

ID        11111111-1111-1111-1111-111111111111
ProjectID 00000000-0000-0000-0000-000000000000
Region    fr-par
CountPin  0
CreatedAt now
UpdatedAt now
Name      my-volume

project-id: ID of the Project you want to create your volume in. Your Project name can only contain alphanumeric characters, spaces, dots, and dashes. To find your Project ID, you can consult the Scaleway console.
region: Create a volume in this given region. Possible values are nl-ams. Default value is
name: Create a volume with this given name.

Now that we have a volume, we can pin content from CIDs to it. Pinning a file essentially allows you to specify which particular content should always be stored and readily accessible on the IPFS network.

Enter the following command to pin your content on your volume. Make sure to replace the example values with your own, using the parameter notes below:

scw ipfs pin create-by-cid volume-id=11111111-1111-1111-1111-111111111111 cid=Qmdi7ERksspfxWXfU8ATRUt7iCjZJbEbDrUoMDtjnbdTwo name=MyNewPinByCID

volume-id: ID of the volume you want to pin your content to. To find your volume ID, you can consult the Scaleway console.
cid: Retrieve the Content Identifier (CID) of the content you added to your own local node using IPFS Desktop. You can find out more through our dedicated documentation.
name: Create a pin with this given name.
Enter the following command to check the status of your pin and see if the content was effectively pinned on a Scaleway-owned node:

scw ipfs pin get pin-id=22222222-2222-2222-2222-222222222222 volume-id=11111111-1111-1111-1111-111111111111

You should get a response like the following, providing details about your newly pinned content.

PinID              0691fe5c-e8ba-40b3-b5cd-bfa69936aac5
Status             pinned
CreatedAt          14 minutes ago
Cid.Cid            QmQ6znATkFZdgniHofhFUSm3imBWN5yf5yiU5sLp56VRCu
Cid.Name           MyPinByCID
Cid.Meta.ID        2672dr5c-z9cd-80s3-c6de-bfa70039bbc6
Delegates.0        /dnsaddr/delivery.ipfs.labs.scw.cloud/p2p/QmQ6znATkFZdgniHofhFUSm3imBWN5yf5yiU5sLp56VRCu
Info.ID            0691fe5c-e8ba-40b3-b5cd-bfa69936aac5
Info.URL           -
Info.Size          40733
Info.Progress      100
Info.StatusDetails pinned_ok

Once your content is pinned on our service, you can fetch it via any compatible IPFS client connected to the public IPFS network. This can be Kubo, or IPFS Desktop. For the sake of this example, we are going to use Kubo. You can find out more about the Kubo CLI in our dedicated documentation.

Run the following command to retrieve your content via the Kubo CLI. Make sure to replace the example CID value with your own.

ipfs get QmQ6znATkFZdgniHofhFUSm3imBWN5yf5yiU5sLp56VRCu

You should get a response like the following, indicating that the content has properly been saved on your local machine.

Saving file(s) to QmQ6znATkFZdgniHofhFUSm3imBWN5yf5yiU5sLp56VRCu
39.78 KiB / 39.78 KiB [============================================] 100.00% 0s

Navigate your files to retrieve the content. The content is saved under a name that matches its CID.

If you no longer wish to store content on your Scaleway-owned node, you can delete it. Run the following command to delete your content:

scw ipfs pin delete pin-id=0691fe5c-e8ba-40b3-b5cd-bfa69936aac5 volume-id=11111111-1111-1111-1111-111111111111

You should get the following output, confirming your pin has been successfully deleted on Scaleway's end.

✅ Pin has been successfully deleted.
You can delete an entire volume. This will remove all attached pins. Run the following command to delete your volume: scw ipfs volume delete volume-id=11111111-1111-1111-1111-111111111111 You should get the following output, confirming your volume has been deleted. ✅ Volume has been successfully deleted.
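When scripting these commands, machine-readable output is easier to check than the human-readable listings above. A minimal sketch for pulling the pin status out of JSON output; note that the -o json flag and the exact field names are assumptions for illustration, not taken from the Scaleway docs:

```python
import json

def pin_status(scw_json: str) -> str:
    """Extract the status field from hypothetical
    `scw ipfs pin get ... -o json` output."""
    return json.loads(scw_json)["status"]

# Example shaped after the human-readable response shown above
sample = '{"pin_id": "0691fe5c-e8ba-40b3-b5cd-bfa69936aac5", "status": "pinned"}'
assert pin_status(sample) == "pinned"
```

A polling loop would then call the CLI, parse the output with a helper like this, and stop once the status reaches pinned or failed.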
Why did Google ban Myanmar from accessing Google Workspace? I am a student from Myanmar. Our country had a coup 6 months ago and many countries are imposing sanctions on the military junta for killing and abusing its own people. Google has blocked access to their services like Google Workspace. It seems like this would only hurt civilians, not the government. Can someone explain why Google did that? Unfortunately this question doesn't fit any of the three categories that we define in the [help], and so is unlikely to be considered on-topic here. May I suggest taking the [tour] and reading through the [help] to learn about what is considered on-topic on this site, and if you can [edit] your question to make it on-topic, then please take a moment to do so. I strongly feel this is on-topic. I agree this is on-topic. While the internal decisions of Google (arguably) wouldn't be on-topic, because of the political situation in Myanmar and the sanctions against the government, it's reasonable to expect that political factors are the direct cause of this decision. Not a complete answer, but this is too long for a comment. Note that Google writes Google Workspace is available in most countries and regions. However, Google restricts access to some of its business services in certain countries or regions, such as Crimea, Cuba, Iran, North Korea, and Syria. You might note that all five examples given by Google have been strongly sanctioned by the US government, and Google is an American company. Many sanctions regimes face the problem of hurting a regime through the population. But any citizen who works can be taxed. Any citizen who earns foreign currency helps the foreign trade balance. Sanctions are a little like collective punishment in school, e.g. the teacher finds out that cheating took place on a test, so they dock everyone's grade regardless of culpability, in order that the students who cheated will face pressure from other students not to do so in the future. 
The collateral damage is the point. If you don't like it, and enough of your fellow citizens don't like it, you're free to pursue regime change by all means necessary, or else risk being perceived (rightly or wrongly) as passively complicit. And of course, Google cutting off access to their products is a little less ethically fraught than industrial agriculture cutting off access to food. Last I checked, nobody is entitled to a Gmail account. Really, you should be glad they can't sell your personal information to the American government, or any other highest bidder. "You're free to pursue regime change by all means necessary, or else..." Or, in other words, we'll punish you until you risk your life in a hopeless quest to overthrow the dictator we don't dare to mess with, but we may cheer you from the distance. @Rekesoft I like how you cut off the end of that sentence so you could make it sound worse. If we're doing uncharitable paraphrases, then your comment reduces to, "How could I be complicit with the oppressive government if I was Just Following Orders??" I wasn't criticising your answer, I fully agree with it. Sanctions are indeed a form of collective punishment. And, like all kind of collective punishments, fundamentally unfair by definition. "Really, you should be glad they can't sell your personal information to the American government, or any other highest bidder." True, they don't sell. But provide for free as a part of global surveillance program. Regime change does not work. The collapse of the USSR and Jackson–Vanik amendment provided an example of that. There is no legal mechanism that will revoke sanctions in case of regime change. So it's just the intimidation. Unfortunately when you have an oppressive government, or you live in a territory that is occupied (like Palestine or Eastern Ukraine), sanctions meant for your government end up affecting you. I still don't think that restricting access to a product for an entire country is beneficial. 
But it's the way things are.
STACK_EXCHANGE
LangFlow 0.6.15 - UI Fails to Load

Describe the bug

I am installing langflow 0.6.15 on a clean environment. The application successfully starts as shown here:

(langflow-3.10) [root@fedora ~]# langflow run --host <IP_ADDRESS>
╭───────────────────────────────────────────────────╮
│ Welcome to ⛓ Langflow                             │
│                                                   │
│ Access http://<IP_ADDRESS>:7860                   │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
╰───────────────────────────────────────────────────╯
[2024-04-12 14:04:51 -0500] [3249871] [INFO] Starting gunicorn 21.2.0
[2024-04-12 14:04:51 -0500] [3249871] [INFO] Listening at: http://<IP_ADDRESS>:7860 (3249871)
[2024-04-12 14:04:51 -0500] [3249871] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2024-04-12 14:04:51 -0500] [3249910] [INFO] Booting worker with pid: 3249910
[2024-04-12 14:04:51 -0500] [3249910] [INFO] Started server process [3249910]
[2024-04-12 14:04:51 -0500] [3249910] [INFO] Waiting for application startup.
[04/12/24 14:04:51] INFO 2024-04-12 14:04:51 - INFO - service - Alembic already initialized service.py:142
INFO 2024-04-12 14:04:51 - INFO - service - Running DB migrations in /root/miniconda3/envs/langflow-3.10/lib/python3.10/site-packages/langflow/alembic service.py:150
/root/miniconda3/envs/langflow-3.10/lib/python3.10/site-packages/langflow/alembic/env.py:82: SAWarning: WARNING: SQL-parsed foreign key constraint '('user_id', 'user', 'id')' could not be located in PRAGMA foreign_keys for table credential
  context.run_migrations()
/root/miniconda3/envs/langflow-3.10/lib/python3.10/site-packages/langflow/alembic/env.py:82: SAWarning: WARNING: SQL-parsed foreign key constraint '('user_id', 'user', 'id')' could not be located in PRAGMA foreign_keys for table flow
  context.run_migrations()
No new upgrade operations detected.
/root/miniconda3/envs/langflow-3.10/lib/python3.10/site-packages/langflow/alembic/env.py:82: SAWarning: WARNING: SQL-parsed foreign key constraint '('user_id', 'user', 'id')' could not be located in PRAGMA foreign_keys for table credential
  context.run_migrations()
/root/miniconda3/envs/langflow-3.10/lib/python3.10/site-packages/langflow/alembic/env.py:82: SAWarning: WARNING: SQL-parsed foreign key constraint '('user_id', 'user', 'id')' could not be located in PRAGMA foreign_keys for table flow
  context.run_migrations()
No new upgrade operations detected.
INFO 2024-04-12 14:04:51 - INFO - utils - No LLM cache set. utils.py:88
[2024-04-12 14:04:51 -0500] [3249910] [INFO] Application startup complete.

However, the UI shows a small pink box in the bottom of the screen: "An error has occurred while fetching types. Please refresh the page." At this point, I am not sure if this is a bug or just a gap in the documentation. What do the developers think? Should I submit a PR for the README or something similar?

It's unlikely to be a bug, but it depends on your environment. You could potentially fix it without altering Chrome flags by setting the cookie flags in the .env file within the Langflow folder or by setting the variables directly in the CLI. Feel free to submit a pull request to the docs about it.

@taylor-schneider

It's unlikely to be a bug, but it depends on your environment. You could potentially fix it without altering Chrome flags by setting the cookie flags in the .env file within the Langflow folder or by setting the variables directly in the CLI. Feel free to submit a pull request to the docs about it.

Which flags for the .env file or CLI are you referring to? I still do not understand what the root issue is. It seems like the certificate is having issues?
Same problem. #1508 (comment) really helped. The Chrome setting did not work for me.

@taylor-schneider These environment variables allow you to customize cookie settings based on your specific needs:

REFRESH_SAME_SITE: Determines the SameSite attribute of the refresh token cookie. Options include "lax," "strict," and "none" (default).
REFRESH_SECURE: Enables or disables the Secure attribute of the refresh token cookie (default is enabled).
REFRESH_HTTPONLY: Enables or disables the HttpOnly attribute of the refresh token cookie (default is enabled).
ACCESS_SAME_SITE: Sets the SameSite attribute of the access token cookie. Options are "lax" (default), "strict," and "none."
ACCESS_SECURE: Enables or disables the Secure attribute of the access token cookie (default is disabled).
ACCESS_HTTPONLY: Enables or disables the HttpOnly attribute of the access token cookie (default is disabled).

You may need to modify these variables depending on your environment and specific requirements.

@anovazzi1 I've tried setting these up to be most open:

ENV REFRESH_SAME_SITE=none
ENV REFRESH_SECURE=False
ENV REFRESH_HTTPONLY=True
ENV ACCESS_SAME_SITE=none
ENV ACCESS_SECURE=False
ENV ACCESS_HTTPONLY=False

I don't think that's really doing anything. I searched the repo code and these are not mentioned anywhere either. Setting up the browser to treat the origin as secure is not a good solution - it's only a bandaid. How do the cluster deployments of this work if we're all facing this issue?

My bad. All Langflow environment variables start with LANGFLOW_, so the correct variable is LANGFLOW_REFRESH_SECURE instead of just REFRESH_SECURE. This naming convention applies to all Langflow environment variables. Please update the variable and let me know if you encounter any further issues. Thank you!

Hi @taylor-schneider and @Pokora22 We hope you're doing well. Just a friendly reminder that if we do not hear back from you within the next 3 days, we will close this issue.
If you need more time or further assistance, please let us know. Thank you for your understanding! Thank you for your contribution! This issue will be closed. If you have any questions or encounter another problem, please open a new issue and we will be ready to assist you.
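Pulling the cookie discussion in this thread together: with the LANGFLOW_ prefix applied, a .env fragment for a plain-HTTP deployment might look like the following. The variable names follow the naming convention described in the thread; the values are illustrative only, not a security recommendation:

```shell
# Hypothetical .env fragment for Langflow cookie behaviour (illustrative values)
LANGFLOW_ACCESS_SAME_SITE=lax
LANGFLOW_ACCESS_SECURE=false
LANGFLOW_ACCESS_HTTPONLY=false
LANGFLOW_REFRESH_SAME_SITE=none
LANGFLOW_REFRESH_SECURE=false
LANGFLOW_REFRESH_HTTPONLY=true
```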
GITHUB_ARCHIVE
Information in this topic applies only to web tests that implement the classic approach. In cross-platform web tests, TestComplete recognizes third-party controls as standard web controls.

TestComplete can recognize YUI 2 Calendar controls in web applications. It provides special properties and methods that let you retrieve the controls' data and simulate user actions on the controls (see below).

The YUI().use() method supported by the YUI 3 library allows loading YUI 2 modules and widgets into YUI 3 and creating YUI 2 controls with YUI 3 support. For more information on this, see the documentation for the YUI global object (the online version is available at http://yuilibrary.com/yui/docs/yui/). TestComplete can test YUI 2 controls created with YUI 3 support, but each subsequent call of the

Supported component versions: Yahoo! User Interface Library 2.9.0.

In order for TestComplete to be able to work with YUI 2 Calendar controls, the following requirements must be met:

- A special accessibility patch must be applied to the YUI 2 controls' source code. The patch is shipped along with TestComplete. By default, it resides in the <TestComplete 15>\Open Apps\WebControls folder. To apply the patch to the controls' source code, do the following:
  1. Follow the link http://sourceforge.net/projects/gnuwin32/files/patch/2.5.9-7/patch-2.5.9-7-bin.zip/download to download an archive containing the free Patch utility.
  2. Extract the patch.exe executable file and place it in the root directory of the controls' source library (the directory that contains the build folder).
  3. Place the yui_2.9.0.diff patch file in the same root folder in which you have placed the patch.exe file.
  4. Apply the patch by executing the following command line: patch -p0 --binary < yui_2.9.0.diff
- A license for the TestComplete Web module.
- The Yahoo! Control Support plugin. This plugin is installed and enabled automatically. If you experience issues when working with the controls, select File > Install Extensions from the TestComplete main menu and check whether the plugin is active. (You can find the plugin in the Web group.) If the plugin is not available, run the TestComplete installation in the Repair mode.

When testing YUI 2 Calendar controls, you can use properties and methods specific to these controls, as well as properties and methods that TestComplete applies to onscreen objects. For the full list of available properties and methods, see the following topics:
OPCFW_CODE
[Bug] Go-Dependency | invalid pseudo-version

go<EMAIL_ADDRESS>invalid pseudo-version: revision is longer than canonical (6fd6a9bfe14e)

needed for https://github.com/alpinelinux/aports/pull/10887

This is resolved in gitea 1.10-dev. We are unable to backport it; Gitea 1.9.x does not support go1.13.

@techknowlogick alpinelinux users are still on gitea version 1.8.3! And I had a PR which gets them to 1.9.3, but this is blocking it, and waiting until 1.10.x has its first stable release will take a while :(

@6543 sadly this error is caused by an upstream library, and we have resolved this, but sadly in a way that can't be backported. 1.10 milestone is almost complete (less than 5 PRs left).

this: https://tip.golang.org/doc/go1.13#version-validation is telling me that we would only change one line in go.mod

the diff would be:

diff --git a/go.mod b/go.mod
index 2c137af81..629d35ea7 100644
--- a/go.mod
+++ b/go.mod
@@ -46,7 +46,7 @@ require (
 	github.com/go-macaron/binding v0.0.0-20160711225916-9440f336b443
 	github.com/go-macaron/cache v0.0.0-20151013081102-561735312776
 	github.com/go-macaron/captcha v0.0.0-20190710000913-8dc5911259df
-	github.com/go-macaron/cors v0.0.0-20190309005821-6fd6a9bfe14e9
+	github.com/go-macaron/cors 6fd6a9bfe14e9
 	github.com/go-macaron/csrf v0.0.0-20180426211211-503617c6b372
 	github.com/go-macaron/i18n v0.0.0-20160612092837-ef57533c3b0f
 	github.com/go-macaron/inject v0.0.0-20160627170012-d8a0b8677191

Please send a PR for this against the release/v1.9 branch :) THANKS

with the first changes make will get an error but go build runs just fine ... after it the diff looks similar but now make runs too ... I'll send the final patch as a PR ... @techknowlogick https://github.com/go-gitea/gitea/pull/8389
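The error message above hints at the actual rule: Go 1.13's stricter version validation requires the pseudo-version's revision part to be exactly the canonical 12-character short commit hash, while the failing go.mod line carries 13 characters. Purely as an illustration (this is a toy check, not the Go toolchain's real validator, which handles several more pseudo-version forms):

```python
import re

# Toy check for the basic pseudo-version shape relevant here:
# vX.Y.Z-<14-digit UTC timestamp>-<exactly 12 lowercase hex chars>.
PSEUDO_VERSION = re.compile(r"^v\d+\.\d+\.\d+-\d{14}-[0-9a-f]{12}$")

def is_canonical_pseudo_version(v: str) -> bool:
    return PSEUDO_VERSION.fullmatch(v) is not None

# The revision in the failing go.mod line has 13 hash characters:
print(is_canonical_pseudo_version("v0.0.0-20190309005821-6fd6a9bfe14e9"))  # False
# Truncated to the canonical 12 characters from the error message, it passes:
print(is_canonical_pseudo_version("v0.0.0-20190309005821-6fd6a9bfe14e"))   # True
```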
GITHUB_ARCHIVE
PALO ALTO, Calif., Nov. 4, 2015 /PRNewswire/ -- One of the biggest problems of cloud-based IDEs is speed. So to fix this problem, Codeanywhere has rewritten their Cloud IDE from the bottom up. Codeanywhere is now up to four times faster than the previous version and is even on par with some popular desktop editors in editing files. "We had to create some very specific components particularly for our use case, just to address the speed issue," said Ivan Burazin, the co-founder of Codeanywhere. Aside from speed, Codeanywhere's new version has a bunch of other new features as well: - All new UI - Goto Anything, hit CMD+P to find anything, including commands - GitHub/Bitbucket, repository import wizard - Drag & drop files and folders from your desktop to Codeanywhere - Edit desktop files by dragging & dropping them into the editor - SSH terminal real-time collaboration - Project based, separate your work logically and switch between your projects instantly - Manage multiple containers in a project - New configuration options, configure every aspect of the interface globally or on a project basis - Large file support, you can now open and save files with over 100,000 lines - and many more This new version, along with its features, supports Codeanywhere's original vision - freeing the development environment and making development fast and collaborative. Just to remind you – Vedran Jukic, the co-founder, came up with the idea when he worked as a freelance developer. He didn't want to have to take his laptop with him constantly, and found there was no good solution for coding or fixing bugs in the cloud. "We believe this is an important milestone for us at Codeanywhere and for the Cloud IDE industry as a whole, as this version of Codeanywhere, which we've built from the ground up gives users the comfort of a cloud-based app with the feel of a desktop one," said Ivan Burazin, the co-founder of Codeanywhere. 
Codeanywhere aspires to simplify a developer's workflow by giving them freedom to work on their projects in the cloud, no matter if it is on Codeanywhere's Containers, a user's VM or even Google Drive or Dropbox. Codeanywhere, which was founded in 2012, is based out of Palo Alto and has a team of 12 engineers currently working full-time. The company states that they achieved cash flow positive last month as their paying user base has grown by 150 percent in the last year. Codeanywhere also participated in the Fall 2014 batch of prominent Boston accelerator Techstars. They also participated in the TC Startup Battlefield as the audience had voted them the best company in Startup Alley at Disrupt NY 2014. Materials available here: https://www.dropbox.com/sh/yu8f7h9v5spbbp7/AADeeuP0MaemIfhMHxNH4kJDa?dl=0 SOURCE Codeanywhere Inc.
OPCFW_CODE
Apparently I was out of line again? (This is why I don't run for moderatorship. That and time.) 10k+ (It's deleted) I won't complain if it was re-opened. https://stackoverflow.com/questions/21765466/what-is-the-purpose-of-an-interface I didn't think it was a good question (lack of research mostly). I indicated, perhaps rudely, that the OP's C# OOP knowledge was lacking since C# and Java interfaces are identical. I'll point out it was the OP's language ("very familiar with C# OOP") that made me question how familiar he could be. I posted a tiny rant on Twitter, without a link to the question. OP found this and tweeted to me a half-dozen times with personal attacks/insults, which I don't react well to, and I'm in his neck of the woods several times a year--this just strikes me as a really bad idea. Apparently I was also called out here and/or in his profile. Just doing a sanity check; was I 100% out of line, or just too curt/concise in my response? Why I don't use twitter.... too easy to hit send and very difficult to undo it once it is done @psubsee2003 I'm fine with being called out, although I thought the personal insults and DDoS threat were a bit much. Just shrug and move on @DaveNewton. Saying things in public will get you public responses. No big deal. Just let it be. "I am quite familiar with object oriented programming in c#," "What is the purpose of an interface?" This is why all those proposals to let users indicate their level of expertise are such a terrible idea. ...personal attacks/insults, which I don't react well to, and I'm in his neck of the woods several times a year--this just strikes me as a really bad idea. - this strikes me as a not-so-veiled threat of violence. Not trying to accuse you of anything, but be mindful of how you word things I saw that post and was too late to close so didn't bother responding but I thought the same thing you commented.
I've never done any C# but I am pretty sure he should know about interfaces if he's that experienced. Sanity Check == still sane! @RobertHarvey But interfaces are just as important in C# as in Java. OP indicated expertise in C#. I think you were a bit rude. He asks a question and he has a false perception of his C# know-how. You are an expert. You think his question is ridiculous. Then why are you dealing with his question? Just vote down and move on.
STACK_EXCHANGE
Another robust approach to sending emails in React Native is with external third-party services. Instead of launching a user’s email app to send an email, we’ll start sending emails to users in the background. These can be registration emails, password resets or anything else that makes sense to be sent once triggered. To perform such operations, we’ll need to set up triggers (e.g. when a user taps on a ‘register’ button) and a way to send emails based on them. There are several approaches here: In this approach, we’ll try to integrate three different tools together to enable email sending. First of all, set up your Firebase account either with the help of the official docs or with the dedicated 3rd party layer for Firebase for React Native. Link Firebase with your React Native project by adding your individual code as shown on the screenshot below. Now, the idea will be to send emails only when a new child record is added to Firebase Realtime Database. You won’t be able to send emails directly when that happens but this is when Zapier comes in handy. It lets you trigger an email sending from e.g. SendGrid the moment a child object is added with one of its ‘zaps’. Now, just set up a customized email template to be sent when a Zap is triggered and you’re done! For more details on setting things up, check out SendGrid’s tutorial. As you can see above, Firebase can be used to trigger events based on actions users take in your React Native app. Since 2017, there’s also a way to send transactional emails without the poor experience of redirecting users to their email app. You can do it through third parties such as Zapier. Firebase Cloud SDK allows for sending these emails with a node library called nodemailer. For it to work, you also need to add an email account to handle the sending. The most straightforward approach is with another Google product – Gmail but you can also use tools for mass mailings such as SendGrid or Mailjet.
This quickstart demonstrates how to set up an Auth-triggered Cloud Function using the Firebase SDK for Cloud Functions. Finally, if you don’t want to rely on 3rd parties, you could set up your own backend to handle email sending. This is certainly more difficult and time-consuming but if you make it work, it could save you money and provide independence from other providers. One approach to doing this could be with Nodemailer, which we already utilized in the previous example. Integrating it with React Native raises problems as you can’t use server-side modules in React Native as RN is solely client-side. React Native comes with a WebKit JS engine bundled and the latter one misses several features vital for sending emails, for example, support for sockets. If you were to use nodemailer, you would need to add your SMTP credentials to the device which is possible but, this time, raises security concerns. The best way to go about it would be probably by setting up a proxy server to handle email sending away from the device. This would, however, require a lot more additional development without a guarantee of success. As you can see, this approach is far from perfect. Try it out, though, if the previous methods don’t appeal to you and let us know in the comments if you managed to build something reliable. We’re curious to know! Thank you for reading our guide on sending emails with React Native that was originally published on Mailtrap Blog.
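The article's stack for this is Node-side (nodemailer inside a Firebase Cloud Function), but the server-side pattern itself is language-agnostic: compose a transactional message on the backend when a trigger fires, and keep SMTP credentials off the device. Purely to illustrate that pattern, here is a hedged sketch using Python's standard library; the addresses and SMTP host are hypothetical:

```python
import smtplib
from email.message import EmailMessage

def build_welcome_email(to_addr: str) -> EmailMessage:
    # Compose the transactional message; in the article's setup this is the
    # role nodemailer plays inside a Cloud Function when a user registers.
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"  # hypothetical sender address
    msg["To"] = to_addr
    msg["Subject"] = "Welcome!"
    msg.set_content("Thanks for registering.")
    return msg

def send_email(msg, host="smtp.example.com", port=587, user=None, password=None):
    # Sending happens on a server, never in the React Native client,
    # so the SMTP credentials stay off the device.
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        if user:
            smtp.login(user, password)
        smtp.send_message(msg)

msg = build_welcome_email("new.user@example.com")
print(msg["Subject"])  # prints "Welcome!"
```

The key design point stands regardless of language: the mobile client only fires the trigger; the credentials and the actual send live on the backend.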
OPCFW_CODE
Sounds like there really is no way to get the individual keys. Take a look and see if this solves it for you I've tried Nirsoft, Magic Jellybean and Belarc. Per this Microsoft article and the Belarc program I can and have found the last 5 characters of the Key but that's not going to do me any good after the re-install unless I have the rest of them. Find from system registry: The Office serial key is stored on the hard drive where you install Office program. About Microsoft Excel menu item or any other Office application. Reference from: Retrieve your Office 2013 activation key in registry If you didn't associate your Office product key with your Microsoft account, you may not be able to find it from your Office account page. Check from email: If you downloaded your Office from an online store, you might be able to get the license key from the email receipt. I guess it might unfortunately come down to trial and error. Nor do I see why I should have to divulge my email address to Microsoft. On the Product Key Tuner program, click on Start Recovery button and begin to recover your Office 2013 product key. But you can read it with a Product key tool. As I see it there is no way really since I just reread your question, you need to find out which key was used on each computer, it's not like repairing office. I'm not sure which Microsoft Toolkit you are referring to brandonkick - could you post a link please? There is some validity to that statement, it started small and grew into 30 licenses. Other keys show up, but no Office 2013. How can I find my Office product key quickly from my laptop? I activated those keys through my Microsoft Support Account. None of them show any Office 2013 product keys. Please be so kind to tell us whether the last 5 digits are OK with you or not, otherwise this thread will never come to an end.
Extract - Find your product key for Office 2013, Office 365 Home, Office 365 Personal, Office 365 University Important If you have Office 2010, see. So it sounds like I may be out of luck legally? Didn't pick up the Visio install. Guide Steps Find your Office 2013 Product Key from Office Account page 1. Computer this is installed on is crashing constantly so I just want to wipe it and reinstall everything fresh but can't do until I can get that license key. Hey Travis, hope this helps let me know if you have any other specific questions : Sorry, bad news. The Office serial key will be showed on your Office account page. There is a pretty good chance that with only 30 machines, the last five characters will be unique for all of them. Simply choose Install while you're connected to the Internet and Office will install and activate automatically. You may try to remove it use the uninstall option from the setup then remove shared objects manually yourself. All 30 copies are installed, however needing to do a reconciliation of which license is installed on which computer. I am hoping that Rohn007 will weigh-in on this; from my exhaustive reading of this subject here on the Forums he seems to be the reigning expert on this really pain-in-the-neck topic. Does anyone know where to look in the registry for the location of the Office license key? I am using Windows 7 My goal is to discard the previous key and enter a new key. I will Leave it to Palcouk as he is the expert on these matters If he is not an expert he should be because he has been doing it long enough! It does recognize the retail version correctly though. Anyway, thanks for pointing me in the right direction. In the old days, it was on a sticker on the case for the installation disk. Does anyone know if office will not activate or throw an error if I try and reuse a key I've already used? And then you can find the Office 2013 product key easily and quickly. 
Click on Save to File to save all of the product keys to a text file. Check your mail folder, if you haven't deleted it, you will find the key in it. For example, to reinstall Office, just sign in to your account page and click Install. If your company has a license for 10 computers with Home office then one key is used for them, same with business. On the next page, click Verify Email. Microsoft is responsible for the software. Believe me, documented in triplicate once I get this all figured out! I suppose I can back into it over time that way. There are some that even show the same 5 digits, though I know that they were installed with different keys. Now you can view your Office activation key clearly on the list below. This is what Microsoft says: Looking for your product key? This can cause activation issues. None of the licenses I have end in that, so it appears that the script does not work. I was able to hard set the clients by running ospp. However, how can we find Office 2013 product key easily and instantly? Step 1: Download , double-click to install on your computer where you want to find the activation key. How to find your Office Product Key after installation on computer Here in this article will show you how to find out your Office product key from your computer after installation. I only tried this method with my Office 2010, if you are using Office 2016, 2013, 2007, or 2003, just have a try following the steps above.
OPCFW_CODE
For this week’s “making things digital” class, I decided to do something a little different than digitizing text. When I saw Ben’s post of suggestions, I was immediately drawn to the last option: Rectify Maps for the New York Public Libraries. I had done a bit of basic GIS before and was interested that they had an in-site rectifying tool rather than requiring complex and expensive GIS software. I went to the site, watched their video tutorial (not the best quality video, but it told me exactly what I needed to know), and decided to start giving it a try. Rectifying a map involves three main aspects: the historical map, the base map, and your control points. In order to rectify a map, the user places control points on similar locations on both the historical map and the base map. These control points are paired to each other. By carefully placing enough of these control points, the user can manipulate the historical map to match up with the modern base map. The next step was to choose what kind of maps I wanted to rectify. I wanted to choose a place and scale I was familiar with so I started searching for historical maps of my home state, New Jersey. I found two maps that I found very interesting and began working on rectifying them. One is a 1795 engraving of New Jersey by Joseph Scott of The United States Gazetteer (Philadelphia) and the other is a 1873 map of New Jersey from the Atlas of Monmouth co., New Jersey. Here are images of the historical maps before rectifying them: I have decided to include some of my control points into the 1873 map so that you can see what they look like. In order to properly rectify a map, you must have more than one control point. The NYPL site requires that you have at least three control points in order to rectify the historical map with the base map. Also, the warper includes a mechanism that determines how off (margin of error) each of your control points are between your historical map and your base map. 
The tutorial video instructs you to make sure each of your control points has a margin of error of less than 10. Going into this, I assumed that more control points linking my historical map to my base map would result in a more accurate rectified map. However, this is only if you can get your control points under that margin of error of 10. Also, adding more control points can often distort the margin of error for your other control points. So it is not always best to have the greatest number of control points, but instead one should place control points in optimal positions yielding the least margin of error. Each map is also unique, so you need to find out what you think the best arrangement and number of control points are. I am not saying that my rectified maps are perfect (they are far from it), but I found that around six control points did the trick. After placing these control points, I cropped the historical map a bit so that it would fit better on the base map, then I clicked “Warp Image!,” then played around with the transparency settings of the historical map in order to produce these new rectified maps:
I placed them at the Northwestern part of Washington, the Southwestern corner of California, and the Southern-most point in Texas. However, no matter where I placed the next control point, the margin of error seemed to skyrocket for all four points as soon as I placed the fourth one. This might have been a problem with the first three points, but it did prompt me to scale down my efforts from maps of the entire United States to New Jersey maps. There is one last thing that I wanted to comment on, and it deals with the base map. I was thinking about how the entire process of rectifying these maps concerned warping the historical map to fit the base map. This one-way process assumes that the base map is the accuracy standard and all other maps must conform to its scale and borders. I think that this assumption is something that is taken for granted. I understand the need to have a standard map, but could it not also be useful to have the program do the reverse? What if it generated an overlay of the historical map on the base map AND an overlay of the base map on the historical map? What kind of value would an arrangement like that have? I am not sure, but I think it is something that at least needs to be considered. Also there are many historical maps that contain different information than the base map and are, therefore, incompatible with the rectifying process (although they are still listed on the site). I just wonder that if by placing such confidence in the base map, we are losing important information from the historical map. I’ll finish this post by showing one of those maps that are listed on the NYPL site but could not possibly be rectified to our modern base map. There are many of them, but this one in particular stuck out as a very valuable and informative map that is completely incongruous with the base map.
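The control-point bookkeeping described above can be made concrete with a toy calculation. This is not the NYPL warper's actual algorithm (which supports more elaborate warps); it is a hedged sketch of the simplest case: three control-point pairs determine an affine transform exactly, so the "margin of error" only becomes visible once a fourth point is added.

```python
def solve3(A, b):
    # Solve a 3x3 linear system A x = b by Cramer's rule.
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for col in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][col] = b[r]
        xs.append(det(m) / d)
    return xs

def fit_affine(src, dst):
    # Fit x' = a*x + b*y + c and y' = d*x + e*y + f through 3 point pairs.
    A = [[x, y, 1] for (x, y) in src]
    abc = solve3(A, [p[0] for p in dst])
    def_ = solve3(A, [p[1] for p in dst])
    return abc, def_

def residual(params, p, q):
    # Distance between the transformed historical point p and base point q:
    # the per-point "margin of error" of the fit.
    (a, b, c), (d, e, f) = params
    x, y = p
    tx, ty = a * x + b * y + c, d * x + e * y + f
    return ((tx - q[0]) ** 2 + (ty - q[1]) ** 2) ** 0.5

# Three control points pin the transform down exactly...
src = [(0, 0), (10, 0), (0, 10)]
dst = [(5, 5), (25, 5), (5, 25)]  # a scale-by-2 plus a shift of (5, 5)
params = fit_affine(src, dst)
# ...so any distortion only shows up as error on a fourth point pair:
print(round(residual(params, (10, 10), (24, 26)), 3))  # → 1.414
```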
OPCFW_CODE
# implements Cantor's N^K->N bijection and its inverse
from math import comb


def n2ns(n):
    if n == 0:
        return []
    k, m = unpair(n - 1)
    return to_kseq(k + 1, m)


def ns2n(ns):
    if ns == []:
        return 0
    k = len(ns)
    m = from_kseq(ns)
    return 1 + pair((k - 1, m))


def from_kseq(ns):
    return from_kset(seq2set(ns))


def to_kseq(k, n):
    return set2seq(to_kset(k, n))


def from_kset(xs):
    return sum(comb(n, k) for (n, k) in zip(xs, range(1, len(xs) + 1)))


def to_kset(k, n):
    return binomial_digits(k, n, [])


def binomial_digits(k, n, ds):
    if k == 0:
        return ds
    assert k > 0
    m = upper_binomial(k, n)
    bdigit = comb(m - 1, k)
    return binomial_digits(k - 1, n - bdigit, [m - 1] + ds)


def upper_binomial(k, n):
    def rough_limit(nk, i):
        if comb(i, k) > nk:
            return i
        return rough_limit(nk, 2 * i)

    def binary_search(fr, to):
        if fr == to:
            return fr
        mid = (fr + to) // 2
        if comb(mid, k) > n:
            return binary_search(fr, mid)
        return binary_search(mid + 1, to)

    m = rough_limit(n + k, k)
    return binary_search(m // 2, m)


def kpair(xs):
    assert len(xs) == 2
    return from_kseq(xs)


def kunpair(z):
    return to_kseq(2, z)


def seq2set(xs):
    """bijection from sequences to sets"""
    rs = []
    s = -1
    for x in xs:
        sx = x + 1
        s += sx
        rs.append(s)
    return rs


def set2seq(ms):
    """bijection from sets to sequences"""
    rs = []
    s = 0
    for m in ms:
        rs.append(m - s)
        s = m + 1
    return rs


def pair(xy):
    x, y = xy
    return 2 ** x * (2 * y + 1) - 1


def unpair(z):
    k = 0
    z += 1
    while z % 2 == 0:
        z = z // 2
        k += 1
    return k, (z - 1) // 2


def test_cantor():
    xs = [2, 5, 7, 10, 12, 18, 19]
    x = from_kset(xs)
    ys = set2seq(xs)
    print(ys)
    zs = seq2set(ys)
    print(xs, zs)
    xs_ = to_kset(len(xs), x)
    print(xs, xs_)
    us = [2, 3, 1, 0, 4, 4, 5, 0, 3]
    u = from_kseq(us)
    k = len(us)
    vs = to_kseq(k, u)
    print(us, vs)


if __name__ == "__main__":
    test_cantor()
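The heart of from_kset/to_kset above is the combinatorial number system: ranking a k-element set c1 < c2 < ... < ck as C(c1,1) + C(c2,2) + ... + C(ck,k) hits every integer below C(n,k) exactly once. A small self-contained check of that identity (independent of the code above):

```python
from math import comb
from itertools import combinations

# Rank every 3-element subset of {0..9} with the combinadic formula
#   rank = C(c1,1) + C(c2,2) + C(c3,3)   for c1 < c2 < c3
# and verify the ranks cover 0 .. C(10,3)-1 with no gaps or repeats,
# i.e. the ranking is a bijection.
ranks = sorted(
    comb(a, 1) + comb(b, 2) + comb(c, 3)
    for (a, b, c) in combinations(range(10), 3)
)
print(ranks == list(range(comb(10, 3))))  # → True
```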
What Does Velocity Do For Me?

We believe that Velocity is one of the best proxies for Minecraft around, and there's not much that can top it. However, we do diverge from more established, mainstream solutions in some important ways. That can make Velocity a bit hard to sell, and we are asked "why?" frequently enough that this page is our answer to that question.

The founder and primary developer of Velocity (Tux) has been active in developing proxy software for Minecraft: Java Edition since 2013. They created the RedisBungee plugin, contributed to BungeeCord from 2014 to 2017, and also founded the Waterfall project and led it from 2016 to 2017. In fact, the current maintainer of Waterfall helped encourage them to start a brand new proxy from the ground up!

Velocity powers several highly-populated Minecraft networks, while using fewer resources than the competition. The recipe to the sauce is simple.

No entity ID rewriting

When a Minecraft client connects to a Minecraft server, the server sends back an ID that uniquely identifies that specific player connection. This ID is used in packets the server sends that target the player. But what happens when the client is actually connected through a proxy that can change which server the player is connected to? Other proxy solutions try to solve this problem by rewriting entity IDs that reference the current player, changing the entity ID assigned by the server the player is currently connected to into the entity ID the player received when they connected to their first server through the proxy. This approach is often complicated, leads to bugs, reduces performance, breaks mods, and ultimately cannot be a complete solution. However, the Minecraft client actually supports changing its entity ID with a special packet sequence. Velocity takes advantage of this and has the client change its entity ID.
This approach improves performance, improves mod compatibility, and reduces issues caused by incomplete entity ID rewrites.

Velocity goes deeper than optimizing the handling of the Minecraft protocol. Smart handling of the protocol produces incredible performance gains, but for more performance, we need to go much deeper. One way in which we drastically improve performance and throughput is by improving the speed of compressing packets to be sent to the client. On supported platforms (Linux x86_64 and aarch64), Velocity is able to replace the zlib library (which implements the compression algorithm used by the Minecraft protocol) with libdeflate, which is twice as fast as zlib while delivering a similar compression ratio.

Velocity also employs several tricks to get the JIT (just-in-time) compiler on our side. Those tricks require a deep understanding of how Java works, but we put in the work to apply them, and they translate into increased performance.

Internal stability policies

Finally, Velocity does not attempt to maintain a stable internal API between minor and major releases. This allows Velocity to be more flexible and still deliver performance improvements and new features with each release. For instance, Velocity 1.1.0 delivered massive performance improvements and added many significant new features by breaking parts of the internal API, while still keeping full compatibility with older plugins. Compare this to BungeeCord, which is often very conservative about API breaks, provides little notice when it does break the API, and even then does not take the opportunity to seriously improve the API being broken (for instance, adding RGB support).

Control is in your hands

We take pride in tuning Velocity to be the most performant proxy, but in case the speed provided out of the box is not good enough, you can easily tweak several performance-related settings in Velocity's configuration.

Velocity also features more security features, some of which are unique to Velocity.
We proactively foreclose as many denial-of-service attacks as early as possible, and feature a unique player info forwarding system for Minecraft 1.13+ that requires the server and the proxy to know a pre-arranged key.

Standards and mod support

Unlike certain platforms which only provide lip service to the modding community (and can at times be hostile to them), Velocity embraces the richness of the platform Minecraft provides. As just a small example, we have a Fabric mod that helps bridge the gap between Velocity itself and mods that extend the Minecraft protocol, and we feature full Forge support for 1.7 through 1.12.2, with support for newer versions in development. Velocity also supports emerging standard libraries in the community, such as Kyori's Adventure library. We collaborate with the Minecraft modding community.
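Velocity itself is Java, but the compression trade-off described above is easy to demonstrate with any zlib binding. Here is a small Python sketch (the payload is made up) showing how well a repetitive Minecraft-style packet body compresses under the algorithm the protocol uses; Velocity simply swaps the implementation for libdeflate on supported platforms:

```python
import zlib

# Toy illustration of why fast packet compression matters for a proxy:
# compress a hypothetical, repetitive 16 KiB chunk-like payload with zlib
# (the DEFLATE implementation the Minecraft protocol uses) and verify the
# round trip. libdeflate produces compatible output, only faster.
payload = b"\x00\x01\x02\x03" * 4096  # hypothetical packet body
compressed = zlib.compress(payload, 6)  # level 6, a typical default

assert zlib.decompress(compressed) == payload
print(f"{len(payload)} -> {len(compressed)} bytes")
```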
Windows 7 Taskbar and Windows 7 Pin to Taskbar Feature

Windows 7 has an enhanced taskbar that helps Windows users organize their desktops. By using the Windows 7 taskbar and its pin-to-taskbar feature efficiently, users can take control of their desktops before they turn into a mess. Users can easily customize the Windows 7 taskbar in order to tidy their workspace on the Windows 7 desktop. If you work a lot with Office Word, for example, drag and drop the Office Word shortcut icon to the taskbar; the MS Word shortcut will be placed on the Windows 7 taskbar. This removes at least one shortcut from your crowded desktop. More importantly, you can also pin your favourite or frequently used documents to the taskbar. The above screenshot shows the screen that appears when the user right-clicks the Windows Explorer taskbar icon. You can un-pin a document by opening the taskbar list of the application and selecting un-pin with a right-click on the document. The above screenshot shows how to use Windows 7 pin to taskbar, using the example of pinning the Snipping Tool to the Windows 7 taskbar. Using the taskbar pin-up and recent files options, you can find and reach required documents, images, movies, etc. more easily and quickly. In fact, the taskbar with its pinned and recent files lists is an alternative desktop search tool for Windows 7, so you will need fewer files on your Windows 7 desktop. To summarize, Windows 7 lets you drag and drop application icons onto the taskbar in order to open them with a single click. This is called the Windows 7 pin-to-taskbar feature. The order of the taskbar icons can be changed by dragging an icon to the left or right of the other taskbar icons. After you place an application icon on the taskbar, you can pin frequently used files to it. What I did to organize the desktop of my Windows 7 computer was to pin the folders that I use frequently to the taskbar icon of Windows Explorer.
So with just two mouse clicks on the Windows 7 taskbar, I can open a Windows folder that I use a lot for work. Using the pinned files list in the Windows 7 taskbar keeps me from creating shortcuts on the desktop in huge numbers. This method results in a tidy desktop, which gives a pleasant feeling when running Windows 7. It is easy to remove a pinned file from a taskbar icon's pinned files list. All you have to do is right-click the Windows 7 taskbar icon; the pinned list, the frequently used files list, and the taskbar actions screen will be displayed. Then right-click the item that you want to unpin from this list. As a last note, it is easy to clean up the Windows 7 taskbar by selecting the "Unpin this program from taskbar" option. Right-click the program icon on the Windows 7 taskbar and select the "Unpin this program from taskbar" menu option, as seen in the below screenshot. I know how easy it is to end up with a messy desktop by saving everything to it. Many users require third-party desktop organizer software to clean up their desktops. But if you customize the Windows 7 taskbar, that will help a lot to organize your workspace, keep a clean Windows 7 desktop, and work efficiently using the Windows 7 pin-to-taskbar function. You will not have to pay for desktop organizer software. Start immediately to change the Windows 7 taskbar according to your requirements.
Red Hat Bugzilla – Bug 472968 [RFE] - Ability to define cluster resources as critical or non-critical within a service. Last modified: 2010-01-21 15:18:07 EST

The current ability to establish a separate recovery policy on a per-resource basis is not enough, due to limitations associated with the recovery policy options. For example, given a resource of an "Oracle listener" and another resource of an "Oracle instance", configure the "Oracle listener" as non-critical and the "Oracle instance" as critical within the same service. This would have the following desired effect: If the "Oracle listener" resource fails, it attempts to restart a defined number of times and, if unsuccessful, then fails (stops trying) and alerts the administrator WITHOUT failing or relocating the service that it is a member of, thus allowing the "Oracle instance" to continue operating. If the "Oracle instance" resource fails, because it is critical, it will immediately try to restart the entire service and, if this action fails, relocate the service to a different node in the cluster. The example identifies an issue with the Oracle application "agent" or "resource", but the feature I am asking for is, in general, more granular control of non-critical and critical resources within a cluster, not just with the Oracle resource. For a critical resource, I would like to see the critical resource attempt to restart itself (having the flexibility to define the number of failures over a definable period of time would be a great feature); by restarting itself I mean that the individual resource with the problem attempts to restart itself, not the entire service. If restarting does not resolve the problem (i.e. it fails immediately) after attempting to restart x number of times, or even if the restart is successful but the critical resource "faults" y number of times over z period of time, the service group is then relocated to another node in an attempt to revive the critical resource.
It seems at present the only way to define a "non-critical" resource is to give that particular resource a "restart" recovery policy; the other two available recovery policies would ultimately cause a resource failure to affect the entire service group it is a part of, rather than just the individual resource itself. Once again, I am asking for a somewhat tiered approach to the "non-critical" resource definition. For a non-critical resource, if a fault is detected I would like the cluster to attempt to restart the non-critical resource, and if the restart of the resource fails x number of times or faults y number of times over z period of time, then, since we have identified this resource as non-critical, it "disables" just the individual faulty non-critical resource, leaving the rest of the service up and running while alerting the administrator.

Certain parts of this can be implemented rather simply. We already record the last operation of each type internally; it's simply a matter of recording multiple failures (and we already have a general-purpose mechanism to do this).
* Add restart counters to each node in the resource tree.
* If there is a restart counter structure, follow standard max_restarts and restart_expire_time for that resource.

Part of the feature request works today using __independent_subtree - what is missing is a defined number of restarts for a resource before giving up and restarting the service. __independent_subtree treats a node and all of its children as 'non-critical', meaning they can restart independently of a service restart.

Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.
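The requested behaviour — give up after max_restarts failures within a restart_expire_time window — is straightforward to model. A minimal Python sketch, with the class and method names invented purely for illustration (this is not rgmanager code):

```python
import time
from collections import deque


class RestartCounter:
    """Hypothetical sketch of the requested per-resource restart policy:
    allow at most max_restarts restarts within a sliding window of
    restart_expire_time seconds before declaring the resource failed."""

    def __init__(self, max_restarts, restart_expire_time):
        self.max_restarts = max_restarts
        self.restart_expire_time = restart_expire_time
        self.failures = deque()

    def record_failure(self, now=None):
        """Record one failure; return True if a restart is still allowed."""
        now = time.monotonic() if now is None else now
        # Drop failures that have aged out of the sliding window.
        while self.failures and now - self.failures[0] > self.restart_expire_time:
            self.failures.popleft()
        self.failures.append(now)
        return len(self.failures) <= self.max_restarts
```

With `RestartCounter(3, 60)`, three failures within a minute still permit restarts; a fourth would mark the resource failed (alert the admin for a non-critical resource, relocate the service for a critical one), while failures spaced more than the window apart never accumulate.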
Can you explain YARN and its components in detail?
1. What is the ResourceManager and how does it work? What happens when it goes down, and how does everything get restored when the ResourceManager restarts?
2. What is the NodeManager?
3. What is the ApplicationMaster?

Please find the answers below:

Apache Hadoop YARN (Yet Another Resource Negotiator) is a cluster management technology. YARN is one of the key features in the second-generation Hadoop 2 version of the Apache Software Foundation's open source distributed processing framework. Originally described by Apache as a redesigned resource manager, YARN is now characterized as a large-scale, distributed operating system for Big Data applications. YARN is a software rewrite that decouples MapReduce's resource management and scheduling capabilities from the data processing component, enabling Hadoop to support more varied processing approaches and a broader array of applications. For example, Hadoop clusters can now run interactive querying and streaming data applications simultaneously with MapReduce batch jobs. The original incarnation of Hadoop closely paired the Hadoop Distributed File System with the batch-oriented MapReduce programming framework, which handled resource management and job scheduling on Hadoop systems and supported the parsing and condensing of data sets in parallel. YARN combines a central resource manager, which reconciles the way applications use Hadoop system resources, with node manager agents that monitor the processing operations of individual cluster nodes. Running on commodity hardware clusters, Hadoop has attracted particular interest as a staging area and data store for large volumes of structured and unstructured data intended for use in analytic applications. Separating HDFS from MapReduce with YARN makes the Hadoop environment more suitable for operational applications that can't wait for batch jobs to finish.
The fundamental idea of YARN is to split up the two major responsibilities of the JobTracker and TaskTracker into separate entities. In Hadoop 2.0, the JobTracker and TaskTracker no longer exist and have been replaced by three components:

ResourceManager: a scheduler that allocates available resources in the cluster amongst the competing applications.

NodeManager: runs on each node in the cluster and takes direction from the ResourceManager. It is responsible for managing resources available on a single node.

ApplicationMaster: an instance of a framework-specific library, an ApplicationMaster runs a specific YARN job and is responsible for negotiating resources from the ResourceManager and also working with the NodeManager to execute and monitor Containers. The actual data processing occurs within the Containers executed by the ApplicationMaster. A Container grants rights to an application to use a specific amount of resources (memory, CPU, etc.) on a specific host.

YARN is not the only new major feature of Hadoop 2.0. HDFS has undergone a major transformation with a collection of new features that include:

NameNode HA: automated failover with a hot standby and resiliency for the NameNode master service.

Snapshots: point-in-time recovery for backup, disaster recovery, and protection against user errors.

Federation: a clear separation of namespace and storage, enabled by a generic block storage layer.

NameNode HA is achieved using existing components like ZooKeeper along with new components like a quorum of JournalNodes and the ZooKeeper Failover Controller (ZKFC) processes. You can go through the below links for video lectures and more reference.
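As a rough illustration of the resource model described above — a Container is a grant of (memory, vcores) on a specific host, allocated against the capacity each node reports — here is a toy Python sketch. All class, field, and host names are hypothetical; this is not the actual YARN API:

```python
# Toy model (names hypothetical) of YARN-style allocation: a central
# scheduler grants Containers against per-node capacity.
class Node:
    def __init__(self, host, memory_mb, vcores):
        self.host, self.free_mem, self.free_vcores = host, memory_mb, vcores


class ResourceManager:
    def __init__(self, nodes):
        self.nodes = nodes

    def allocate(self, memory_mb, vcores):
        """Return a container grant on the first node with room, else None."""
        for node in self.nodes:
            if node.free_mem >= memory_mb and node.free_vcores >= vcores:
                node.free_mem -= memory_mb
                node.free_vcores -= vcores
                return {"host": node.host, "memory_mb": memory_mb, "vcores": vcores}
        return None


rm = ResourceManager([Node("worker1", 8192, 4), Node("worker2", 4096, 2)])
c1 = rm.allocate(6144, 2)  # fits on worker1
c2 = rm.allocate(4096, 2)  # worker1 now lacks memory, so lands on worker2
```

A real ApplicationMaster would request such grants from the ResourceManager and then hand them to the NodeManagers to launch and monitor the actual Container processes.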
If a company currently uses a cloud-based email solution, or is entertaining the idea of switching over to one in the near future, both inside and outside counsel should be aware of the system's built-in preservation and collection capabilities and how to operate them. One of the most popular cloud-based email solutions is Office 365, and while Microsoft has recently released an E5 subscription, many corporations still utilize the E3 version. Use these tips in your workflows for keyword searching and implementing in-place holds in Office 365 E3.

Collecting Emails from Microsoft Office 365 E3

A few conditions need to be met before in-place eDiscovery searches and holds can take place in Office 365.
- First, the client must maintain an Office 365 Enterprise E3 (or higher) subscription.
- The user account performing the collection must have an active mailbox associated with it, and it must be added to the "Discovery Management" role group.
- Finally, the "Mailbox Import Export" role must be added to the Discovery Management role group.

When these conditions have been met, searches and exports can be performed. Log into Office 365 from a web browser (use Internet Explorer for best results) and navigate to the "Compliance Management" section of the Exchange Admin Center. Create a new in-place hold with a recognizable name and description. From this point, all mailboxes on the domain can be searched, or specific accounts can be targeted. Once the scope of the search is determined, filtering criteria may be selected. Office 365 allows for filtering based on keywords, dates, and the From/To/CC/BCC fields. If the results of the search are required to be placed on hold, they may be held indefinitely or for a number of days relative to the received date on a given item. Note that a hold is "in-place," meaning the items placed on hold will remain in their original location in a mailbox and cannot be deleted.
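The filtering criteria just described — keywords, dates, and the From/To/CC/BCC fields — can be modeled with a toy Python function. The function and field names here are purely illustrative and not part of any Microsoft API:

```python
from datetime import date


# Toy model of in-place eDiscovery filtering: a message matches when it
# satisfies every criterion that was supplied (keywords, date range, and
# sender/recipient addresses). All names are illustrative only.
def matches(msg, keywords=None, start=None, end=None, addresses=None):
    if keywords and not any(k.lower() in msg["body"].lower() for k in keywords):
        return False
    if start and msg["received"] < start:
        return False
    if end and msg["received"] > end:
        return False
    if addresses:
        participants = {msg["from"], *msg["to"], *msg["cc"], *msg["bcc"]}
        if not participants & set(addresses):
            return False
    return True


msg = {
    "from": "alice@example.com",
    "to": ["bob@example.com"],
    "cc": [],
    "bcc": [],
    "received": date(2019, 5, 1),
    "body": "Quarterly forecast attached.",
}

assert matches(msg, keywords=["forecast"])
assert not matches(msg, keywords=["merger"])
```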
The search will save, and then statistics about a given search can be previewed before export.

Using the eDiscovery PST Export Tool

Results of Office 365 searches can be exported using the "eDiscovery PST Export Tool." A destination for the exported data must be chosen, and the option to export unsearchable items will be presented. According to Microsoft, "unsearchable items are mailbox items that can't be indexed by Exchange Search or that have been only partially indexed." An unsearchable item is typically a file, which can't be indexed, attached to an email message. The need to export unsearchable items depends on how a search was run. If a search includes all content from a given mailbox (instead of filtering by keywords, dates, or senders/recipients), unsearchable items will be included by default. The same is true if filtering is applied only by date and/or senders/recipients (meaning that no keyword filtering was applied). It is only necessary to include unsearchable items if keyword filtering was employed when creating a search.

The eDiscovery PST Export Tool will also present the option to "Enable Deduplication." Selecting this option has been known to trigger a bug in the export process wherein all messages are exported to the root of the PST instead of creating the proper folder structure. It is advised that this option not be selected. If desired, it is possible to initiate multiple instances of the eDiscovery PST Export Tool.

Email Collection from Office 365 E5

It is also important to note that Microsoft has released new eDiscovery solutions within the Microsoft Unified Compliance Center. The Enterprise E3 license is a great tool to handle eDiscovery matters, but if you're looking for more robust features, you may want to consider upgrading to the E5 license. These solutions include the integration of the Equivio platform, which enables advanced analytic and discovery features within Microsoft Office 365.
This allows corporations to reduce the volume of the dataset, cutting costs and easing the process of document review. The Advanced eDiscovery Center in E5 allows CSV files of your data to be exported, ready to load into the review platform of your choice. As cloud-based systems become more complex and offer more features, understanding how to use them effectively and efficiently will become even more vital. It's also important to learn how your current tools can be used in conjunction with Office 365 to improve your overall eDiscovery workflow.
Nmap Network Scanning. ... ICMP Traceroute messages are still experimental; see RFC 1393 for more information. ICMP Codes. These identifiers may be used as mnemonics for the ICMP code numbers given to the --icmp-code option. They are listed by the ICMP type they correspond to.
URL: https://nmap.org/book/nping-man-icmp-mode.html

To make an ICMP echo request, open your terminal and enter the following command. If the host responded, you should see something similar to this: # nmap -sP -PE scanme.nmap.org Nmap scan report for scanme.nmap.org (184.108.40.206) Host is up (0.089s latency). Nmap done: 1 IP address (1 host up) scanned in 13.25 seconds.
URL: https://subscription.packtpub.com/book/networking...

Proper protocol headers for those are included since some systems won't send them otherwise and because Nmap already has functions to create them. Instead of watching for ICMP port unreachable messages, protocol scan is on the lookout for ICMP protocol unreachable messages. Table 5.8 shows how responses to the IP probes are mapped to port states.
URL: https://nmap.org/book/scan-methods-ip-protocol-scan.html

ICMP echo request messages were designed specifically for this task, and naturally, ping scans use these packets to reliably detect the status of a host. The following recipe describes how to perform an ICMP ping scan with Nmap and the flags for the different types of supported ICMP messages.
URL: https://subscription.packtpub.com/book/networking...

Jan 08, 2019 · ICMP scan can also identify live hosts by sending an ICMP Echo request. A live host will send back a reply, signaling its presence on the network. nmap -sP -PE 192.168.100.1/24. Using the -PP option, Nmap will send ICMP timestamp requests (type 13), expecting ICMP timestamp replies (type 14) in ...
URL: https://pentest-tools.com/blog/nmap-port-scanner

Jan 29, 2008 · Nmap finished: 256 IP addresses (2 hosts up) scanned in 5.746 seconds. -sP: This option tells Nmap to only perform a ping scan (host discovery), then print out the available hosts that responded to the scan. This is also known as a ping scan. -PI: This option tells Nmap that we are sending ICMP echo requests.
URL: https://www.cyberciti.biz/faq/howto-pingscan-icmp-ip-network-scanning

Nmap sends an ICMP type 8 (echo request) packet to the target IP addresses, expecting a type 0 (echo reply) in return from available hosts. Unfortunately for network explorers, many hosts and firewalls now block these packets, rather than responding as required by RFC 1122.
URL: https://nmap.org/book/man-host-discovery.html

Sep 03, 2021 · -sn is for ping scan, which basically prevents nmap from scanning all the ports (and probably scans one port). Then scrolling down the nmap help I found another option, -PE, which is the ICMP scan. I studied a bit on it and came to know it sends a request to the destination host to check whether it is up or not and receives a reply query as a ...
URL: https://security.stackexchange.com/questions/...

Dec 16, 2020 · ICMP ECHO Timestamp scan. The pentester can adopt this technique in the particular condition when the system admin blocks the regular ICMP timestamp. It is usually used in synchronization of time. nmap -sn -PP 192.168.1.108 --disable-arp-ping. The packets captured using Wireshark can be observed.
URL: https://www.hackingarticles.in/nmap-for-pentester-host-discovery

Mar 31, 2020 · Nmap, which stands for "Network Mapper," is an open source tool that lets you perform scans on local and remote networks. Nmap is very powerful when it comes to discovering network protocols, scanning open ports, detecting operating systems running on remote machines, etc. The tool is used by network administrators to inventory network devices, monitor remote host status, save the ...
URL: https://www.redhat.com/sysadmin/quick-nmap-inventory

Feb 25, 2018 · In order to bypass this rule, we'll use a ping scan with ICMP packets; for that we'll use the -PP attribute. -PP sends an ICMP timestamp request packet [ICMP type 13] and receives an ICMP timestamp reply packet [ICMP type 14]. nmap -sP -PP 192.168.1.104 --disable-arp-ping. From the image given below, you can observe that it found 1 host up.
URL: https://www.hackingarticles.in/nmap-for-pentester-ping-scan

Apr 26, 2016 · Simple NMAP scan of IP range. The default scan of nmap is to run the command and specify the IP address(es) without any other options. In this default scan, nmap will run a TCP SYN connection scan to 1000 of the most common ports as well ...
URL: https://www.networkstraining.com/nmap-scan-ip-range

• Scan a single target nmap [target] • Scan multiple targets nmap [target1,target2,etc] • Scan a list of targets nmap -iL [list.txt] • Scan a range of hosts nmap [range of IP addresses] • Scan an entire subnet nmap [IP address/cidr] • Scan random hosts nmap -iR [number] • Excluding targets from a scan nmap [targets] --exclude ...
URL: https://cs.lewisu.edu/~klumpra/camssem2015/nmapcheatsheet1.pdf

Oct 20, 2021 · When you use the -sn subnet option in nmap, the help screen mentions that it is a "Ping Scan." Most analysts know ping and probably know that ping uses ICMP as its protocol. Well, in this video, you will see how I used Wireshark to observe how nmap discovers a subnet and whether it uses ICMP to accomplish this. With this specific option on Windows 10 ...
URL: https://www.networkcomputing.com/networking/get...

Aug 06, 2020 · Wireshark packet capture for Target#1 scan with the "-Pn" option. Observed Results: We see no host discovery packets (i.e. ICMP echo request, TCP ...
URL: https://medium.com/@informationsecurity/nmap-pn-no...

The parameter -Pn (no ping) will scan ports of the network or provided range without checking if the device is online; it won't ping and won't wait for replies. This shouldn't be called a ping sweep, but it is useful to discover hosts. In the terminal type: # nmap -Pn 172.31.1.1-255. Note: if you want nmap to scan the whole range of an octet ...
URL: https://linuxhint.com/nmap_ping_sweep

Jul 30, 2018 · In this article, we mainly focus on what types of network traffic are captured by nmap while we use various nmap ping scans. A ping scan in nmap is done to check if the target host is alive or not. As we know, ping by default sends the ICMP echo request and gets an ICMP echo reply if ...
URL: https://www.hackingarticles.in/understanding-nmap-packet-trace

Oct 26, 2020 · So in my defence, the above is a little misleading, as a "port scan" does occur (of sorts, on TCP 80 and 443, as we'll see below), and a "ping" scan certainly implies ICMP pings to me. If we read on, however, we see that -sn works differently depending on the privilege level of the user running nmap, and whether or not the target is on ...
URL: https://defaultroot.com/index.php/2020/10/26/...

Mar 20, 2021 · Answer: Microsoft Windows. # Task 9 - [Scan Types] ICMP Network Scanning. How would you perform a ping sweep on the 172.16.x.x network (Netmask: 255.255.0.0) using Nmap? (CIDR notation) Answer: nmap -sn 172.16.0.0/16.
URL: https://doretox.com/nmap-walkthrought

The following example shows an ARP scan against all possibilities of the last octet: nmap -sn -PR 192.168.0.*. The following scan forces an IP scan over an ARP scan, again on the last octet using the wildcard: nmap -sn --send-ip 192.168.0.*. As you can see, while the scan made before took 6 seconds, it ...
URL: https://linuxhint.com/nping_nmap_arp_scan

Dec 10, 2020 · To find this you can type man nmap and go to the Firewall evasion tab, and you can see this for yourself. Task 14 → Practical. Does the target (MACHINE_IP) respond to ICMP (ping) requests (Y/N)? N. Perform an Xmas scan on the first 999 ports of the target ...
URL: https://mohomedarfath.medium.com/nmap-tryhackme...

Jan 25, 2021 · If you specify the nmap -sn option, it will indicate that the host is up as it receives an arp-response. This happens when a privileged user tries to run a scan on a LAN network where ARP requests will be used. To see if the host is responding to ICMP ping, you would need to specify the --send-ip option.
URL: https://goayxh.medium.com/tryhackme-nmap-practical-f19f712334c7

Mar 10, 2019 · NMAP Tutorial and Examples. #1 My personal favourite way of using Nmap. #2 Scan network for EternalBlue (MS17-010) Vulnerability. #3 Find HTTP servers and then run nikto against them. #4 Find Servers running Netbios (ports 137,139, 445) #5 Find Geo Location of a specific IP address.
URL: https://www.networkstraining.com/nmap-commands-cheat-sheet
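As a companion to the scans above, the ICMP echo request (type 8) packets that `nmap -PE` sends are simple to construct by hand. A Python sketch using only the standard library — actually sending one would additionally require a raw socket and root privileges, which is omitted here:

```python
import struct


def icmp_checksum(data: bytes) -> int:
    """RFC 1071 internet checksum over 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)  # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF


def icmp_echo_request(ident: int, seq: int, payload: bytes = b"ping-probe") -> bytes:
    """Build an ICMP type 8 (echo request) packet; a live host answers
    with type 0 (echo reply). The payload content is arbitrary."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum 0 first
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A correctly checksummed ICMP packet has the property that recomputing the checksum over the whole packet yields 0, which makes the builder easy to sanity-check. The same header layout, with types 13/14, underlies the timestamp probes that `-PP` sends.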
The 1.12.2 Pack updated to version 1.1.0 Like quarks, each individual added feature is small, but they build into a larger whole (taken from description of the Quark mod) - this is what this update is all about. This is the first feature update of the modpack and it is big. Please note that performance should be the same but the loading time might extend by about half a minute. If you suffer from any issue after updating, please be sure to report them over http://bit.ly/The-1-12-2-Pack-Issues to help the modpack's development! • By your request, added Planet Progression & Quark! Both are heavily modified and configured, also by using the newly added CraftTweaker & its addon LootTweaker. Also added Construct's Armory, Bad Wither No Cookie - Reloaded and the Unloader mod. • We've been working with the author of Planet Progression to add some newly exciting features to the mod, especially with this modpack update. In order to find new planets & moons in your Galacticraft map, you will first need to research them. Research papers need to be used in the Telescope & can be obtained with the Satellite Controller by sending off your own research satellite! You can find more info in-game. • Quark is heavily configured to fit with the theme and style of the modpack. It adds a lot of exciting new features including new world generation (underground biomes, crabs, pirates & more), new decorative blocks (colored chests, paper walls, stained wooden planks, and more), and literally hundreds of new small features. You can read more about some of them over http://bit.ly/Quark-1-12-2-Info. • Construct's Armory adds new armor options in the Tinkers Construct style. Bad Wither No Cookie - Reloaded will suppress server-wide Wither-killing-sound so Wither farms will not be annoying. The Unloader mod will improve performance by unloading unused dimensions. 
• Wrote and added multiple custom scripts for CraftTweaker & LootTweaker which will introduce some tweaks & fixes that improve overall gameplay! • Added a custom RestoreBackup.bat script with the server package. It's supposed to help with the fact that restoring backups on the server-side lacks a GUI. It will automatically restore the latest server backup, and the script can be modified to target a different backup. • Many config tweaks, especially regarding the newly added mods. Should balance world generation more and introduce the new mods well in the pack. • Updated 17 mods to their latest versions. Some of these updates contain very important bug fixes! • You can read more information regarding the changes of this update through this link: http://bit.ly/2U6zNn8 This update is currently set to the "Latest" branch. After more testing we're hoping that soon it will be pushed to "Recommended". Remember to post any issues/suggestions you have over http://bit.ly/The-1-12-2-Pack-Issues.
Process torsion profiles

- [x] Add TorsionStore for storing torsion profiles as a database
- [x] Add input models
- [ ] Add output models
- [ ] Add MM optimization about target torsions
- [ ] Yoink https://github.com/lilyminium/openff-strike-team/blob/a6ccd2821ed627064529f5c4a22b47c1fa36efe2/torsions/datasets/mm/minimize-torsion-constrained.py#L35-L106
- [ ] Add example plotting script
- [ ] Add tests
- [ ] Update documentation

Codecov Report

Attention: Patch coverage is 0% with 57 lines in your changes missing coverage. Please review. Project coverage is 86.30%. Comparing base (ce588da) to head (1e2dc8c). Report is 6 commits behind head on main.

Something's wrong with unit-handling. I'm taking the energies from QCArchive to be in atomic units, like optimization datasets are, and converting them to kcal/mol just the same. I'm getting minimized energies out of OpenMM in kcal/mol as well. When I put together plots using some variant of run_torsion_comparisons.py, I need to multiply the QM energies by 10 (a completely arbitrary number) for the profiles to vaguely match up:

One could imagine re-generating the plot with another factor besides 10, but the general conclusion (MM energies are about an order of magnitude higher, but similar in shape, to QM profiles) would be the same. Intuitively, the scale of the QM profiles (a few kcal/mol up to a few tens of kcal/mol) seems reasonable and the raw MM values (up to a few hundred kcal/mol) seem like the wrong ones. But this is the closest I've come to being a force field scientist fitting torsions, so I don't trust my intuitions here very much. The fact that the shape of (most of) the MM torsion profiles lines up with the QM data implies to me that (most of) the minimizations are behaving.
But the way I'm getting energies out looks so obviously correct (or at least it matches what I do elsewhere) that I think I'm going crazy if that's the problem:

energy = simulation.context.getState(
    getEnergy=True,
).getPotentialEnergy().value_in_unit(
    openmm.unit.kilocalorie_per_mole,
)

Is there some gotcha I'm unfamiliar with? I assumed that this was as simple as (molar) energy at a handful of grid points, so simply kcal/mol or kJ/mol. When I've worked on these, I just plotted the QM and MM energy of the QM structure, so I didn't do any minimizations. However, the QM and MM relative energies were both on the order of 0 to 10 kcal/mol (sometimes a bit larger, but not regularly 300). And you're right that the QM should indeed be coming out of QCA in hartrees. I don't see any obvious issues with your code; I wonder if it's something to do with the constrained optimization raising the energy of the MM structures? Thanks @amcisaac and @jthorton! Re-generating the same data with some fixes produces data which seems much more reasonable. There are still some details to hash through, but I at least think it's in the right ballpark now. Some bad optimizations? Noise due to making all torsion profiles relative to their own minimum energy? Cleaner defaults for plots: don't draw lines across parts of the grid that are missing lots of data; better default axes ranges; maybe some lightly-colored grids. Okay, I'm done messing with the plots for now. @ntBre noticed that the MM profiles were normalized to the (location of the) MM minima, whereas the convention is to use the (location of the) QM minimum.
The differences are fairly small: I consider this complete if all rows in the CSVs produced from this run are non-zero: https://github.com/openforcefield/yammbs-dataset-submission/actions/runs/12361612232/job/34499304927 This is precisely what I hoped to see - the 0.0 rows are only in the red, small representative sample bit here: https://github.com/openforcefield/yammbs-dataset-submission/pull/13/commits/b5deede6ce88649efca59121bc47c193a2e49040#diff-db3b6996a30e2a55bc83efbea52e18dbd86f135f9860641f6eb6d12f36571d4eL254-R258
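The normalization convention mentioned above (shifting profiles to the grid point where the QM profile is lowest, rather than shifting each profile by its own minimum) can be sketched like this. The function name and arrays are illustrative, not actual yammbs code:

```python
# Sketch: normalize an MM torsion profile to the location of the
# QM minimum, so both curves share the same zero point.

def normalize_to_qm_minimum(qm, mm):
    """Shift both profiles so the QM-minimum grid point sits at zero."""
    i = min(range(len(qm)), key=qm.__getitem__)  # index of the QM minimum
    return [q - qm[i] for q in qm], [m - mm[i] for m in mm]

qm = [4.0, 1.0, 0.0, 2.0]
mm = [5.0, 2.5, 0.5, 3.0]
qm_rel, mm_rel = normalize_to_qm_minimum(qm, mm)
# The MM curve is shifted by mm[2] = 0.5, the value at the QM minimum,
# so its own minimum is no longer forced to zero.
```

When the QM and MM minima sit at different grid points, the two conventions give visibly different offsets, which matches the "fairly small" differences noted above.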
GITHUB_ARCHIVE
After Fedora 17, I stopped clean-installing any of the following Fedora releases. For Fedora 18 and 19, I used only "fedup" (the FEDora UPgrader tool). In both cases, the upgrade process went well, with no glitches whatsoever. The only problem was that my machine was using GRUB and not GRUB2, the latter of which Fedora 18 and 19 adopted for the boot process. The "fedup" process didn't update GRUB to GRUB2 automatically. I tried the manual upgrade, but it didn't work on Mac machines (and it was not well supported at that time). Still, GRUB worked flawlessly with Mac's own Boot Manager, rEFIt, rEFInd, etc., despite GRUB legacy no longer being used on EFI systems from Fedora 18 onwards. So I was happy letting the Linux bootloader be GRUB, until the following error message showed up during the boot process immediately after updating the kernel to 3.10.3-300 (the numbers really don't matter). fb: conflicting fb hw usage nouveaufb vs EFI VGA - removing generic driver I tried my best to rectify the problem with all the varying kernel boot parameters like nomodeset, single, acpi=off, acpi=ht, removing rhgb and quiet, etc. Nothing worked. I had no way to boot into Fedora 19 for quite a few weeks. Fortunately, I had backed up the entire ESP partition immediately after trying the grub2-efi installation and creating the grub configuration file, before reverting to GRUB, which left me with a copy of "grub.cfg" containing the old Fedora 18 kernel parameters. So, I mounted the Fedora 19 ISO image file in Mac OS X, copied the EFI, mach_kernel and System structure from the mounted ANACONDA partition to the Linux ESP (or you can test this on a USB stick formatted with a "Mac OS Extended" partition) and replaced its "grub.cfg" with the backed-up "grub.cfg". I booted with this Linux ESP (or USB partition) and replaced the old kernel number with the new kernel number in editing mode. And, voilà, it booted after some painful weeks of silence. So I figured out that it was the absence of GRUB2 that caused this video-related problem.
This write-up is for updating GRUB to GRUB2 specifically on Mac machines with 64-bit EFI and for 64-bit Fedoras. It also serves the purpose of rectifying a corrupted GRUB2 in your Fedora on a Mac machine. This write-up is based on the following posts: The directory structure of the Linux ESP, a Mac OS Extended (not journaled) partition, on Mac machines consists of a file called "mach_kernel" and two directories, "EFI" and "System". It is the files inside System → Library → CoreServices, together with "mach_kernel", that take care of the Mac Boot Manager, Mac OS X (or Boot Camp Microsoft Windows) System Preferences → Startup Disk, EFI bootloading, etc. The EFI directory has BOOT, fedora and redhat subdirectories. GRUB seems to use redhat, and GRUB2 uses fedora. BOOT has files used as a fallback during booting. Install GRUB2 for EFI. This creates the necessary files under /boot/efi/EFI/fedora and updates the /boot/efi/EFI/BOOT files. $ sudo yum install grub2-efi shim Create the GRUB configuration file "grub.cfg". $ sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg Now create a link to this file in System → Library → CoreServices. $ sudo ln -s /boot/efi/EFI/fedora/grub.cfg /boot/efi/System/Library/CoreServices/grub.cfg That is it. Make sure that inside the System → Library → CoreServices directory you have a grubx64.efi file, a grub.cfg link, a SystemVersion.plist file and an optional boot.efi file (I'm not sure whether it's required or not; test it on your own). Remember, inside this ESP partition a simple copy/paste (the cp command) always seems to work. If you're an advanced user and have trouble with EFI-GRUB booting, don't hesitate to play around with the files created by the grub2-efi command and those from the aforementioned ANACONDA partition. If you're still having problems, try this detailed step-by-step process for updating GRUB on EFI systems. Thanks for the read and please leave comments 🙂
OPCFW_CODE
The art of snake charming
Snake charming works despite the fact that snakes are deaf: the snake perceives the movement of the instrument as a possible threat and sways to follow it. Training starts early, with toddlers getting to grips with the art and growing up surrounded by serpents; as one charmer put it, "the training begins at two." Strictly speaking, snake charming is the practice of pretending to hypnotise a snake by playing an instrument, and it has long been a favourite subject in art: giclée prints, photographs and framed artworks of charmers are sold widely, and Rousseau, who was self-taught, began painting late in life and travelled very little, painted most of his jungles in the natural history museum. Charmers turn up in popular culture too, as when the Quests first meet Hadji in India, charming a cobra, in the episode "Calcutta Adventure", or in anime, where Orochimaru lets out a giant snake that slithers toward his opponent and rams them. From The Art of Charm, Jason McCarthy even teaches problem-solving and rapport-building lessons around "charming the snake," a way of life for Green Berets that works in war, business, and love.
In the past, snake charmers were considered an Indian icon, without which no tourist brochure was complete. They were highly revered and admired by tourists and locals alike, and thousands of street performers made a decent living from the profession. Snakes have long been popular subjects of Hindu art, and ancient Egypt was home to one early form of snake charming; the practice as it exists today probably originated in India, where many snake charmers live a wandering life, typically walking the streets holding their serpents. Hundreds of snake-charming stock images, prints and even clothing designs remain available from art and photo services.
Today, snake charming is a rare, dying art form and one of Singapore's old-time trades; historic Indian snake-charming families still practise it there and can be hired for events. Of course, snake charming is dangerous, especially when truly venomous snakes are used. The theme reaches deep into history: Evans, without inquiring further into their function, suspected that the Minoan "snake goddess" figures depict snake-charming priestesses. The mystique survives in music as well, in one highly evocative concert work that invokes the air of exotic mystery surrounding the art with an eastern scale, colourful percussion writing, aleatoric techniques, and an optional loud or soft ending. On Wikipedia, "Snake charming" has been listed as a level-5 vital article in art and is currently rated C-class.
OPCFW_CODE
Readers recommend the best products Remotely Control User Desktops— for Free Simple, repetitive tasks such as adjusting a user's screen resolution or fixing the "my printer isn't working" problem can be quite time-consuming, especially when you have to physically visit the desktop system to solve the problem. Remote-access solutions let you administer user desktops from a central location, but many of those solutions require you to install client software on each system before you can administer it remotely. Peter Vogel, a network and systems administrator from Vancouver, British Columbia, faced a barrage of tedious tasks but found a solution that didn't require him to install software on the client, saving him heaps of time. Vogel got his hands on Gensortium's GenControl and quickly reaped the benefits. "Within seconds of installing GenControl, I could deal with maintenance issues on the other side of the building. I simply installed the product on the administrative machine and was immediately presented with all workstations in the domain. I just clicked on the client machine, and I was in." GenControl lets administrators perform maintenance tasks from the comfort of their own computers. When connecting to a client machine, the product drops a small client file onto the remote machine and automatically removes the file at the end of the session. GenControl is a free download. Save Time with Server Migration Today's fast-paced work environment continually introduces new challenges to our work routine and requires us to do more in the same window of time. In no area is this more obvious than IT. Dave Romesburg, a senior infrastructure management analyst for a large Pittsburgh data center, has in the past spent a lot of time and money streaming servers between physical and virtual machines (VMs). But with PlateSpin Power-Convert installed in his infrastructure, he's seen this otherwise costly, lengthy undertaking become a quick, seamless task. 
"I can take an existing server, boot up using PowerConvert, and the product will take a full server image and transform it to a virtual image on VMware ESX Server or Microsoft Virtual Server. Additionally, I can take an existing server, specify hardware and OS settings, and PowerConvert will take care of the configuration changes and migrate the server to the new hardware." PowerConvert also lets you store server images, then push them out to a new physical server or VM by dragging the image from the PowerConvert repository and dropping it onto the new server. Additional PowerConvert capabilities include server consolidation, disaster recovery, automated test lab deployment, and data center optimization. Free All-in-One Communications Tool With all the types of Internet communications in use today, wouldn't it be nice to have one solution that provides music streaming, photo sharing, file sharing, and remote access to your desktop? These are just a few of the features that Veselin Kulov of Sofia, Bulgaria, has enjoyed while using Qnext. In addition to the features just mentioned, Qnext includes universal IM, audio chat, videoconferencing, file transfer, online gaming, group text chat, and Internet Relay Chat. "Best of all, Qnext is free," says Kulov. "I like that I can listen to my home music collection while at work without downloading any files. I have also saved hours when transferring large files and cut down collaboration time with the product's built-in videoconferencing." Kulov notes that Qnext is very secure. The product uses what it calls Zones, which gives you control over who has access to your photo albums, who can share files with you, and who can chat by letting you assign permissions to specific individuals. Qnext works not only on Windows XP, Windows Me, and Windows 2000 but also on Mac OS X and Linux.
OPCFW_CODE
Hey techies, are you planning to start automation for an application? Then do not pick a tool at random and start testing with it. If you later discover that the tool you selected is not feasible, you will have lost precious time and money. There is a better way: a feasibility study of the tools, using the checklist below. The checklist for any automation tool:
- The nature of the application: First, we need to know what kind of application we want to automate: is it web, mobile, desktop or API? If there is more than one application, we need to pick a tool that supports all of them.
- Client requirement: There are two types of clients in the industry. 1) One who has good knowledge of testing and automation tools; he can quickly identify the right tool and programming language to use, so you can go directly to checklist item 4 and do the feasibility study. 2) One who does not have any knowledge of automation and asks you to pick the right tool. While choosing the tool, you should also consider the client's financial budget. A client with a very limited budget usually opts for open-source tools (like Selenium, Katalon Studio, Cypress, WebdriverIO); if the client does not have a tight budget, we can suggest the best commercial tool (like QTP or TestComplete).
- Study of the technology the application is built with: Web applications built with technologies like PHP or React have pages with common attributes like id, name, and value on HTML elements for locating them. However, pages built with Angular extend HTML and assign new attributes called directives, which create dynamic HTML content, so tools other than Protractor cannot identify those elements; in that case we use the Protractor tool.
- Locators identification: Open the application and check if you are able to locate all the elements through the browser's inspect-element option; if you are not able to identify the locators, then the application is not feasible for automation.
- Tool selection: Before opting for an open-source tool, make sure there is an active supporting community and enough blogs, so that any queries about the tool can be answered and resolved quickly. If it is a commercial tool, make sure the company provides support round the clock.
- Browser compatibility: Some tools do not support all browsers, so before picking a tool, check that it supports the browsers you require.
- Integration support: You should also check that the tool integrates with continuous-integration tools (like Jenkins) and version control (like Git). If it is an open-source tool, check that your framework is compatible with reporting tools (like Extent Reports) and bug-tracking tools (like Jira or HP ALM).
- Do a sample test: Pick any two simple test cases and try to automate them with the tool; if that works fine, go ahead and start the automation.
I hope you have got a good insight into the checklist for any automation tool. Please feel free to send your queries to [email protected]
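The locator-identification step above can even be partly scripted. This sketch uses only Python's standard library to count elements in a page snippet that expose an id or name attribute (easy to locate) versus those with neither; the HTML and class name here are made up for illustration, not tied to any particular tool:

```python
from html.parser import HTMLParser

class LocatorAudit(HTMLParser):
    """Count elements carrying id/name attributes (easy to locate)
    versus elements with neither (harder to automate)."""
    def __init__(self):
        super().__init__()
        self.locatable = 0
        self.unlocatable = 0

    def handle_starttag(self, tag, attrs):
        names = {k for k, _ in attrs}
        if names & {"id", "name"}:
            self.locatable += 1
        else:
            self.unlocatable += 1

page = ('<form id="login"><input name="user"><input name="pass">'
        '<div><span>hint</span></div></form>')
audit = LocatorAudit()
audit.feed(page)
# form, input, input carry id/name; div and span do not.
```

A page where most interactive elements land in the "unlocatable" bucket is a warning sign that automation will need fragile XPath or CSS locators.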
OPCFW_CODE
How can I tell if my mini computer is dying or if it is just the fan? I have a small Zotac mini computer, one of those computers that comes in a small box like a book. Everything is crammed in there in a tight space. The computer is making some scary noises. How can I tell if it is just the fan going bad or the hard drive is dying? Is your computer running slowly and/or freezing up? @juniorRubyist No, just a lot of bearing noise, but I don't know if it's the bearings of the fan or the bearings of the hard drive. You might be in luck. The fans are probably jammed with dust or whatnot, otherwise just going bad. Do check the S.M.A.R.T. status on the drive, though, just to be sure. You have backups, right? :) If you have an SSD in there, it's the case fans. If you don't have an SSD... consider one. BTW, mini PC would be the correct term for these Zotacs. Minicomputers were the size of refrigerators when they came out. They were only "mini" when compared to the mainframes of the time. If your hard drive is dying, you will hear repetitive clicking or buzzing noises, and the computer may occasionally freeze up, getting worse as it dies. You can also tell that your hard drive is dying by extremely slow transfer rates. You can try checking the S.M.A.R.T. status of your drive (almost like the "Check Engine" light on a car) by using the Command Prompt (or PowerShell) with the wmic utility in Windows. Corrupt files can also be a warning sign of drive failure. Linus Tech Tips does a good job explaining all sorts of hard drive issues. If you determine that your hard drive is dying, stop using it immediately and go buy yourself an external drive (1 TB+) to back up your data ASAP. Try not to rock the computer or drive around, to prevent further breakage of your drive. If your fan is broken, no big deal; just replace it. You could try opening up the case and watching the fans spin up to watch for any issues.
If you have a solid-state drive, then any noises would be from the fan (but you said you have a hard drive, so...). You can open it up and disconnect the fan (or otherwise stop it from spinning). If the noise stops, the problem was the fan. Otherwise, try disconnecting the power to the HDD to confirm that it's the hard drive. And obviously, if the fan is already running because of the heat and you stop it, you might damage the rest of the computer. @pipe running the computer without a fan for a short period of time will not harm anything. But stopping it from spinning is not that good of an idea. @pipe Assuming it has an Intel CPU, the CPU has overheat protection and running without a fan will not cause permanent damage to it; it will just slow down. Over very extended periods the additional heat would put stress on the components, but you are talking months or years. Run your computer for as little time as possible without the fan; it's there for a reason, and doesn't just cool the CPU in many computers. Running your computer for even a relatively short time without the fan can destroy the machine. @wizzwizz4 - Care to evidence your assertion about it destroying the machine? What components (other than the CPU, and presumably the built-in GPU) do you assert are that heat-sensitive, and why is there this built-in time bomb on the huge number of computers which do not detect the fan speed, and why do you get so many fanless mini PCs? @davidgo It can destroy the machine. Not definitely, but some machines can't deal with the loss of the fan and will keep running until wires melt. This is especially a problem if the computer has a built-in PSU. @wizzwizz4 yes, I hear your assertion. Can you evidence this happening in any non-ancient system (i.e. one where the CPU and any discrete GPU do throttle on excess heat) in a non-extreme environment? Separately, it's also worth noting that these tiny systems are typically built from laptop parts with an external PSU.
The "laptop parts" bit also implies a much lower TDP, so less heat buildup than the dual 120 W TDP systems with 3.5" high-speed disks of old and a 400 W GPU. Your problem can easily be diagnosed by observing the computer's behavior. If you do not face any undesired halt or freeze and the computing speed is, more or less, what you are accustomed to, the problem is the fan, which should be replaced. To be more confident, simply launch a detailed HD check or some defragmenting program. You will be able both to understand more precisely the origin of the jerky sound and, in the case of HD failure, to find a vast number of errors on the disk. If many, many errors are not found, change the fan.
STACK_EXCHANGE
Should I Choose Server Core When Installing Windows Server? Starting in Windows Server 2012, Server Core is the default install option. As it makes initial configuration easier, it's tempting to opt for the full GUI install of Windows Server instead of Server Core, but Server Core is the default choice for a reason. Among the benefits, Server Core has a smaller footprint, a reduced attack surface, and it lowers the frequency with which reboots are needed after applying Windows updates. Today I'll go into the reasons why you should stick to the default Server Core install option in Windows Server 2012. Server Core Compatibility Microsoft announced compatibility for more applications in Windows Server Core 2012. Nevertheless, there will still be applications that cannot run on Server Core. While some Microsoft server-based applications such as SQL Server 2012 are now compatible with Server Core, you should check the requirements for any applications that you plan to install on Server Core. Exchange 2013 and SharePoint Server 2013 are not compatible with Windows Server Core 2012. Server Core Is More Secure The work undertaken by Microsoft in the development of MinWin and Windows 8 allowed, for the first time, componentization of the operating system, untangling complex dependencies which had previously necessitated installing the entire code base, even if only a subset of the OS's features were being used. The ability to separate components and load them as required led to Windows Server Core: a bare-minimum install of the server OS managed from the command line. One benefit of this approach is the reduced attack surface. Windows Server Core is less vulnerable to attack because there is less code that could be exploited. If this is not reason enough to consider Server Core, the reduced complexity means simpler patching and less need for reboots after updates are applied.
Performance is also improved because without a GUI and other unnecessary components, there is less overhead. Legacy Desktop Environment and Risks The Modern UI interface has been designed from the ground up to be secure. Apps cannot interact with each other or the operating system, and they require your permission before they can access user data or hardware. The desktop, on the other hand, was designed in an era where usability came before security. Although it is possible to secure the desktop, it requires additional features such as AppLocker, and adhering to other security best practices, to ensure that the risk is minimized. For example, it's much easier on the desktop for a user to run an unauthorized application or piece of code. That's not to say that Server Core is immune to malware, but if you follow best practices and use PowerShell with constrained endpoints for administrative purposes, Server Core is still less vulnerable than its GUI cousin. PowerShell constrained endpoints, and adhering to other security best practices, provide the most secure way to manage Server Core and the full GUI version of Windows Server. PowerShell Remoting can be configured to give IT staff access to only the management features and commands needed for the job, and combined with least privilege security, is more secure than managing Windows Server using Remote Desktop or the Remote Server Administration Tools (RSAT). Switching Between the GUI and Server Core Prior to Windows Server 2012, opting for Server Core or the full GUI was a one-time-only decision and couldn't be reversed. This perhaps made system administrators nervous about choosing Server Core at install time, partly because they weren't familiar with command-line management and Server Core configuration, giving rise to concerns about how to support the server in the event of a problem.
Windows Server 2012 (and later) provides administrators with the option to switch between Server Core and the full GUI version if required. A simple command can be used to make the change, and after the server reboots, the GUI will be restored or removed as specified. Get More Out of Virtualization Due to the improved performance and reduced overhead of Server Core, you can run more virtual instances of Windows Server on VMware or Hyper-V than might be possible with the GUI version. This allows organizations to make better use of servers in datacenters and squeeze more resources out of each physical server. Windows Server Without Windows! While Server Core will make some system administrators nervous about the management challenges and learning curve involved, Server Core can bring many benefits, not least including improved security and lower operational overhead. On that basis alone, wherever possible, you should stick to the default Server Core install option in Windows Server 2012 to get the most benefit from the advances Microsoft has made in performance and security, which will help reduce the operational and management costs of Windows Server.
OPCFW_CODE
How to reasonably fight a "that will do" attitude at work (when it turns out it clearly won't do)? At the company I currently work for (mind you, as a junior), we (as in the IT team) are responsible for both the hardware (building the equipment from provided parts) and software (developing applications) parts of the job. However, there is a "that will do" attitude among the highest management, whether it comes to hardware (god-knows-what parts swept from under the deepest shelves in the warehouse) or software ("How long will it take to make it properly? 2 weeks? You have 2 days, just ignore all the possible errors and application crashes"). When the solutions inevitably fail, the blame is shifted onto us, which is why I started keeping both a paper and an electronic trail of all my recommendations, warnings and doubts, making sure everyone concerned is informed ahead of time. This, to put it bluntly, shuts their mouths for the duration of the current project, but come the next one we begin all over again. Point is, I really like my current team and other coworkers; it's the higher management that I consider to be the problem. Given that, I decided to try to improve the working culture instead of just writing "Effective xx-xx-xx, I hereby resign from...". Are there any concrete arguments I could use to open the eyes of the "low cost is everything" people? There is one: find a new job, because they won't change; they will just blame you, that's all. +1 for starting to keep documentation. Eventually you'll be able to present a pattern down the line, when people are more willing to take you seriously. For the moment you'd best work on building their trust, so that's gonna take a while. This is the completely normal situation in all software (or hardware-software) companies. Just get a new job to get a raise. Don't complain - move jobs. @Fattie where in my post am I complaining about my salary? hi @Yuropoor - you are not, but I know you need a big raise. Enjoy it!
From your description of the situation, I infer that you do not have a lot of clout. As a junior IT person in this organization, I recommend that you start by talking to some of your other coworkers to see if they view the situation the same way. They may have some perspective or experience that is helpful here. Identifying an issue is only part of the challenge. In addition, you will need to propose an alternative. Those members of your team you like can help you hone your ideas and to see if they seem feasible. I suggest that you start with something small that will make a noticeable impact. Ideally, implementing this idea will form evidence against the "low cost is everything" belief you are trying to change, but your primary concern should be learning how to make beneficial changes in this culture. Once you've persuaded your colleagues that there is an issue and there is a better way, your best chance of success lies in finding an ally who does have some influence and credibility with the decision maker. For your first attempt, it may be your manager. Then, convince your ally to raise this concern. The most senior member of the team might be a good choice here. You want the person who has the respect and ear of the person you are targeting. If your idea gets implemented and is successful, your influence will grow. People will remember that you were the one with the good idea. Over time, you can begin to propose solutions to larger issues. As you gain influence, you will be able to raise issues yourself. (In parallel, I recommend you start getting to know members of senior leadership, so you can approach them directly and accelerate this process.) If this sounds like it will take a long time and that it will be a lot of work, that's because it will be. Many people choose to change companies or departments, because it is often less effort than spearheading large changes. Note also that the above description describes the happy path. 
There will likely be some false starts, and some proposals that don't pan out. Make sure you find people who can help you remain encouraged and push through temporary setbacks. My colleagues share my viewpoint and they are on my side, lower management included. However, it's a "we tried and failed" sort of deal now, but they said that if I come up with some reasonable solution, I have their backing in this matter. Since cost is everything they care about, you need to speak their language to make yourself heard. To achieve that, keep up your documenting effort, but add costs, and let the mathematics do its magic. Example: you had to rush 2 weeks' worth of work into 2 days, which is a net "benefit" of 8 man-days. When the solution crashes, take 15 minutes to estimate the failure's cost, which will probably be much more than those 8 man-days, and do the math. Once you have all that data, phrase it to avoid pointing fingers, and provide solutions, as @neontapir suggested. The objective is to make them realize that going cheap is NOT cheap at all.
M: Show HN: My side project - SnapRobot - daverecycles http://www.snaprobot.com/news.ycombinator.com R: daverecycles I find myself taking screenshots of websites all the time. Pages that I find visually interesting, because I've had far too many experiences where I went back and it looks like a completely different website. Or sometimes to remember information that is shown on a website for only a short period of time, like holiday layouts and fun coming soon pages. And I even take screenshots of my own webpages so I can look back at it later. I know there's archive.org, but they take too long to archive content when all I want is a screenshot of the recent past of a webpage. So I created SnapRobot as a side project. It was built in less than a day. You feed it a URL and leave it. SnapRobot monitors the page, taking screenshots whenever changes occur. You can come back whenever you want and relive pages of the past. :) In the example, you can see how the top Hacker News items evolve over the course of a day. Currently, any HTML change triggers a new screenshot so it works better for websites that don't dynamically generate different code on every request. No query strings for now. R: Ogre I am disappointed that entering <http://snaprobot.com> only gives an error page. I was hoping for an infinitely recursive snapshot. Edit: tried it again, and I DID get a snapshot of snaprobot displaying the BBC site, which is its default page. I guess the error was an actual error (it was an indexing disallowed error). Kudos! R: johnrob I assumed the large numbers were dates, when they are actually times (13 = 1 pm, not the 13th). I'd either use 'am/pm' or add ':00' to each time to make this more apparent. Large numbers inside boxes is a similar pattern to calendars, which is why this can be confusing. R: daverecycles Right now they're actually just numbered in sequence (1, 2, 3). I have been playing around with it. Sorry for the confusion, I'll work on it! R: kapitalx Looks neat. 
You might need a threshold in changes made. Seems like most websites have small irrelevant things that change on every visit. Like number of comments on techcrunch, or a timestamp in the code, resulting in a screenshot on every try. R: daverecycles Thanks. Definitely that is a very important improvement to make. R: _grrr Nice. Something's up with www.google.com though [http://www.snaprobot.com/www.google.com?&sl=0&d=1](http://www.snaprobot.com/www.google.com?&sl=0&d=1) R: daverecycles Yeah, I know... sorry. The app passes an extra parameter for cache busting and Google doesn't like it. It will be fixed in an update. R: 9oliYQjP This would be incredibly useful for publicly traded corps to help prove that they made information public at a specific time (i.e., disclosure requirements were met). R: ssing Pretty neat and useful. May be you have already thought about it but I felt the need to forward/share that screen shot with my friends. R: daverecycles Thanks. Will look into making it easier to share - perhaps permalink short URLs? R: Travis This is pretty cool. Very well laid out and aesthetically pleasing for a 1 day project. Congrats! R: daverecycles Thanks! R: mikelbring That is pretty neat. Would it be giving it away if I asked how you generated the screenshots? R: zalew f.ex. like this python webkit2png.py -x -o hn.png <http://news.ycombinator.com/> <http://www.alexezell.com/code/webkit2png.txt> R: uggedal More up to date version of webkit2png: <https://github.com/AdamN/python- webkit2png> There's also an OS X version available: <https://github.com/paulhammond/webkit2png/>
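The change-threshold idea raised in the thread could be sketched roughly like this (a hypothetical heuristic, not SnapRobot's actual implementation; the function and page strings below are invented for illustration): treat one minus the similarity ratio of the old and new HTML as the fraction of the page that changed, and only take a new screenshot when it exceeds a threshold, so that comment counters and timestamps don't trigger captures.

```python
import difflib

def significantly_changed(old_html: str, new_html: str, threshold: float = 0.05) -> bool:
    """Return True when more than `threshold` of the page text differs.

    Toy heuristic: 1 - SequenceMatcher similarity ratio is used as the
    fraction changed, so tiny counter/timestamp churn stays below it.
    """
    similarity = difflib.SequenceMatcher(None, old_html, new_html).ratio()
    return (1.0 - similarity) > threshold

# Hypothetical page snapshots to exercise the heuristic.
page = "<html><body><h1>News</h1><p>comments: 10</p></body></html>"
minor = page.replace("10", "11")                       # counter churn only
major = page.replace("News", "A completely new headline")
```

A real crawler would likely also strip volatile markup (query strings, inline timestamps) before comparing, but the thresholding alone already filters most single-character churn.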
Enable USE_MATHJAX = TRUE First of all, thank you for this repository! I've looked at everything under the sun for something that would actually handle Doxygen documentation for highly-templated code, only to run into confusing quirks at best, or even flat-out erroneous output, from several other workflows. m.css, together with poxy, gets it perfect! I just wanted to ask/request how one might be able to insert math equations into the markdown files. As far as I understand, this requires the USE_MATHJAX option, which is false by default, and poxy (as stated in the README) does not allow configs to be changed manually. Would such an option being allowed in poxy.toml cause any issues with the current locked-down Doxygen configuration? Thanks again in advance! Hiya, Would such an option being allowed in poxy.toml cause any issues with the current locked-down Doxygen configuration? I suspect it would be fine. My choice to lock down the config was mostly just to limit the chaos that doxygen can actually cause, since their testing process is lacking and there are always many regressions from one version to the next. I don't have strong opinions on most of the available options - I just disabled things I personally didn't want when I started wrapping things up in Poxy as a starting point :) It only depends on whether m.css supports this particular thing, since poxy doesn't do much in the way of HTML rendering (only minor bugfixes and small features using pre- and post-processes). Ah ok, so m.css provides support for math formulas, but it does not appear to be based on Doxygen's USE_MATHJAX option, since that requires Doxygen's GENERATE_HTML option, which isn't used here (m.css handles HTML rendering itself using doxygen's XML output mode instead).
Have a look at the math-related information in the m.css Doxygen generator documentation; you might find this already works without needing any extra work :) You are right, and @f$ x+y @f$ works out of the box. The irony of not having read the documentation :sweat_smile: thanks! Haha, yes, it's built in and should "just work", if you have a LaTeX installation set up. Some background info -- one of the reasons I started making the m.css theme in the first place was because I hated how math equations looked on most websites, either being blurry / aliased raster images, or flickering and causing page relayouts because they were rendered by some heavy JavaScript on the client side. So it's instead rendered into an SVG which is then embedded directly into the markup. Here's a "marketing page" for it back from 2018; most / all features shown there, including the window-size-dependent layout, are possible with Doxygen and the custom @m_class command as well: https://mcss.mosra.cz/admire/math/ One follow-up on this: is it possible to use the markdown math syntax (i.e. enclosing $ or ```math blocks) and still use the m.css functionality? I ask with the aim of using a README.md both as a GitHub front page, and the front page of the poxy-generated documentation. If Doxygen itself supports that (which is a GitHub Flavored Markdown extension, I think) and correctly passes it through to the generated XML, m.css will as well. There was a new 1.11 release yesterday, but I can't see anything latex-math-specific in the changelogs. Maybe it's already there, I never really tried. (Also no idea how crashy or buggy it is, I didn't even manage to get 1.9 running properly for my code yet.)
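As a concrete illustration, here is roughly what the inline @f$ syntax confirmed above looks like inside a markdown page fed to Doxygen/poxy (the formulas themselves are placeholders, and @f[ ... @f] is Doxygen's display-math variant of the same command family):

```markdown
A page can mix prose with inline math such as @f$ e^{i\pi} + 1 = 0 @f$,
or a display block:

@f[
    \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}
@f]
```

With m.css these get rendered server-side into SVG rather than via client-side MathJax, which is why the USE_MATHJAX Doxygen option never needs to be enabled.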
Microsoft Compliance Manager (preview) Compliance Manager isn't available in Office 365 operated by 21Vianet, Office 365 Germany, Office 365 U.S. Government Community High (GCC High), or Office 365 Department of Defense. In this article: Read this article to learn what Compliance Manager is and understand its main components. Learn about updates: Visit the Compliance Manager release notes to see what's new and known issues. What is Compliance Manager Microsoft Compliance Manager (preview) is a free workflow-based risk assessment tool in the Microsoft Service Trust Portal for managing regulatory compliance activities related to Microsoft cloud services. Part of your Microsoft 365, Office 365, or Azure Active Directory subscription, Compliance Manager helps you manage regulatory compliance within the shared responsibility model for Microsoft cloud services. With Compliance Manager, your organization can: - Combine detailed compliance information Microsoft provided to auditors and regulators about its cloud services with your compliance self-assessment for standards and regulations applicable for your organization. These include standards and regulations outlined by the International Organization for Standardization (ISO), the National Institute of Standards and Technology (NIST), the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR), and many others. - Enable you to assign, track, and record compliance and assessment-related activities, which can help your organization cross team barriers to achieve your compliance goals. - Provide a Compliance Score to help you track your progress and prioritize auditing controls that help reduce your organization's exposure to risk. - Provide a secure repository for you to upload and manage evidence and other artifacts related to your compliance activities. 
- Produce richly detailed Microsoft Excel reports that document compliance activities performed by Microsoft and your organization for auditors, regulators, and other compliance reviewers. Recommendations from Compliance Score and Compliance Manager should not be interpreted as a guarantee of compliance. It is up to you to evaluate and validate the effectiveness of customer controls per your regulatory environment. These services are currently in preview and subject to the terms and conditions in the Online Services Terms. See also Microsoft 365 licensing guidance for security and compliance. Relationship to Compliance Score Microsoft Compliance Score (preview) is a feature in the Microsoft 365 compliance center that provides a top-level view into your organization's compliance posture. It calculates a risk-based score measuring your progress in completing actions that help reduce risks around data protection and regulatory standards. Knowing your overall compliance score helps your organization understand and manage compliance. Understand how your compliance score is calculated. Compliance Manager shares the same backend with Compliance Score. During the public preview phase for both tools, Compliance Manager is where you'll manage your custom control implementations. Learn more about the relationship between Compliance Score and Compliance Manager. Compliance Manager components Compliance Manager uses several components to help you with your compliance management activities. These components work together to provide a complete management workflow and hassle-free compliance reports for auditors. The diagram shows the relationships between the primary components of Compliance Manager: Groups are containers that allow you to organize Assessments and share common information and workflow tasks between Assessments that have the same or related customer-managed controls.
When two different Assessments in the same group share a customer-managed control, the completion of implementation details, testing, and status for the control automatically synchronize to the same control in any other Assessment in the Group. This unifies the assigned Action Items for each control across the group and reduces duplicated work. You can also choose to use groups to organize Assessments by year, area, compliance standard, or other groupings to help organize your compliance work. Assessments are containers that allow you to organize controls based on responsibilities shared between Microsoft and your organization for assessing cloud service security and compliance risks. Assessments help you implement data protection safeguards specified by a compliance standard and applicable data protection standards, regulations, or laws. They help you discern your data protection and compliance posture against the selected industry standard for the selected Microsoft cloud service. Assessments are completed by the implementation of controls included in the Assessment that map to a certification standard. By default, Compliance Manager creates the following Assessments for your organization: - Office 365 ISO 27001 - Office 365 NIST 800-53 - Office 365 GDPR Assessments have several components: - In-Scope Services: Each assessment applies to a specific set of Microsoft services. - Microsoft-managed controls: For each cloud service, Microsoft implements and manages a set of compliance controls for applicable standards and regulations. - Customer-managed controls: These controls are implemented by your organization when you take actions for each control. - Assessment Score: The percentage of the total possible score for customer-managed controls in the Assessment. This helps you track the implementation of the Actions assigned to each control. Controls are compliance process containers in Compliance Manager that define how you manage compliance activities.
These controls are organized into control families that align with the Assessment structure for corresponding certifications or regulations. - Control ID: The name of the selected control from the corresponding certification or regulation. - Control Title: The title for the Control ID from the corresponding certification or regulation. - Article ID: This field is only for GDPR assessments and specifies the corresponding GDPR article number. - Description: Text of the control from the corresponding certification or regulation. Due to copyright restrictions, a link to relevant information is listed for ISO standards. There are three types of controls in Compliance Manager: Microsoft-managed controls, customer-managed controls, and shared management controls. For each cloud service, Microsoft implements and manages a set of controls as part of Microsoft's compliance with various standards and regulations. Each control provides details about how Microsoft implemented the control, and how and when that implementation was tested and validated by Microsoft and/or by an independent third-party auditor. Customer-managed controls are managed by your organization. Your organization is responsible for customer-managed control implementation as part of your compliance process for a given standard or regulation. Customer-managed controls are organized into control families for the corresponding certification or regulation. Use the customer-managed controls to implement the recommended actions suggested by Microsoft as part of your compliance activities. Your organization can use the prescriptive guidance and recommended customer actions in each customer-managed control to manage the implementation and assessment process for that control. Customer-managed controls in Assessments also have built-in workflow management functionality that you can use to manage and track your progress towards Assessment completion.
With this workflow functionality, you can: - Assign Action Items for each control - Track assigned Action Items - Upload evidence of the implementation of the control - Document the testing and validation of the control - Mark the Action Items as implemented and tested For example, a Compliance Officer in your organization assigns an Action Item to an IT admin with the responsibility and necessary permissions to perform the recommended action. The IT admin uploads evidence of the implementation tasks (screenshots of configuration or policy settings) and assigns the Action Item back to the Compliance Officer when completed. The Compliance Officer evaluates the collected evidence, tests the implementation of the control, and records the implementation date and test results in Compliance Manager. Shared management controls A shared control refers to any control where Microsoft and customers both share responsibilities for implementation. For example, controls related to personnel screening, account and password management, and encryption require actions by both Microsoft and customers. Action Items are included in customer-managed controls as part of the built-in workflow management functionality that you can use to manage and track your progress towards Assessment completion. People in your organization can use Compliance Manager to review the customer-managed controls from all Assessments to which they're assigned. When a user signs in to Compliance Manager and opens the Action Items dashboard, a list of Action Items assigned to them is displayed. Depending on the Compliance Manager role assigned to the user, they can provide implementation or test details, update the Status, or assign Action Items. Certification controls are typically implemented by one person and tested by another. For example, after Action Items initially assigned to one person for implementation are completed, those Action Items are assigned to the next person to test and upload evidence.
Any user with sufficient permissions for control assignments can assign and reassign Action Items. This enables central management of control assignments and decentralized routing of Action Items between implementors and testers. Note that Improvement actions in Compliance Score are the equivalent of Action Items in Compliance Manager. Compliance Manager uses a role-based access control permission model. Only users who are assigned a user role may access Compliance Manager, and the actions allowed by each user are restricted by role type. View a table showing the actions allowed for each permission. The portal admin for Compliance Manager can set permissions for other users within Compliance Manager by following these steps: - From the top More drop-down menu, select Admin, then Settings. - From here, select the role you want to assign, and then add the employee you want to assign to that role. Users will then be able to perform certain actions. Users who are assigned the Global Reader role in Azure Active Directory (Azure AD) have read-only permission to access Compliance Manager. However, they cannot edit data or perform any actions within Compliance Manager. There is no longer a default Guest access role. Each user must be assigned a role in order to access and work within Compliance Manager. Compliance Manager can store evidence of your implementation tasks around testing and validation of customer-managed controls. Evidence includes documents, spreadsheets, screenshots, images, scripts, script output files, and other files. Compliance Manager also automatically receives telemetry and creates an evidence record for Action Items that are integrated with Secure Score. Any data uploaded as evidence into Compliance Manager is stored in the United States on Microsoft Cloud Storage sites. This data is replicated across Azure regions located in Southeast Asia and Western Europe.
Compliance Manager provides pre-configured templates for Assessments and allows you to create customized templates for customer-managed controls for your compliance needs. New templates are created by importing controls information from an Excel file, or you can create a template from a copy of an existing template. The pre-configured templates are: - Brazil General Data Protection Law (LGPD) - California Consumer Privacy Act (CCPA) (preview) - Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) 3.0.1 - Dubai Information Security Resolution (DGISR) - European Union GDPR - Federal Financial Institutions Examination Council (FFIEC) Information Security Booklet - FedRAMP Moderate - HIPAA / HITECH - IRAP / Australian Government ISM (preview) - ISO 27001:2013 - ISO 27018:2014 - ISO 27701:2019 - Microsoft 365 Data Protection Baseline - NIST 800-53 Rev. 4 - NIST 800-171 - NIST Cybersecurity Framework (CSF) - SOC 1 - SOC 2 Secure Score integration Compliance Manager is integrated with Microsoft Secure Score to automatically apply Secure Score credit to the Compliance Score for synced Action Items. This is configurable for individual Action Items or all actions globally, and provides updates from Secure Score. For example, you have a security-related requirement for activating Azure Rights Management in your organization that also applies to a compliance-related Action Item. When Azure Rights Management is activated and processed by Secure Score, Compliance Manager receives notification of the update, and the score for the Action Item automatically updates with completion credit. Ready to get started? Start working with Compliance Manager to manage regulatory compliance activities for your organization.
Let's say you have a map with three or four data frames. In each data frame there is the same road layer. Is there a way to ensure that all the symbology/labeling/annotation is the same for that layer across all data frames? So let's say I modify the symbology of the road layer in data frame 1, is it possible to automate that change across all data frames? As you can see, this code doesn't handle group/composite layers (need to recursively search composite layers). And it's an ArcGIS 10 addin button and you are on 9.3. I'm putting code in this answer--it shouldn't be hard to wire up to 9.3 (don't think I used anything new to 10): Maybe a year ago, I was working on an ArcMap button that would keep layers in sync, along the same ideas as dmsnell's manual copy, except that I was working to automate the process by updating layers with layers stored in a subversion repository. In my case, I had layer files in a subversion repository and I was looking to replace any layer in an MXD that was older than the layer file in my subversion repository. I think @dassouki's question is the same except that instead of a layer in a subversion repository, his layer is the one he just modified. And he's only wanting to modify the MXD that he is in, rather than any MXD (although that is probably useful too). I think it's just a matter of removing a layer and adding a new layer (a copy) at the same location. This was work that was low priority for me and I stopped before it was completed, but I think I have some code for the layer operations (swap) buried somewhere. I'll look around and update this answer. The challenge for my solution was to uniquely identify layers (in a way that would persist with the layer)--which I accomplished by using a guid in the layer extension properties. Given the number of end-users I support, using the layer name would be risky since the chance of a layer with different renderers having the same layer name is high.
The amount of our layer modifications decreased, and with the promise of web map services, this all seemed like overkill, so I stopped the work. The only thing I completed was an ArcMap button to modify the layer extension properties. Update: It just occurred to me that I rambled on without actually answering the question of whether it is possible to automate layer changes across data frames. Yes, but not without some custom code. You have to update each data frame manually, as far as I know. You can right-click on a layer and save it as a "Layer" file (.lyr), then right-click on the other data frames and use the "Add Data" tool to find that saved layer file. If you make changes, just re-save that layer to the same .lyr file and delete and re-add it to each data frame. I've been looking for some kind of text-file based rules (such as XML) for describing the symbology and layering, to be able to apply that to a map, where the map would reload that file at least at every program load, so I could just change the file; but so far I haven't been able to find anything like that. I agree with the approach that dmsnell describes, but you can simply copy the layer from one Dataframe to another by a right-click on the layer -> select copy -> right-click the other Dataframe name -> select Paste Layer(s). I have not found any automation script or add-in to take care of this task. There are a few automatic options you can use 1) Cartographic Representation Cartographic Representations add new fields to your geodatabase so that the symbology is stored in the data. This would allow you to make a change to the representation (using editing) so that any change you make will be reflected in all locations where the representation is being used. 2) Views and Visual Specifications Views are available with Production Mapping (previously PLTS) and are stored in Dataframe Properties, and allow you to specify which components of a dataframe are being used from the stored database version of the Dataframe.
Whereas Visual Specifications are SQL select queries used in Production Mapping to assign Cartographic Representations based on attribute fields. These are just some ideas; I will edit this and add more as I think of them, but based on your use case I would say Carto Reps are the way to go. They will need to be in either a File Geodatabase or SDE. Have Fun, CDB
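For completeness, a rough sketch of what the "custom code" route could look like in an ArcGIS 10 Python session (untested here, since it needs ArcGIS Desktop; the layer name "Roads" is just a placeholder): take one data frame's copy of the layer as the symbology master and push it to the matching layers in every other data frame with arcpy.mapping.UpdateLayer.

```python
import arcpy

# The currently open map document and its data frames.
mxd = arcpy.mapping.MapDocument("CURRENT")
frames = arcpy.mapping.ListDataFrames(mxd)

# Treat the first data frame's "Roads" layer as the symbology source.
source = arcpy.mapping.ListLayers(mxd, "Roads", frames[0])[0]

for df in frames[1:]:
    for target in arcpy.mapping.ListLayers(mxd, "Roads", df):
        # symbology_only=True copies renderer/labels without
        # touching the target layer's data source.
        arcpy.mapping.UpdateLayer(df, target, source, True)

arcpy.RefreshActiveView()
```

As the first answer notes, matching layers by name is fragile; a more robust version would match on something persistent such as a GUID stored in the layer's extension properties, and would recurse into group layers.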
I see your point, but with the way the weather has been and the fact that we live in Wisconsin I feel that they would probably be safely blown away by the next war. While I'd be ok with glue domes, it's the fact that they'd be on the ground for future wars that bothers me. Just sayin' Awno 2 (awesome Wisconsin Nerf Out) New Host Posted 01 February 2011 - 06:35 PM Posted 01 February 2011 - 07:00 PM edit: Mini marshmallows work great at room temperature, very poorly in cold weather. They stop feeding through hoppers at least. Maybe RSCBs would still work? They did continue to work fine singled, but that largely defeats the point of marshmallows. Edited by KaneTheMediocre, 01 February 2011 - 07:22 PM. My Half-Baked MHA Site Posted 01 February 2011 - 07:19 PM Well I look forward to seeing you there! I think we all miss nerfing....Damn cold... If it looks like it will be an unusually warm day in February, I might go to this. I miss nerf, but I hate cold a LOT. DART RULE CHANGE: I AM OFFICIALLY ALLOWING GLUE DOMED DARTS. 3/0 IS THE HEAVIEST ALLOWED WEIGHT. I will also hopefully have a few hundred darts for a community dart bucket. Edited by Edible Autopsy, 03 February 2011 - 12:01 AM. Posted 05 February 2011 - 12:23 PM The war is in exactly one week. Get ready! Edited by Edible Autopsy, 10 February 2011 - 07:59 PM. Posted 10 February 2011 - 08:00 PM The snow is pretty deep at the park, but tolerable. Probably a solid 6 inches deep. I'll be bringing a couple of shovels to try and shovel a few paths out at least, and would very much appreciate help. Snowpants are HEAVILY recommended, along with boots. Be my guest and wear neither, but please don't whine to anybody when you're frozen. It should be a great time nonetheless. Snow mobstacles anyone? The weather looks pretty good also, nearly 40 degrees and sunny. Hope to see everybody there still. Posted 10 February 2011 - 09:39 PM Onto the anvil of war! He parkours over flat land while yelling tactical orders at no one.
Demon Lord is the most legit ballsy nerfer of all time. Posted 11 February 2011 - 09:35 PM Posted 11 February 2011 - 09:37 PM Well, hope to see him there. My friend's dad has work because of overtime and he can't come, but he might come for a little bit before his dad has to go to work.
Tor is a system intended to enable online anonymity, composed of client software and a network of servers which can hide information about users' locations and other factors which might identify them. Imagine a message being wrapped in several layers of protection: every server needs to take off one layer, thereby immediately deleting the sender information of the previous server. Use of this system makes it more difficult to trace internet traffic to the user, including visits to Web sites, online posts, instant messages, and other communication forms. It is intended to protect users' personal freedom, privacy, and ability to conduct confidential business, by keeping their internet activities from being monitored. The software is open-source and the network is free of charge to use. Like all current low latency anonymity networks, Tor cannot and does not attempt to protect against monitoring of traffic at the boundaries of the Tor network, i.e., the traffic entering and exiting the network. While Tor does provide protection against traffic analysis, it cannot prevent traffic confirmation (also called end-to-end correlation). Caution: As Tor does not, and by design cannot, encrypt the traffic between an exit node and the target server, any exit node is in a position to capture any traffic passing through it which does not use end-to-end encryption such as TLS. (If your postman is corrupt he might still open the envelope and read the content). While this may or may not inherently violate the anonymity of the source, if users mistake Tor's anonymity for end-to-end encryption they may be subject to additional risk of data interception by third parties. So: the location of the user remains hidden; however, in some cases content is vulnerable to analysis, through which information about the user may also be gained. The Tor Browser Bundle lets you use Tor on Windows, OSX and/or Linux without requiring you to configure a Web browser.
Even better, it's also a portable application that can be run from a USB flash drive, allowing you to carry it to any PC without installing it on each computer's hard drive. You can download the Tor Browser Bundle from the torproject.org Web site (https://www.torproject.org), either as a single file (13MB) or a split version that is multiple files of 1.4 MB each, which may prove easier to download on slow connections. If the torproject.org Web site is filtered from where you are, type "tor mirrors" in your favorite Web search engine: The results probably include some alternative addresses to download the Tor Browser Bundle. Caution: When you download the Tor Browser Bundle (plain or split versions), you should check the signatures of the files, especially if you are downloading the files from a mirror site. This step ensures that the files have not been tampered with. To learn more about signature files and how to check them, read https://wiki.torproject.org/noreply/TheOnionRouter/VerifyingSignatures (You can also download the GnuPG software that you will need to check the signature here: http://www.gnupg.org/download/index.en.html#auto-ref-2) The instructions below refer to installing Tor Browser on Microsoft Windows. If you are using a different operating system, refer to the torproject.org website for download links and instructions. Note: You can choose to extract the files directly onto a USB key or memory stick if you want to use Tor Browser on different computers (for instance on public computers in Internet cafés). Before you start: Launch Tor Browser: When a connection is established, Firefox automatically connects to the TorCheck page and then confirms if you are connected to the Tor network. This may take some time, depending on the quality of your Internet connection. If you are connected to the Tor network, a green onion icon appears in the System Tray in the lower-right-hand corner of your screen: Try viewing a few Web sites, and see whether they display.
The sites are likely to load more slowly than usual because your connection is being routed through several relays. If the onion in the Vidalia Control Panel never turns green, or if Firefox opened but displayed a page saying "Sorry. You are not using Tor", as in the image below, then you are not using Tor. If you see this message, close Firefox and Tor Browser and then repeat the steps above. You can perform this check to ensure that you are using Tor at any time by clicking the bookmark button labelled "TorCheck at Xenobite..." in the Firefox toolbar. If the Firefox browser does not launch, another instance of the browser may be interfering with Tor Browser. To fix this: There are two other projects that bundle Tor and a browser:
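The layered-envelope metaphor from the introduction can be illustrated with a toy script (illustration only: it uses a hash-derived XOR stream invented for this sketch, not real cryptography; Tor itself negotiates AES keys over telescoping circuits). The client wraps the message once per relay; each relay peels exactly one layer, and only the exit node's peel reveals the request, which is why unencrypted exit traffic is readable by the exit node.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic toy byte stream derived from SHA-256 (NOT real crypto)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Applying the same key twice removes the layer (XOR is symmetric)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(message: bytes, hop_keys) -> bytes:
    """Client side: add one layer per relay, innermost layer for the exit."""
    packet = message
    for key in reversed(hop_keys):
        packet = xor_layer(packet, key)
    return packet

# One shared key per relay in the circuit: entry, middle, exit.
keys = [b"entry-key", b"middle-key", b"exit-key"]
onion = wrap(b"GET / HTTP/1.1", keys)

# Each relay peels exactly one layer; the request only appears
# after the exit node removes the final layer.
after_entry = xor_layer(onion, keys[0])
after_middle = xor_layer(after_entry, keys[1])
plaintext = xor_layer(after_middle, keys[2])
```

Note how the entry relay, after its peel, still sees only ciphertext; conversely, the exit relay sees the plaintext request, matching the Caution above about end-to-end encryption.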
Initially I was going to post this article as part of an install and configure series that I'm writing, but quickly realized that it's probably going to be a reference article for other stuff as well. Hopefully, as I continue to learn and pick things up, this article will expand as well. Why Nexenta is different If you're coming from the standard storage arena, it's hard to put a bead on Nexenta. First off, it's just software. But more than that, at its core is open source software. Think of it as a gobstopper with three layers. At its core it's based on illumos and ZFS. Added on top is NexentaOS, an Ubuntu userland that makes software management easy. If you've ever tried Ubuntu, you know that it's pretty simple to install updates. The third and outermost layer of Nexenta is the set of additional modules that add enterprise functionality to the storage appliance. What is illumos The illumos project started as a binary compatible distribution of OpenSolaris, the community-supported distribution of Solaris. Years ago, for me anyway, it was the most cost effective way to learn Solaris. The intent was to create a straight fork that would stay compatible with any possible future releases that Oracle makes to the community. It turned out this was a pretty wise move. The OpenSolaris project within Sun didn't adjust well to the transition. Now illumos stands at the center of a new OS community, with a SOLID enterprise class foundation. Many of the folks that were contributing to the Solaris codebase continue to contribute to illumos, and it will continue to be an interesting project to watch. What is ZFS In short, ZFS is an enterprise grade file system. It was designed by Sun to be a modern filesystem to handle modern hardware. To understand why ZFS is important you also need to understand how modern hard drives work: A modern hard disk devotes a large portion of its capacity to error detection data.
Many errors occur during normal usage, but are corrected by the disk's internal software and thus are not visible to the host software. A tiny fraction of errors are not corrected. For example, a modern Enterprise SAS disk specification estimates this fraction to be one uncorrected error in every 10^16 bits, or approximately one in every 1.2 PB. A smaller fraction of errors are not even detected by the disk firmware or the host operating system. This is known as "silent corruption". In a recent study, CERN found this issue to be problematic. As hard drives get bigger, we're all seeing data corruption become more of an issue. The chances of corruption, both noticed and silent, increase. In VMware environments silent data corruption can ruin your day. I'm actually not sure how common it is, but in a service provider environment I worked at, we had VMs that would not let us do anything with the vmdk. It seemed "stuck". We couldn't Storage vMotion it, and it would continually give us grief. After troubleshooting for a couple of days, the only thing we could do was clone it and delete the original VM. That worked fine, but it could have been a lot worse. As storage environments get bigger, silent data corruption will eventually happen in your environment unless you protect against it. ZFS checksums every block that is written and automatically repairs blocks where corruption occurs. This provides end-to-end data integrity, allowing you to be confident there won't be a problem with the data when you need it. Beyond this feature, ZFS allows for heterogeneous block and file replication, UNLIMITED snapshots and clones, compression, deduplication, non-disruptive volume increases and a host of other features. I'll work through (and link to from here) these features in the future, but it's important to get some other base functions explained first.
What is the ARC
As legacy storage vendors will tell you: cache is king.
The ability to cache data efficiently, or "move blocks", or "other fancy caching name here", basically does one thing: it moves the most-accessed data to a faster, more responsive tier. ZFS leverages the ARC as its front-line cache for the most-accessed blocks. This means if your Nexenta appliance has 24G of RAM, it's going to get 23 or so gigs to use for caching the most heavily used blocks. As RAM is added, the ARC cache grows. RAM being A LOT faster than SSDs, this proves invaluable for providing FAST response times and lower latency to client devices.
What is the L2ARC
Below the ARC, ZFS can leverage additional cache devices; in most configurations today these are SSD drives. Cheaper than RAM, but still way faster than spinning tin. This is the second level of cache that Nexenta appliances use for caching "warm" data. This hybrid storage model allows Nexenta to use slow (SLOOOOOWWWW) disks for its main storage pool while the working set (the blocks that are constantly active) fits inside the L2ARC and ARC caches.
What is the ZIL
The ZIL (ZFS Intent Log) is the write cache. Random R/W is the bane of every VMware administrator's life. Not only does Nexenta leverage a read cache, it can also cache writes to lay down blocks on the slow disks more efficiently and better handle write workloads.
How does all of this tie together?
For our lab environment, if you can move the L2ARC and the ZIL to an SSD drive, you'll already start seeing some of the performance gains that Nexenta brings to the table. Even in a two-drive configuration, if one of them is an SSD and there is plenty of RAM available, Nexenta can start increasing performance through its caching technology. But that's not all; Nexenta brings a host of other features to the table.
- Nexenta provides a software stack that can deliver NAS (NFS/CIFS) and/or SAN (iSCSI/FC) storage with high performance, allowing users to leverage current commodity technologies (e.g. SSDs for performance).
- It provides UNLIMITED snapshots and clones.
- It can be configured in a fully active/active HA configuration.
- It provides block- and file-level replication that can be leveraged for easy DR configurations, in either a synchronous or asynchronous configuration.
- It's VMware certified.
- It provides compression, dedupe and inline virus scanning (not post-process!).
- Metro clustering.
- Global namespace.
- (this goes on and on)
The OpenSolaris project wasn't alone. Oracle's purchase of Sun resulted in the forking of LibreOffice from OpenOffice.org, and of MariaDB from MySQL. Bryan Cantrill's talk at LISA this year pretty much sums up Oracle in a nutshell.
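The end-to-end integrity mechanism described above (checksum every block on write, verify on read, repair from a redundant copy on mismatch) can be sketched as a toy. This is only an illustration of the idea, not ZFS's actual on-disk logic; the helper names `write_block` and `read_block` are made up:

```python
import hashlib

def write_block(pool, mirror, addr, data):
    """Store the data plus its checksum on both 'disks'."""
    digest = hashlib.sha256(data).hexdigest()
    pool[addr] = (data, digest)
    mirror[addr] = (data, digest)

def read_block(pool, mirror, addr):
    """Verify the checksum; self-heal from the mirror on a mismatch."""
    data, digest = pool[addr]
    if hashlib.sha256(data).hexdigest() != digest:
        data, digest = mirror[addr]       # fetch the intact copy
        pool[addr] = (data, digest)       # repair the corrupted copy
    return data

pool, mirror = {}, {}
write_block(pool, mirror, 0, b"important vmdk block")
pool[0] = (b"silently corrupted!!", pool[0][1])   # simulate bit rot
restored = read_block(pool, mirror, 0)            # detected and healed
```

Without the checksum, the corrupted bytes would have been returned silently; with it, the bad copy is both detected and rewritten.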
Investigating the Modernity of the University Library
This material is replicated on a number of sites as part of the SERC Pedagogic Service Project.
- construct a reasonable sampling design for the specified population, study objective and budget.
- describe the selected sampling design using appropriate sampling terminology, e.g., strata, clusters, ratio estimation.
- justify the choice of sampling design, e.g., using "optimal allocation" concepts and budget as guidelines for sample size, and decisions to use subgroups as strata rather than clusters.
- estimate the parameter of interest and give an appropriate standard error.
- state required assumptions, the plausibility of these assumptions and the impact of probable violations.
Context for Use
Before beginning the project, students need to be familiar with various sampling strategies (design features and appropriate estimators):
- Ratio estimation (optional)
- Complex survey analysis
Description and Teaching Materials
Teaching Notes and Tips
Getting students started. This project can be introduced at any point in the semester, but is probably best after students have learned about stratification and clustering. Encourage students to wander around the library to get familiar with the population. They have likely been to the library before, but haven't looked at it through the lens of this project. It is very easy for students to come up with complex designs, often using stratification and clustering, though they may have difficulty labeling strata and clusters. Correctly identifying strata and clusters will be key to choosing appropriate estimators and to learning to communicate with standard statistical terminology.
Use of online card catalogs. Online catalogs do not provide an estimate of "new" books as easily as students may think. The target population may not match the online frame exactly.
(It is often possible to define the population as "easily accessible" items or items in certain sections of the library, which is not necessarily easy to identify from a database.) Ratio estimators using online information can also be encouraged, and/or the use of online information to guide sample size allocation. Optimal allocation rules. Students may be tempted to try to apply optimal allocation rules that have been learned in class. Emphasize that these rules should guide (not dictate) resource allocation. For example, larger strata should generally have larger sample sizes, but strict proportional allocation is not required for a good design. It is often impossible to use optimal allocation rules exactly in practice, because population parameters are unknown, e.g., Neyman allocation in a stratified design. They will also need to consider the budget restrictions in determining how to allocate resources (stratification vs. clustering, sample size decisions). Keep it simple. Use of the budget is key to forcing students to use clustering effectively, but it is easy for them to develop an overly complex design. Remind them to consider appropriate estimation strategies as they develop the sampling design. Simplified strategies which depend on reasonable assumptions should be considered. For example, use systematic sampling and assume the simple random sampling variance formulas are conservative, OR assume that variance due to third and higher stages in a cluster design is negligible relative to the first- and second-stage variance components. Sources of bias. Discussion of selection bias, measurement error and nonresponse in this study may be incorporated as whole-class discussion. Students often have trouble separating these concepts. An obvious problem is the potential bias of checked-out books. Are checked-out books more likely to be "new" books? This may be used for class discussion when the project is introduced.
Caveats regarding the report
- Make sure students are estimating the total number of books (not a proportion). The total number of books is harder to estimate.
- In the reports, there is a difficulty in terminology (What is a shelf, stack, row, aisle, etc.?). Ask students to provide a diagram labeling their terminology.
- The instructor may want to enforce a strict penalty for going over budget or using less than, say, 85% of the budget.
- The pilot study requires a projected standard error calculation. The idea is that students should get some sense of what their final project standard error will be. Students have difficulty calculating this. An example in class may help clarify: Present data from an SRS of size 10. What kind of standard error would you expect if a sample of size 100 were taken?
- Students may appreciate a summary (after projects are turned in) of estimates, standard errors, and use of strata and clusters. You may also give some recognition/points for the lowest standard error.
- Giving students a grading rubric before they turn in the report can vastly improve the organization and quality of the reports, but can also be overly prescriptive. More mature students, e.g., graduate students, may not need a point-by-point outline for the structure of the report.
- The Pilot Study Report is designed to give students feedback on writing and correctness of estimation procedures. (Data collection is not complete at this stage, so some groups may modify their entire design.) A rubric (Microsoft Word 41kB Aug10 06) for instructors to use in grading the pilot study report can also be distributed to students with the project assignment.
- The Final Report is used as the final assessment. A rubric (Microsoft Word 43kB Aug10 06) for instructors to use in grading the final report can also be distributed to students with the project assignment.
- Suggested division of points: - 30 for Pilot Study Report - 70 for Final Report - Follow-up questions (Microsoft Word 12kB May17 07) on a test can be used to assess how well students can identify strata and clusters in a given design. The familiar context makes it easier for students to quickly understand the population. - Other ideas for assessment - Competition / reward for lowest standard error - Have students give oral reports describing the design and their estimates - Have students critique other groups' oral reports - Whole class discussion of why some groups achieved lower standard errors than others
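The projected standard error exercise mentioned in the caveats (project from an SRS of size 10 to a sample of size 100) boils down to the standard error of a mean scaling as 1/sqrt(n). A small sketch with made-up pilot data, ignoring the finite-population correction:

```python
import math

def projected_se(pilot_values, final_n):
    # Project the SE of a sample mean from pilot SRS data to a larger
    # SRS of size final_n, using SE = s / sqrt(n) with s estimated
    # from the pilot (finite-population correction ignored).
    n0 = len(pilot_values)
    mean = sum(pilot_values) / n0
    s = math.sqrt(sum((x - mean) ** 2 for x in pilot_values) / (n0 - 1))
    return s / math.sqrt(final_n)

pilot = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]  # e.g. new books per shelf
se_pilot = projected_se(pilot, len(pilot))
se_final = projected_se(pilot, 100)
# Going from n=10 to n=100 shrinks the SE by a factor of sqrt(10), about 3.16
```

This is exactly the in-class example suggested above: same estimated spread s, ten times the sample size, so the projected SE drops by sqrt(10).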
Re: Can resources of a IDLE application be shared by others?
Over-allocation of computing resources is a very serious problem in the contemporary cloud computing environment and across enterprise data centers in general. The well-known Quasar research paper published by Stanford researchers discusses this issue in great detail. There are startling numbers, such as industry-wide utilization between 6% and 12%. A recent study estimated server utilization on Amazon EC2 in the 3% to 17% range. One technique which Google ("Borg") employs internally is called "Resource Reclamation", whereby over-allocated resources are taken advantage of for executing best-effort workloads such as background analytics and other low-priority jobs. Such best-effort workloads can be pre-empted in case of any interference with the original workload, or when the resources are needed back at a later time. Being able to utilize unused resources in such a manner ("Resource Reclamation") requires underlying support for capabilities such as "Preemption & Resizing", "Isolation Mechanisms", "Interference Detection", etc. Another important aspect of all this is the fact that users never know how to do resource reservations precisely; this leads to underutilization of computing resources. Instead, what is being proposed is some kind of classification-based predictive analytics technique whereby the resource management system itself determines the right amount of resources to meet the user's performance constraints. In this case, the user only specifies the performance constraints (SLO), not the actual low-level resource reservations. A combination of the predictive-analytics-based proactive approach with the reactive "resource reclamation" approach is the optimal strategy. This accounts for any mis-predictions as well. Mesos Resource Management also supports "Resource Reclamation".
However, neither Google ("Borg") nor Mesos employs a predictive-analytics-based proactive approach in their environments. I hope this makes sense and helps. - Deepak Vij (Huawei, Software Lab., Santa Clara)
From: Stanley Shen [mailto:meteorping(a)gmail.com]
Sent: Tuesday, March 08, 2016 7:10 PM
Subject: [cf-dev] Can resources of a IDLE application be shared by others?
When pushing an application to CF, we need to define its disk/memory limits. The memory limit is just the maximum value the application could possibly need; most of the time we don't need so much memory. For example, I have one application which needs at most 5G of memory at startup and during some specific operations, but most of the time it just needs 2G. So right now I need to specify 5G in the deployment manifest, and 5G of memory is allocated. Take an m3.large VM for example: it has 7.5G. Right now we can only push one application onto it, but ideally we should be able to push more applications, like 3, since only 2G is needed for each application. Can the resources of an IDLE application be shared by other applications? It seems right now all the resources are pre-allocated when pushing an application, and they are not released even if I stop the application.
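The arithmetic in Stanley's example is simple but worth making explicit: packing VMs by declared reservation versus by typical usage gives very different densities. The numbers are taken from the email; the variable names are mine:

```python
vm_memory_gb = 7.5        # m3.large
reserved_per_app = 5.0    # declared limit in the deployment manifest
typical_usage_gb = 2.0    # steady-state footprint of the app

# Placement by reservation: the scheduler must assume the worst case.
apps_by_reservation = int(vm_memory_gb // reserved_per_app)

# Placement by typical usage: what reclamation-style scheduling targets.
apps_by_usage = int(vm_memory_gb // typical_usage_gb)
```

Reservation-based placement fits one app on the VM while usage-based placement would fit three, which is exactly the 3x density gap that resource reclamation tries to close.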
Is it possible to add an additional HDD or an SSD to a 2011 Mac mini (not the server edition) in addition to the 500GB HDD that comes with the computer? If so, are additional parts required?
LINK: https://www.ebay.com/itm/USA-821-1347-A-Cable-add-second-SSD-for-Mac-Mini-A1347-Server-2011-2014-model-/55?hash=item.
I have a mid-2011 Mac mini, 2.7 GHz, i7, 8 GB RAM, with the 7200 RPM Western Digital 750 GB factory upgrade drive. (Macmini 5,2) I'm planning to upgrade to a Solid State Drive (SSD). My choice is to either install the new SSD as a second internal HD in the mini, or to pull the old conventional drive and retire it to an external. Users can install a second hard drive in non-Server models of the new Mac Mini, according to MacRumors forum member Slyseekr. iFixit's teardown of the new Mac Mini released last month revealed.
I just purchased a Mac mini and the seller assures me that there are two 1 TB hard drives inside, with a total storage space of 2 TB (picture from the listing). At first, I was under the impression that there was a second drive without an OS and that the partitioning was done on that (Windows-format?) drive. However, after erasing the accessible drive following Apple's instructions, and restoring my data with Target mode, I do not see a second drive listed anywhere:
- If I start the computer pressing
- In the Storage tab I see this.
- In Recovery mode the Disk Utility lists this.
The diskutil list command returns this in Recovery mode (more images). I really, really don't want to have to go through this process to figure out if the seller is telling the truth about there being two drives inside, since it might prevent me from returning it, but I'd like to know if what I bought matches the item description.
- there is no second drive
- there might be a second drive, but badly connected/wired in and so undetectable
- there is a second drive that is blank / oddly formatted / undetectable
How can I know if there is a second drive in the Mac mini? (asked by user3439894)
Answer (MicroMachine):
- There is no second drive
- There is a second drive that is undetectable because it is faulty or badly connected
You can find out which it is by simply opening the black plastic bottom and looking, no 'process' required. In the picture of what you will see when opening the back, I have indicated the two connectors for the two possible disks with red arrows. One will already be occupied by the disk we know is in the Mac mini. If the other one is also occupied, then it is the second alternative: there is a disk but it is faulty or the connection is faulty. If it is unoccupied, there is no second disk inside.
Answer (Allan):
You have one drive. The diskutil list command tells you everything you need to know: there's only one entry that has a 'physical' characteristic to it. When you boot into Recovery, there are many RAM disks that get created, and in your case they are disk19. If you notice on each identifier, it describes it as a 'disk image', as in the example below. What you want to look for is anything that says 'internal, physical', because if the seller assured you 'there are two hard drives,' you would have (in a Mac mini) two identifiers with 'internal, physical' descriptors like that. What the seller showed you was a (Bootcamp) partition. When booted, the partitions appear to the OS as their own separate drives.
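Allan's advice, counting the "internal, physical" entries in diskutil list output, is easy to automate. The sample output below is invented for illustration; on a real Mac you would feed in the actual output of `diskutil list`:

```python
# Invented sample of `diskutil list` output, trimmed for illustration.
sample_output = """\
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE
   0:      GUID_partition_scheme                        *750.2 GB
/dev/disk1 (synthesized):
/dev/disk2 (disk image):
"""

# Each real drive announces itself with "(internal, physical)";
# synthesized volumes and Recovery RAM disks do not.
physical_drives = sample_output.count("internal, physical")
```

Here the count is 1, matching Allan's conclusion: one internal drive, and everything else is a partition, synthesized volume or disk image.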
After we found out about fdrawcmd and the characteristics of the Nashua disk and its track 0, we hoped to reproduce it on the relevant disks. Unfortunately, no success here. We had to rule out possible problems like:
- The disks were not formatted at all (because they were backup disks – at least some labels imply this – and many people never check if their backup worked until they actually need it).
- The disks might have been improperly stored before they were transferred into the university archive, thus losing most of their magnetization.
- The drive simply cannot interpret the actual recording.
In parallel we are still trying to get in contact with people who either have old hardware running, some 8” disks to spare, or deeper background knowledge, to help us better understand the problem and verify our hardware setup. In the meantime we are trying to get down to the lowest layer – the physical readings produced by the drive before interpretation and digitisation take place. We hope to identify whether the disks contain any recorded information at all using an oscilloscope connected to the analogue amplifier after the magnetic reading head.
Using an oscilloscope
Fortunately, a colleague owns such an electronic oscilloscope and helped us with the experiment. The challenge here is knowing what to expect. The rotational speed of the floppy is rather slow compared to modern equipment and the recording density is very low. Even with just 26 sectors of 128 byte size, a substantial number of changes in the magnetic flux are required to represent the data. We hope that we can at least distinguish between an unformatted disk (or a disk demagnetized by bit rot) and a disk containing data: white noise in the first case and some visible, repeated changes in the second. The GW Instek GDS-1022 oscilloscope was connected to the disk drive at AD3 and AD4 on the read circuit. The signal was set to a normal level of 2.8V on TP-AD3 and 2.75V on TP-AD4. Signal peaks for both channels were 800 mVpp.
To trigger readings, the low-level program fdrawcmd mentioned in the previous blog post was run with different options. We were hoping to see differences when trying the different floppy disks.
Low level readings
In a first run we focused on track 0 and started with the Nashua disk, using fdrawcmd in the setting which produced a visible pattern of different frequencies on the oscilloscope. In further rounds we looked for similar patterns on track 0 of the other disks. The picture we got was quite different, looking more like white noise compared to the measurements on the Nashua. Changing to higher track numbers produced, for both classes of disks, patterns which still looked different compared to the Nashua. Differences between the two classes of disks were definitely visible. This could explain why we were not able to read anything with the methods we have been applying up to now. The oscilloscope setup we were using had a couple of limitations for the purpose of data interpretation. We were unable to store longer sequences of signal readings and could just freeze the content of the screen. For further stream interpretation a logic analyzer would be required, e.g. a BitScope Micro with software like the DSO Data Recorder. Finally, our failure to read the disks does not imply that they are completely empty. So we can hope to find something meaningful with more advanced approaches. These should include the KryoFlux device, which is designed to take a very low-level approach and circumvent most of the floppy disk controller logic.
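To know what to expect on the scope, it helps to estimate the signal rate. The sector geometry (26 sectors of 128 bytes per track) comes from the post; the 360 RPM spindle speed and FM (single-density) encoding are my assumptions about a standard 8" drive, not stated in the text:

```python
rpm = 360                            # assumed standard 8" drive speed
revs_per_second = rpm / 60           # 6 revolutions per second
data_bits_per_track = 26 * 128 * 8   # 26 sectors x 128 bytes, as described
payload_rate_hz = data_bits_per_track * revs_per_second
# FM recording adds a clock transition per bit cell, so flux transitions
# arrive at up to roughly twice this rate; either way the signal sits in
# the low hundreds of kHz, trivial for any basic oscilloscope.
```

Under these assumptions the payload alone works out to about 160 kbit/s, so a formatted track should show clearly periodic activity rather than noise, which matches what the Nashua disk produced on screen.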
Introduce isNot(After|Before) equivalent to is(Before|After)OrEqual
Feature summary
Disclaimer: this is merely a suggestion. Introduce isNot(After|Before) for TemporalAccessor-related assertions, with method signatures equivalent to is(After|Before)OrEqualTo. I'm not sure whether AssertJ permits aliases for existing assertions yet, but let me try. Take the LocalDate class (and other classes). I've wondered for a while why there is no method like isAfterOrRepresentSame or isBeforeOrRepresentSame, just like the comparison methods in various APIs such as lt, le, lessThan or lessThanOrEqualTo. There are only isAfter and isBefore, e.g. in LocalDate and other chrono-related classes. Anyone would have figured out that the OrEqualTo functionality may be achieved by reversing the logic.
LocalDate yesterday = LocalDate.now().minusDays(1L);
LocalDate today = yesterday.plusDays(1L);
yesterday.isBefore(today); // true
today.isAfter(yesterday); // true
LocalDate someDay = getSome();
// someDay should represent the same day as today or any following day.
someDay.isAfter(today) || someDay.equals(today); // seems ok; those equals methods are righteously overridden
// in reversed terms, today should be not before someDay
!today.isBefore(someDay); // seems weird, a negated/reversed expression, yet may be intended by design.
AssertJ seems to have decided on adding is(After|Before)OrEqualTo, and those have no problems. Yet combining a chronological comparison and logical equality seems weird. (The word same looks better, IMHO.) It would be (more) helpful to introduce isNot(After|Before) methods besides those methods.
Example
LocalDate today = LocalDate.now();
LocalDate someDay = getSome();
// assert someDay represents the same day as today or a day in the future

| someDay should represent a day in this range including today
----|-----------------------------> t
    today

| today shouldn't be in this range, excluding someDay
----|-----------------------------> t
    someDay

assertThat(someDay).isAfterOrEqualTo(today); // existing
assertThat(today).isNotAfter(someDay); // equivalent alias
Additional notes: the expression above may raise arguments about which subject is actually being asserted (someDay or today?), which may be overcome with as(...).
assertThat(today).isNotAfter(someDay)
    .as("someDay(%1$s) compared with today(%2$s) regarding ....", someDay, today);
Thank you.
@onacit I'm not completely sure I'm following you; in your example you say both assertions are logically the same:
assertThat(someDay).isAfterOrEqualTo(today); // existing
assertThat(today).isNotAfter(someDay); // equivalent alias
but the logical opposite of isAfterOrEqualTo is not after and not equal to, which in this case is exactly isBefore. I feel avoiding negation in assertions is usually preferable when there is a positive-form assertion.
@joel-costigliola Fair enough. Thank you.
I feel avoiding negation in assertion is usually preferable when there is a positive form assertion
Are you happy we close this issue then?
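The equivalence being discussed, that "not before" is the same predicate as "after or equal", can be checked exhaustively over the three possible orderings. Sketched here with plain Python dates purely for illustration rather than with AssertJ:

```python
from datetime import date, timedelta

today = date(2021, 6, 1)
# The three cases: strictly before, equal, strictly after.
cases = [today - timedelta(days=1), today, today + timedelta(days=1)]

for d in cases:
    # "d is not before today" is exactly "d is after today or equal to it"
    assert (not d < today) == (d > today or d == today)
    # ...and its negation is exactly "d is before today"
    assert (d < today) == (not (d > today or d == today))
```

This is also the maintainer's point: the logical opposite of after-or-equal is precisely before, so an isNotAfter alias adds a negated spelling rather than new expressive power.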
Only part of what he has been accused of was distorted. It took him years to get that there is no "but it's consensual" escape hatch to underage sex (as RMS himself writes in https://www.stallman.org/archives/2019-jul-oct.html#14_September_2019_(Sex_between_an_adult_and_a_child_is_wrong It's part of his (apparent) approach to life of having strong opinions, strongly held: he sets up some theory, however flawed, and then it takes years for others to undo that position. RMS _always_ required a significant amount of cleaning up after him by those around him, pretty much wherever he went (and I don't mean that in a hygiene sense, because I don't know about that). In this instance it simply was too much to clean up after, _even though_ some of the statements were trivial to rebut. Why was that? Because it was all too much in line with his public persona. So, he was ousted for holding and expressing undesirable opinions, and some of them were distorted. The last part was unfair, I agree, but I also think that the rest was enough of a reason to get rid of him, not necessarily because of these particular opinions, but because his way of expressing and defending them made him a liability to the organization. Let's move on to what happened more recently: after all this happened, the FSF decided to appoint somebody to their BoD (who apparently aims for President) who has shown for the last 20+ years that he's an ineffective communicator, an ineffective campaigner, an absent software maintainer or developer, and a high-maintenance personality (to put it mildly). So he's not a good match for the role. He also carries that baggage. Yet they picked him. That doesn't look good for the FSF: either they don't know better (in which case, wtf are they doing?), or they know better but have found no suitable alternative (in which case they could as well shut down), or - most likely - they're simply helping a buddy back to the steering wheel, which is textbook nepotism.
It also doesn't look good for RMS: even though he should by now have learned that his presence is a liability for the FSF, he's still going back. That may be good for him personally, but he doesn't seem to care at all for "the cause". So congrats to RMS for being back in a leadership role (because apparently he won't settle for anything less - a sign of a principled person, indeed), and congrats to the FSF for demonstrating its helplessness. (Nothing in this piece covers the problem that some people argue that they don't feel safe around him. This is mostly because it's a) not necessary to argue that RMS is a lousy choice for any leadership position, and b) something that detractors did claim in the past to be merely a political ploy and that they could bring up again to distract from the fact that RMS is a disaster along all other dimensions as well. While I disagree with the idea that all or even most people are claiming this for political reasons, I'll stick to the bits that aren't potential minefields.)
Joff Thyer // Black Hills Information Security loves performing both internal penetration tests and command and control testing for our customers. Thanks to the efforts of many great researchers in the industry, we are lucky enough to escalate privileges in many environments, move laterally, and demonstrate access to sensitive data. As a penetration tester, we must always have the mindset of demonstrating business risk as the number one goal. Thus I often find myself saying that things like "Domain Admin" in the Windows Active Directory world are not everything. In fact, in support of this idea, there have been many occasions whereby I have managed to move laterally within an environment because of enumerated/discovered local administrative privileges, and even used token impersonation exclusively to mimic a normal business user with the goal of demonstrating access to data. Did I need to grab that domain admin privilege for this? Not at all. Do these sorts of actions scare the daylights out of business executives that are trying to guard the crown jewels? You bet they do!! Gaining local privilege escalation is something we often achieve through any number of methods, including:
- Mis-configured service accounts
- Unquoted service path names
- Unattended installation XML files
- Group policy preferences XML files
- DLL hijacking
- The "always elevate" registry key for MSI installations
- Kerberoasting – thank you, Tim Medin!!!
- Password spraying
In recent tests, I have found that Kerberoasting remains very fruitful, but in unusual ways. One attack path I have found interesting is to spray for passwords that you first discover through Kerberoasting or other local machine escalation. It is amazing how many times Systems Admins will reuse passwords and not be cognizant that password reuse across different privileged accounts is a really bad idea.
All that said, let’s just face it, there is nothing better than the thrill of finally getting full domain administrative access. While it does not demonstrate business risk directly, it sure makes you feel a little tingle followed by your happy little “got root” dance. (Yes, admit it… you all have a “got root” dance). My lovely wife always knows when I hit the jackpot because my office is next to the kitchen, and I cackle loudly when it happens. Ok, so after you get that Domain Admin account through whatever means, there are so many things you can do. Among these is performing the “value-added service” of grabbing the full domain account hashes, and letting your “hashcat” flag fly in all its glory. Yes, crack those hashes and see just what percentage of all the creds you can actually obtain. In the process of doing so, you will turn your rockin’ video GPU water-cooled cracking masterpiece into a small space heater while using about 3,000 watts of electricity over a couple of days… but oh the wonderful beauty of the result! Of course, when you are finished you should use Carrie’s Domain Password Audit Tool to produce a beautifully formatted HTML report. It is not unusual to obtain figures such as over 70% of hashes cracked in any one organization. Now I will get to the point… extracting the hashes can be dangerous!!! Why do you ask? Well, many of us in the bad old days are accustomed to using either “hashdump”, or “smart_hashdump” (thanks Carlos… https://www.darkoperator.com/blog/2011/5/19/metasploit-post-module-smart_hashdump.html) in the Metasploit project. While these are lovely, well-written pieces of software, both approaches on a Domain Controller will try to extract hashes from the LSASS.EXE process. Within a small environment, you will probably be just fine. However, there are times when you are operating within an environment of 20,000 – 100,000 credentials or more. 
If you muck around with LSASS.EXE in this sized environment, you will likely crash a domain controller, and that is sub-optimal. How do we handle this situation? It would be really nice if we could gain access to the NTDS.DIT, SAM, and SYSTEM files directly and just copy the data down. This works well because the folks at Core Security have a Python script called "secretsdump.py" within the Impacket repository, giving us the ability to grab the hashes directly from the database and registry files. Next question is… how on earth do we gain access to these files? On a running domain controller, they are locked files, so you can't just romp on into the "%SYSTEMROOT%\SYSTEM32" directory and copy the files. Well, not technically true: you can find backups of SAM and SYSTEM in the "%SYSTEMROOT%\SYSTEM32\CONFIG" directory, and it might well be possible to locate the files in a Volume Shadow Copy. Alas, even if you do locate these files, they will be old, by perhaps a day or a week. There happens to be a fantastic tool located on a Windows domain controller called "NTDSUTIL.EXE". The NTDSUTIL tool is used for accessing and managing a Windows Active Directory database. The tool should typically only be used by experienced system administrators, but it also has this wonderful penetration testing use case.
WARNING: This tool is POWERFUL. Do not experiment ad-hoc with NTDSUTIL unless using your own lab system. It will directly interact with Active Directory databases, and you might well destroy the domain.
The nice part about NTDSUTIL from a penetration testing perspective is that you can create a full Active Directory backup in "IFM" media mode in a completely safe manner. Once you do this, all that is left is to copy the files to where you need them for hash extraction and cracking purposes. The upside of this method is a safe and relatively stealthy way of grabbing the data for your "value-added service" of cracking the hashes.
The downside of this method is that some Active Directory databases are sizeable, often several hundreds of megabytes or gigabytes in size. In general, I have found that exfiltration works fairly well if you create an encrypted ZIP, base64 encode, and then download the result. Specifically, when using NTDSUTIL we are doing the following:

- Setting the active instance to "NTDS"
- Entering "IFM" media creation mode
- Creating a full backup to a specified directory path
- Quitting out of IFM, and then quitting from NTDSUTIL

NTDSUTIL is normally a text menu-driven process, however, it is possible to specify each of the commands directly on the command line from CMD.EXE as follows:

C:\> ntdsutil "ac in ntds" "ifm" "cr fu c:\TEMP\AD" q q

The most important thing to realize is that the "C:\TEMP\AD" directory specified above must exist, must be an empty directory, and must have enough free disk space to hold the full database. The other important thing is that you must have an administrative account in order to perform this operation. In general, I prefer to not use Remote Desktop Protocol but rather use WMIC to launch the required NTDSUTIL command. One of the challenges with this is you end up with the age-old quotation escaping challenge in crafting your command. Let's assume your target domain controller is 10.10.10.10. What you can do is map a drive to the domain controller, create your directory to store the results, and invoke WMIC to run your NTDSUTIL command. A command sequence as follows should do the trick assuming you are resident on a regular workstation within the environment.

C:\> NET USE Z: \\10.10.10.10\C$ /USER:DOMAIN\Administrator
C:\> MKDIR Z:\TEMP\AD
C:\> WMIC /NODE:10.10.10.10 /USER:DOMAIN\Administrator /PASSWORD:XXXXX process call create "cmd.exe /c \"ntdsutil \"ac in ntds\" \"ifm\" \"cr fu c:\TEMP\AD\" q q\""

You can check on the progress using the command while NTDSUTIL completes.
It may take a while especially if the Active Directory environment is large.

C:\> DIR /S Z:\TEMP\AD

After this completes, your job is to compress the resulting files (SYSTEM, SECURITY, and NTDS.DIT) using ZIP with encryption, optionally base64 encode, and download the results to a Linux system you control. The Impacket secretsdump script can then be used to extract all hashes in a format suitable for cracking with "hashcat" as follows:

$ python secretsdump.py -system SYSTEM -security SECURITY -ntds NTDS.DIT -outputfile outputfilename LOCAL

After you have successfully exfiltrated the data, please ensure that you clean up your mess on the domain controller itself. I would suggest doing the following:

C:\> Z:
Z:\> CD \TEMP\AD
Z:\TEMP\AD> RD /S /Q "Active Directory"
Z:\TEMP\AD> RD /S /Q "Registry"
Z:\> C:
C:\> NET USE Z: /DELETE

If you have sufficient drive space on the local workstation you are working with, another option is to create a share from your system, and mount that share on the domain controller. Subsequently, you would create an empty directory on this share, and use a similar NTDSUTIL command to create the backup of the database with this path. The greatest advantage of following this sort of methodology when extracting hashes is that you are NOT endangering the LSASS.EXE process via any DLL injection, and thus not risking a domain controller crash. Another potential method to use is to abuse domain controller replication functionality with "DCSYNC". If you would like to read about this, please refer to harmj0y's blog at http://www.harmj0y.net/blog/redteaming/mimikatz-and-dcsync-and-extrasids-oh-my/. Happy hunting folks.
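The "encrypted ZIP, base64 encode, download" step described in the article can be sketched end to end. This is a generic illustration rather than the author's exact tooling: the filenames are invented, and on the Windows side `certutil -encode` is one common way to get base64 (its output wraps the data in CERTIFICATE header lines, so decode accordingly). The decode-and-verify half on the receiving Linux box looks like this, simulated locally so the sketch is self-contained:

```shell
cd "$(mktemp -d)"

# Stand-in for the encrypted ZIP you would have produced on the DC
# (e.g. with a hypothetical: 7z a -pS3cret ad_backup.zip "Active Directory" Registry)
printf 'stand-in for NTDS.DIT backup bytes' > ad_backup.zip

# Text-safe encoding for exfil over text-only channels...
base64 ad_backup.zip > ad_backup.b64

# ...and the decode plus integrity check on the analysis machine.
base64 -d ad_backup.b64 > restored.zip
cmp -s ad_backup.zip restored.zip && echo "integrity OK"
```

Always verify the round trip with `cmp` (or a hash) before deleting anything on the target side; a corrupted transfer of a multi-gigabyte NTDS.DIT is an expensive mistake.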
My current router to connect to the Linux box and act as a 2nd router/firewall, the Linux box to have a wireless antenna and allow other computers to connect to the Linux box and be able to browse the net etc etc. I'm well aware I still require a wireless antenna, I just wanted to start a little research before I …

How to use Linux server as a router? | Linux.org Jun 22, 2020

LinuxQuestions.org - Setting up Linux box as IPv6 router: Setting up Linux box as IPv6 router to replace Netgear WNR1000 wireless router. Hi, I want to set up a Linux box as a wireless router to replace our existing Netgear WNR1000 router, as I believe the Netgear does not support the coming IPv6 protocol. Unfortunately, it is not flashable with OpenWRT or …

List of router and firewall distributions - Wikipedia

Jun 28, 2016 · Linux desktop computers also support multiple network interfaces, and you can use your Linux computer as a multi-network client, or as a router for internal networks; such is the case with a couple of my own systems.

Apr 13, 2005 · To begin, you'll need a PC with any recent GNU/Linux distro installed. You'll also need three network cards to put into this Linux box. Two of the three network cards, say eth0 and eth1, will connect to the Internet routers/gateways of your primary ISP (say ISP1) and secondary ISP (say ISP2).

Using a Linux Box as a Router: See how you can emulate that Docker feel by converting a Linux box into a router for your distributed cloud infrastructure. The router will act as a DHCP server and forward IP packets to the private network. I will configure the DHCP pool in the range 192.168.50.50/24 to 192.168.50.100/24.
This is how I am going to configure the CentOS 7 router in this article.

Dec 22, 2010 · Linux-based router project supporting a large set of layer-1 technologies (e.g. Ethernet LAN, Wireless LAN, ISDN, DSL, UMTS), layer-3 protocols and functionality (IPv4, IPv6, stateful packet filter), and various network-related functionality (e.g. Bridging, Bonding, VLANs; DNS, DHCPv4, DHCPv6, IPv6 RA; PPP (client+server), PPTP (client+server)).

Apr 12, 2018 · Or you get lucky and your connection from the ISP is ethernet. But yes, I threw out the stock all-in-one router which was the perfect example of an underspecced ISP-provided device.

Oct 28, 2018 · I work in a public library as a system administrator. Recently my task was to put public computers behind a separate router in order to control internet access. As I have plenty of computer parts lying around I decided to build a router with some older computer with a Linux operating system. In my case Ubuntu Server 18.04.1 LTS (Bionic Beaver).

Aug 11, 2015 · I'm going to configure a Linux gateway server for my LAN. It will sit between the router and my LAN. I used the 192.168.0.x private IP range for my LAN PCs, and I'm expecting to configure a firewall, NAT, and a proxy on that gateway server. Who can guide me to do that? I'm new to Linux and interested in it. Thanks, Apu

How to Create a Virtual Ubuntu Linux Router: In order to use our own DHCP server in VMware Workstation/Fusion (and even VirtualBox), we need to use the Host-Only Networking feature. The reason we need a totally private virtual network is that Bridged Networking would put us on the production network and that's not an option.
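Condensing the recipes above: a box like the CentOS 7 router described (private network 192.168.50.0/24, DHCP pool .50–.100) needs three things: packet forwarding enabled, NAT on the outward-facing NIC, and a DHCP server. The fragments below are an illustrative sketch only; the interface names (eth0 as WAN, eth1 as LAN) and the choice of dnsmasq as DHCP server are assumptions, not taken from the quoted articles:

```
# /etc/sysctl.d/99-router.conf -- turn the box into an IPv4 router
net.ipv4.ip_forward = 1

# Firewall rules (assumed NIC names: eth0 = WAN, eth1 = LAN), run as root:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT

# /etc/dnsmasq.conf -- hand out the DHCP pool mentioned above on the LAN side
interface=eth1
dhcp-range=192.168.50.50,192.168.50.100,255.255.255.0,12h
```

Apply the sysctl with `sysctl --system` and persist the iptables rules with your distribution's usual mechanism; on a stock CentOS 7 install, firewalld's masquerading support is the more idiomatic route.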
I'm trying to set up a form where a panel is hidden until after someone selects "Yes" and selects Save. Is that possible? Right now once I select "Yes" on the dropdown it automatically unhides the panel. I want that to stay hidden until after it's saved (as the controls within the panel will be for someone else to fill out after they're emailed a notification).

Just connect the button to a backing list column and write out a value to the column (this can be done in the button's configuration settings). Next time you open the form you can hide or unhide based on this value. Actually only the Edit form. I have a set of required fields with all my other panels hidden on "IsNewMode" in rules. The first panel is unhidden when the form submitter and other required fields are completed and saved.

Thanks for the response but I'm pretty new when it comes to Nintex and SharePoint forms. I'm not sure how to complete the process you mentioned (backing list column?). I'm in the Save button's control settings but I can't connect the dots with the rest of your comment. Thanks in advance if you're able to dumb it down some for me.

Hm, that might be tricky... how should it behave if one unchecks the Yes flag? Should it hide the panel again, and show it just when it is changed back to Yes and saved? Or after it is set to Yes once, should it be shown regardless of the current flag status? I'd say it might be easier to condition showing/hiding of the panel by a rule like flag == yes and 'someone' who should approve is a member of a specific group.

What Mike Matsako means is that you can reference either the control on the form that is connected to the SharePoint list item, OR you can reference the value of that column on the item you're referencing. Consider the following. I'm going to make a new Yes/No column for my test list. If you go into the Nintex Form editor, you can find the control for that newly created column in the List Columns section.
Editing the control, we can give it some description text as shown: Now I'm going to place a Panel onto the form and give it some text: Afterwards, I'll put a Formatting Rule onto that Panel which will hide it. (In the picture below, I haven't yet set the conditions for how it's being hidden!) It is at this point that Mike's comment comes into play. You may have noticed that there are a few ways to reference controls when you're in the Formula Builder. By default, the first tab that you're on is the Named Controls tab, which gives you access to the controls that are on the form itself. Essentially, they also reference the values of those controls in real time. So, if you were to reference the 'YesNoToggle' control (as shown in the pic below), it would be referencing the value for whatever that control is currently set to. Change it from Yes to No, and the current value would be false; likewise, changing it from No to Yes would set the current value to true. Instead, what you'd like to actually do is reference the value of that particular column (YesNoToggle) from the current item. You can do this by switching over to the Item Properties tab. Because the panel should be hidden when the value of that column is not true (Yes / checked), and because a checked Yes/No control equals true, all we need to do to get this rule to work is 'invert' the value of whatever it currently is by putting an exclamation mark in front of the referenced column value, as shown: Now if we Publish the Form, Create a New Item, and Toggle the control, the Panel will stay hidden! Saving the Item creates a New Item with a Yes/No column value set to whatever we chose: In the case of this example, we set the value to Yes.
Editing the Item, we NOW see our panel because the column value on the item has been set to Yes! For extra credit and safety, you could also put a rule onto the Yes/No control that would prevent people from messing with the toggle control if the form was in Edit Mode and the value of the COLUMN was set to Yes / true. I hope that this helps you to solve this problem, though it may not solve all of the particularities of your environment.

Let me start by saying thanks for how much time and effort you put into walking me through that process. I really do appreciate it. While I didn't go with this exact process, it did get me thinking about setting it up the way I have the email notifications going out. If anyone else encounters this in the future, below are the steps I took. While I'm not sure if it's the most efficient way to implement it, this process works for me: So when running through the process, a new form is created with only the first bunch of controls, including the Yes/No at the end. The user selects Yes, the page stays the same, and they click Save/Submit. In the background, the workflow runs, sees the Yes in the visible dropdown, and changes the non-visible dropdown to Yes. That then kicks off the rule: because the non-visible dropdown is now set to Yes, it shows the panel. Again, probably not the best/most efficient approach, but it does work for what I'm looking for. This was the same sort of approach I had to set up so as to send only one notification email instead of an email every time someone saved the form.

I was in need of similar functionality and I did the following (if you want them to be able to create "drafts"...): If you have a workflow as well, you could only let it initiate when the status is submitted.
It seems there's a performance bug in recent NVIDIA drivers that has been causing a loss of performance across likely all GPUs. Not only that, but it seems to end up using more VRAM than previous drivers too. User HeavyHDx started a thread on the official NVIDIA forum to describe quite a big drop in performance since the 375 driver series, so all driver updates since then would have been affected by this. NVIDIA themselves have now commented to confirm the issue. Here's what the NVIDIA rep said about it:

This likely matches a similar performance drop observed on another Feral game, Total War: WARHAMMER. We've been tracking it internally as bug 1963500. There was a change, introduced in our r378 branch, to the logic of allocation of certain textures, but it apparently exposed a bug in our memory manager. Our next release branch, r390, will carry a workaround, and we're still working on finding and fixing the root cause.

So it seems to affect at least Deus Ex Mankind Divided, Total War: WARHAMMER, Company of Heroes 2 and most likely a number of other titles too. The same NVIDIA rep also said they aim to have the 390 driver series out before the end of the year, let's hope that doesn't come with its own problems! Great to know they are aware of it and that the next driver series will have a workaround to improve it; getting Linux game performance back on track is pretty important.

Somehow I have a feeling that Pascal GPUs are not affected by this... or maybe because I have 8GB of VRAM I won't notice the drop. Last edited by emptythevoid on 30 November 2017 at 2:33 pm UTC

Quoting: Xpander — I still haven't noticed any difference in performance with drivers 375.x to 387.34.

It is apparently a bug that causes higher VRAM usage. You probably have plenty of that for most games so that it won't affect performance much.

Quoting: emptythevoid — This is interesting because I've suspected a <recent> loss of FPS in Borderlands 2, a game I play regularly.
I'll have to check what nvidia driver I'm using at the moment on my 970. I wish we had a field and an API here on GoL to autofill the current video driver version or a commit/version of a Mesa build (or even most of the fields). These days it's as crucial as the hardware configuration. It could be as simple as a cronjob/systemd timer that runs a script once a day. The script collects that info and updates it on the site using a private API token for the user. Also would be great to have a BB-code (and a corresponding button to insert it) like [mygpu] or [myhw] that inserts all the relevant data into your comment as it is at the moment of posting, so it doesn't autoupdate later. The use case is simple: a new game/port comes out and many people post things like "it runs great for me!" or "stutters like a dying horse" and so on. It would be handy to put that tag so that others know on what exact configuration it happens, without the need to click "View PC info" each time, which can also be updated later so it becomes irrelevant. What do you think, Liam?

Quoting: emptythevoid — This is interesting because I've suspected a <recent> loss of FPS in Borderlands 2, a game I play regularly. I'll have to check what nvidia driver I'm using at the moment on my 970.

I contacted Aspyr about my Borderlands 2 and TPS performance drop. I have experienced it on the stable 384.98 and the 387.34 drivers. They could not reproduce my problem. Hopefully this gets fixed with the newer NVIDIA drivers.

Quoting: lucifertdark — I would much prefer to have them say "we found a bug & fixed it" rather than "we found a bug & we'll fix it some time in the next month or so"; fix it, then tell us about it.

I'm sure that NVIDIA doing this makes it easier on Feral since they are receiving support tickets for their games. It also makes me feel less crazy about my performance issues when support can't reproduce them.
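The cron-job idea from the comment is easy to prototype. A minimal sketch, assuming the driver version can be parsed out of /proc/driver/nvidia/version; the GoL profile API endpoint shown in the trailing comment is purely hypothetical:

```python
import re

def parse_driver_version(proc_text: str) -> str:
    """Extract the driver version from /proc/driver/nvidia/version-style text."""
    m = re.search(r"Kernel Module\s+(\d+(?:\.\d+)+)", proc_text)
    if not m:
        raise ValueError("no NVIDIA driver version found")
    return m.group(1)

# Sample of the file's format (real content varies by driver and kernel):
SAMPLE = "NVRM version: NVIDIA UNIX x86_64 Kernel Module  387.34  Tue Nov 21 03:09:00 PST 2017"
version = parse_driver_version(SAMPLE)
print(version)  # 387.34

# A daily systemd timer or cron job might then POST this to a (hypothetical) API:
#   requests.post("https://www.gamingonlinux.com/api/profile",
#                 headers={"Authorization": f"Bearer {API_TOKEN}"},
#                 json={"gpu_driver": version})
```

On a real system you would read the text with `open("/proc/driver/nvidia/version").read()` (NVIDIA proprietary driver only; Mesa users would query `glxinfo` or similar instead).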
I am an assistant professor in the Microelectronics and VLSI group of the Department of Electrical Engineering at the Indian Institute of Technology Kanpur and a member of the solid-state circuit design lab. I work in the area of analog integrated circuit design, primarily aimed at signal processing. Primarily due to my interest in teaching analog circuits, and to ensure its reach beyond the walled classrooms, I record my lectures for public viewing. You can access them here. I obtained my Ph.D. in Electrical Engineering from the Indian Institute of Technology Madras, where I worked under the supervision of Prof. Nagendra Krishnapura. During my Ph.D. I worked on the design, analysis and realization of wideband, tunable true-time-delay elements on integrated circuits and demonstrated their efficacy in implementing expansion and compression of narrow, wideband pulses. I obtained my M.Tech degree from the Indian Institute of Technology Kanpur in 2007 and my B.Tech degree from Kalyani Govt. Engineering College, West Bengal in 2005. During my M.Tech, I worked with Prof. Aloke Dutta on modeling gate tunneling currents for submicron MOS devices. Between 2007 and 2011, I was with Cypress Semiconductor Technology India Pvt. Ltd, Bangalore, where I was involved in the design of power management circuits for non-volatile SRAMs.

Aug 2023: Aasif's paper titled "A Low-Loss, Compact Wideband True-Time-Delay Line for Sub-6GHz Applications using N-Path Filters" has been accepted for presentation at APCCAS 2023. Congratulations Aasif!

Aug 2023: Mayank's paper titled "A Robust Overdesign Prevention Circuit Technique Under Widely Varying Ambient Conditions" has been accepted for presentation at APCCAS 2023. Congratulations Mayank!

July 2023: Kunal and Vibhor defend their M.Tech theses. Congratulations to both of them!

July 2023: Sushil bags a proficiency medal in the 2023 convocation. Congratulations Sushil!

June 2023: Sushil, Vinayak and Kaustubh defend their BT-MT theses.
Congratulations to all of them!

Feb 2023: Kaustubh has been selected for the IEEE MTT-S undergraduate scholarship. Congratulations Kaustubh!

Jan 2023: Abhishek's paper titled "An Automatic Leakage Compensation Technique for Capacitively Coupled Class-AB Operational Amplifiers" got accepted for publication in IEEE ISCAS 2023. Congratulations Abhishek!

Oct 2022: Nitish defends his BT-MT thesis. Congratulations Nitish.

Jan 2022: Mayank and Harshit's paper titled "Bandwidth-Enhanced Feedforward Amplifier with Shared Class-AB Gain and Compensation Paths" to be presented at IEEE ISCAS 2022.

Dec 2021: Kautuk defends his M.Tech project.

Nov 2021: Mayank receives the PMRF scholarship for pursuing his Ph.D. work. Congratulations Mayank!

Aug 2021: Mayank defended his BT-MT project and joined the group as a Ph.D. candidate.

Jan 2021: Harshit's paper titled "Breaking the trade-off between bandwidth and close-in blocker attenuation in an N-path filter" got accepted at IEEE ISCAS 2021.

Oct 2020: Paper titled "Analysis and comparison of distortion of Miller and feed-forward opamps in negative feedback" presented at ISCAS 2020.

Sep 2020: Our paper titled "Effects of AC Response Imperfections in True-Time-Delay Lines" got published in IEEE TCAS II.

July 2020: Parthasarathi, Anteshwar and Prashanth presented their M.Tech work and defended their theses.

July 2020: Our team comprising Mohamad Aasif Bhat and Anteshwar Chimadge was among the winners of the Qualcomm Innovation Fellowship India 2020 program. Congratulations to Aasif!
The last release is included at the root of the archive directory, but if you want to re-build the library, you need to have Ant installed. The ./build/build.xml file contains the following Ant targets:

Then, you have to set up the Velosurf tools by means of /WEB-INF/toolbox.xml (for 1.4):

<?xml version="1.0"?>
<toolbox>
  <!-- toolbox file for Velocity View 1.4 -->
  <!-- ...other tools... -->
  <!-- http query parameters tool: You can either use
       org.apache.velocity.tools.view.tools.ParameterParser or
       velosurf.web.HttpQueryTool, which inherits the former to add a generic
       setter - in clear, if using VelocityTools 1.4 or prior, you have to use
       the Velosurf version if you want to be able to add values to the tool
       like with #set($query.foo='bar'). -->
  <tool>
    <key>query</key>
    <scope>request</scope>
    <class>velosurf.web.HttpQueryTool</class>
  </tool>
  <!-- database -->
  <tool>
    <key>db</key>
    <scope>request</scope>
    <class>velosurf.web.VelosurfTool</class>
    <!-- uncomment the next line to use a custom model file -->
    <!-- <param name='config' value='./WEB-INF/mymodel.xml'/> -->
  </tool>
</toolbox>

or of /WEB-INF/tools.xml (for 2.0+):

<?xml version="1.0"?>
<tools>
  <!-- toolbox file for Velocity View 2.0+ -->
  <!-- ... other toolboxes ... -->
  <toolbox scope="request"> <!-- or other scopes, see below -->
    <!-- ... other tools ... -->
    <tool key="db" class="velosurf.web.VelosurfTool"/>
    <!-- add «config="/WEB-INF/mymodel.xml"» for a custom model file -->
  </toolbox>
</tools>

The velosurf.web.VelosurfTool object is a tiny wrapper that doesn't introduce any overhead when instantiated for each request, but you may choose to use it in a session or application scope if you do not use per-request refinements or orderings.
Alternatively, the name of the configuration file can also be specified in the velosurf.config servlet context parameter (this latter method must be chosen when using Velosurf authentication or localization filters that rely on the database, since the toolbox is not yet initialized at the time the filters are initialized). In that case, you'll have to put the corresponding declaration in your webapp's /WEB-INF/web.xml.

By default, the database will be reverse engineered: each table becomes an entity, each column an attribute, and each foreign key will produce two attributes (see the paragraph about foreign keys in the User Guide). You can thus start with the minimal configuration, which only specifies the database connection parameters (don't forget to specify the schema if you use one):

<?xml version="1.0"?>
<database user='login' password='password' url='database_url' schema='the_schema'>
</database>

You can then check your installation with a very simple template that displays a few values taken from the database. The model file supports the XML XInclude mechanism, which lets you split a big model into many files.
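Such a check template could look like the following. The users table and its login column are hypothetical; substitute any table from your own schema (reverse engineering exposes each table as $db.<table>):

```
## quick sanity check: list a few values pulled straight from the database
<ul>
#foreach( $user in $db.users )
  <li>$user.login</li>
#end
</ul>
```

If the page renders a list instead of an error, the toolbox wiring and the database connection are both working.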
Preview/Forced view shows OSX screen saver, not Aerial.

General troubleshooting tips

Before logging an issue please check that:

[x] You have the latest version installed (there may be a beta version that fixes your issue); see here for the latest releases and bug fixes: https://github.com/JohnCoates/Aerial/releases
[x] Your issue isn't already mentioned in our issues. You may find a workaround there or a similar request already made.
[x] Your problem isn't mentioned in the troubleshooting page.

If none of this fixes your issue, tell us about the problem you are experiencing or the feature you'd like to request.

Required information

In order to help us sort your issue, we ask that you provide the following information:

[ ] Mac model: Macbook Pro
[ ] macOS version: 11.3.1
[ ] Monitor setup:

Description of issue / Feature request

When I preview, or force the screensaver with hot corners, I get another built-in OSX screen saver, not this one. It could be a lockdown setting or some such (this is a work computer), but every other aspect (selecting it, options) seems to work.

Hey @cfjedimaster. Just to clarify, do you see a black screen for a few seconds THEN the other screen saver launches, or does it just go straight to a built-in saver (if so, which?). Reason I'm asking is that in the event of a crash of a saver, macOS may launch another "safe" built-in saver in its place. But it's usually pretty obvious that something was wrong (black screen hanging on for a few seconds, etc.). I would suggest double checking in Console.app to see if you have any crash of legacyScreenSaver, ScreenSaverEngine, or Aerial, just to be sure, but this may be your machine being locked down here (I'm unfamiliar with the various locking down mechanisms, so it's unclear how they work). Here's another suggestion, try this in Terminal (make sure you've selected Aerial first in System Preferences, then closed it):

defaults -currentHost read com.apple.screensaver

Let me know what that says.
You should see a section like this:

moduleDict = {
    moduleName = Aerial;
    path = "/Users/you/Library/Screen Savers/Aerial.saver";
    type = 0;
};

It's possible that System Preferences isn't able to change that file, and thus you're not able to change the saver.

Nope, I don't see a black screen (well, I do, but for like one second, if that). Running that check gives me:

{
    CleanExit = YES;
    PrefsVersion = 100;
    idleTime = 300;
    moduleDict = {
        moduleName = Aerial;
        path = "/Users/raycamde/Library/Screen Savers/Aerial.saver";
        type = 0;
    };
    tokenRemovalAction = 0;
}

Ok, so very likely not a crash, though please check Console.app as mentioned above. The other thing I can think of is, in Console.app too, filter with saver top right and start monitoring, then launch the saver via a hot corner, and paste the log lines you see. There might be some relevant info there that could explain what's happening to you.

Couldn't find anything. I swear this feels like something related to IT locking stuff down, but locking the screensaver seems like a weird thing. I appreciate your help.

No problem, it kinda looks like it. You could double check with another one such as Fliqlo: https://fliqlo.com/screensaver/ and see if you hit the same locking-down issue.
how to find the coordinates of the tangent at 45 degrees of a polynomial function

I'm stuck! I need to find the coordinates of the point where the tangent is at $45$ degrees for this function: $$y=0.3582x^5-2.2501x^4+4.6235x^3-4.5377x^2+2.4503x+0.3513$$ I need to know where the $45$-degree tangent point is on the graph. By looking at the graph in Excel (both the $x$ and $y$ axes vary from $0\%$ to $100\%$ only), the coordinates of the tangent at $45$ degrees should be between 75% and 80%. I have found the derivative: $$y'=1.791x^4-9.0004x^3+13.8705x^2-9.0754x+2.4503$$ I have been told that the slope of a tangent at $45$ degrees is $1$; therefore, if I replace $x=1$ in the derivative, I will get the $y$ value. Now what should I do, and is the process accurate? Thanks.

No. Set the derivative equal to $1$ and solve for $x$. This will give you the $x$-coordinate you seek, so then you plug this value into the original function to get the $y$-coordinate.

Scott, thank you. I will try it, as I will have to isolate the $y$. I'll let you know.

Since the setting of the problem is numerical, I will also give a rather numerical answer. We have to solve the equation $$f'(x)=1\ .$$ The following lines of Sage code

R.<x> = PolynomialRing(RR)
f = 0.3582*x^5 - 2.2501*x^4 + 4.6235*x^3 - 4.5377*x^2 + 2.4503*x + 0.3513
A = ( diff(f,x) - 1 ).roots(multiplicities=False)
print A
plot( [f] + [f(a)+(x-a) for a in A ], (x, -0.8, 3.6), aspect_ratio = 1 )

find the following approximate roots of $f'(x)-1$ (diff(f,x) - 1 in code):

[0.228077877625767, 2.95170314961792]

The plot gives the picture:
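Scott's recipe (set $f'(x)=1$, solve for $x$, then evaluate $f$) can also be carried out without Sage. A plain-Python sketch using bisection, with brackets read off the two sign changes of $f'(x)-1$:

```python
def f(x):
    return 0.3582*x**5 - 2.2501*x**4 + 4.6235*x**3 - 4.5377*x**2 + 2.4503*x + 0.3513

def fprime(x):
    # derivative of f, term by term
    return 5*0.3582*x**4 - 4*2.2501*x**3 + 3*4.6235*x**2 - 2*4.5377*x + 2.4503

def bisect(g, lo, hi, tol=1e-12):
    """Bisection root finder; assumes g(lo) and g(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

g = lambda x: fprime(x) - 1                       # slope-1 (45 degree) condition
xs = [bisect(g, 0.0, 0.5), bisect(g, 2.5, 3.0)]   # brackets read off the graph
points = [(x, f(x)) for x in xs]
print(xs)  # approximately [0.22808, 2.95170], matching the Sage roots
```

The two `points` are the coordinates of the tangency points; on the 0–100% plot the second one is the root the asker was eyeballing.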
<?php

namespace Core\Formatter;

class TableStyle
{
    private $paddingChar = ' ';
    private $horizontalBorderChar = '-';
    private $verticalBorderChar = '|';
    private $crossBorderChar = '+';
    private $padType = STR_PAD_RIGHT;

    public function setPaddingChar($paddingChar)
    {
        if (!$paddingChar) {
            throw new \Exception('The padding char must not be empty');
        }
        $this->paddingChar = $paddingChar;
        return $this;
    }

    public function getPaddingChar()
    {
        return $this->paddingChar;
    }

    public function setHorizontalBorderChar($borderChar)
    {
        if (!$borderChar) {
            throw new \Exception('The horizontal border char must not be empty');
        }
        $this->horizontalBorderChar = $borderChar;
        return $this;
    }

    public function getHorizontalBorderChar()
    {
        return $this->horizontalBorderChar;
    }

    public function setVerticalBorderChar($borderChar)
    {
        if (!$borderChar) {
            throw new \Exception('The vertical border char must not be empty');
        }
        $this->verticalBorderChar = $borderChar;
        return $this;
    }

    public function getVerticalBorderChar()
    {
        return $this->verticalBorderChar;
    }

    public function setCrossBorderChar($borderChar)
    {
        if (!$borderChar) {
            throw new \Exception('The cross border char must not be empty');
        }
        $this->crossBorderChar = $borderChar;
        return $this;
    }

    public function getCrossBorderChar()
    {
        return $this->crossBorderChar;
    }

    public function setPadType($padType)
    {
        if (!in_array($padType, array(STR_PAD_LEFT, STR_PAD_RIGHT, STR_PAD_BOTH), true)) {
            throw new \InvalidArgumentException('The padType must be one of the following: STR_PAD_LEFT, STR_PAD_RIGHT, STR_PAD_BOTH.');
        }
        $this->padType = $padType;
        return $this;
    }

    public function getPadType()
    {
        return $this->padType;
    }
}
Usage of --remove-source-files option of rsync

From the manpage of rsync:

--remove-source-files
This tells rsync to remove from the sending side the files (meaning non-directories) that are a part of the transfer and have been successfully duplicated on the receiving side.

Does it mean files on the sending side that are either part of the transfer or duplicated on the receiving side? Can I also remove directories on the sending side?

"Note that you should only use this option on source files that are quiescent." What does "source files that are quiescent" mean?

"If you are using this to move files that show up in a particular directory over to another host, make sure that the finished files get renamed into the source directory, not directly written into it, so that rsync can't possibly transfer a file that is not yet fully written." What does this mean?

"If you can't first write the files into a different directory, you should use a naming idiom that lets rsync avoid transferring files that are not yet finished (e.g. name the file "foo.new" when it is written, rename it to "foo" when it is done, and then use the option --exclude='*.new' for the rsync transfer)." What does this mean?

"Starting with 3.1.0, rsync will skip the sender-side removal (and output an error) if the file's size or modify time has not stayed unchanged." What does this mean?

Thanks.

Q: Does it mean files on the sending side that are either part of the transfer or duplicated on the receiving side?
A: Both.

Q: Can I also remove directories on the sending side?
A: Yes. Use --remove-source-files, then issue the command

find <source_directory> -type d -empty -delete

or

find <source_directory> \( -type d -o -type l \) -empty -delete

(to include symlinks in the deletion). (Was: --remove-source-files then issue the command rm -rf <source_directory>.)

WARNING: As mentioned in OrangeDog's comment, the rm -rf suggestion is unsafe.
Specifically, any files that were for any reason not transferred (file changed between building the transfer list and starting to actually transfer that file, receiving side ran out of disk space, network connection dropped, etc.) will be left untouched in the source directory by rsync — but after your rm -rf invocation they're just gone. The find command above will recursively delete the empty source tree if all the source files have been successfully transferred and removed, but will leave alone any remaining files (and their containing directories, of course).

Q: What does "source files that are quiescent" mean?
A: It means files that have been fully written to and closed.

Q: "If you are using this to move files that show up in a particular directory over to another host, make sure that the finished files get renamed into the source directory, not directly written into it, so that rsync can't possibly transfer a file that is not yet fully written." What does this mean?
A: It means exactly what I said above: write the file elsewhere, then rename it into the source directory once it is complete, so rsync can never see a half-written file.

Q: "If you can't first write the files into a different directory, you should use a naming idiom that lets rsync avoid transferring files that are not yet finished (e.g. name the file "foo.new" when it is written, rename it to "foo" when it is done, and then use the option --exclude='*.new' for the rsync transfer)." What does this mean?
A: It means that rsync first makes a list of files to be transferred and then writes them to the destination directory; thus, if a file might not be finished yet, give it a temporary name while it is being written, rename it when it is done, and exclude the temporary names with the --exclude option.

Q: "Starting with 3.1.0, rsync will skip the sender-side removal (and output an error) if the file's size or modify time has not stayed unchanged." What does this mean?
A: If RSYNC detects, when it's about to write the file to the destination directory, that the file size has changed between the time it scanned it and the time it actually writes it to the destination directory, then RSYNC will skip the file. Thanks. For the last question, the manpage says "rsync will skip the sender-side removal (and output an error)" and you wrote "RSYNC will skip the file". Will RSYNC skip transferring the file, or will it skip removing the sender-side file after transferring it? @Tim It will skip removing the file from the sender side. This check is done AFTER the file has been transferred. rm -rf <source_directory> is NOT SAFE. That will delete everything, including files that weren't successfully synced. You need something like find <source_directory> -type d -empty -delete instead. When run with --remove-source-files, does rsync perform a checksum verification of the copied file before removing it from the source? Contrary to another answer on this question, it can be safe to mix --remove-source-files and rm -rf under one certain condition. However, let's back up a little to respond to your other specific questions. The --remove-source-files flag will arrange for files to be removed after they have been correctly and completely copied to the destination. They might have been copied during this session or during some possible earlier session; it doesn't matter. The flag does not remove directories on the sending side, only files. The usual approach is to naïvely call rm -rf. This can be disastrous in the event of a failed transfer. The solution here is to ensure that it is only called once rsync has completely and correctly transferred all the files: rsync -a src/ dst && rm -rf src The caveat is that there is an implicit race condition here.
If a file in the src tree is created or modified after the rsync has processed it, it will be silently and completely deleted by the rm despite no longer being correctly synchronised to the remote system. If you control the files in your src tree and can be assured this will not occur you are safe to use this rsync && rm construct. Newer versions of rsync can detect files that have changed during their copy. (Not during the entire copy of src, but just during the copy of src/.../file.) They do this by comparing the size and modification time of the file at the start and end of the file copy. It is an error if the file has changed.
The most-used Linux distributions in penetration testing rely on Debian package management, since it's easy to use and has a bug tracking system as well. Kali Linux by Offensive Security is one of the greatest security distributions ever, alongside Blackbuntu, BlackArch, Matriux, and many more. What is the purpose of Kali Linux? What is Kali Linux used for? Kali Linux is mainly used for advanced Penetration Testing and Security Auditing. Kali contains several hundred tools which are geared towards various information security tasks, such as Penetration Testing, Security Research, Computer Forensics and Reverse Engineering. Is Linux important for cyber security? Linux plays an incredibly important part in the job of a cybersecurity professional. Specialized Linux distributions such as Kali Linux are used by cybersecurity professionals to perform in-depth penetration testing and vulnerability assessments, as well as provide forensic analysis after a security breach. What is special about Kali Linux? Kali Linux is a fairly focused distro designed for penetration testing. It does have a few unique packages, but it's also set up in somewhat of a strange way. … Kali's a Debian derivative, and a modern version of Debian or Ubuntu has better hardware support. You might also be able to find repositories with the same tools Kali does. Why do hackers use Linux? Linux is an extremely popular operating system for hackers. There are two main reasons behind this. First off, Linux's source code is freely available because it is an open source operating system. … This type of Linux hacking is done in order to gain unauthorized access to systems and steal data. Is Kali Linux good for beginners? Nothing on the project's website suggests it is a good distribution for beginners or, in fact, anyone other than security researchers. In fact, the Kali website specifically warns people about its nature. … Kali Linux is good at what it does: acting as a platform for up-to-date security utilities.
Where do I start in cyber security? There are many places offering free training in cybersecurity and all of the related skills we mentioned above, from online education providers like Coursera, edX, Udemy and Cybrary, to programming challenges in platforms like Codewars, online hacking challenges and CTF (Capture the Flag) competitions. Is it hard to get into cyber security? It is not hard to get a job in cybersecurity. The number of positions is growing, with the Bureau of Labor Statistics expecting the field to increase more than 30% over the next ten years. Most hiring managers emphasize soft skills for entry-level candidates, with most of the technical skills learned on the job. Which language is best for cyber security? 5 Best Cyber Security Programming Languages to Learn: - C and C++: C is one of the oldest programming languages. … - Python: a general-purpose, object-oriented, high-level programming language. … - PHP: a server-side programming language that is used to develop websites. … 15 Sept 2020 Is Kali better than Ubuntu? Ubuntu doesn't come packed with hacking and penetration testing tools. Kali comes packed with hacking and penetration testing tools. … Ubuntu is a good option for beginners to Linux. Kali Linux is a good option for those who are intermediate in Linux. Is Kali Linux dangerous? The answer is: yes, Kali Linux is the security distribution of Linux, used by security professionals for pentesting; like any other OS such as Windows or macOS, it's safe to use. Originally Answered: Can Kali Linux be dangerous to use? Is Kali Linux illegal? Originally Answered: If we install Kali Linux, is it illegal or legal? It's totally legal, as the Kali official website (the Penetration Testing and Ethical Hacking Linux Distribution) only provides you the ISO file for free, and it's totally safe. … Kali Linux is an open-source operating system, so it is completely legal. Can Linux be hacked? The clear answer is YES.
There are viruses, trojans, worms, and other types of malware that affect the Linux operating system, but not many. Very few viruses target Linux, and most are not of the high quality of the Windows-style viruses that can cause doom for you. Is it worth switching to Linux? If you like to have transparency in what you use on a day-to-day basis, Linux (in general) is the perfect choice. Unlike Windows/macOS, Linux relies on the concept of open-source software. So, you can easily review the source code of your operating system to see how it works or how it handles your data. Can I hack with Ubuntu? Linux is open source, and the source code can be obtained by anyone. This makes it easy to spot the vulnerabilities. It is one of the best OSes for hackers. Basic hacking and networking commands in Ubuntu are valuable to Linux hackers.
SAN JOSE—Apple detailed its next major operating-system update at its Worldwide Developers Conference keynote today: macOS 10.14, which the company has named "Mojave" in keeping with its California-based naming convention. Tim Cook told the audience that macOS included a lot of new features for both everyday and pro users, and Craig Federighi kicked off the demo with something that will likely be near and dear to many Ars readers' hearts: dark mode. Night-owls and others who prefer a light-on-dark appearance can now take advantage of an official dark theme for the entire OS. Previously macOS allowed turning the menu bar and dock dark, but this new preference appears to apply more extensively throughout the operating system. The new dark system theme goes well with a matching one for Xcode, enabling developers to bathe their development environment in cooler dark colors. Federighi showed off a live desktop wallpaper updating function that changes your wallpaper throughout the day, but far more interesting was the "desktop stacks" feature, which allows you to organize icons on the desktop into piles, rather than having them spread across the entire desktop willy-nilly. On activating the option, files on your desktop are auto-arranged into stacks based on selectable criteria, such as document kind, date, or tag. Some Finder love Everybody's favorite punching bag, the Finder, will be receiving some improvements. Federighi showed off a new "Gallery" view mode, which allows a single selected file to dominate the entire Finder window. Documents displayed this way can also have their metadata displayed in a sidebar rather than having to call up a separate Info window. The new informative sidebar shows up in other views, as well, and includes a "quick actions" toolbar at the bottom that will display contextually appropriate options for the file being displayed. Additionally, users can add custom actions (like Automator scripts) to the sidebar.
Federighi also demonstrated improved functionality inside of the Finder's Quicklook function, showing off the ability to annotate and edit images and PDFs inside Quicklook without having to pull up a separate application. macOS' screenshot functionality gets buffed with an iOS-like "accelerated workflow" tool. You'll now have the ability to directly edit screenshots on creation with a small editor window. Additionally, the screenshot tool now includes screen recording—previously, macOS users had to jump into the QuickTime application to do screen capturing. Earlier versions of macOS have the ability to exchange application state data with iOS and watchOS devices via the "Continuity" feature, but that feature is being expanded in macOS 10.14 with "Continuity Camera." Federighi demonstrated this by snapping an image of himself on an iPhone, then having macOS detect his face in that image and auto-insert it into a photo template. Further demonstration involved taking a picture of a magazine cover and having it auto-straighten itself before being inserted into a presentation. Mojave will catch up to iOS with several new default apps that have been previously absent on the desktop. Apple News will now be available on Macs, as will Stocks, if stocks are your thing. Additionally, Voice Memos and Home will now be available on the desktop. Home-automation-heavy users will definitely appreciate being able to play in Apple's HomeKit ecosystem without having to depend on an iPhone for management tasks. Security and privacy Recognizing that computers contain more and more sensitive personal information, Federighi walked attendees through a number of security improvements in Mojave intended to grant users more control over how their personal data is utilized by installed applications.
Noting that macOS provides "API-level protection" of a number of system resources and files, Federighi explained that the default list of protected locations and restrictive permissions have been increased significantly. Along the same lines, Apple's Safari browser now contains additional tracking protection—"This year," Federighi said, "we are shutting that down." Safari will now, by default, block intrusive Web-tracking scripts and tools like those commonly attached to social buttons or off-site embedded comment systems. Safari will now also take steps to combat canvas fingerprinting, a technique that allows website operators to track individual users by constructing a unique profile of their browser's characteristics. Safari's implementation of anti-fingerprinting technology limits the information sites are allowed to query from the browser, increasing anonymity by making your instance of Safari look more or less the same as everyone else's. App Store update The Mac App Store has been updated, and it now looks and functions considerably more like the iOS App Store. There's considerably less reason for desktop users to bother with the App Store than there is on a phone or tablet, and the App Store has always felt a bit lackluster compared to its mobile counterpart. The redesign brings more (and presumably smarter) recommendations and a new categorization system. To incentivize users a bit, Apple has partnered with Adobe and Microsoft to bring some heavy-hitting apps to the Mac App Store: Office 365 will be there later this year, as will Lightroom CC. Federighi then turned to discussing some under-the-hood details about technologies that underpin Mojave. First was the inevitable appearance of Metal, Apple's proprietary 3D graphics API—the news here is that Metal will support up to four external GPUs on Mojave. Federighi then showed off a scene from Unity's Book of the Dead demo running on a MacBook equipped with an external GPU.
This is likely a sign of Apple's preferred user strategy when it comes to higher-end video cards—eGPU is the way forward. On top of video games, Metal's performance shaders have applications in machine learning, and Apple's new "Create ML" application aims to allow users to create machine-learning models without necessarily having to be a machine-learning expert—users merely have to know some Swift. The big question: One OS to rule them all? In a move that likely brought sighs of relief to most of the audience (as well as many Ars readers), Federighi bluntly addressed the question of whether or not Apple has plans to merge iOS and macOS with a firm "No." This is the same answer Tim Cook and others at Apple have given whenever the question popped up, and it's good to hear it reiterated. Rather than merging the two operating systems, Federighi let the audience in on a poorly kept secret: iOS apps are eventually coming to the Mac desktop (though not yet). The common, shared foundations of macOS and iOS mean that, while the user interfaces and use contexts of applications are different, the Mac's more expansive display canvas and power make it relatively easy to adapt iOS-specific functionality in apps to the way the Mac works. "Phase one of this is to test it on ourselves," Federighi continued, saying that the project to run iOS apps on macOS is currently in its initial stages. Federighi says several of today's announced macOS updates—specifically Apple News, Stocks, Voice Memos, and Home—were adapted to macOS with "very few code changes." Mojave was not given a formal release date in the keynote, though a Mojave beta will be released to developers today. The developer community will gain access to the tools necessary to port iOS apps to the desktop at some point in 2019.
A mock object is a very powerful tool, providing two major benefits: isolation and introspection. But like all power tools, mocks come with a cost. So where and when should you use them? What is the cost/benefit trade-off? Let's look at the extremes. ##No Mocks. Consider the test suite for a large web application. Let's assume that test suite uses no mocks at all. What problems will it face? The execution of the test suite will probably be very slow: dozens of minutes – perhaps hours. Web servers, databases, and services over the network run thousands of times slower than computer instructions; thus impeding the speed of the tests. Testing just one if statement within a business rule may require many database queries and many web server round-trips. The coverage provided by that test suite will likely be low. Error conditions and exceptions are nearly impossible to test without mocks that can simulate those errors. Functions that perform dangerous tasks, such as deleting files, or deleting database tables, are difficult to safely test without mocks. Reaching every state and transition of coupled finite state machines, such as communication protocols, is hard to achieve without mocks. The tests are sensitive to faults in parts of the system that are not related to what is being tested. For example: Network timings can be thrown off by unexpected computer load. Databases may contain extra or missing rows. Configuration files may have been modified. Memory might be consumed by some other process. The test suite may require special network connections that are down. The test suite may require a special execution platform, similar to the production system. So, without mocks, tests tend to be slow, incomplete, and fragile. That sounds like a strong case for using mocks. But mocks have costs too. ##Too Many Mocks Consider that same large web application, but this time with a test suite that imposes mocks between all the classes. What problems will it face?
Ironically, some mocking systems depend strongly on reflection, and are therefore very slow. When you mock out the interaction between two classes with something slower than those two classes, you can pay a pretty hefty price. Mocking the interactions between all classes forces you to create mocks that return other mocks (that might return yet other mocks). You have to mock out all the data pathways in the interaction; and that can be a complex task. This creates several problems. - The setup code can get extremely complicated. - The mocking structure becomes tightly coupled to implementation details, causing many tests to break when those details are modified. - The need to mock every class interaction forces an explosion of polymorphic interfaces. In statically typed languages like Java, that means the creation of lots of extra interface classes whose sole purpose is to allow mocking. This is over-abstraction and the dreaded "design damage". So if you mock too much you may wind up with test suites that are slow, fragile, and complicated; and you may also damage the design of your application. ##Goldilocks Mocks Clearly the answer is somewhere in between these two extremes. But where? Here are the heuristics that I have chosen: - Mock across architecturally significant boundaries, but not within those boundaries. For example, mock out the database, web server, and any external service. This provides many benefits: - The tests run much faster. - The tests are not sensitive to failures and configurations of the mocked out components. - It is easy to test all the failure scenarios generated by the mocked out components. - Every pathway of coupled finite state machines across that boundary can be tested. - You generally don't create mocks that return other mocks, so your setup code stays much cleaner. Another big benefit of this approach is that it forces you to think through what your significant architectural boundaries are; and enforce them with polymorphic interfaces.
This allows you to manage the dependencies across those boundaries so that you can independently deploy (and develop) the components on either side of the boundary. This separation of architectural concerns has been a mainstay of good software design for the last four decades at least. Good software developers pursued such separation long before Test Driven Development became popular. So it is ironic that striking the right balance of isolation and speed in our tests is so strongly related to this separation. The implication is that good architectures are inherently testable. - Write your own mocks. I don’t often use mocking tools. I find that if I restrict my mocking to architectural boundaries, I rarely need them. Mocking tools are very powerful, and there are times when they can be quite useful. For example, they can override sealed or final interfaces. However, that power is seldom necessary; and comes at a significant cost. Most mocking tools have their own domain specific language that you must learn in order to use them. These languages are usually some melange of dots and parentheses that look like gibberish to the uninitiated. I prefer to limit the number of languages in my systems, so I avoid their use. Mocking across architectural boundaries is easy. Writing those mocks is trivial. What’s more, the IDE does nearly all the work for you. You simply point it at an interface and tell it to implement that interface and, voila!, you have the skeleton of a mock. Writing your own mocks forces you to give those mocks names, and put them in special directories. You’ll find this useful because you are very likely to reuse them from test to test. Writing your own mocks means you have to design your mocking structure. And that’s never a bad idea. When you write your own mocks, you aren’t using reflection, so your mocks will almost always be extremely fast. Of course your mileage may vary. These are my heuristics, not yours. 
You may wish to adopt them to one extent or another; but remember that heuristics are just guidelines, not rules. I violate my own heuristics when given sufficient reason. In short, however, I recommend that you mock sparingly. Find a way to test – design a way to test – your code so that it doesn’t require a mock. Reserve mocking for architecturally significant boundaries; and then be ruthless about it. Those are the important boundaries of your system and they need to be managed, not just for tests, but for everything. And write your own mocks. Try to depend on mocking tools as little as possible. Or, if you decide to use a mocking tool, use it with a very light touch. If you follow these heuristics you’ll find that your test suites run faster, are less fragile, have higher coverage, and that the designs of your applications are better.
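As an illustration of the "write your own mocks" heuristic, here is a minimal hand-rolled mock in Python. The gateway interface, names, and scenario are invented for this sketch and are not from the original post:

```python
# A hand-rolled mock for an architecturally significant boundary: the
# data-access gateway behind a business rule. No mocking framework, no
# reflection -- just a tiny class that records calls and returns canned
# answers.

class UserGateway:
    """The boundary interface the production code depends on."""
    def find_user(self, user_id):
        raise NotImplementedError

class UserGatewayMock(UserGateway):
    def __init__(self, canned_user=None, fail=False):
        self.canned_user = canned_user
        self.fail = fail
        self.requested_ids = []          # introspection: records what was asked

    def find_user(self, user_id):
        self.requested_ids.append(user_id)
        if self.fail:                    # isolation: simulate a DB outage
            raise ConnectionError("database unreachable")
        return self.canned_user

def greeting_for(gateway, user_id):
    """A business rule under test; it never touches a real database."""
    user = gateway.find_user(user_id)
    return f"Hello, {user['name']}!" if user else "Hello, stranger!"

# The tests stay fast, and the failure path is one flag away.
mock = UserGatewayMock(canned_user={"name": "Ada"})
assert greeting_for(mock, 42) == "Hello, Ada!"
assert mock.requested_ids == [42]
```

Because the mock sits at an architectural boundary (the data-access gateway), the business rule is tested in isolation and error scenarios are trivial to reach, with no mocking DSL involved.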
Is this an option? I have built a LOT of sites with Umbraco and would really like to get certified; however, I don't think I could learn much more than I already know by attending the level 1 school. $1850 is way too expensive for someone like me without the backing of a company to pay for it. I agree with Skiltz. Looking at the agenda for the course coming up in Auckland, I really couldn't justify the cost as someone who is already pretty confident with Umbraco. It seems that at the moment course attendance serves as a pseudo practical test for certification, as the exam at the end is quite short. Please correct me if I am wrong. I think I recall someone mentioning that there are plans in the future to develop a more in-depth exam that can be used independently of the course? The certification exam is 10 random questions, and you are allowed to miss 2 questions in order to pass. That can be good or bad. It requires a broader knowledge of Umbraco "just in case" you are asked the question. At Codegarden this year they discussed providing certification options without the class, but I don't know if they have made any final decisions. There are no current plans of providing certification tests outside the course. We believe it's much better to expand the availability of the courses worldwide. The course serves as a part of the certification test, as attendees solve 10-14 exercises along the way, and combined with the last random multiple-choice quiz it's a great screener. We've had a lot of people joining the training who considered themselves experts and only attended because their company sent them. Even the most skilled of them ended up learning quite a few tricks, with the majority being surprised how much smarter they could build sites.
I'd say that the productivity increase you'll experience, combined with the leads and confidence you get from potential clients, is well worth the cost (which is lower than competitors' course fees, btw). I like attending courses, and even in courses where I was proficient beforehand I have received some benefit. There is usually some trick or method that you aren't aware of, and the contacts you make are just as valuable. And you get some time away from your job, which is great when you're an employee ;) But now that I am self-employed I could not justify taking 2 days off projects for a course where I am 90% familiar with the material, for a certificate none of my customers are aware of. I'm not knocking the course - I'm sure it will be great for some people. I just wanted to let you know where I am at. Some ideas that might make it more attractive to others in my situation: - have a preview of one of the more advanced sections of the course - offer a money-back guarantee of some sort. Very subjective and I'm not sure how you would word it. - Reduce the price in NZ. A 2-day SQL programming course costs about $1200 here. I'd love to go, it's just too expensive. Hi Paul and Matthew. Thanks for your feedback. The price may seem high, but you need to consider the value of what you are receiving from the certification. There are a number of developers who have signed up and paid for the course who are self-employed and have taken the step to formalise their commitment to using Umbraco through the certification process. You may "know" the material, and I can say that one of the developers who is attending is in the same situation. He has been using Umbraco for around 2 years and has worked both in the UK and NZ as a contractor, but has chosen to get the certification just to boost his CV when applying for contract positions and to help with picking up direct work. I can also speak from experience on this.
Something else you may not have considered is that the course is 100% tax deductible in New Zealand. You can either pay the tax man, or get a certification with a number of benefits that outweigh the costs. Checking out the updated price is a shock. I mean, wow. A small car costs less, and a master's degree in NZ costs much less per course. Coming out of that would at least allow you to develop your own CMS. Come to think of it, is this by any chance an hourly rate from a consultant? In that case, why not go for the monitored-exam-only option? Unless your consultants really need the work. This is also a course with a wide entry, so trained professional software engineers can be lumped in with beginner developers at level one, so it is a shame there is no check or step-up option to the advanced level instead. That isn't to say that $1350 is a lot better than $1850; perhaps it would help if there were also some details to identify the course content. I know this is an old topic... but unfortunately I am experiencing the same. I wanted to attend the training, and asked my bosses for the company to sponsor it. But it's just too expensive, even with the 30% discount currently offered (which actually just ended yesterday). I was able to deploy our site to production and it is working really well, but I wanted to learn about best practices directly from Umbraco and not just by browsing the forums/community, which is hard to do unless you are searching for specific solutions to problems. Having formal training would really help solidify my knowledge and hopefully make it a lot easier to resolve common problems. The training price point is really just a blocker. I'd agree that the cost of the course is prohibitive - particularly since I am based in South Africa, where no courses are offered, and I would have to fly to another part of the world in order to attend. (Add on flights, accommodation, etc.
and considering the Rand vs Euro/dollar/pound exchange rate, it's just not an option for me.) I've been working with Umbraco 7+ solidly for 3 years now (after a break of a few years, having worked with v3 and v4 a few years before that) and have built many sites, extended Umbraco, built packages and property editors, customised the backoffice, etc., so I am pretty confident with Umbraco, but also acknowledge that there are things I don't know and I may be doing other things in a non-optimal way. Would Umbraco HQ consider running a course in South Africa, or an online course where physical attendance is not required? I am from Canada and I was thinking of doing the same thing. Unfortunately it costs me $5348 to get all the training :( and this is the bundled version. If I take the online North American version, I have to pay $1782/course. And what happens if you fail the exam? Do I have to pay up again? I've done 2 sites in Umbraco 4, 2 sites in Umbraco 6, 1 site I upgraded to Umbraco 7, and now 2 sites in Umbraco 7, plus my personal site is Umbraco 7. And countless times I have done customizations (especially hacking into the back office). I would love to know where my knowledge is at by taking a test and maybe getting certified.
Good to know: Event Cloud source The Radar Source is an event source. This means that it sends data as events, which are behaviors or occurrences tied to a user and a point in time. Data from these sources can be loaded into your Segment warehouses, and also sent to Segment streaming destinations. Learn more about cloud sources. This source is supported in US data processing regions. The Radar source is only supported in workspaces configured to process data in the US region. Workspaces configured with data processing regions outside of the US cannot connect to this source. For more information, see Regional Segment. Radar is the leading geofencing and location tracking platform. You can use Radar SDKs and APIs to build a wide range of location-based product and service experiences, including pickup and delivery tracking, location-triggered notifications, location verification, store locators, address autocomplete, and more. The Radar Segment Source is a Cloud-mode event source. Instead of packaging Radar's SDK using Segment as a wrapper, you include, configure, and initialize their SDK separately. Radar then sends all events that it detects and infers to Segment using its servers. As a result, only destinations that allow Cloud-mode are compatible with the Radar source. The Radar platform has three products: Geofencing and Place Detection, Trip Tracking, and APIs for Geocoding and Search. Geofencing and Place Detection: Radar geofencing is more powerful than native iOS or Android geofencing, with cross-platform support for unlimited geofences, polygon and isochrone geofences, stop detection, and accuracy down to 10 meters. Radar's Places feature allows you to instantly configure geofencing around thousands of chains (e.g., Starbucks, Walmart) and categories (e.g., airports, restaurants) with the click of a button.
Radar will generate a real time event on entry or exit of a custom or Places geofence to trigger messaging, drive behavioral insights, inform audience segmentation, and more. - Trip Tracking: Radar has a powerful trip tracking product that allows a brand to personalize the pickup and delivery experience for brick and mortar brands and logistics use cases. - Curbside Pickup and BOPIS - When a user places an order for pickup, offer them the option to share location to reduce wait times and, for restaurants, increase food freshness. Radar will track the user’s location while en route, provide staff with a real time ETA, and produce an arrival event which can trigger an Iterable notification to the user and/or staff. These features optimize operation efficiencies at the store and lead to a much stronger customer experience. - Delivery and Fleet Tracking - Track your in-house delivery team using a driver app to be able to send ETA updates and real time arrival notifications to the end user who is expecting the delivery. - Search and Geocoding APIs: Import and search your own location data, or tap into Radar’s best-in-class address and POI datasets. Use these APIs to power store finders, address autocomplete, forward and reverse geocoding, IP geocoding, and more. When you enable Radar as a Segment Source, you can forward Geofences, Places, Regions, and Trip Tracking data to your warehouse or destinations. The Radar source is currently in beta. Contact Radar to configure this source. Radar will send the following events to your Segment warehouses and destinations, depending on what products you enable in Radar. - Geofence Entered - Geofence Exited - Place Entered - Place Exited - Region Entered - Region Exited - Trip Started - Trip Updated - Trip Approaching Destination - Trip Arrived Destination - Trip Stopped For a complete view of the events that Radar passes into Segment, visit Radar’s Segment Events Mapping documentation. 
Radar User Traits

Radar will also send the following user traits to Segment, depending on Radar user state when Radar events are sent to Segment. For a complete view of the user attributes that Radar passes into Segment, visit Radar’s Segment User Mapping documentation.

| Trait | Type | Description |
| --- | --- | --- |
| radar_id | string | The ID of the user, provided by Radar. |
| radar_location_latitude | float | The latitude of the user’s last known location. |
| radar_location_longitude | float | The longitude of the user’s last known location. |
| radar_updated_at | string | The datetime when the user’s location was last updated. ISO string in UTC. |
|  | array | An array of IDs of the user’s last known geofences. |
|  | array | An array of descriptions of the user’s last known geofences. |
|  | array | An array of tags of the user’s last known geofences. |
|  | array | An array of external IDs of the user’s last known geofences. |
| radar_place_id | string | The ID of the user’s last known place, provided by Radar. |
| radar_place_name | string | The name of the user’s last known place. |
| radar_place_facebook_id | string | The Facebook ID of the user’s last known place. |
|  | array | List of the categories of the place. |
| radar_place_chain_name | string | The name of the chain of the user’s last known place. |
| radar_place_chain_slug | string | A human-readable unique ID for the chain of the user’s last known place. |
| radar_insights_state_home | boolean | A boolean indicating whether the user is at home, based on learned home location. |
| radar_insights_state_office | boolean | A boolean indicating whether the user is at the office, based on learned office location. |
| radar_insights_state_traveling | boolean | A boolean indicating whether the user is traveling, based on learned home location. |

This page was last modified: 25 Apr 2022
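As a purely illustrative sketch (the payload shape and every value below are invented, not taken from Radar's documentation), traits like these would arrive on a Segment identify call roughly as follows:

```python
# Hypothetical Radar user traits as they might appear on a Segment
# identify call; every value here is invented for illustration.
identify_traits = {
    "radar_id": "user_123",                      # string
    "radar_location_latitude": 40.7831,          # float
    "radar_location_longitude": -73.9712,        # float
    "radar_updated_at": "2022-04-25T12:00:00Z",  # ISO string in UTC
    "radar_place_name": "Example Cafe",
    "radar_place_chain_slug": "example-cafe",
    "radar_insights_state_home": False,
    "radar_insights_state_office": False,
    "radar_insights_state_traveling": True,
}
```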
This section explains the process for installing the Identity Synchronization for Windows Core on Solaris, Linux, and Windows operating systems. Before you install Core, you should be aware of the following requirements:

On Windows 2000/2003 systems: You must have Administrator privileges to install Identity Synchronization for Windows.

You must install the program as root, but after installation you can configure the software to run Solaris and Linux services as a non-root user. (See Appendix B, Identity Synchronization for Windows LinkUsers XML Document Sample.)

You must install Core into a directory that has an existing server root managed by an Administration Server (version 5 2004Q2 or higher) or the installation program will fail. (You can install Administration Server using the Directory Server 5 2004Q2 installation program.) With Identity Synchronization for Windows 6.0, the installer checks for an existing Sun Java System Administration Server. If it is not installed, the installer will install Sun Java System Administration Server as a part of Core installation.

When the Welcome screen is displayed, read the information provided and then click Next to proceed to the Software License Agreement panel. Read the license agreement, then select:

Yes (Accept License) to accept the license terms and go to the next panel.

No to stop the setup process and exit the installation program.

When the Configuration Location panel is displayed, specify the configuration directory location. Provide the following information:

Configuration Directory Host: Enter the fully qualified domain name (FQDN) of a Sun Java System Directory Server instance (affiliated with the local Administration Server) where Identity Synchronization for Windows configuration information will be stored. You can specify an instance on the local machine or an instance that is running on a different machine.
Identity Synchronization for Windows allows Administration Server to access the remotely installed instance of Directory Server. To avoid warnings about invalid credentials or host names, be sure to specify a host name that is DNS-resolvable to the machine on which the installation program is running.

To enable secure communication, enable the Secure Port option and specify an SSL port. (The default SSL port is 636.) Once the program determines that the configuration directory is SSL-enabled, all Identity Synchronization for Windows components will use SSL to communicate with the configuration directory.

Identity Synchronization for Windows encrypts sensitive configuration information before sending it to the configuration Directory Server. However, if you want additional transport encryption between the Console and the configuration directory, be sure to enable SSL for both Administration Server and the configuration Directory Server. Then, configure a secure connection to the Administration Server to which you will be authenticating the Directory Server Console. (For information, see the Sun Java System Administration Server 5 2004Q2 Administration Guide.)

The Sun Java System Administration Server that is installed (and configured) as a part of the Core components is installed in non-SSL mode.

If the program could not detect a root suffix, and you have to enter the information manually (or if you change the default value), you must click Refresh to regenerate a list of root suffixes. You must specify a root suffix that exists on the configuration Directory Server.

If you specify admin as the user ID, you will not be required to specify the User ID as a DN. If you use any other user ID, then you must specify the ID as a full DN, for example, cn=Directory Manager.

If you are not using SSL to communicate with the configuration directory (see Installing Core), these credentials will be sent without encryption.
When you are finished, click Next to open the Configuration Password panel. Be sure to remember this password, as it will be required whenever you want to:

Create or edit a configuration

Run any of the command line utilities

For information about changing the configuration password, see Using changepw.

The Select Java Home panel is displayed (see Installing Core). The program automatically inserts the location of the Java Virtual Machine directory to be used by the installed components. If the location is satisfactory, click Next to proceed to the Select Installation Directories panel (Installing Core). If the location is not correct, click Browse to search for and select a directory where Java is installed, for example:

On Windows: C:\Program Files\j2sdk1.5

Installation Directory (available only when you are installing Core on Solaris or Linux): Specify the path and directory name of the installation directory. Core binaries, libraries, and executables will be installed in this directory.

Instance Directory (available only when you are installing Core on Solaris or Linux): Specify the path and directory name of the instance directory. Configuration information that changes (such as log files) will be stored in this directory.

There is only one server root directory available on Windows operating systems, and all products will be installed in that location.

If an Administration Server corresponding to the Configuration Directory Host and Port number provided in step 3 is not found, the installer will install the Administration Server as part of the Core installation. The default Administration Server port is the configuration directory port plus one.

You should have installed Message Queue 3.6 Enterprise Edition before starting the Identity Synchronization for Windows installation.

On Solaris systems: Do not install Message Queue and Identity Synchronization for Windows in the same directory.
On Linux systems: Do not install Message Queue and Identity Synchronization for Windows in the same directory.

On Windows systems: You must close any open Service Control Panel windows before continuing, or the Core installation will fail.

Enter the following information in the text fields provided or click Browse to search for and select available directories:

Fully Qualified Local Host Name: Specify the fully qualified domain name (FQDN) of the local host machine. (There can only be one Message Queue broker instance running per host.)

Click Next and the Ready to Install panel is displayed. This panel provides information about the installation, such as the directory where Core will be installed and how much space is required to install Core. If the displayed information is satisfactory, click Install Now to install the Core component (where the installation program installs the binaries, files, and packages). If the information is not correct, click Back to make changes.

An “Installing” message is displayed briefly, and then the Component Configuration panel is displayed while the installation program adds configuration data to the specified configuration Directory Server. This operation includes:

Creating a Message Queue broker instance

Uploading the schema to the configuration directory

Uploading deployment-specific configuration information to the configuration directory

This operation will take several minutes and may pause periodically, so do not be concerned unless the process exceeds ten minutes. (Watch the progress bar to monitor the installation program’s status.)

When the component configuration operation is complete, the Installation Summary panel is displayed to confirm that Identity Synchronization for Windows installed successfully. You can click the Details button to see a list of the files that have been installed, and where they are located.
Click Next and the program will determine the remaining steps you must perform to successfully install and configure Identity Synchronization for Windows. A “Loading...” message, and then a Remaining Installation Steps panel, each display briefly, and then the following panel (Installation Overview) is displayed. This panel contains a “To Do” list of the remaining installation and configuration steps. (You can also access this panel from the Console’s Status tab.) The “To Do” panel will re-display throughout the installation and configuration process. The program greys out all completed steps in the list.

Up to this point, the To Do list will contain a generic list of steps. After you save a configuration, the program provides a list of steps that are customized for your deployment (for example, which connectors you must install).

Next, you must configure the Core component, which you can do from the Sun Java System Console (the Start the Sun Java System Console option is enabled by default). If you are migrating from Identity Synchronization for Windows version 1.0 or SP1 to Sun Java System Identity Synchronization for Windows 6.0, you can import an exported version 1.0 or SP1 configuration XML document using the idsync importcnf command line utility.

If you elected to use the Console, the Sun Java System Console Login dialog box is displayed (see Installing Core).

User ID: Enter the Administrator’s user ID you specified when you installed the Administration Server on your machine.

Password: Enter the Administrator’s password specified during Administration Server installation.

hostname.your_domain.domain is the computer host name you selected when you installed Administration Server. port_number is the port you specified for Administration Server.

After providing your credentials, click OK to close the dialog box. You will then be prompted for the configuration password. Enter the password and click OK.
When the Sun Java System Server Console window is displayed, you can start configuring Core. Continue to Chapter 4, Configuring Core Resources for instructions.
I’ve worked on the web since its earliest days, from the early to mid-1990s, before the first browsers were built. In college, I studied computer science at postgrad level, where I continued my passion for coding languages. My first job was working as a network engineer and part-time AS/400 programmer for a big multinational. That started to get a bit dull after a while, and I was becoming more and more interested in the evolution of the web at that time. I then got a contract to develop a web presence for a large petroleum company and afterwards started my own company, which focused on issuing invoices. That company was called Hugbubble. It has been a great vehicle for me to do interesting software projects ever since. We worked in a ‘guerilla’ fashion, jumping onto diverse projects that interested us. Sometimes Hugbubble has been my sole income, other times it has been just for fun.

We worked on a project within Second Life (SL) for a psychology experiment with students in the Psychology department of Stanford University. SL is a great tool to look at learning due to the affordances it offers and the low-cost / low-risk nature of the environment. Second Life faded in popularity due to being too difficult for newcomers to learn the interface, but as the interfaces mature (think goggles, haptics, etc.) then VR (Virtual Reality) will hopefully become more natural and easier to get into.

I am the curriculum lead on the “Digital Technology Coders” stream. I work on the course design for coding modules to map them to learning outcomes. We want to ensure that the content we teach our students includes languages and technologies that businesses are using, to make sure that they are employable graduates. We are very focused on the relevance of what we teach to what businesses need now and in the future.

The people. Working with humans is great. A good balance to working with machines all day.
It is a good contrast, and great to see people progress from novices to experts during their time at the academy.

People who are motivated. Anyone can be a developer; it just takes perseverance and patience. It’s not normal to think in this way (as machines do), so it’s really important to be interested in the subject matter to help you over the initial frustration.

Yes, it does encourage crossovers. Coding is very like punk rock. The DIY ethic is important. Don’t make assumptions, try something for yourself, and then leverage the open source community and build on what has already been shared, eventually feeding back into the community to support it yourself.

Working at Hugbubble also helps me to stay relevant and keep my languages up to date. I’m also doing an MSc in Digital Education, which helps too, with great benefits in terms of assessment and course design. I also do capoeira (a Brazilian martial art), which is good for having a complete break, to get away and do something physical. It’s important to stay healthy and get away from the machine sometimes!

Ambient intelligence will also be interesting to watch as it improves. There will hopefully be a benefit to the end user, rather than all the user's data being harvested for businesses’ benefit. I’m looking forward to a time when we have greater control over our own data and can opt in more easily to services that are on demand and always available. Wearables could also become more helpful to people, as well as the great possibilities of VR (virtual reality). We’re also just at the start of the Big Data revolution; you haven’t seen anything there yet! It has the potential to transform so many industries.

Privacy is really important. If people are not using adblockers then they should really ask themselves why not.
We basically haemorrhage our personal information onto the web every time that we use it, and so far the end user is not benefitting from this; instead, we are simply fodder for the advertising companies. It will be good when people can benefit from all this data that’s being harvested.

Not yet. We may reintroduce game development and game design in the near future perhaps, but we are focussed on what’s hot in the industry and the market and are also constrained by the length of our programmes. We need to make our participants employable. The growth of VR (virtual reality) means that it could become relevant to the jobs market reasonably soon, but that’s still unlikely in the short term. For the moment we design our Honours and Ordinary Degree curriculum around the coding languages that are in high demand by employers and that will enable participants to build cutting edge digital products that will awe and inspire.
Remove method of STRtree

In GEOS there are two methods, STRtree::insert and STRtree::remove, that are not currently implemented in pygeos. I would be really interested in having the remove method available. In strtree.c / strtree.h, I see that _geos accepts NULL variables, so it should be possible to call STRtree::remove and make NULL the corresponding removed objects in _geos (to know what the indices of those objects are, it may be needed to have an intersects query - as it is done internally in GEOS). I can try to make a PR if you want, but I'm not sure of its feasibility as I don't know all the code details.

I am not that familiar with the STRtree internals, but the doc comment says:

 * A query-only R-tree created using the Sort-Tile-Recursive (STR) algorithm.
 * For two-dimensional spatial data.
 *
 * The STR packed R-tree is simple to implement and maximizes space
 * utilization; that is, as many leaves as possible are filled to capacity.
 * Overlap between nodes is far less than in a basic R-tree. However, once the
 * tree has been built (explicitly or on the first call to #query), items may
 * not be added or removed.

So if that's correct, a remove method would only be useful directly after creating the tree. But in such a case you could also create the tree with a subset of the geometries to start with?

I am not familiar with it either, and had actually missed this comment.
However, the doc doesn't seem to agree with the code: while you apparently can't insert after the tree has been built:

void AbstractSTRtree::insert(const void* bounds, void* item)
{
    // Cannot insert items into an STR packed R-tree after it has been built
    assert(!built);
    itemBoundables->push_back(new ItemBoundable(bounds, item));
}

that doesn't seem to be the case for remove, the tree actually being built first:

bool AbstractSTRtree::remove(const void* searchBounds, void* item)
{
    if(!built) {
        build();
    }
    if(itemBoundables->empty()) {
        assert(root->getBounds() == nullptr);
    }
    if(getIntersectsOp()->intersects(root->getBounds(), searchBounds)) {
        return remove(searchBounds, *root, item);
    }
    return false;
}

So do you think a remove implementation could be possible in pygeos?

I think we'd need to do more digging into how this would be handled in GEOS, given the disparity between the docstring and the implementation. As for pygeos, we make assumptions that the tree is immutable (e.g., using pointer addresses in the original array to determine indexes in the tree items during query, etc.).

Can you tell us more about your use case of why you need the remove functionality after the tree is built, and why that is necessary rather than building a new immutable tree after a change to the data to be indexed?

Basically my use case can be described as follows:

Take the closest point in the tree from another point A. Do some stuff with this closest point (such as computing the coordinates of point B). Remove that closest point from the tree. Take the closest point in the tree from point B. Do some stuff with this closest point (such as computing the coordinates of point C). ... Repeat until all the points have been removed from the tree.

Performance wise, it wouldn't be acceptable to build a new immutable tree every time, whereas a remove should be fast.

I am closing this as this cannot be supported by PyGEOS due to lack of support in the GEOS C API.
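For what it's worth, the remove-and-requery loop described in the use case above can be sketched independently of any tree implementation. The points below are made up, and the linear-scan nearest() is just a stand-in for what a tree's nearest-neighbor query plus remove() would do:

```python
import math

# Hypothetical point set; a plain list and set stand in for the tree,
# since the point of the sketch is the remove-and-requery loop.
points = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (1.0, 1.0)]
remaining = set(range(len(points)))

def nearest(query, remaining):
    # Linear scan standing in for a spatial index's nearest-neighbor query
    return min(remaining, key=lambda i: math.dist(query, points[i]))

chain = []
query = (0.1, 0.1)
while remaining:
    i = nearest(query, remaining)
    chain.append(i)
    remaining.discard(i)   # the "remove" step the issue asks for
    query = points[i]      # the next query starts from the point just found
```

With an immutable tree, the same effect needs either periodic rebuilds or a k-nearest query that skips already-visited items.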
SQLite isolation clarification needed

(1) By andreyst on 2021-02-14 08:17:13 [link] [source]

Hi, I'm trying to better understand SQLite isolation guarantees. I'm reading https://sqlite.org/isolation.html and my understanding is that transactions from the same connection should see each other's updates, even uncommitted ones. I've made a small Go program trying to exhibit this behaviour — I'm expecting that TX 2 will see the uncommitted update from TX 1, but it does not: https://gist.github.com/andreyst/30322c7df8af6f4969445ab85f7dcc74 Can you help me understand what I am missing?

(2.1) By Keith Medcalf (kmedcalf) on 2021-02-14 10:11:07 edited from 2.0 in reply to 1 [link] [source]

This is a GO-SQLite3 wrapper behaviour. That is, the issue you are seeing is entirely a matter of the Go wrapper you are using to interface Go with SQLite3. Attempting to issue nested transactions in SQLite3 would throw an error. That is, if the connection were really in a transaction, then attempting to start another transaction while one is already in progress would cause the library to return an error. From this and the behaviour that you are observing, it would appear that db.Begin is returning a new connection which lives for the duration of the transaction. Determining why the go-sqlite3 wrapper is behaving the way it is can likely be best addressed by having recourse to the documentation for Go and/or the wrapper being used.

(3) By Keith Medcalf (kmedcalf) on 2021-02-14 10:20:01 in reply to 1 [link] [source]

Transaction state is an attribute of a connection. A single connection can either be (a) not in a transaction or (b) in a transaction. If something is leading you to believe that more than N transactions are active at the same time (concurrently), then this means that there are (at least) N connections active at the same time (concurrently).
(4) By Holger J (holgerj) on 2021-02-14 12:34:40 in reply to 1 [source] Just use the sqlite3 executable for your experiments, leaving out any language specific drivers and wrappers. Several transactions over the same connection cannot happen at the same time, only one after the other. (5) By anonymous on 2021-02-14 15:01:20 in reply to 1 [link] [source] This is because sql.DB represents a connection pool, not a single database connection. https://golang.org/pkg/database/sql/#DB Basically, when you call db.Begin the second time, the Go standard library creates a second connection to the database, because it recognizes that the first is still in use. (6) By andreyst on 2021-02-14 16:05:45 in reply to 1 [link] [source] Thank you all for prompt replies. I've added logging to Go's stdlib and confirmed that this is really a database/sql behaviour to create more connections, as suggested by anonymous (5). Code in question doing this is here: https://golang.org/src/database/sql/sql.go?s=47934:48006#L1303. Thank you Keith for explaining, now I understand the tx/connection mechanics a bit better. Thanks Holger for suggesting working with bare sqlite, I'll keep that in mind with further experiments. (7) By andreyst on 2021-02-14 16:07:12 in reply to 1 [link] [source] - suggested by Keith (1) and anonymous (5) of course, sorry.
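The connection-scoped transaction state discussed in this thread is easy to demonstrate with Python's built-in sqlite3 module (used here instead of Go just to keep the sketch self-contained; the file name and table are made up):

```python
import os
import sqlite3
import tempfile

# A throwaway on-disk database (each plain ":memory:" connection would
# get its own private database, so a file is used instead)
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO t (v) VALUES ('old')")
conn.commit()

# An uncommitted UPDATE on this connection...
conn.execute("UPDATE t SET v = 'new'")
# ...is visible to any cursor on the SAME connection:
same = conn.execute("SELECT v FROM t").fetchone()[0]   # 'new'

# A second, separate connection still sees only the committed state:
conn2 = sqlite3.connect(path)
other = conn2.execute("SELECT v FROM t").fetchone()[0]  # 'old'
```

This is exactly why Go's sql.DB, which silently hands each Begin its own pooled connection, behaves like the two-connection case rather than the one-connection case.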
If you are learning Microsoft Word to improve your future job prospects: the current job market is very competitive, and the right technical skills can put you ahead of your peers. Also, Word, Windows Vista, and Windows XP have themes that allow you to change the color and style of your windows. I will recommend this training series to other teachers who are looking for training as they explore new avenues in the future. This table describes most of them. You type your text in the text area. Microsoft Certifications, Windows Server: develop the skills to power the next generation of cloud-optimized networks, applications, and web services. In Word, how a document displays depends on the size of your window, the size of your font, and the resolution to which your screen is set. Select Pause to pause narration. Python edition: learn the foundational skills and gain hands-on experience with the practice and research aspects of data science work using Python, from setting up a data study to making valid claims and inferences from data. You may also find a dialog box launcher in the bottom-right corner of a group. Throughout the lessons, you will be asked to "click" items and to select tabs. Your changes are saved automatically. Both of these features can be found in the sidebar at the top of the page. This view will show you a full page on your screen. We have you covered. You won't see a vertical scroll bar if the whole of your document fits on your screen. Before handing over any of your hard-earned dollars, it is essential that you know your own preferred learning style and speed, and exactly what your goal for learning Microsoft Word is. Where multiple course options are listed for a requirement, only one must be completed to meet the requirements for graduation.
You can change what displays on the Status bar by right-clicking on the Status bar and selecting the options you want from the Customize Status Bar menu. Full screen lets me write and edit without using menus. Most popular Word features are supported, too. And then the whole thing has to be updated every time you edit your document. The Ruler appears below the Ribbon. When an option is selected, it appears in a contrasting color. Are you a quick learner who easily picks up new programs, or do you need a little bit of extra TLC when it comes to mastering software? The course was impressive and well paced.

Get Microsoft Word Lesson Plans Now: If you want to cut to the good stuff, we have plenty of Microsoft Word lesson plans for you! We cover Microsoft Word and more. Online shopping from a great selection at the Books Store: Microsoft Word Tips, Tricks and Shortcuts (Black & White Version): Work Smarter, Save Time, and Increase Productivity (Easy Learning Microsoft Office How-To Books) (Volume 1).

Word Courses & Training: Learn Microsoft Word fundamentals; how to write, edit, and design documents, format text, use spell check, perform mail merges, track changes, and more.

A Beginner's Guide to Microsoft Office: Microsoft Word is a word processing program that was first made public by Microsoft in the early 1980s. It allows users to type and manipulate text in a graphic environment that resembles a page of paper.

The fastest way to learn about the Microsoft Office suite is to get to know the icons. Many of the icons are the same from program to program. Icons are the little pictures that represent a function.

E-Learning Office is a provider of interactive, award-winning, multilingual e-learning training courses and learning technologies for multiple versions of Microsoft Office (Word, PowerPoint, Excel, Outlook, Skype for Business, SharePoint) and Windows.
################################################################################
# This module calculates the PMI distribution of item stem and the options.
# Note that this module does not consider the count of n-gram per stem/option.
# This means that the PMI of each n-gram is equally weighted for the calculation
# of the distribution no matter how many times the n-gram appeared in the stem/
# option.
# Parameters df_ac_ngram_q: input pandas.DataFrame of n-grams, it should have,
#                           at least, n-gram count columns with the 'AC_Doc_ID's
#                           as the index of the DataFrame
#            ngram_clm_start: integer column number (starting from zero)
#                           specifying the starting point of n-gram count
#                           columns in the question DataFrame; from that point
#                           to the end, all the columns should be the n-gram
#                           count columns
#            df_ac_pmi: pandas.DataFrame of the PMI value of each term
#            gram = 'bigram': specify bigram or trigram
#            decimal_places = None: specify the decimal places to round at
# Returns Result: pandas.DataFrame reporting each 'AC_Doc_ID's PMI stats
################################################################################
def ac_bi_trigram_pmi_distribution(df_ac_ngram_q, ngram_clm_start,
                                   df_ac_pmi, gram='bigram',
                                   decimal_places=None):
    import pandas as pd
    import numpy as np

    df_ac_buf = df_ac_ngram_q[:]
    df_ac_buf_ngram = df_ac_buf.iloc[:, ngram_clm_start:]
    # Row sums are only used to drop rows whose n-gram counts are all NaN
    df_ac_buf_ngram_sum = pd.DataFrame({
        'Ngram_sum': df_ac_buf_ngram.sum(axis=1)}).dropna()
    # join_axes was removed in pandas 1.0; reindex achieves the same effect
    df_ac_buf_ngram = pd.concat([df_ac_buf_ngram_sum, df_ac_buf_ngram],
                                axis=1).reindex(df_ac_buf_ngram_sum.index)
    df_ac_buf_ngram = df_ac_buf_ngram.drop('Ngram_sum', axis=1)

    if gram == 'bigram':
        pmi_ngram = list(df_ac_pmi['Bigram'])
    else:
        pmi_ngram = list(df_ac_pmi['Trigram'])
    pmi_dic = {}
    for i, x in enumerate(pmi_ngram):
        pmi_dic[x] = df_ac_pmi.at[df_ac_pmi.index[i], 'PMI']

    clm_lgth = df_ac_buf_ngram.shape[1]
    ac_ngram_q_buf_columns = df_ac_buf_ngram.columns
    # Diagonal matrix holding each n-gram column's PMI (0.0 when unknown)
    df_ac_pmi_diagonal = pd.DataFrame(np.zeros((clm_lgth, clm_lgth),
                                               dtype=np.float64),
                                      ac_ngram_q_buf_columns,
                                      ac_ngram_q_buf_columns)
    for i, x in enumerate(ac_ngram_q_buf_columns):
        if x in pmi_dic:
            df_ac_pmi_diagonal.iloc[i, i] = float(pmi_dic[x])

    # The count of n-gram per stem/option is not considered: dividing each
    # count column by itself yields a 1/NaN indicator matrix
    df_ac_buf_ngram_no_weight = df_ac_buf_ngram / df_ac_buf_ngram
    df_ac_buf_ngram_no_weight_fillzero = df_ac_buf_ngram_no_weight.fillna(0.0)
    df_ac_ngram_q_pmi_mtx = df_ac_buf_ngram_no_weight_fillzero.dot(df_ac_pmi_diagonal)
    # Restore the N/A cells so absent n-grams do not affect the statistics
    df_ac_ngram_q_pmi_mtx = df_ac_ngram_q_pmi_mtx * df_ac_buf_ngram_no_weight

    # The bigram and trigram branches only differed in the column prefix
    prefix = 'PMI_Bigram' if gram == 'bigram' else 'PMI_Trigram'
    df_ac_ngram_q_pmi_mean = pd.DataFrame({
        prefix + '_Mean': df_ac_ngram_q_pmi_mtx.mean(axis=1),
        prefix + '_SD': df_ac_ngram_q_pmi_mtx.std(axis=1),
        prefix + '_Max': df_ac_ngram_q_pmi_mtx.max(axis=1),
        prefix + '_Min': df_ac_ngram_q_pmi_mtx.min(axis=1)})
    if decimal_places is not None:
        df_ac_ngram_q_pmi_mean = df_ac_ngram_q_pmi_mean.round(decimal_places)

    df_ac_ngram_q_head = df_ac_ngram_q.iloc[:, :ngram_clm_start]
    df_ac_buf_ngram_pim = pd.concat([df_ac_ngram_q_head,
                                     df_ac_ngram_q_pmi_mean],
                                    axis=1).reindex(df_ac_ngram_q_head.index)
    return df_ac_buf_ngram_pim
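The heart of the routine (an unweighted 0/1 indicator matrix dotted with a diagonal matrix of PMI values, with absent n-grams masked back to NaN) can be seen in isolation on a toy example; the counts and PMI values below are made up:

```python
import numpy as np
import pandas as pd

# Hypothetical bigram counts per document and a PMI value per bigram
counts = pd.DataFrame({'a_b': [2, 0], 'b_c': [1, 1]}, index=['Q1', 'Q2'])
pmi = {'a_b': 4.0, 'b_c': 2.0}

# 1/NaN indicator: each bigram counts once, however often it occurred
indicator = counts / counts            # 1.0 where count > 0, NaN at 0
diag = pd.DataFrame(np.diag([pmi[c] for c in counts.columns]),
                    index=counts.columns, columns=counts.columns)

# Dot with the diagonal, then restore NaN so absent bigrams are ignored
per_ngram = indicator.fillna(0.0).dot(diag) * indicator
mean_pmi = per_ngram.mean(axis=1)      # Q1 -> (4+2)/2 = 3.0, Q2 -> 2.0
```

Because NaN cells are skipped by mean(), std(), max(), and min(), only the bigrams actually present in a document contribute to its statistics.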
Loops in Unix script

Currently, I have written my script in the following manner:

c3d COVMap.nii -thresh 10 Inf 1 0 -o thresh_cov_beyens_plus10.nii
c3d COVMap.nii -thresh 9.7436 Inf 1 0 -o thresh_cov_beyens_plus97436.nii
c3d COVMap.nii -thresh 9.4872 Inf 1 0 -o thresh_cov_beyens_plus94872.nii
c3d COVMap.nii -thresh 9.2308 Inf 1 0 -o thresh_cov_beyens_plus92308.nii
c3d COVMap.nii -thresh 8.9744 Inf 1 0 -o thresh_cov_beyens_plus89744.nii
c3d COVMap.nii -thresh 8.7179 Inf 1 0 -o thresh_cov_beyens_plus87179.nii
c3d COVMap.nii -thresh 8.4615 Inf 1 0 -o thresh_cov_beyens_plus84615.nii
c3d COVMap.nii -thresh 8.2051 Inf 1 0 -o thresh_cov_beyens_plus82051.nii
c3d COVMap.nii -thresh 7.9487 Inf 1 0 -o thresh_cov_beyens_plus79487.nii
c3d COVMap.nii -thresh 7.6923 Inf 1 0 -o thresh_cov_beyens_plus76923.nii
c3d COVMap.nii -thresh 7.4359 Inf 1 0 -o thresh_cov_beyens_plus74359.nii
c3d COVMap.nii -thresh 7.1795 Inf 1 0 -o thresh_cov_beyens_plus71795.nii
c3d COVMap.nii -thresh 6.9231 Inf 1 0 -o thresh_cov_beyens_plus69231.nii

But I want the values in the form of some array like x=[10,9.7436,9.4872...,6.9231] and I want the script to be called as follows:

x=[10,9.7436,9.4872...,6.9231]
c3d COVMap.nii -thresh x[0] Inf 1 0 -o thresh_cov_beyens_plus10.nii
c3d COVMap.nii -thresh x[1] Inf 1 0 -o thresh_cov_beyens_plus97436.nii
c3d COVMap.nii -thresh x[2] Inf 1 0 -o thresh_cov_beyens_plus94872.nii
c3d COVMap.nii -thresh x[3] Inf 1 0 -o thresh_cov_beyens_plus92308.nii
...
c3d COVMap.nii -thresh x[14] Inf 1 0 -o thresh_cov_beyens_plus87179.nii

Could someone please suggest a method to loop this?

If you use bash, you can use arrays:

arr=(10 9.7436 9.4872 ... 6.9231)
for x in ${arr[@]}; do
    c3d COVMap.nii -thresh $x Inf 1 0 -o thresh_cov_beyens_plus${x/./}.nii
done

Just make sure the elements in the array are separated by a space instead of a comma, and use parentheses instead of square brackets. The ${arr[@]} will expand as the elements of the array separated by a space.
The ${x/./} will remove the decimal point from the element to make the filename suffix. You could actually do it without an array at all by just putting the values, separated by spaces, in place of the ${arr[@]}:

for x in 10 9.7436 9.4872 ... 6.9231; do
    c3d COVMap.nii -thresh $x Inf 1 0 -o thresh_cov_beyens_plus${x/./}.nii
done

or perhaps a little cleaner by using a normal variable:

values="10 9.7436 9.4872 ... 6.9231"
for x in $values; do
    c3d COVMap.nii -thresh $x Inf 1 0 -o thresh_cov_beyens_plus${x/./}.nii
done

This works because expanding $values without surrounding quotes (i.e. not as "$values") will cause bash to split the contents into separate words. So it's effectively the same as the previous code example.

Thank you, I am going to try this now :) Thank you very much!
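As an aside, the thresholds in the question look evenly spaced: each step is about 0.2564, which equals 10/39, suggesting the values are 10*i/39 for i from 39 down to 27. If that guess is right (it is only a guess from the visible values), the list can be generated instead of hand-typed; a Python sketch:

```python
# Hypothesis: the 13 thresholds are 10*i/39 for i = 39 down to 27,
# rounded to 4 decimal places.  Generate them rather than typing each one.
thresholds = [round(10 * i / 39, 4) for i in range(39, 26, -1)]
print(thresholds[0], thresholds[-1], len(thresholds))   # -> 10.0 6.9231 13
```

All 13 generated values match the thresholds in the original script, which supports (but does not prove) the even-spacing hypothesis.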
if does not evaluate

moughanj at tcd.ie
Fri Jun 11 05:02:11 CEST 2004

hungjunglu at yahoo.com (Hung Jung Lu) wrote in message news:<8ef9bea6.0406100710.1a17b11b at posting.google.com>...
> moughanj at tcd.ie (James Moughan) wrote:
> > I still have to restrain myself from writing PyHaskell now and then.
> Maybe it's time to try with some simple prototype.
> > Secondly, Lisp's syntax doesn't parse well into the way people think,
> > or vica versa. Python's does; in other words, it's executable
> > pseudocode. Lisp, fundamentally, cannot change in this regard.
> But Python does have serious problems when one goes to the next level
> of software development. I guess that's why Io and Prothon were born.
> Python is at the stage where you need a fresh re-incarnation to go to
> the next level.

I'm curious as to what those problems are, and what the 'next level' is; there are definitely warts, and some, like the scoping rules, get in the way occasionally, but by and large there's nothing that can't be worked around. It lacks severely in compile-time metaprogramming, but has some very nice run-time features, which are arguably better in a language which isn't too focussed on performance.

> There are a few directions that need to be explorered, and I am not
> sure creating a real new language is the way to go. I've seen it in
> Io, where once you set things in stone, it becomes just another Python
> with problems that will stay forever. I guess right now it's not the
> moment of creating more languages and set things in stone. It's
> probably better to have some toy languages or language prototypes to
> explorer ideas. Collect enough ideas and experience, and probably
> leave the rest to the next generation of people.
> Frankly, I see a few camps: (a) Lisp and AOP folks, (b) Functional
> folks, (c) Prototype-based folks. Because these are very specialized
> fields, very few people seem to be native speakers of all three of
> them.
> The next killer language will have to incorporate lessons learnt
> from all three camps. It's a daunting task. It's best to have some toy
> languages... like scaled-down model airplanes, or even just the model
> airplane parts, before one actually build the real thing.

There are a lot of interesting ideas out there - personally I'd like to see Stepanov cease being so notoriously lazy and write the language C++ could have been. :) We'll probably find different paradigms being useful in different contexts, and to different people. After all, how boring would the world be if you only ever needed to learn one?

> Hung Jung

More information about the Python-list
This page briefly outlines the installation of Lasso Professional 8.6 for each supported platform. You can download Lasso Professional 8.6 for FREE and run Lasso in "developer mode", which is fully functional but limited to 5 IPs and 200 connections per minute, so it will not support full deployment. Come back and buy the license to unlock its full power when you are convinced that Lasso gives you everything you need—and more!

Recommended reading before you install

Lasso is a middleware product which relies on third-party web servers and databases for smooth operation. For late-breaking documentation updates, please consult the Errata directory.

View table of supported operating systems and versions
Read the full installation guide

The latest release of Lasso Professional Server 8.6 for Mac OS X 10.5 and 10.6 can be found here:

Download Lasso_Pro_184.108.40.206_Mac.tar.gz (79 MB)

Once downloaded, expand the archive and run the installer which will guide you through the installation process.

To install Lasso Professional 8.6 on Mac OS X 10.7 or 10.8 (client or server editions), manual installation of the Apache Connector is required:

sudo chmod 755 Lasso8ConnectorforApache2.2.so
sudo cp Lasso8ConnectorforApache2.2.so /usr/libexec/apache2/
sudo apachectl configtest

And if the output is "Syntax OK", run:

sudo apachectl restart

If the output is not "Syntax OK", please contact LassoSoft support.

The current version of Lasso Professional 8.6 (220.127.116.11) needs its config file moved to the new Apache base directory. After installing, run these commands in Terminal:

sudo mv /etc/apache2/users/lasso8.conf /Library/Server/Web/Config/apache2/other/
sudo apachectl restart

The latest release of Lasso Professional Server 8.6 for Microsoft Windows can be found here:

Download Lasso_Pro_18.104.22.168_Win.zip (46 MB)

Once downloaded, expand the archive and run the installer which will guide you through the installation process.
For detailed installation and setup instructions, see the Installing Lasso Professional 8.6 for Windows page. Instructions for installing Lasso 8.6 and FileMaker Server 11 on 64-bit Windows are available. Please note: you must uninstall/remove any previous installation of Lasso prior to installing updates. Instructions for upgrading ImageMagick on CentOS 5 are available. See this article if Lasso 8.6 fails to start on boot.

To install Lasso Professional Server from an rpm, download the appropriate file and run:

yum install java-openjdk libicu ImageMagick unixODBC
rpm -ivh name-of-file.rpm

32-bit CentOS 5: Lasso-Professional-22.214.171.124-5.i386.rpm (24Mb)
64-bit CentOS 5: Lasso-Professional-126.96.36.199-5.x86_64.rpm (24 Mb)

Once Lasso Professional 8.6 has been installed, download and install the appropriate Apache2 connector.

A documentation rpm is also available: Lasso-Documentation-8.6-5.i386.rpm (9.1Mb)

Please see this tech note if you have difficulties with Lasso Professional 8.6 not starting on boot.

If you had 8.6b3, please uninstall first. To get the names of any previous packages, execute the following:

rpm -qa | grep Lasso

then remove each package with:

rpm -e name-of-package

before performing the yum install as noted below. [Uninstall tips kindly provided by Steffan Cline]

To install Lasso Professional Server 8.6 via yum, the LassoSoft yum repository must be configured on the server. Create /etc/yum.repos.d/LassoSoft.repo if it doesn't already exist and enter the following:

[lassosoft]
name=LassoSoft
baseurl=http://centosyum.lassosoft.com/
enabled=1
gpgcheck=0

To install Lasso Professional Server on CentOS 5 32-bit:

yum install Lasso-Professional-Apache2

To install Lasso Professional Server on CentOS 5 64-bit:

yum install Lasso-Professional-Apache2.x86_64

To update Lasso Professional Server:

yum update Lasso-Professional
Printer driver hp laserjet pro 400 mfp - Download driver lenovo z500 for win7 64bit Understand how to choose the right HP LaserJet Pro / color MFP M/M series product printer driver for your printing needs. Download the latest software & drivers for your HP LaserJet Pro Printer Mdn. Free shipping. Buy direct from HP. See customer reviews and comparisons for the HP LaserJet Pro MFP Mdn. Upgrades and savings on select products. Hp color laserjet cp1215 driver xp HP LaserJet Pro MFP Mdn drivers and software Downloads for windows xp vista 7 8 10 32 bit and 64 bit operating system and Mac OS. HP LaserJet Pro MFP M Series Printer sharing disclaimer Supported printer drivers (Windows). Download the latest software & drivers for your HP LaserJet Pro MFP Mdw. HP LaserJet Pro Driver Download. Drivers are needed to enable the connection between the printer and computer. Here you will find the driver applies to the.HP LaserJet Pro MFP Mdn driver Downloads for Microsoft Windows bit - bit and Macintosh Operating System HP LaserJet Pro MFP Mdn.HP LaserJet Pro MFP Mdw driver Downloads for Microsoft Windows bit - bit and Mac OS HP LaserJet Pro MFP Mdw driver software.25 Nov syndmi.somee.com provide links download driver and software for LaserJet Pro M MFP Monochrome Series trusted direct from the. Sharp mx 2300 driver download HP LaserJet Pro MFP Ma Printer Full Software Driver for Windows and Macintosh Operating Systems. How to install driver for HP LaserJet Pro MFP. Here's a driver HP LaserJet Pro Printer that correspond to your printer model. HP LaserJet Pro M Printer Series Full Software and Drivers. 10 Jul This video will show you how to install driver printer hp laserjet pro M in windows 7. HP LaserJet Pro MFP Mdn driver Downloads for Microsoft Windows bit - bit and Macintosh Operating System HP LaserJet Pro MFP Mdn driver.22 Oct Learn how to use the scan to network folder option on HP LaserJet Pro MFPs using HP's software wizard in Windows. 
These instructions apply.HP LaserJet Pro color MFP Mdn Driver and Software Downloads for Microsoft Windows and Mac OS HP LaserJet Pro color MFP Mdn Driver and Software.HP OfficeJet Pro Wireless All-In-One Instant Ink Ready Printer: functionality; Built-in wireless LAN; Prints up to 22 ISO ppm in black, up to 18 ISO ppm in. Hp deskjet 3840 driver windows 7 download See customer reviews and comparisons for the HP LaserJet Pro Printer series; HP Mono Multifunction LaserJet Printer: MFP driver, HP Setup Assistant, HP. 15 Jun HP LaserJet Pro MFP Mfdw, HP PCL5e/PCL6 It looks like these printers allow print data to come through a host-based printer driver. HP LaserJet Pro MFP M Series. Supported printer drivers (Windows). Change the settings for all print jobs until the software program is closed.
Main / Libraries & Demo / Youtube html 5 Youtube html 5 Name: Youtube html 5 File size: 570mb Many YouTube videos will play using HTML5 in supported browsers. You can request that the HTML5 player be used if your browser doesn't use it by default. Flash-HTML5 Player for YouTube™ allows you to play YouTube Videos in Flash or HTML5 player. Important note: Flash-HTML5 for YouTube™ extension does. YouTube is already making the change, and allows users to opt-in to its HTML5 video trial. That allows users to view videos using the h video codec and WebM format (which used the VP8 codec). To participate in YouTube's opt-in HTML5 trial, you must be using a supported web. The easiest way to play videos in HTML, is to use YouTube. and time- consuming. An easier solution is to let YouTube play the videos in your web page . 14 Jan I have even formatted a labtop and installed from scratch. No addons no nothing and Firefox still wont play youtube HTML5 videos. In Chrome. 6 Sep Since Google Play Music supports HTML5 playback over supported browsers, I was curious if YouTube also had the option for HTML5. Then, install the YouTube Flash Player extension: This add-on forces YouTube to play videos using Flash Player instead of the HTML5 player. 27 Jan The site will now use HTML5 video as standard in Chrome, Internet Explorer 11, Safari 8, and in beta versions of Firefox. YouTube engineer. 20 Nov YouTube™ Flash-HTML allows you to play YouTube Videos in Flash or HTML5 player. Important Features: 1. Allows you to play YouTube. 9 Mar Dear Lifehacker, I've read about how HTML5 will change the way I use the web, but it seems like the biggest example of HTML5 in action is on. 2 Dec Hi there. Some youtube videos such as this are not working flowerschangeeverything.come. com/watch?v=SAvXjWgKSQ8 It says no video formats available. GitHub is where people build software. More than 27 million people use GitHub to discover, fork, and contribute to over 80 million projects. 
Whether or not YouTube videos play in HTML5 format depends on the setting at flowerschangeeverything.com, per browser. Chrome prefers. 11 Sep Ok, I'll just say it like it is: eBay is like my weird uncle who is stuck in the 's! If you have not heard or if you are a big eBay seller and or user. 8 Jul The new flowerschangeeverything.com has a bunch of new features, including high-quality video playback in the browser using HTML5. Surf to YouTube's.
long for 12 digit number?

I'm trying to use long for a 12 digit number but it's saying "integer constant is too large for "long" type", and I tried it with C++ and Processing (similar to Java). What's happening and what should I use for it?

Can you provide a code sample so that we can see specific details?

@Blindy, Sorry, I thought it might be useful. Obviously it is unnecessary though.

will you be using this number for calculations?

It's strange that you're asking for a "12 digit number". Normally people are interested in a range and they specify if it is signed or not. If you are just dealing with a number like a credit card number or some phone number then it would be better to store it as a string.

I didn't think it was strange. But then, I've had to write code to interoperate with COBOL.

well the number is <PHONE_NUMBER>43 and I have to find out the largest prime factor.

I don't know in C++, but in C, there is a header file called <stdint.h> that will portably have the integer types with the number of bits you desire. int8_t int16_t int32_t int64_t and their unsigned counterparts (uint8_t and etc).

Update: the header is called <cstdint> in C++

can you please explain the above comment, it's quite causing a problem when i use it

In C and C++ (unlike in Java), the size of long is implementation-defined. Sometimes it's 64 bits, sometimes it's 32. In the latter case, you only have enough room for 9 decimal digits. To guarantee 64 bits, you can use either the long long type, or a fixed-width type like int64_t.

+1 for specific-width types, like those achievable through cstdint if your platform has it, or boost/cstdint.hpp if it doesn't. Just FYI - on some compilers (e.g. older GCC), cstdint is missing but the platform may have C99's stdint.h...
If you are specifying a literal constant, you must use the appropriate type specifier:

int i = 5;
unsigned i = 6U;
long int i = 12L;
unsigned long int i = 13UL;
long long int i = 143LL;
unsigned long long int i = 144ULL;
long double q = 0.33L;
wchar_t a = L'a';

I really think that in his case the problem was a long int being 32 bits

@hexa: You're probably right. OP didn't say which platform/compiler/settings; I'm not sure which combinations of types and literal constants would cause which sort of warnings.

Try using a long long in gcc or __int64 in msvc.
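Since the underlying task is the largest prime factor of a 12-digit number (the exact value is redacted above), it is worth noting that Python's integers are arbitrary precision, so there is no literal-suffix issue at all. A trial-division sketch, which is adequate for 12-digit inputs because it only iterates up to the square root of n:

```python
# Largest prime factor by trial division.  Each found divisor is stripped
# out completely, so d only ever takes prime values when it divides n.
def largest_prime_factor(n):
    factor = 1
    d = 2
    while d * d <= n:
        while n % d == 0:
            factor = d
            n //= d
        d += 1
    if n > 1:            # whatever is left over is itself prime
        factor = n
    return factor

print(largest_prime_factor(13195))   # -> 29  (13195 = 5 * 7 * 13 * 29)
```

For a 12-digit n the loop runs at most about a million iterations, which finishes in well under a second.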
Taking Screenshots for Failed Test Cases Using Selenium

In the below steps, let us see how to take screenshots for failed test cases using Selenium WebDriver right after analyzing the output.

Step 1: Cast the WebDriver object to TakesScreenshot

TakesScreenshot scrShot = ((TakesScreenshot) driver);

Step 2: Call getScreenshotAs on it to create an image file

File srcFile = scrShot.getScreenshotAs(OutputType.FILE);

Step 3: Copy the file to the required location

FileUtils.copyFile(srcFile, new File("D:\\screenshot.jpg"));

Now, let us get going with installing Jenkins and configuring it to run with Selenium.

Integrating GitHub with Selenium

Open Eclipse, navigate to the Quick Access TextBox and type ‘Git Repositories’. Post which you’ll be directed to the Git Repositories page, wherein you can find all the projects you pushed to GitHub. To add a new repository in Git, click on the create new Git Repository icon as shown in the below image. Now, enter the name of your project in the Repository Directory tab. Upon completion, tap Create.

In Eclipse, go to the Project, right-click on Properties and select Team. Then, click Share Project. Post which you’ll be directed to the Configure Git Repository page, wherein select the local repository and tap Finish.

Pushing the Code to GitHub

Again right-click on the Project and then on Team, to select Commit. Now enter the commit message and choose the files which you want to send to the GitHub repository. Upon completion, tap Commit and Push. Post which, you’ll see your project icons in the folder like below. Now, check whether the project in the Git repository (under Branch Manager) and the uploaded file are of the same branch. One other way to verify the successful action of committing and pushing the code is to look in the GitHub repository.
Installing Jenkins and Configuring with Selenium

Step 1: Installing Jenkins

Visit the Jenkins official website http://jenkins-ci.org/ and choose the package to download according to your OS setup. Start running the exe file after unzipping Jenkins. To install the Jenkins 1.607 setup, click the Next button on the installation page. To begin the installation process, tap Install at the end.

Step 2: Creating a CI Job

On the Jenkins dashboard, click the New Item option to create a CI job and select the Maven Project from the list. Upon clicking the OK button, a new job under the specified name (WebdriverTest) will be created.

Step 3: Managing Jenkins

Go to Manage Jenkins on the Jenkins dashboard and choose Configure System. Now configure the JDK installations as shown in the below image.

Step 4: Executing Project from GitHub

Open Jenkins by launching a browser and tap the New Item option. Select Maven Project from the list, specify the item name (magemintGitHub) and tap OK. On Git, enter the specific repository URL and tap Add to include it in the repository. Enter the pom.xml location in the Root POM and Clean Test in the Goals and Options text boxes. Under the Post Steps row, select how to run the test and upon completion, tap Save. Post clicking the Save button, you’ll be directed to a below-shown image wherein tap Build Now to build your project. Next tap Build Now for further configuration.

Step 5: Verifying the Output

Then from the newly directed page, select Console Output to verify the output status. Thus, from the Results page, we can verify the build is successfully executed. The final output of the TestNG results is shown in the below image.

Extending the functionality of Jenkins with Blue Ocean:

Jenkins is a popular automation server among the developing and testing community. The reason behind this popularity is that Jenkins is a stable and reliable tool which acts as a mature continuous integration server.
One feather in Jenkins's cap is its newly developed Blue Ocean project. The perk of this project is that it extends the functionality of Jenkins pipelines to a remarkable extent. Let us now explore this newcomer.

It is important for you to install all the required plug-ins for the installation of Blue Ocean. To do so, go to Manage Jenkins >> Manage Plugins >> Available >> search for Blue Ocean. You can confirm the successful installation of Blue Ocean by the presence of the ‘Open Blue Ocean’ icon in the left panel.

In the inside view of Blue Ocean, all your newly created automation test pipelines can be overviewed, and statuses such as results of completed phases, operations in progress, and other related logs will be listed out. If needed, you can export your required logs too. The GUI of this newly developed project comes with an intuitive interface and is extensively user-friendly. For example, one of the phases called UI Tests has been added with an additional ability to run for different browsers like Chrome, Edge, Firefox and Safari.

Thus, you have successfully installed the additional project, Blue Ocean, with its clarity-rich interface, alongside Jenkins pipelines.
Base template for building NativeScript Starter JavaScript fails before any changes.

Describe the bug

The base template for building NativeScript Starter JavaScript doesn't seem to start properly. The error is below:

setup-nativescript-stackblitz && ns preview
Setting up NativeScript aliases: ns, nsc, tns and nativescript...
Installing @nativescript/preview-cli@latest
Installing @nativescript/stackblitz@latest
npm ERR! code ERR_INVALID_PROTOCOL
npm ERR! Protocol "https:" not supported. Expected "http:"
npm ERR! A complete log of this run can be found in: /home/.npm/_logs/2024-11-08T22_10_51_550Z-debug-0.log
npm ERR! code ERR_INVALID_PROTOCOL
npm ERR! Protocol "https:" not supported. Expected "http:"
npm ERR! A complete log of this run can be found in: /home/.npm/_logs/2024-11-08T22_10_51_574Z-debug-0.log
ERROR callSite.getFileName is not a function. (In 'callSite.getFileName()', 'callSite.getFileName' is undefined)
┃ depd@
┃ ../../node_modules/body-parser/index.js@
┃ __require@
┃ ../../node_modules/express/lib/express.js@
┃ __require@
┃ ../../node_modules/express/index.js@
┃ __require@
┃ @
┃ _0x39bbea@https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3-vqe93rgk.w-corp-staticblitz.com/blitz.f565b097.js:40:799734
┃ @https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3-vqe93rgk.w-corp-staticblitz.com/builtins.ddb8d84d.js:144:14247
┃ @https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3-vqe93rgk.w-corp-staticblitz.com/builtins.ddb8d84d.js:144:14863
┃ @https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3-vqe93rgk.w-corp-staticblitz.com/builtins.ddb8d84d.js:144:12820
┃ @https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3-vqe93rgk.w-corp-staticblitz.com/builtins.ddb8d84d.js:144:10277
┃ executeUserEntryPoint@https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3-vqe93rgk.w-corp-staticblitz.com/builtins.ddb8d84d.js:165:1646
┃ internal/main/run_main_module@https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3-vqe93rgk.w-corp-staticblitz.com/builtins.ddb8d84d.js:138:405
┃ _0x17065f@https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3-vqe93rgk.w-corp-staticblitz.com/blitz.f565b097.js:40:1435197
┃ @https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3-vqe93rgk.w-corp-staticblitz.com/blitz.f565b097.js:40:897016
┃ @https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3-vqe93rgk.w-corp-staticblitz.com/blitz.f565b097.js:40:1465224
┗━
~/project 1m 10s ❯

Link to the Bolt URL that caused the error

https://bolt.new/~/bolt-nativescript-js

Steps to reproduce

Start by clicking the button NativeScript Starter JavaScript on the main page. Wait for the initialization and you see the error while building.

Expected behavior

I would expect it to start so you can start working on a project.

Screen Recording / Screenshot

None

Platform

Browser name = Safari
Full version = 18.0.1
Major version = 18
navigator.appName = Netscape
navigator.userAgent = Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.0.1 Safari/605.1.15
performance.memory = undefined
Chat ID = 6bd201b08b1e

Additional context

No response

Hi @seagalli,

Appreciate the feedback! The issue likely stems from Safari. WebContainers, a fundamental technology powering Bolt, are fully supported in Chrome and most Chromium based browsers. You can read more about browser support here. You should have success opening the NativeScript starter in Chrome. Hope this helps!
When the VRT first received word of a new Microsoft Word 0-day I anxiously awaited details and the ever important hash of the in-the-wild exploit to be able to research it and provide coverage through Snort, ClamAV and the FireAmp suite of products. I was especially interested when word came that it was an RTF vulnerability, as I have spent a lot of time looking at high profile RTF vulnerabilities such as the ever popular CVE-2012-0158. When the in the wild sample finally arrived I thought someone was playing an early April Fool's joke on us: I knew this vulnerability already. More than that, I had written the coverage for this almost a year and half ago! The vulnerability appeared to be CVE-2012-2539, which was released December 11th 2012 as Microsoft Security Bulletin MS12-079. I checked blogs, looked for any mistakes in the hash I had gotten but, no, this WAS the dreaded vulnerability that prompted Yahoo Finance to tell everyone not to open any RTF files. So I did some searching in my old research and found that I had written Snort rules 24974 and 24975 way back in December of 2012 for this vulnerability. The release posts on Snort.org's blog confirmed this (blog|rule changes). The rule even specifies the vulnerable element of the RTF specification, listoverridecount, in the message. I enjoyed this hilarious state of affairs and we kept it to ourselves until someone else found it out, for dramatic effect if you will. Lo and behold, this week's blog posts by other security vendors popped up, pointing to listoverridecount as the exploitation vector. This confirmed what we already knew, that this vulnerability was centered around the listoverridecount value. The blog posts rightly deduced that the only legal values for this element are 0, 1 or 9 and other values could cause a crash. Our detection on both Snort and ClamAV already detected that. 
Interestingly though, there seem to be some RTF-generating programs out there that can produce values for listoverridecount that are not 0, 1 or 9, as we found out when someone submitted a sample to ClamAV that has the SHA256 hash: as a potential false positive for a signature we have to detect attacks leveraging CVE-2012-2539. ClamAV was the only vendor to detect it before we decided it was prudent to turn the signature into a PUA (Potentially Unwanted Application) signature since no one seemed to be exploiting it actively. The Snort rules have now been updated with new references and a non-PUA ClamAV signature that references CVE-2014-1761 has gone out (I can only hope that alternate RTF generators stop using invalid values in their listoverridecounts). All in all this 0-day has been a little bit disappointing since it was a rehash of a known vulnerability we already covered, but what I can console myself with is the fact that someone, somewhere is probably majorly annoyed because the exploit they built or bought is not working against Sourcefire/Cisco customers!
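The legal-values rule described above (listoverridecount may only be 0, 1 or 9) is simple enough to sketch as a standalone check. This is purely illustrative and is not the actual Snort or ClamAV signature logic:

```python
import re

def suspicious_listoverridecounts(rtf_text):
    r"""Return any \listoverridecountN values other than the legal 0, 1 and 9."""
    values = [int(m) for m in re.findall(r"\\listoverridecount(\d+)", rtf_text)]
    return [v for v in values if v not in (0, 1, 9)]

# Hypothetical minimal RTF fragment carrying an illegal value
sample = r"{\rtf1{\*\listoverridetable{\listoverride\listoverridecount25}}}"
print(suspicious_listoverridecounts(sample))   # -> [25]
```

A real detector has to cope with RTF obfuscation tricks (whitespace, split control words, embedded objects), so a plain regex like this is a teaching aid, not production detection.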
3) try to enable the SDCC codepath within z88dk - maybe some of the guys on the z88dk forum could help

Alternatively, since the SP1 is so good/fast and the C libraries so optimal - it might be better sticking with z88dk, if you can afford the code size/speed issues. Just depends on what you need for your game I guess!

I stop by occasionally so I can answer right here.

You can use sdcc with z88dk's libraries now. The documentation is still being written but you can find what is done here: www.z88dk.org/wiki/doku.php?id=temp:front

If you're using Windows, the process is very simple.

1. Download and install an sdcc nightly build from sdcc.sourceforge.net/snap.php . Add sdcc/bin to your path. It's important to use a nightly build because a number of important(ish) bugs have been fixed since their last release.

2. Get the sdcc-z88dk patch from z88dk.cvs.sourceforge.net/viewvc/z88dk/z88dk/libsrc/_DEVELOPMENT/sdcc_z88dk_patch.zip . Copy zsdcc to the sdcc/bin directory. This is a slightly modified version of sdcc that will be invoked by z88dk. It hasn't been accepted by the sdcc team yet because it causes some regression test failures for a few non-z80 targets supported by sdcc.

3. Get the z88dk nightly build as instructed here: www.z88dk.org/wiki/doku.php?id=temp:front#installation You should remove your old install first and set up path information as described.

Here's an example compile with the new c library using sccz80:

zcc +zx -vn -clib=new test.c -o test

Here's the same program compiled using sdcc:

zcc +zx -vn -clib=sdcc_ix --max-allocs-per-node200000 --reserve-regs-iy test.c -o test

Note for sdcc: The code generator's optimization level (200000 here) can cause compilation to be really slow when high, so you may want it low while developing. On the zx target you cannot use the iy register if the rom's isr is running. So you must compile with --reserve-regs-iy if the rom isr is active.
Another compile that will have smaller code using sdcc:

zcc +zx -vn -clib=sdcc_iy --max-allocs-per-node200000 test.c -o test

The difference here is the library will use iy instead of ix so that it does not have to save the frame pointer (ix) for the compiled code. Because iy is being used the ROM isr cannot be allowed to run. The output is a raw binary that by default should be loaded at address 32768. You can make a tap file out of that using appmake or your own tools.

There are a few examples ready that you can read over: www.z88dk.org/wiki/doku.php?id=temp:front#examples . These examples are somewhat over-complicated on purpose to explain some facet of the library. But the info on library configuration, pragmas and memory model are especially useful for a deeper understanding.

One difference between the classic c library and the new one is you never have to link against any other library unless you are using float math (-lm). For example, no more "-lndos -lsp1" etc. The compile lines above are sufficient for everything. You can't use "-create-app" anymore either as the output from the new c lib could be more than one binary; use appmake directly instead to make tap files. The clisp example explains it briefly and most of the example source code has an appmake invocation in the comments.

The second difference is z80asm has become section aware and the new c lib and crts use sections extensively. User code and data has to be assigned to defined user sections (code_user, rodata_user, smc_user, data_user, bss_user) in order to be placed inside the binary. You are allowed to make your own sections but as these are unknown inside the crt (it sets up the memory map), those sections will be output as separate binaries. Your sprite data, for example, can be assigned to rodata_user:

_mysprite: defb 0,0,0,0,...

As an example of creating your own section, you could assign data to one of the 128's extra memory banks:

_bank6_data: defb 0, 0, 0...
After a compile you would get two binaries: out_CODE.bin and out_BANK6.bin. The first is the usual program and the latter is data you should place in bank6.

You will always get smaller and faster code when using sdcc in combination with z88dk:

- sdcc's libraries are rudimentary and written in C, which makes them large versus z88dk's, written in asm, which are 3x smaller.
- z88dk implements all of sdcc's primitives in asm (this is stuff like integer mul/div, float math, etc) whereas sdcc only does 16-bit math in asm.
- z88dk changes all of sdcc's calls to library and primitives to callee and fastcall linkage, which saves hundreds of bytes.
- z88dk adds a new peephole rule set that fixes a couple of sdcc code generation bugs and improves sdcc's code, especially when dealing with statics. Depending on the program, you can save nearly 1k of memory on a 20k memory size.
- Finally, z88dk supplies a RAM model for code generation, which means it throws away the duplicate data section used to initialize non-zero variables in RAM.

Here are the sdcc compiles that you would probably want to use on the zx target:

zcc +zx -vn -SO3 -startup=31 -clib=sdcc_ix --max-allocs-per-node200000 --reserve-regs-iy test.c -o test
zcc +zx -vn -SO3 -startup=31 -clib=sdcc_iy --max-allocs-per-node200000 test.c -o test

It's possible that -SO3 creates incorrect code (there is an issue with sdcc's peepholer that hopefully is solved sometime soon), so you may want to try without it during program development. If -SO3 leads to a crash, let me know so I can investigate. -SO3 uses a much expanded set of peephole rules.

Some more details on optimization here: www.z88dk.org/wiki/doku.php?id=temp:front#optimization_level and the following section describes the steps taken to compile a program. Sometimes that helps to demystify how things work.

The -startup=31 uses a bare bones crt that does not instantiate anything on stdin/stdout/stderr. You can still use the sprintf/sscanf families if you need them.
If you do use them, make sure to configure the library to eliminate any converters (%I, %B, %x, etc) that you do not need. They have been written to allow you to reduce printf size by eliminating things you don't need.

He didn't understand the tools. He compares a bare sdcc crt with a z88dk crt that instantiates files on stdin/stdout/stderr and allocates space for a heap and other platform functions. sdcc's output also does not include the bss segment (ie your variables) -- the memory is required but it's not counted in the binary. So z88dk comes out of the gate with a high memory disadvantage.

He has issues with crashing because z88dk uses the full z80 register set whereas sdcc uses only the main set (and index registers); the CPC's firmware requires that the exx set not be modified. He then fails to use the library code (the library supplies asm implementations of qsort and line draw which are many times faster than sdcc c code). If the goal is to compare translated C, he writes as if on a 32-bit platform. C code that accesses a stack frame leads to code that is 3-5x slower than asm code -- on small uPs statics should be preferred where acceptable.

You can find some hints for better results (code size and speed) here: www.z88dk.org/wiki/doku.php?id=temp:front#reducing_binary_size

sdcc generates quite poor code when accessing statics (and we try to fix that with the -SO3 peephole rules) but sccz80 improves greatly. We're finding that while sdcc will usually generate faster code, sccz80 usually generates smaller code when those hints are followed. A lot of people are confused by the code generated by sccz80 vs sdcc, but that's because sccz80 is trying to implement its actions as subroutine calls to reduce code size whereas sdcc is inlining everything, which makes it faster yet larger.
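The statics-vs-stack-frame advice above can be sketched in plain C. This is a generic illustration, not z88dk code; the function names are made up, and the pattern (hoisting hot working variables into statics where re-entrancy isn't needed) is the point:

```c
/* Stack-frame version: total and i live in the frame, so on the z80
   every access goes through ix+offset arithmetic - harmless on a
   32-bit CPU, slow and bulky here. */
unsigned int sum_stack(const unsigned char *p, unsigned char n)
{
    unsigned int total = 0;
    unsigned char i;
    for (i = 0; i != n; ++i)
        total += p[i];
    return total;
}

/* Static version: same logic, but the working variables are statics,
   which the compiler can reach with direct absolute addressing.
   Trade-off: the function is no longer re-entrant. */
static unsigned int  s_total;
static unsigned char s_i;

unsigned int sum_static(const unsigned char *p, unsigned char n)
{
    s_total = 0;
    for (s_i = 0; s_i != n; ++s_i)
        s_total += p[s_i];
    return s_total;
}
```

Both versions compile anywhere and return the same results; on the z80 the second typically produces smaller, faster code, at the cost of re-entrancy.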
Anyway, sorry for the long post, but the answer is you can use sp1 now with sdcc, and using sdcc with z88dk's libraries gets you the best of both worlds.

Careful though -- sdcc does not include the bss segment in the output binary. Actual memory usage is greater than the file size!