Document
stringlengths
395
24.5k
Source
stringclasses
6 values
(function() { var _routes = {}; var Dispatcher = function() { return { dispatch: function(action, context) { var view = _routes[action]; if (!view) { throw 'Mob.RouteNotFoundError: ' + action; } var executeAfterShow = typeof view.afterShow === 'function'; if (typeof view.fetch === 'function') { var deferred = view.fetch(); deferred.done(function(data) { context.data = data; view.show(context); executeAfterShow && view.afterShow(); }); if (typeof view.onFetchError === 'function') { deferred.fail(view.onFetchError); } return; } view.show(context); executeAfterShow && view.afterShow(); } }; }; var View = (function() { var _copySuperProperties = function(superclass, subclass) { for (property in superclass) { if (!subclass[property]) { subclass[property] = superclass[property]; } } } var _extend = function(definition) { var _super = this; for (property in _super) { if (typeof definition[property] === 'undefined') { definition[property] = _super[property]; } } definition['extend'] = _extend; definition['_super'] = _super; _copySuperProperties(_super, definition); return definition; }; return { extend: _extend } })(); var Router = (function() { return { route: function(action, view) { _routes[action] = view; } } })(); module.exports.Dispatcher = Dispatcher; module.exports.View = View; module.exports.Router = Router; })();
STACK_EDU
A package deal commonly utilized by the programming Group will probable be obvious on GitHub. At some time of producing a seek out ggplot2 on GitHub yielded about four hundred repositories and Nearly two hundred,000 matches in dedicated code! Similarly, a offer that's been adopted for use in academia will are usually mentioned in Google Scholar (again, ggplot2 scores really well During this evaluate, with more than 5000 hits). I would like to understand R to carry out my position as I'm an item supervisor to get a program firm that interacts with R. I am now ready to understand R scripts and with any luck , lead some of my own. On these systems, text was typically routinely composed to generally be appropriate with these printers, For the reason that principle of unit motorists hiding these types of hardware specifics from the application was not still nicely designed; applications experienced to talk on to the Teletype equipment and follow its conventions. I look ahead to taking another study course on studies.com - a terrific way to continue Finding out in a structured fashion, but flexible ample to take part while Life proceeds. An assignment operation is actually a approach in very important programming during which diverse values are connected to a specific variable name as time passes. This system, in this sort of product, operates by modifying its condition making use of successive assignment statements. The class, Introduction to R Programming Aspect Two, taught by Joris Meys was Outstanding!!!Most of the program supplies have been particularly helpful in permitting me don't just to know the details of R programming but also to get a solid standpoint on the basics of R coding. a project (i.e. during the preparing stage, exactly where we are now), all you need to know is that it is completely essential to help make wise conclusions within the outset. If you don't, your project may have a peek here be doomed to failure of incessant rounds of refactoring. The Multics functioning process began growth in 1964 and made use of LF on your own as its newline. Multics utilised a tool driver to translate this character to whichever sequence a printer necessary (such as added padding people), and The only byte was a lot more convenient for programming. What now seems a far more apparent preference of CR wasn't applied, to be a basic CR furnished the valuable perform of overprinting just one line with One more to make boldface and strikethrough outcomes, and so it absolutely was practical to not translate it. hoopla is really a electronic media platform that provides usage of digital amusement content material from possibly cellular units such as smartphones and tablets and/or by using any browser. - Welcome to SAS Programming for R Customers, Statistical Modeling. At this time, you can import your data into SAS, change the information to fulfill your specs, and make graphics and summary figures to acquire a truly feel for the info. You're now Prepared to begin producing some statistical products. As an alternative to attempting a comprehensive remedy of The subject We are going to contact briefly on two or three ways of documenting your operate in R: dynamic studies and R deals. Giving Spanish language e-publications that may be go through on-line or downloaded to products, as well as language classes for Spanish, English, and "Fast Languages", a system in easy text and phrases in twelve languages. Also provides multimedia sections with artwork, songs and Personal computer instruction. Large projects involving dozens of men and women, Then again, need A great deal energy devoted to project management: standard meetings, division of labour plus a scalable project administration program to trace progress, difficulties and priorities will inevitably eat a large proportion of the project’s time. Fortuitously a large number of dedicated project management devices have been created to cater for projects across An array of scales. These incorporate, in rough ascending get of scale and complexity: Strategic thinking is very significant through a project’s inception: in case you produce a poor selection early on, it could have cascading detrimental impacts throughout the project’s full lifespan.
OPCFW_CODE
Let $f\in L^{\infty}$, its that true $Pf\in H^{\infty}$? My question is: Let $f\in L^{\infty}[S^1]\subseteq L^2[S^1]$, is that always true for $Pf\in H^{\infty}[S^1]\subseteq H^2[S^1]$? Here, $S^1$ is the unit circle in complex plane, and $H^{\infty}$ denote the subspace of $L^{\infty}$ consisting of functions whose $n$-th Fourier coefficients are all zero, $n<0$. Similarly defined $H^2[S^1]$ by the closed subspace of $L^2[S^1]$ consisting of functions whose $n$-th Fourier coefficients are all zero, $n<0$. $P: L^2\mapsto H^2$ denote the projection operator. Here are my thoughts. This question rises when I study Toeplitz operators on hardy-Hilbert space(GTM 237). It is equivalent to ask: for all $\phi\in L^{\infty}$, is that always true $T_{\phi}1\in H^{\infty}$? I guess the answer is not and try to make a proof by contradict. Let $U$ denote the right shift on $H^2$, if $T_{\phi}1\in H^{\infty}$. Calculation can shows $T_{\phi}U-UT_{\phi}$ has at most rank one, so we can obtain $T_{\phi}e^{in\theta}=T_{\phi}U^n1=U^nT_{\phi}1+F_n1$, where $F_n$ is rank one operator. This shows $T_{\phi}e^{in\theta}$ are all in $H^{\infty}$. But note that $H^{\infty}$ is dense but not closed in $H^2$, so I don’t know how to get a contradiction. This is in fact a problem in Fourier analysis. So I think it can be solved by use some knowledge in Fourier analysis. Any help or hint? Thanks! Take $f(\theta)=\sum_{n \ge 1} \frac{\sin n\theta}{n}=\frac{\pi -\theta}{2}, 0 < \theta < 2\pi$; $f \in L^{\infty}(S^1)$ (this could be proved easily summing by parts even if we do not know the result) However $2iPf=\sum_{n \ge 1} \frac{e^{in\theta}}{n}$ is obviously unbounded near $0$ Actually, even more, is true - if we denote by $\bar H_0^{\infty}(S^1)$ the space of bounded functions with Fourier series with only strictly negative indices coefficients (the conjugate of the functions in $H^{\infty}$ except that we let the constant term to be zero to avoid intersection), $H^{\infty}+\bar H_0^{\infty}$ is far from being dense in $L^{\infty}(S^1)$
STACK_EXCHANGE
Scandalous sexism in Silicon Valley looks at a controversy that reveals the old-fashioned sexism which permeates the high-tech culture of Silicon Valley software companies. THOSE OUTSIDE the Silicon Valley bubble may not have heard about "Donglegate," the big tech scandal of this past month. It involves lots of sexism, two firings and disillusionment for anyone who thought either women or employees in general were making progress in getting respect in the software industry. In brief, this is the story: Adria Richards, an employee of e-mail provider SendGrid, attended a tech conference called PyCon for her job. Two men sitting behind her at a session, developers for the gaming company PlayHaven, began making sexist puns to each other involving the technical terms "dongle" and "forking." Richards tweeted a picture of them. One was subsequently fired by PlayHaven. Richards then began receiving a flood of online vitriol, including rape and death threats--and was soon fired in turn by SendGrid. The problem of sexism in the tech industry--and the relationship between the male numerical dominance in the field and sexist "humor" in a professional setting--has received increasing acknowledgment recently. That's why Richards, in her tweets, asked for PyCon staff to intervene and got action. PyCon has a code of conduct that the jokes violated. As the conference organizers explained in a subsequent statement: "Both parties were met with, in private. The comments that were made were in poor taste, and individuals involved agreed [and] apologized." But while the PlayHaven developers were apparently willing to apologize--before there was any indication that their jobs were at stake--some men are less willing to give up on sexually harassing the women around them. The online mob that got Richards fired was part of a larger backlash, as Alice Marwick explained at Wired: The technology industry considers itself a meritocracy where the "good" ones--for example, talented engineers and programmers--will rise to the top regardless of nationality, background, race or gender... If we admit there are structural barriers to entry and a culture that actively discourages women and men of color from participating, then it logically follows that technology is not a meritocracy. And this threatens many dearly held beliefs...It suggests that the enormous wealth generated by tech startups and founders isn't justified by their superior intelligence. Almost no tech workers will ever see the millions or billions accumulated by people like Steve Jobs, Bill Gates, Larry Ellison, Sergei Brin or Larry Page. But that doesn't stop many from identifying with the people who make huge profits off their labor--and feeling threatened by any challenge to the macho-geek competitive culture that dominates tech. THE FIRINGS of Richards and the unnamed PlayHaven developer show who this institutional sexism really serves. There has been plenty of debate about whether what Richards did right or wrong--much of it putting an absurd burden on women to navigate a sexist culture. But amid all the attacks on so-called political correctness, it is nearly taken for granted that a company can, should and will fire an employee on a whim or to meet a transient public relations need. Many thought, with reason, that making a "dongle" joke shouldn't get you fired--but the public rage of many of them focused on Richards, who came out against PlayHaven's action, and not on PlayHaven's bosses, who actually made the decision to fire the dongle-joker. SendGrid, unlike PlayHaven, was the target of distributed denial-of-service attacks which disrupted its website and its email service, until it fired Richards, after which the attacks subsided. Companies like SendGrid and PlayHaven consistently take a public position in favor of gender equality and would certainly never officially condone the vile sexist language used ad nauseum against Richards on Facebook and Twitter. After all, they have some number of female customers and employees. But their bottom line might take a hit if there was any disruption to a culture in which male workers get to act like adolescents with adult money--in particular if that disruption involved reasonable working hours, a say in important decisions at work, or even any examination of the real relationship between them and their employers or investors. Silicon Valley can't afford for workers to stop believing the meritocracy myth. In discussions on social media or tech sites like HackerNews, PlayHaven and SendGrid tend to get the benefit of the doubt that is rarely extended to their fired employees. Perhaps PlayHaven had more reasons for its firing than the thin public ones? Did SendGrid have any alternative, given that Richards' job was outreach to developers and a section of her intended audience was in a misogynist fury? Who feels safe in a world where a bunch of random sexists in the Internet can get you fired, even if your employer agrees they're irrational? In another context, these questions might point to the problematic nature of a system which actually obligates many people in positions of power to make decisions on the basis of profits rather than principle. But in practice, arguments like these have too often been used to excuse the people who casually dumped two of their workers--workers who had produced a chunk of the value of their businesses--while blaming the victims. It would be nice to say people in tech have learned something from this. But indications right now are that the anti-feminists have won, and the rest of us have lost. PyCon is adding a "no public shaming" clause to its code of conduct, explicitly to prevent people from doing what Richards did. In practice, this will no doubt discourage some women from reporting harassment at all--and ensure that when others do, it will be dealt with as a problem of individual behavior, rather than something cultural, institutional or, dare we say it, political. The nonprofit Girls Who Code--in theory dedicated to "closing the gender gap" in tech--is taking donations from the proceeds of a "Fork my dongle" t-shirt. This only adds to the pressure on women in tech to distance themselves from the "complainers" like Richards and endure jokes like they are "just one of the guys." Every Silicon Valley start-up company advertises its flat management culture, the freedom of workers from petty oversight, its flexibility in work styles--not to mention its commitment to equality of opportunity. This self-congratulation isn't without its element of truth, compared to other industries. But SendGrid and PlayHaven have just shown us its limits.
OPCFW_CODE
Examples of <link> tag in a HTML using different 'rel' attributes? Could someone provide meaningful examples of HTML <link> tag with different values of "rel" attribute? Apart from the possible values explanation for the "rel" attribute, are there any practical uses for the different values (other than stylesheet value)? Do you mean the <a> tag? i never use rel attribute. but i am sure there is a use for it updated my question, highlighting <link> tag. I need your help. Please visit https://stackoverflow.com/q/66546478/14467588. To get a number of useful examples, I would recommend looking here: http://www.w3.org/TR/html4/struct/links.html. This looks like a very concise, slightly easier to navigate version of w3: http://www.whatwg.org/specs/web-apps/current-work/multipage/links.html#rel-alternate Yes, I got the explanations. Wanted to see practical examples. Thx. @zippymind Sorry for not getting the examples sooner, but if you haven't, please double-check the second link I posted. There are practical examples on that site. In fact, all the standard values are listed: http://www.w3.org/TR/html5/links.html#linkTypes @DiegoAgulló Like he said, he wants examples, not just explanations. The link you've posted is great, but has no tangible examples. Thanks @NathanWhite, the second link provides clear usage examples and supported browsers as well. There aren’t that many meaningful real-life examples of link elements, apart from rel=stylesheet. There are various lists of rel values that might be used, with definitions and ideas of how they might be used by browsers or search engines, but little actual evidence of such use. But some elements that have real effects in some cases are: <link rel=icon ...> defines a shortcut icon for the page; widely recognized by browsers and user by authors <link rel=alternate ...> is described in Google instructions, so Google presumably does something with it <link rel=canonical ...> suggested in other Google instructions <link rel=prefetch ...> causes the referenced page to be preloaded, in some browsers Addendum: rel/prev, author is only used by Google/G+ right now. At https://www.sitepoint.com/rel-html-attribute/ they state The "alternate" attribute value may also be used in the context of XML feeds, namely RSS or Atom, which are indicated with the type attribute Also, site navigation with rel="first|last|prev|next" is supposed to make navigation easier and is e.g. used by a specific firefox extension (https://addons.mozilla.org/de/firefox/addon/site-navigation-bar/?src=search). I Only know 'icon' value, you can set pavicon of your HTML page that appear on browser tab, using: <link rel="icon" type="icon" href="path/to/youricon.ico" /> hopely useful: http://www.w3schools.com/tags/att_link_rel.asp examples: <link rel="stylesheet" href="style.css"> <link rel="icon" href="image.png"> <link itemprop="URL" href="http://..."> <link rel="next" href="page3.php"> <link rel="author license" href="author_license.php"> <link rel="icon" href="favicon.png"> more examples: link element and rel attribute on link element
STACK_EXCHANGE
In the previous blog article we discussed dynamic scenario planning driven by disruption and digitization and where do supply chain scenarios come from. Scenarios must fall into step with the existing planning process and are subject to the same checks and approvals as a baseline plan. A scenario is a true parallel plan that exists separate from and concurrent with the baseline. The lifecycle of a scenario has four stages: - The initiation phase that focuses on data collection and validation - The planning phase determines the event impact and a preferred course of action - The scenario analysis and comparison phase In practice, supply chain practitioners manage several stages concurrently. During a single planning cycle, the scenario considered the “best” may change several times as new data comes to light. Some scenarios may never be seriously considered due to excessive risk or incomplete data. This phase builds a parallel plan which usually inherits all values, risks, assumptions, and process status from a baseline plan or another scenario. The biggest challenge in any scenario planning exercise is access to accurate data. This is especially true for unforeseen events where it may take some time to source and validate the appropriate data. Often this process is iterative as updates are received. This phase updates the demand and supply plans incorporating any new data. This may generate a scenario-specific set of exceptions and required plan approvals. All plans must be subject to the same planning process or workflow. Early scenario management techniques, often based on Business Intelligence technologies, make the mistake of copying Plan-A to Plan-B, changing some values, and performing a Delta. A planner cannot equally compare “Plan-A” to “Plan-B” if only one plan has been subject to an approval and exception management process. Each plan has its own set of assumptions and risks. The most important role of scenario management is analysis and comparison. Stakeholders must agree on the list of metrics that will be used to compare scenarios. They should also agree on the relative weighting given to each metric. The comparison metrics should be cross-functional including financial KPIs such as profitability and revenue, as well as regular supply chain metrics such as inventory, service level, and resource utilization. Of course, no plans come risk-free, so comparing the risk profiles of each scenario gives a qualitative data influence on each candidate plan. Beauty being in the eye of the beholder, the “best plan” is subjective to the bias of the stakeholders. Therefore the collaborative scenario analysis process must facilitate an auditable record of the decision-making process. The final step is to “Publish” the result. This establishes the new baseline plan and represents the activities that will be executed, providing of course another compelling event doesn’t take precedence. Scenario analysis is a continuous process. Minefields and Pitfalls Scenario analysis is a powerful tool that can help to guide a company through disruption, giving management information it can use to make the best possible decisions for the company. It can, however, consume resources and time. “Scenarios should be used only when there is, or soon will be, a genuine fork (or forks) in the road ahead.” Here are several pearls of wisdom from our experts. - Planners should be clear on the purpose of a scenario. What is the event and do we have access to the corresponding data points? - A single disruptive event may dictate several courses of corrective action and therefore multiple scenarios. - Scenario simulation and analysis is highly analytical. It requires competence in data literacy and risk management. - Collaborative qualitative insight is crucial to determining the scenario risk. Share and invite collaboration on each scenario to harness the collective knowledge of the stakeholders. - Not every scenario can be validated. Sometimes the data just is not available when you need it. In this case, it is more effective to record a Risk against the current plan rather than execute a scenario. The Risk creates a placeholder for mitigation or a later scenario. - Scenario planning should be used for all levels of planning including operational decision making. This is different from traditional techniques that focus on strategy and tactical planning. The frequency, concurrency, and scale of disruption in global supply chains will most likely continue post-pandemic due to natural, geopolitical, economic events. Preparation for the unknown provides at best a contingency and at worst a shock-absorber. In some cases, proper scenario planning negates an existential threat. Dynamic scenario planning is a prerequisite for supply chain agility.
OPCFW_CODE
The metaverse is a topic currently, though the concept has a long history. Twenty years ago, in the dotcom era, I was exploring this space, as I was recently reminded. Feeling nostalgic, I dug these projects out of the NAS archives. Tech has moved on, but there’s enduring relevance in what I learned. The UI consisted of two browser windows: a first person view and motion controls (using the now defunct Cosmo Player), and a map in a second window. The draggable compass needle, the checkpoints and the course logic (must visit checkpoints in order), and a widget that visualised your completed route as an electric blue string hovering 3 feet above the ground were all modular VRML Protos. The map and terrain for the only level ever created were generated together with a custom C++ application. I was pretty pleased this all worked, and it demonstrated some concepts for… 4DUniverse was a broad concept for virtual online worlds for socialising, shopping, gaming, etc, similar at the time to ActiveWorlds (which still exists today!), but again accessible through the browser (assuming you had a VRML plugin). I thought I’d have great screengrabs to illustrate this part of the story, but I was surprised how few I’d captured, that they were very low resolution, and that they were in archaic formats. The source artefacts from this post – WRL, HTML, JS, JAVA, etc – have lost no fidelity, but would only meet modern standards and interpreters to various degrees. Maybe I will modernise them someday and generate new images to do justice to the splendour I held in my mind! We authored a number of worlds, connected by teleports, with the tools we had to hand, being text editors, spreadsheets, and custom scripts. While a lot of fun, we came to the conclusion that doing the things we envisaged in the 4DUniverse wasn’t any more compelling than doing them in the 2D interfaces of the time. VRML eventually went away, and probably because no one else was able to make a compelling case for its use. At least I crafted a neat animated GIF logo rendered with POV-Ray. Less multiverse, and more quantum realm, I also generated VRML content at nanoscale with… NanoCAD was a neat little (pun intended) CAD application for designing molecules, which I extended with a richer editing UI, supporting the design of much more complex hypothesised molecular mechanisms. The Java app allowed users to place atoms in 3D and connect them with covalent bonds. Then an energy solver would attempt to find a stable configuration for the molecules (using classical rather than quantum methods). With expressive selection, duplication and transformation mechanics, it was possible to create benzene rings, stitch them into graphene sheets, and roll them up into “enormous” buckytubes, or other complex carbon creations. I also created cables housed inside sheaths, gears – built with benzene ring teeth attached to buckytubes – and other micro devices. If 4DUniverse was inspired by Snow Crash, NanoCAD was inspired by The Diamond Age. Nanocad could run in a browser as an applet and the molecules could also be exported as WRL files for display in other viewers. Comparing contemporary professional projects It’s nice to contrast the impermanence of these personal projects with the durability of my contemporary professional work with ANCA Machines. At the time, I was documenting the maths and code of Cimulator3D and also developing the maths and bidirectional UI design for iFlute, both used in the design and manufacturing of machine tools via grinding processes. Both products are still on the market in similar form today, more than two decades later. I wonder how I’ll view this post in another twenty years?
OPCFW_CODE
Any update on the M5Paper "v2" or "-S3"? fonix232 last edited by Ever since the release of the original M5Paper, the community has been pointing out certain issues with the approach and components used. A really good counter-example was LilyGo's T5-4.7, which presumably used the same (or very similar) EPD, with touch and BMS in tow, plus a number of extra features. It also had some downsides (e.g. using a port multiplexer to communicate with the EPD directly, taking away processing power from the ESP32). Since, LilyGo has updated their model with the ESP32-S3, with a slightly revamped design, and a similar price point (albeit without touch built in or a case). I would love to see a next-gen model by M5Stack to be frank. M5 products feel more finished, ready-to-use for tinkerers and DIYers, and the support tends to be better as well. Simply said, I could take an M5Paper and slap it on my wall, and with some minimal coding I'd have a home control panel. With LilyGo products, I'd need to work around their odd design choices (e.g. USB port location), 3D print a case, and so on. So to summarise, here's my "wish"list of what I'd love to see in an upgraded version of the M5Paper: - ESP32-S3 module as core - Direct USB connection (incredibly useful for e.g. CircuitPython!) - A better EPD controller, preferably open source, and flashable from the ESP core (an RP2040 would work quite well for this purpose I think) - A better BMS/fuel gauge (voltage based calculations are incredibly rudimentary) - Proper deep sleep on the ESP cores - USB-C port moved to the center of the side it resides in - The three-direction side button replaced with a potentiometer wheel, with an optional click (or alternatively, an "A Button" on the side Optional upgrades that would be welcome: - Extra sensors built-in (e.g. brightness sensor) - Front light on the EPD - Second USB-C port for host mode - POGO pins and magnetic mounting, similar to the "base" unit of a Core, or rather, the magnetic adapter, where the extra ports (Ports A, B and C) can be moved to (would be perfect for a wall mounted unit that can be pulled off for controls, and when reinserted, would charge and provide extra sensors). There's been rumours that you guys are working on it, but it's been nearly three years since the release of the OG M5Paper, and aside from two very minor iterations, we've seen no updates...
OPCFW_CODE
Unable to understand algorithm for Longest Increasing Sub-sequence I have been through many online resources to understand how the problem has the optimal sub-structure, but all in vain, I am unable to comprehend how the solution is obtained by solving smaller sub-problems in this case. I would be thankful if any explanation helps in understanding the solution. so far, I understand the optimal sub-structure property as follows: Example Factorial: So for a factorial of 40 ,fact(40), we can achieve the solution by calculating fact(39)*40, and so on for 39,38....2 ans as we know that fact(2) is 2 we can build it up from 2 to 40 in the same way. But I am not able to relate in terms of LIS, please help A full explanation of the solution would be nice, excluding the overlapping subproblems issue, as that can be handled later on. Thanks. Hello, Hiresh - are you confusing recursion and LIS, by chance? Typically, the algorithm for LIS (which may be recursive) involves a sequence as an input. (typically for sorting, to determine O). The factorial example given is recursion. Hi John, although the example is recursive, I think has a optimal substructure property, as the small problems are used to build up the final problem, pls correct me if I am wrong. Please see the answer below - the issue with N!, and its optimal substructure is that the very definition of N! (and the sequence derived) yields an LIS of length N. Again, a different question, for a different purpose. Before considering optimal substructure, you need to decide what are subproblems in the case of LIS. Let's use this definition: In an array a[N] of length N a subproblem LIS[k] is to find the length of a longest increasing subsequence from the initial index, which ends precisely at the element a[k]. It is important to understand the difference here: LIS[k] is not a solution to LIS on the first k elements; that would be Max(LIS[i]) for all is up to k. It is the length of the longest increasing subsequence that ends at the particular element. With this definition in hand, it is easy to construct a solution to LIS: For each i up to N: Set LIS[i] to 1 (in the worst case, a number is a subsequence of one) Search LIS[j] from the initial element up to i-1, inclusive, for js such that a[i] > a[j], and LIS[j]+1 > LIS[i]. It is easy to see that the above algorithm constructs a solution to LIS[i] given solutions to subproblems LIS[j] for all js below i in O(i). Since we can construct a solution to a k problem from solutions to k-1 sub-problems, the problem has optimal substructure. Note: The above can be further optimized by using binary search. The reasoning about subproblem and optimal substructure remains the same, though. I would like to request @dasblinkenlight to [http://stackoverflow.com/questions/43236912/how-does-finding-a-longest-increasing-subsequence-that-ends-with-a-particular-el](answer this, this will finally solve the issue for me) LIS for the sequence (problem) can be solved by using LIS for the smaller sequences (sub-problem)and that's why it is known to have the optimal sub-structure. Let me try to explain how this works with the example. Let's take a random sequence of 10 numbers: [21, 24, 13, 48, -3, 41, 36, 8, -10, 22] We will use smaller problems (smaller sequence) to solve this problem. The definition of the subproblem is this case is 'What is the longest increasing sequence of numbers that ends at a given element?' It is important to understand that this subproblem is NOT the same as LIS- the subproblem has a stricter definition that the original problem (LIS doesn't need to end at the last element). For readability purposes, I'll call the 'longest increasing subsequence that ends at given element': LIS*. The relationship between LIS* and LIS is that LIS takes Max of LIS* Let's start with just one number. [21] What is the longest increasing subsequence that ends at 21? It is simply '21'. Therefore the length of our sequence is 1. Sequence: [21] LIS*: 1 LIS: 1 Now, for the second element (24), according to the definition, we need to use the solution for the problem we already solved. We will use LIS for the first element, check whether second element is bigger (a[i] > a[j) and whether LIS[j]+1 > LIS[i]. The longest increasing sequence that ends at 24 is '21, 24' and has the length of 2. Sequence: [21, 24] LIS*: 2 LIS: 2 Let's take 3rd element (13). What is the longest increasing subsequence that ends at 13? Well, 13 is smaller than 21 and smaller than 24 so the condition a[i] > a[j] is not fulfilled for any of the previous elements.The longest increasing subsequence that ends at 13 is therefore just '13' and has the length of 1. LIS for sequence 21,24,13 is still 2. Sequence: [21, 24, 13] LIS*: 1 LIS: 2 Let's look at 4th element (48). We know that the solution for the 3-length sequence was 2. We can find previous element (24) that fulfils criteria a[i] > a[j] and LIS[j]+1 > LIS[i]. We know that solution for the previous element (smaller problem) was 2, therefore the solution here will be 2+1=3. Sequence: [21, 24, 13, 48] LIS*: 3 LIS: 3 We repeat the logic for all subsequent elements. Sequence: [21, 24, 13, 48, -3, 41, 36, 8, -10, 22] LIS*:            1    2    1    3    1    3    3    2    1    3 LIS:             1    2    2    3    3    3    3    3    3    3 As you can see, the solution to the original problem (long sequence) can be obtained by looking at sub-problem (smaller sequences) and therefore this problem is known to have the optimal sub-structure.
STACK_EXCHANGE
How to Calculate Power Consumption Cost Calculating the cost of power consumption is essential for both individuals and businesses to manage their expenses and make informed decisions regarding energy usage. Understanding how to calculate power consumption cost can help you budget effectively and identify potential areas for energy efficiency improvements. In this blog post, we will discuss the steps involved in calculating power consumption cost in a simple and straightforward manner. Step 1: Gather Information The first step in calculating power consumption cost is to gather the necessary information. You will need to know the following: - The power consumption of the device or appliances in watts (W) - The number of hours the device is used per day - The cost of electricity per kilowatt-hour (kWh) charged by your utility company Step 2: Calculate Daily Energy Consumption To calculate the daily energy consumption of a device, multiply its power consumption in watts by the number of hours it is used per day. For example, if a device consumes 500 watts and is used for 4 hours a day, the daily energy consumption would be 500 watts x 4 hours = 2000 watt-hours (Wh). Step 3: Convert to Kilowatt-Hours Most utility companies charge for electricity in kilowatt-hours. To convert the daily energy consumption from watt-hours to kilowatt-hours, divide the watt-hours by 1000. Using the previous example, 2000 watt-hours would be equivalent to 2 kilowatt-hours (kWh). Step 4: Calculate Daily Cost To calculate the daily cost of power consumption, multiply the daily energy consumption in kilowatt-hours by the cost of electricity per kilowatt-hour. For instance, if the cost of electricity is $0.10 per kWh, the daily cost would be 2 kWh x $0.10 = $0.20. Step 5: Calculate Monthly or Yearly Cost To determine the monthly or yearly cost of power consumption, multiply the daily cost by the number of days in a month or year. For example, if we consider 30 days in a month, the monthly cost would be $0.20 x 30 = $6.00. What factors can influence power consumption cost? Several factors can affect power consumption cost, including: - The number of devices/appliances in use - Their individual power ratings - The duration of usage - The cost of electricity charged by the utility company - Seasonal variations in energy usage By considering these factors and regularly monitoring your power consumption, you can better manage your energy usage and reduce costs over time. In conclusion, calculating power consumption cost is a relatively simple process that involves gathering relevant information, calculating daily energy consumption, converting to kilowatt-hours, and then multiplying by the cost of electricity. By following these steps, you can accurately estimate your power consumption costs and make informed decisions about energy usage. Additionally, considering the various factors that influence power consumption costs can help you identify areas for improvement and ultimately save on your energy bills.
OPCFW_CODE
Ubuntu Netbook Edition Now that we've gone over all of the wonderful things that Unity is supposed to be, let's see how it works in UNE 10.10. Installation of UNE 10.10 was a breeze, just like its Desktop Editon counterpart. Setup options were completed during installation, along with restricted packages and updates. Proprietary Broadcom drivers for the Mini10v's Wi-Fi card were already active upon the very first boot. From this point on, things took an abrupt turn for the worse. Is There A Desktop, Or Not? We noticed that, although there is no traditional desktop in UNE, a Desktop folder still exists in the Home directory. After opening a text file from a USB thumb drive in gedit, we attempted to perform a Save As. The Desktop folder was the default location--fair enough. So, we'll just have to get to the Desktop folder from the file manager. Unfortunately, the Files & Folders entry in the launcher lacks a Desktop folder! Documents, Music, Pictures, Videos, and even Downloads are present, but no Desktop (or Templates, Public, Examples). We changed the sorting in the Files & Folder screen from All Files to Other, and it displayed this: Nautilus, the GNOME file manager, can be accessed via a folder icon shortcut in the upper-right side of the Files & Folders screen. Another way to open Nautilus is by inserting a new volume like a USB thumb drive and selecting its icon in the launcher. And there is always the terminal. However you do it, pin Nautilus to the launcher when you get it open. Files & Folders is not a suitable file manager, and a poor file browser. You will need Nautilus. Keeping Up The Suspense When we switched back to gedit from the Files & Folders screen, the Mini 10v went into suspend mode--only the first of many times this would happen. The Mini 10v again went into suspend while switching desktops. Although beautiful, the new launcher partially disappeared on us several times as well. While it eventually came back, it required using the slow and anemic new home screen to navigate. The home screen isn't without problems either. Selecting the Web entry again put the Mini 10v into suspend. After we logged back in, closing Firefox caused the entire GUI to go black and rebuild, as if X was reset. This behavior happened on several other occasions. Though we're not sure what was causing the GUI to reset, disabling suspend and the screensaver seemed to have fixed the awkward issues. It seems that the system does not go into suspend at the set time interval, but if that interval passes, it will go into suspend the next time you click anything (in effect, when you want to come out of suspend). The Fail Train Keeps On Rolling The fact that Unity places the window buttons and file menu of the forefront application in the upper panel like OS X isn't necessarily a bad thing. When the window buttons fail to appear, it is. This alternative windowing paradigm is also a problem for the world's fastest growing Web browser. There are two ways that Chrome (or Chromium) handles this: using the built-in window buttons, or using the system theme buttons. Both options produce duplicate window buttons. We also noticed that newly-installed applications cannot be pinned to the launcher until the system is restarted. And then there was this: Mutter is the compositing manager for Unity, which is based on Clutter, the toolkit introduced in Moblin. To be honest, we're not entirely sure how this happened, or the circumstances surrounding the appearance of this error. So many problems surfaced in UNE that it became difficult to document each one before another cropped up. But it happened, and we grabbed the screenshot. The one aspect of Unity that I thought was bound to be imperfect, was the workspace switcher. In reality, that was the one aspect that actually worked... really well. The action on the Workspaces tool is surprisingly snappy, and we experienced no noticeable lag activating it. The zoom in and zoom out animations are very smooth, and moving applications between workspaces is fluid. Returning to an application or switching to another workspace is also very quick, but had a tendency to restart the GUI on occasion.
OPCFW_CODE
RTEMS C User's Guide The RTEMS RAM Workspace is a user-specified block of memory reserved for use by RTEMS. The application should NOT modify this memory. This area consists primarily of the RTEMS data structures whose exact size depends upon the values specified in the Configuration Table. In addition, task stacks and floating point context areas are dynamically allocated from the RTEMS RAM Workspace. rtems/confdefs.h mechanism calcalutes the size of the RTEMS RAM Workspace automatically. It assumes that all tasks are floating point and that all will be allocated the miminum stack space. This calculation also automatically includes the memory that will be allocated for internal use by RTEMS. The following macros may be set by the application to make the calculation of memory required more accurate: The starting address of the RTEMS RAM Workspace must be aligned on a four-byte boundary. Failure to properly align the workspace area will result in the directive being invoked with the RTEMS_INVALID_ADDRESS error code. <rtems/confdefs.h> will calculate the value that is specified as the parameter of the Configuration Table. There are many parameters the application developer can specify to <rtems/confdefs.h> in its calculations. Correctly specifying the application requirements via parameters CONFIGURE_MAXIMUM_TASKS is critical. The allocation of objects can operate in two modes. The default mode has an object number ceiling. No more than the specified number of objects can be allocated from the RTEMS RAM Workspace. The number of objects specified in the particular API Configuration table fields are allocated at initialisation. The second mode allows the number of objects to grow to use the available free memory in the RTEMS RAM Workspace. The auto-extending mode can be enabled individually for each object type by using the macro rtems_resource_unlimited. This takes a value as a parameter, and is used to set the object maximum number field in an API Configuration table. The value is an allocation unit size. When RTEMS is required to grow the object table it is grown by this size. The kernel will return the object memory back to the RTEMS RAM Workspace when an object is destroyed. The kernel will only return an allocated block of objects to the RTEMS RAM Workspace if at least half the allocation size of free objects remain allocated. RTEMS always keeps one allocation block of objects allocated. Here is an example of using #define CONFIGURE_MAXIMUM_TASKS rtems_resource_unlimited(5) The user is cautioned that future versions of RTEMS may not have the same memory requirements per object. Although the value calculated is suficient for a particular target processor and release of RTEMS, memory usage is subject to change across versions and target processors. To avoid problems, the user should accurately specify each configuration parameter and allow <rtems/confdefs.h> to calculate the memory requirements. The memory requirements are likely to change each time one of the following events occurs: Failure to provide enough space in the RTEMS RAM Workspace will result in the being invoked with the appropriate error code. RTEMS C User's Guide Copyright © 1988-2007OAR Corporation
OPCFW_CODE
I've been working in the IT industry for almost 10 years and for the last 3 years as a web developer. It's been a great time and I'm super happy about my career path but I've decided that it is the time for a new challenge! Digital entrepreneurs were a huge inspiration for me and it made me think that I want to try myself in this area. To be honest I know very little about the business side of things. I am a regular programmer. Most of the time I learn about things related to the IT world or somethings around. That's why it is going to be a great challenge for me. I have the skills to implement the ideas but I don't know how to validate, market, and sell the products. I am planning to learn on the way and achieve as much as possible for the next 12 months. Right now it's a perfect time for me to start: - I am finishing and defending my Bachelor's degree in October. - My current contract with my employer ends on October 14th. I am not going to extend it. - COVID-19 is still here so I don't travel much. - I "stuck" in Bali which is cheap so I don't spend much money on living. I decided to dive into the entrepreneur's world for the whole year until November 2021. Or until I realize it's not for me 😃 The main goal is pretty simple: make something profitable. Small goal is to reach ramen profitability which is about €1000-1500 MRR for me. The big goal is to make it profitable enough to exceed the developer's salary (Around €80-90k ARR). It is a huge load of money so I don't have many expectations about this. But it's great to have a big goal. I am planning to make all process public using the Makerlog community, posts in this blog, and my Twitter. I will be as open as I could and I want to make my products transparent as well. At least with open revenue and statistics following the open startup movement. I understand all the risks and if I can't achieve anything and don't earn even a few bucks I will be fine. The experience and the learnings will still be valuable. I don't have a problems to get back being software engineer. After I've decided to make this happen I've got a message from a friend from the blogging community. She wanted to create a group of people making the startups every month for 12 months in total. The inspiration comes from the Piter Levels challenge back in 2014 and we all know where he is now :) It was a perfect opportunity for me to stay accountable and get a small community to share all the bumps on the road. So I agreed to take part in it. Now me and 3 awesome people are doing this. You can follow our journey in here: https://12xstartup.com/. In theory we will ship 48 startups by the end of 2021. Of course it could be less or more, it doesn't matter that much. If you want to follow my progress you can follow me on Twitter or subscribe to my blog updates with the form below. Besides, I will add the progress report by each project on the website. If you have any questions, suggestion or wanna chat, Twitter is the best way to reach me out.
OPCFW_CODE
Last month (May 7th – 9th 2019), I had the opportunity to attend ProgressNEXT in Orlando, FL. The opportunity to attend was presented to me by my good friend Sam Basu (@samidip), Developer Advocate at Progress Software (@ProgressSW). For a long time, my experience with Progress Software has been focused on the user interface developer tools offered under the Telerik brand name. After some additional research, I learned that Progress Software has a broad range of tools and platforms that, from a developer perspective, present additional opportunities to deliver solutions to our end users and customers. The team at Progress Software is awesome. I was fortunate to meet Courtney Ferrucci and Danielle Sutherby who help me with the logistics of getting to ProgressNEXT. Their attention to detail is impressive especially given the fact that they were key participants in planning ProgressNEXT for over 500 attendees. They truly made me feel welcomed the entire time of the event and were wonderful hosts. Upon arriving, I was greeted by the excellent team operating the registration desk. Registration was flawless. My registration was located, badge printed, and swag bag presented in what felt like under a minute. Once registration was completed, I strolled over to the evening reception where attendees were presented with a wonderful selection of food and drinks. There was also a live band playing great music which was perfect for the evening and a live alligator welcoming us to Florida. The first day of ProgressNEXT began with a great opening session. Loren Jarrett (@LorenJarrett), Chief Marketing Officer, welcomed all of the attendees and built up our excitement for all of the value we were about to receive from the additional session speakers and other conference sessions. The next speaker was the CEO of Progress, Yogesh Gupta. He gave a wonderful presentation on Modern Application/Systems Architecture and very eloquently demonstrated how various tools from Progress can provide value when considering/designing these types of solutions. Once the general session ended, it was time to get into the details of the various technologies that were either a part of or could be leveraged within the Progress ecosystem. The first technical session for me was titled “Getting Started with NativeScript”. Having a background in web development, you would think that I would have naturally transitioned from Angular/Typescript to NativeScript for mobile development but that was not my chosen path. So I decided to attend this session to get a better understanding of what NativeScript was all about. Rob Lauer (@RobLauer) was the presenter and he did a wonderful job sharing with us the basics of NativeScript and how it matched up with other similar frameworks. We also built a simple NativeScript app and learned how NativeScript fits into the Kinvey Platform. So what is this Kinvey Platform? Well, I heard Kinvey mentioned in the couple of sessions I attended already and did not know anything about it so I thought it would be a great idea if I attended the “Getting Started with Kinvey” session. Tara Manicsic (@tzmanics) was the presenter and she did a wonderful job introducing the Kinvey Platform and how, as developers, we can leverage features such as storage, authentication, and serverless functions. The platform provides those core functions that just about every modern application requires. It was pretty easy to use and I will definitely try it out on some future projects to acquire some hands on experience. Building on my initial introduction to Kinvey, I attended a session led by Ignacio Fuentes (@ignacioafuentes), Progress Sales Engineer, that covered how to improve mobile app offline experience using Kinvey. It was a great session and demonstrated how to leverage Kinvey’s technology to provide offline data storage and synchronization. If you know me, then you are aware that one of my many technology passions is Xamarin and Xamarin.Forms. ProgressNEXT hit a home run by having Sam Basu (@samidip), Progress Developer Advocate, deliver a presentation on Xamarin.Forms. He touched on the standard target platforms: iOS, Android, and UWP but also cover other options for utilizing Xamarin.Forms: MacOS, Tizen, and web. Sam, as always, did a great job covering the latest features and opportunities for leveraging Xamarin.Forms to create cross platform applications. Following the Xamarin.Forms presentation, I attended a session led by Carl Bergenhem (@carlbergenhem), Product Manager, Web Components @KendoUI and @KendoReact, that covered what’s coming in R2 2019 of Telerik and KendoUI. There is a lot of great things in this release which is now available and you can check out here: https://www.progress.com/kendo-ui. From my perspective, the most exciting updates were for UI for Xamarin (of course) and UI for Blazor. It is amazing how the Progress team is rapidly evolving the toolset, especially giving that Blazor is not a generally available product (at the time this was published) and UI for Blazor is available. After seeing all the “goodness” planned for KendoUI, I was fortunate to attend a session led by T.J. VanToll (@tjvantooll), Principal Developer Advocate at Progress. His session was titled “One Project, One Language, Three Apps.”. In this session, he focused on NativeScript and React Native and how they both can be used to build web, native iOS, and native Android applications. The demos were great, and he also covered when and when not to use each tool. The final ProgressNEXT technical session I attended was led by Carl Bergenhem (@carlbergenhem), Product Manager, Web Components @KendoUI and @KendoReact. In this session, Carl covered Blazor, the client-side .NET framework that runs on any browser. (Yes, c# executing in the browser!) Blazor utilizes the Mono .NET runtime implemented in WebAssembly that executes normal .NET assemblies in a browser. Carl did an awesome job introducing Blazor and how a .NET developer can leverage the technology in building applications. I had a great time at my first ProgressNEXT conference. The Progress team did a wonderful job with all aspects of this event. The venue, food, entertainment, scheduling, general and technical sessions were excellent. As a developer who was only familiar with the UI/UX tools Progress creates, attending ProgressNEXT has greatly expanded my perspective and understanding of the Progress ecosystem. I highly recommend attending ProgressNEXT and hope to see you at ProgressNEXT20, June 14-17, 2020 in Boston, MA.
OPCFW_CODE
A guides/user.md => guides/user.md +54 -0 @@ 0,0 1,54 @@ title: User and mirror owner guide Because of IPFS caching mechanism, IPWHL users technically mirror the wheels they have downloaded. However, there are certain distinctions must be made between the workflow of a regular user and a mirror owner. # Setting up IPFS We recommend using the reference implementation go-ipfs, whose [installation procedure is well documented][install go-ipfs]. It is worth noting that several GNU/Linux distributions and BSD-based OSes may have already included it in their repositories. Afterwards, please follow the [IPFS quick-start] guide. Some downstream go-ipfs packages may also contains a init-system service to automatically manage the IPFS daemon. By default, the daemon opens a local IPFS gateway at port 8080. # Keeping updated Users and mirror owners should subscribe to the [announce mailing list], where new versions are announced along with the repository content ID (CID). These releases are backed by cryptographically signed tags in the [metadata Git repository]. # Using floating cheeses Floating cheeses are drop-in replacements for the cheese shop. In order to use an IPWHL repository of CID `QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn` (an empty repository used for demonstration) with a [PEP 503] client, simply replace the Python package index URL with a pointer to the repository through an IPFS gateway. For `pip`, this can be done by specifying the CLI option `--index-url` to or setting the equivalent environment variable or configuration key. While it is possible to use public IPFS gateways, they face similar security issues to that of the centralized cheese shop. Mirroring a repository release is as simple as pinning its CID, e.g. ipfs pin add QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn Thanks to IPFS deduplication, pinning an additional release only requires storage for the wheels not in the pinned releases. [install go-ipfs]: https://docs.ipfs.io/install/command-line [IPFS quick-start]: https://docs.ipfs.io/how-to/command-line-quick-start [announce mailing list]: https://lists.sr.ht/~cnx/ipwhl-announce [metadata Git repository]: https://git.sr.ht/~cnx/ipwhl-data [PEP 503]: https://www.python.org/dev/peps/pep-0503 M index.md => index.md +7 -2 @@ 2,10 2,15 @@ title: Floating cheeses Floating cheeses are collections of carefully handpicked cheeses from the [cheese shop](https://pypi.org) and other sources. * [User and mirror owner guide](guides/user.md) Analogies aside, the interplanetary wheels (IPWHL) are platform-unique, single-versioned Python binary distributions backed by [IPFS](https://ipfs.io). It aims to be a *downstream* wheel supplier in a similar fashion to
OPCFW_CODE
Tree trunk quads and powerful pythons, it's a recipe for pleasure in this scissors/bearhug battle! "Look at this. You got earrings and an anklet; that's cute," mocks Iceman18. Loki is not intimidated flexing as the vet grabs his quads of steel, "So puffy looking, you're like a live Michelin Man. Do you actually do anything with those things, or is it all for show?" Loki is pissed, "Puffy? Want me to show you what they can do?" The muscle beast TACKLES Iceman18 and wraps his tree trunk quads around him in a massive body scissors. A power struggle erupts as both hunks tussle on the mat. The vet tries feverishly to escape as Loki changes positions flexing his quads tighter! "With your puffy legs and your fat body, now it hurts!" groans Iceman18. "Stay down!" Incredibly, the vet gets to his knees and picks up Loki again and again SLAMMING his back on the mat until the hold is released! The blonde muscle hunk stands to his feet and continues to slam Loki on the back of his head, "Your legs were kinda strong, but for now, feel mine!" The beast is completely gassed barely moving as Iceman18 body scissors his prized quads. "How's that stupid anklet working out for you now?" The torture continues with one-legged Boston crabs, a crippling camel clutch, even hair pulling as he screams in pain! Loki's chiseled back is stretched to its limits ready to break at any moment before he is slammed face first to the mat. He recovers, and a flex off ensues each comparing their powerful pythons. "Why are you so little? Want to see what arms looks like?" asks Loki. "You got some nice stretch marks there!" mocks Iceman18. "It's from growing. You don't know nothing about that!" Playtime is over as these behemoths try to break each other's ribs and backs in a grueling BEARHUG BATTLE! The vet lifts Loki in a brutal belly to belly bearhug, "You know how I can tell you're all buff and no strength; cuz you're so light!" Iceman18 swings the beast in circles like a child mocking the 220 pounder as he gasps for air and is dropped down. Loki sees red and returns the favor swinging the blonde beast back, "What were you saying about being light?" As time goes on, the bearhugs get tighter and tighter. Both behemoths struggle to breathe as their hulking muscles become limp in each other's powerful embrace and crumble to the mat! With the bearhug battle complete, Loki flexes thick double biceps when he is surprised with a brutal ab stretch! He groans in pain struggling under the intense pressure and finally is able to FLIP Iceman18 over his shoulder onto the mat. Loki mounts his victim's abs fighting to keep his arms pinned down as he battles back to escape. Quickly, the muscle beast rolls Iceman18 over in a SKULL-CRUSHING front facing head scissors; his face pulled deep into Loki's thick quads of steel as he groans in agony. Incredibly, Iceman18 struggles to his knees then stands to his feet picking up the bodybuilder and SLAMS him on his back breaking the hold. Loki writhes in pain on the mat and is forced up into a vicious standing head scissors! "This is real power!" yells Iceman18. "So squishy!" groans Loki fighting to stay conscious. "You're getting mixed up with your legs my friend!" The blonde behemoth takes his scissors down to the mat as Loki struggles to break free. Iceman18 SQUEEZES tighter and tighter; the beast's face is pure torture as his hulking frame is powerless to escape. With Loki's head in his lap, Iceman18 releases his scissors, pins his pythons down with his legs, and GRINDS his forearm into Loki's face, "How's that pretty face feel now?" The dominant beast slaps his victim's chiseled abs again and again as he screams in pain! What comes next is one of the HOTTEST submissions we have ever filmed as things are about to get up close and personal!
OPCFW_CODE
I have finally entered the world of 3D printing with the purchase of my Prusa i3 MK3S kit. It’s been a great printer so far, and I look forward to using it, mostly for printing functional and replacement parts. It has one annoying problem though, and that is a very noisy rattle coming from the bed during printing. I’ve come up with two solutions to eliminate the rattle. Stupidly, I did them both at the same time so I’m not actually sure which was most effective. Pack the Linear Bearings with Grease The bed of the Prusa i3 rides on LM8UU linear ball bearings. These bearings slide along 8mm smooth rods. There has to be a small amount of clearance between the linear bearing and the smooth rod, otherwise the bearings would seize up. This play in the bearings is the source of the bed rattle. From the factory, the LM8UU bearings come prelubricated with a light oil. The Prusa assembly instructions say that no additional lubrication is necessary. My theory is that packing the bearings with grease anyway will add enough “stickiness” to prevent the bearings from rattling I used a printable cap that fits tubes of Super Lube to pack grease into the bearings. I printed this in PLA. After packing, the bearings have noticeable drag when sliding on the smooth rods. This additional resistance might help reducing ringing, but I’m not sure. I ended up tearing the whole machine apart and greasing all the bearings, because it seemed like the right thing to do. Replace U-Bolts on Bed Frame The LM8UU bearings are held to the bottom of the bed frame by u-bolts. This is pretty janky, even more so than the zipties holding things together. The u-bolts concentrate clamping force in a small area of the bearings, making them sensitive to overtighting. I believe this connection is also too rigid, causing the bearings to rattle more than they would otherwise. I printed bearing holders to mount the bearings on the bed frame without u-bolts. I printed these in Prusa orange PETG. My theory is that the less rigid bearing holders will work like dampers to reduce rattle in the bearings. The design holds the bearings fully away from the bed frame, so there is no metal-on-metal contact. As a consequence of this modification, the bed sits about 2mm higher. This isn’t a big deal as far as lost print volume goes, but it does cause two additional problem. Firstly, the Y-axis belt no longer aligns with the tensioner on the bottom of the bed frame. This was an easy fix. I printed an extended Y-axis belt tensioner, which fixed the problem. This was also printed in the Prusa orange PETG. Secondly, the printer will now fail XYZ calibration, and might even crash the nozzle into the bed. This is because the Z-axis is now 2mm shorter than the printer expects. Fixing this problem requires modifying the printer firmware with a shorter Z-axis length. Making this modification was incredibly easy. Hats off to the Marlin and Prusa teams for creating a codebase that isn’t complete garbage. - Clone the Prusa firmware from GitHub. - Checkout out the latest version of the firmware. git checkout v3.9.1, in my case. - Follow the instructions in the README to set up your build environment. I use Debian GNU/Linux, so setup was as easy as sudo apt install gawk && ./build.sh. This automagically downloads all of the build dependencies and builds the firmware. - I’m not a big fan of build scripts that download their own dependencies, but I have to admit this is easy and convenient. - I don’t like that it creates build and environment directories in the parent of the git repository. These should be children of the git repository directory. - For the MK3S, copy the firmware variant file to the mv ./Firmware/variants/1_75mm_MK3S-EINSy10a-E3Dv6full.h ./Firmware/Configuration_prusa.h. This overwrites the destination file with the configuration for the MK3S. ./Firmware/Configuration_prusa.h. There is a parameter called Z_MAX_POS that defines the length of the Z-Axis. I changed mine to 208. You might need to fiddle with this if calibration still fails or the nozzle still crashes into the bed. - Compile the firmware using ./build.sh. The hex file is output to - Update the printer firmware using the usual utility in PrusaSlicer. - Perform an XYZ calibration on the printer. Make sure to watch the calibration in case the nozzle crashes into the bed! If this happens, or the printer still fails calibration, decrease the value of Update Printer Settings in PrusaSlicer You could probably skip this step unless you plan to print things utilizing the full printer build height. Without it, PrusaSlicer will allow you to slice objects that are too tall for the printer by an amount equal to however much Z_MAX_POS was decreased (2mm in my case). In PrusaSlicer, go to the “Printer Settings” tab. Change the value of “Max print height” to your value for Z_MAX_POS (208 in my case). Save, and repeat for other profiles. The results of all this work is that the bed rattle is gone! I’m still not sure which solution fixed the problem, but I am sure that I like my nice, quiet printer. I’m also glad that I got an opportunity to dip my toes into the firmware modification waters. I’ve had very frustrating experiences with firmware for other devices in the past, and likely would have never touched the firmware for my printer if it wasn’t for this project. Now that I see how easy it is, I’m likely to make other modifications in the future.
OPCFW_CODE
Recently I had the opportunity to go through the cycle of augmenting an automation engineer to join our team. It had been a while since I used to do this (much frequently) few years ago. This time I stumbled upon few ‘new’ realizations, especially for hiring automation engineers, which never occurred to me before. With many folks I see struggling to find suitable candidates for this title, I thought of jotting down some lessons learned through the process. The best part, its one step really (to being with) Cut the Crap Candidates undervalue their skill set (especially the ones you want to hire). Furthermore, seeing a huge job description with all the buzz words the hiring manager could find on the internet, scares away potential candidates from applying. There are usually a few key skills for any given position which are vital for the team. With a lot of ‘name dropping’, candidates get confused and hesitate to apply due to lines like ‘Expert with 3+ years experience in SQL server’, which in many cases, there is a 5% probability the new hire will be writing complex queries. Our job ad was not performing well. The candidates we got were not even a match on paper. HR suggested to have another look at the job description, and she was right. Hesitantly I reviewed and found the job add was scary to say the least. No doubt the position had considerable requirements, but the essentials were few. While cutting down the content, the few skills I felt were essential for an automation engineer / SDET (NOT a lead position) are presented here. Not ‘10+ years of experience in Java’, who could code a talking parrot. If your project is in Java, don’t necessarily look for a Java guru. If a person has the aptitude of constructing algorithms and can demonstrate good programming skills with any language, he/she can learn the new language or framework. When the technology changes (which it will), they will be most comfortable to adapt, more willing to learn new tools / languages and leave their comfort zone. A term I use referring to a tester’s mind set. Who is able to craft test scenarios covering different aspects of the AUT, having the process related knowledge, the skill to extract requirements, to tell a great story when writing up an issue and so on. I always believe an automation project is as good as the scenarios being automated. If the scripts are checking trivial functionality, it’s not going to create the difference we want. The counter argument I’ve heard, since test cases are being given, automation folks don’t need in depth testing experience or knowledge. Well, a person with technical insight and a tester’s eye will be more suitable to suggest what areas are not being tested and how to test them. Even if a team has Unit tests, integration tests and UI tests, still someone needs to create scenarios for the system level tests verifying the business logic on system level. The one thing an automation team might never get rid of is ‘technical problems’. Each day is going to be a new day. Either you will make mistakes and learn (for the most part) or you would be wise and spend considerable time in learning to avoid making the mistake. Through all these endeavors, the only thing to keep you going is the right attitude. In any job having the right attitude is the most fundamental aspect, specially true for automation folks. They never run out of problems, and they can never get tired of solving them! Under non-technical skills there are many qualities hiring managers aspire to, this one which I found lacking in some cases and most relevant specially in this type of position. A term coined by Google’s “Smart creatives”, which is the enhanced version of Peter Drucker’s knowledge workers. I understood it as ‘A person with the fire to learn new things, who is technical and business savy and has a creative personality’. Imagine what a smart creative technical tester would do – Put development on DEFCON 1! Actually no, instead I feel he / she should bridge the gap between both camps and get best of both worlds. I had some candidates who were not very savy with the traditional automation frameworks, but had great programming and learning skills, could develop algorithms and had learned other programming languages. Those candidates were equally good in my dictionary and I would definitely hire such a person. If your project is in Selenium Cucumber, it would be unwise to hire someone who is already working on this technology, because there is no new learning for them. The increased salary will keep them content only for so long. Look for people who have the level of exposure you need, not necessarily in the same tool. For elementary positions a basic understanding of how generally UI automation works, common problems faced in automation and how the DOM works would suffice. This position is fundamentally hard to fill. QA folks with the ‘testing acumen’ have traditionally kept a distance between them and technical aspects of developing software. Development folks have the technical exposure but lack the testing acumen. Where to find this mix breed! Hence focus on the fundamentals. I feel by going deep into finding testing knowledge or technical skill set, the job gets even harder. Not to say hire someone with less of any of the two, instead hire someone with the ‘aptitude’, not necessarily +x years with a huge checklist. The job description should revolve around the fundamentals, not more than a hand full of bullets. As the founding smart creatives put it “Widen the aperture”. Employers tend to narrow down candidates with very specific backgrounds only. We had an interesting candidate when we widened the aperture. He had an accounting background and demonstrated great skill in different development technologies too. A lot of hidden gems are left out when the filters are too tight around filtering resumes. Care to share what else you would consider while hiring an automation engineer?
OPCFW_CODE
How do I ask for a higher compensation when the work I'm doing is meant for higher ranks? In the company I'm currently employed, there's a 5-degree career plan, with very distinct functions. Step 2 (Programmer) is supposed to do the programming per se. Analysts and higher (Steps 3, 4 and 5) define what must be done, programmers decide how (technically) and execute it. I'm no longer doing any development. I'm currently meeting with clients, documenting things and forwarding the development to a team of programmers. This is not something I decided to do, those are the orders I got from management. I also have to do lots of reports, including team reviews. Here's the catch: the company does not allow a promotion to analyst without a bachelors degree, which I don't have (yet! I'm working on it, but will take at least another 1.5 years to complete). I feel this is a little too convenient - I can do the work, but can't get paid accordingly. I'm going through some financial distress and I'd like to ask them to make an exception for my case. What they are doing is actually illegal where I live. I don't want to threaten with a lawsuit (obviously - I don't have any intention of doing anything about it), nor do I want to sound like I'll stop doing what I'm being told to do if I don't get promoted (even though this is what I'd like to do). How should I proceed without sounding like I'm entitled or just plain lazy and without risking losing my job? Do you want to be sure you are still employed there after you get your degree? Yes, definitely! If they followed the law, would the result be that you were paid more, or did a lower level of work? @joe, in Brazil there are two things called "attribute deviation" and "wage parity". The first one happens when an employer hires someone to do a subpaid work, but in reality ends up making him do what was supposed to be a higher paid job. The second one demands all employees with the same attributions get paid the same (regardless of titles, as long as they were hired within a 2-year time frame). Wife's a lawyer. @thursdaygeek either one would be fine. @joe, there are also other analysts (one was hired on the same day I was!) doing exactly the same thing I am (in the same project). We discuss everything before delegating tasks to the other programmers. This falls right into the "wage parity" thing. @PedroCordeiro - I am not familiar with Brazilian law, but since they stipulate a degree for your level of work, your legal grounds may not be as strong as you believe. @ComeAndGo, you'd be right if my profession was regulamented (like doctors, engineers, etc). Since it's not, the law says if I can't have the job (because I don't have a degree), I can't do the job either. If I'm already doing the job, I should be paid accordingly. @Pedro:Very nice, pay everyone the same. That is really motivating for those who would otherwise be very ambitious and excel at their job. Since that doesn't matter, people will do "just enough" to keep their job since it isn't going to help their pay for them to put in the extra effort. @Dunk Dude, I'm just reading it, I didn't make the rule. And if you think about it, it kinda makes sense. If you have two employees with different skill levels, they should not be doing the same things. You can also have productivity bonuses, but the base salary must be the same. At larger companies, most of the time, the employee proves they are ready for promotion by already successfully performing the duties of the higher level position before they are actually promoted. So your situation doesn't seem out of the ordinary if it doesn't go on for more than a couple of years. However, a degree matters. Some contract bid proposals require this information as they use it as a gauge for whether the contractor can really do the job or not. Also, some government contracts have separate billing rates for degreed versus non-degreed contractor employees. @Pedro:I didn't say you made the rule, I didn't know that rule existed anywhere and had to comment on the obvious downside. Anyways, the rule doesn't make sense unless you assume that people with the same credentials and background will perform at about the same level. That is absolutely not true. Now to the extent that productivity bonuses come into play, that could wipe out my entire concern as then workers would have reasons to excel and be motivated. Of course those bonuses better be really big to have any effect at all. @Dunk, your concerns are perfectly valid. But reality is that Brazil is a f@#ed up place, and if you don't regulate things, employers will end up hiring everyone as janitors and making them develop software, just because it's cheaper. It's the worst I'm-smarter-than-the-system culture in the world. Perhaps explain that you are experiencing financial hardship, and since you are doing the work of an analyst and working on the degree, maybe they can make an exception and pay you as an analyst for a stipulated period of time - for example until you can negotiate your financial difficulties (six months or a years, etc) or a reasonable amount of time for you to finish your degree and acquire the official Analyst title, so you can be duly compensated without a special exception. Do not ask for a "blanket exception", one that relinquishes you from all responsibility - IMO that doesn't smell right. Also make sure that you are indeed doing the work that demands higher pay, and that you aren't dealing with a perception problem: Doing more than you think you are. You can probably verify this by talking to co-workers, etc.
STACK_EXCHANGE
Since I intend to reduce fear and misunderstandings about mathematics, and since I am a mathematician, I will start with the most scary thing I know about mathematics: it has no end. I did my graduate work at Caltech, an excellent school, if it does not drive you insane. While I was there I helped teach an introductory course with the inspiring name Math 1. One of the students in my group, a very orderly fellow who always wore a starched white shirt and carried a briefcase, would often chat with me after we had resolved whatever questions the current homework had caused him to ask. During one of these chats he asked in a puzzled voice, “What do you people do around here?” It had just occurred to him that I and my office mates were available for about three hours a week to help him and surely we must have some other duties. The “here” to which he referred was the department of mathematics, in particular my office, which housed three graduate students at the time. I replied, “we figure out new mathematics.” This caused him to look at me in horror which he vocalized by saying, “but I’m a physics major!” I toyed with the idea of assuring him that this was a treatable condition, but since he actually looked upset I asked a question that I’ve been asking people my whole life: “why do you care?” “How can I do physics if we don’t know all the mathematics yet?” he replied. “I thought this one over for a minute and then said, “follow me.” A couple doors down the corridor from my corridor and across the hall was the reading room. It has large tables which are handy when you’re trying to track something down in five parts of three different books (a lost art, mostly, what with the interwebs and all). The tables were also quite useful for grading stacks of papers (this still happens. Way too much). I took my charge there and gestured at the floor-to-ceiling shelves that covered three walls. “This is all new math, within the last five years.” “ALL of it? There are hundreds of books in here.” “Actually they are bound collections of journals, and just the ones that are close to current faculty interests. The seventh floor of the library is all math, much of it recent.” “But physics depends on math!” “Well, only a few bit of math are relevant to physics. Dr. Simon actually fills in the math underneath some of the stuff the physics professors make up.” “He’s a mathematical physicist. Physicists are careless and just make up stuff that fits their observations or lets them justify an intuitive conclusion. One of the things we do is figure out if its possible to make what they did mathematically rigorous.” At this point, the student walked off looking dyspeptic. I hope you enjoyed this story. Department of Mathematics and Statistics University of Guelph, Ontario, Canada
OPCFW_CODE
Ashman Lab members - a community of scholar-educators Dr. Tia-Lynn Ashman Dr. Nathlia Streher Dr. Ashman’s research program mines the interrelationship between ecology and evolution and focuses on plant biology. It spans scales of single interacting populations to diffuse interactions within whole communities, and from evolutionary genomics to ecological genetics. Current projects in her lab are based in California, Hawaii, Oregon, Mexico and China revolve around four major foci: 1) The contribution of polyploidy to functional and genomic biodiversity; 2) Ecological and evolutionary studies of separate sexes and sex chromosomes; and 3) The influence of biotic and abiotic features on plant-pollinator interactions, and their effects on phenotypic evolution. 4) The role of plant-pollinator interactions in plant coexistence in biodiverse areas and in the face of shifting species compositions (extinctions/invasions) and climate change; 5) Urban plant ecology and the importance of pollinator-mediated interactions to plant fitness and biodiversity; 6)Pollinators as viral vectors. Dr. Ashman is a UPitt Chancellor’s Distinguished Researcher, EO Wilson Awardee, Outstanding Mentor Awardee and has published over 175 research papers. She has served on the Editorial boards of The American Naturalist, Ecology, and New Phytologist, as well as Secretary and Executive Council member of American Society of Naturalists, and Society for the Study of Evolution. She is deeply committed to outreach and expanding diversity within science. Vero is a 3rd year graduate student. She received her bachelor's from Wake Forest. She is exploring the effects of herbicide drift on plant, microbe and pollinator communities with and interest in eco-evo dynamics . Dr. Streher received her PhD from University Campinas in Brazil. She is interested in pollination and biodiversity. In the lab she is working on whether polyploidy leads to pollination generalism using herbarium specimens. Dr. Thomas Anneberg Dr. Anneberg received his PhD from Syracuse University where he studied the effects of whole-genome duplication, or polyploidy, on the nutritional needs of plants and how their biotic interactions with mutualistic fungi are affected, making predictions based on results of individual-level performance where these early generation polyploids should become established. However, since ecological establishment is a population-level process, Thomas is now conducting postdoctoral work on the importance of abiotic and biotic interactions on the establishment of polyploid populations in collaboration with the Turcotte lab. Dr. Ben Lee Liz graduated from the University of Pittsburgh in 2017 with a B.S. in Biology. She keeps us all from falling apart ! Ethan is a freshman Bio major. He is interested in plant genomics. Lizzie is a Biology major with minors in chemistry and studio arts. She is also a Golden Girl (majorette) in the University of Pittsburgh Varsity Marching Band and is an active member and business manager of the band’s service sorority Tau Beta Sigma. Dr. Lee received his PhD from University of Michigan where he studied phenology. He is a NSF Postdoctoral Scholar with Dr Heberling at CMNH as well as being hosted by our lab. Nevin is a 4th year graduate student. He received his bachelor's from Humboldt State University and his MS from SFSU. He is exploring the effect of metal accumulation on floral microbes and plant-pollinator interactions and speciation. Hannah 2nd year graduate student interested in invasion and urbanization. She is co-advised with Martin Turcotte. Amber is a 2nd year graduate student. She received her BS and MS from ETSU where she studied urban seed dispersal. She is exploring the effects of urbanization on plants and pollination networks. Jason is studying Ecology and Evolution with an environmental concentration Mikaela is a Biology major.
OPCFW_CODE
How can I learn to tank as a post-Shattering Death Knight? I'm planning on starting up a Death Knight with intent to be a tank, both for random dungeons levelling up and then with my group of friends at 85. This question is asked post-Shattering, meaning currently the Death Knight tanking spec is Blood. I noticed the top Google searches for DK guides out there haven't been updated to account for the Shattering. Short question: How do I tank? More detailed questions... What kind of talent decisions need to be made? How do I properly enter a fight? What sorts of ability rotations/priorities should I use? What procs do I look out for? What glyphs are useful? What are the differences between tanking lots of mobs and tanking just 1 boss? What tools can I use if things go wrong, like a ranged DPS pulls aggro or adds show up? What other questions should I be asking? The major changes to DK (and all classes), including blood becoming the dedicated tanking tree, happened at patch 4.0.1, not 4.0.3a ("The Shattering"). Blood Deathknights should be spending their runes to use Heartstrike (which they get as their Blood Specialization) and Deathstrike (which is the focus of their Mastery specialization). Talents Here is an example talent spec: http://www.wowhead.com/talent#jcbG0srMusd The core talents you want to be sure to pick up are Blade Barrier (reduces damage taken), Toughness (increases your armor), Bone Shield (tanking cooldown), Sanguine Fortitude (improves Icebound fortitude, a tanking cooldown), Rune Tap and Vampiric Blood (both tanking cooldowns) Improved Blood Presence (Crit immunity), and Dancing Rune Weapon (survivability AND threat cooldown). After that, you have a choice between dipping into either Frost or Unholy. Unholy is almost always going to be strictly better, as you do not want to dual wield to tank (dual wielding has an inherently higher miss chance than 2-handed weapons, so your threat is going to be much worse) nor are you using obliterate (as you should be using death strike). Unholy gives you your choice of reduced taunt cooldown (Deathgrip), free +hit% for your spells (so deathcoil won't have a chance to miss) and morbidity (boosts the damage and thus, threat, of your runic power dump: deathcoil). Taking frost as far as Lichborne, however, means you can activate Lichborne, and then cast deathcoil on yourself to get a free heal if you're low -- a neat trick, and handy if you're low on health after a big boss burst. Glyphs Prime Glyphs:I would go with glyphs of Death Strike, Heart Strike, and Deathcoil for your prime glyphs, though depending on playstyle you may prefer to swap the latter for Runestrike (though I'll always take +damage over +crit chance, since the former is more consistent) or Death and Decay if you want additional AoE threat. Major Glyphs: Dancing Rune Weapon is a no brainer, but beyond that your choices highly depend on your playstyle. Minor Glyphs: I'd Glyph Blood Tap and Death Grip, but these are minor glyphs, and won't have much effect at all in regards to your tanking. For further discussion, I'd point you toward the Elitist Jerks' Death Knight Forums, specifically the tanking thread, which can be found here. Tanking DKs should rune strike as a RP dump, not death coil. DC might find use in conjunction with lichborne for self healing as a tank and even then rarely, but threat is king for tanking, and rune strike is specifically a threat ability. Information as of 5.0.4 (pre-Mop patch, level 85) What kind of talent decisions need to be made? You could not choose talents and do just fine in non-heroic content. Your talent choices make your Death Knight play differently, but they do not make your Death Knight work. If you are overwhelmed with buttons already, I recommend going for passive talent choices: Roiling Blood, Purgatory, Death's Advance, Runic Corruption. How do I properly enter a fight? You have a few important jobs: Keep the mobs (particularly melee mobs) off the healer. Do this by establishing at least 1 point of damage on all mobs. BloodBoil/HeartStrike/DeathAndDecay are the tools for this. Provide the dps with a safe target. Mark it with Skull (I have skull on my middle mouse button. Hit that target with DeathStrike (a high burst damage ability). Survive - this might mean getting diseases (ScarletFever) up if you expect the fight to last more than 10 seconds. This might mean using a defensive cooldown to protect yourself from the full alpha strike of the pack. This might mean interrupting a spell. Gather the mobs for AOE opportunities. You may need to death grip (or use line of sight) to move a caster. If there are two casters, pull one to the other. Call for crowd control if there is some mob you don't want to tank or interrupt. Mark the CC target with Moon. Wait for someone to put it to sleep. Finishing a fight: Once you've established enough threat that no mob will break from the pack, and you know you'll survive the incoming damage... you might want to conserve Runic Power for the next fight. What sorts of ability rotations/priorities should I use? Understand global cooldowns. Most abilities cause a global cooldown (gcd), which prevents you from using other abilities immediately. This is why you can't put out 2 DeathStrikes in the same second, they must be spaced. You can, however, use a DeathStrike and a BoneShield in the same second, since BoneShield is off the global cooldown. Also understand Rune Recharging... Suppose you HeartStrike twice in a row (2 blood runes). The first HS will cause your first blood rune to start to recharge. The second HS will consume the second blood rune, but that rune won't start to recharge until the first blood rune has completed its recharge. My Rune recharge speed is 8.33 seconds (shown in the character stats screen under melee). This means, I will have to wait ~16 seconds for that second blood rune to recharge. You always want each flavor of Rune to be recharging (otherwise you are wasting the recharge). But you don't want to just spend all your runes and stand around. I recommend DeathStrike -> HeartStrike -> RuneStrike -> something else -> repeat. Something else can be Horn of Winter, Outbreak, a second RuneStrike if Runic Power permits, or a free BloodBoil. Or Something else can also be occasionally skipped, as you do have a second set of runes, or Runic Conversion may kick in and give you your runes back early. If you truly get stuck without Runes, that's what EmpowerRunicWeapon is for. What procs do I look out for? Will of the Necropolis (big central indicator) : If your health gets low, you get to use a free Rune Tap Crimson Scourge (bubbly side indicators) : melee attacks against diseased targets occasionally grant a free BloodBoil or DeathAndDecay. Those free BloodBoils are great for keeping your diseases up, thanks to ScarletFever. What glyphs are useful? I don't view any glyphs as critical for tanking 5-man normal. Maybe Glyph of DeathGrip, to be sure you're in range. If you're powering through groups quicky, Glyph of Outbreak might be good (you carry runic power from the previous fight). What are the differences between tanking lots of mobs and tanking just 1 boss? What tools can I use if things go wrong, like a ranged DPS pulls aggro or adds show up? Target switching. In a pack, a mob may break for the dps if they don't target your skull-marked safe target. You'll need to switch target to use DarkCommand/DeathGrip or some attack to get that mob back. What other questions should I be asking? Stats: Hit and Expertise are fine - use the tooltip to see if you need more to avoid missing. Parry and Dodge are tank stats - don't be shy about getting those. Haste, this lowers your Global Cooldown and Rune Generation speed. More abilities is a good thing. Crit - doesn't really matter. Addons... Go get TidyPlates. This will help with the target switching problem. In the rotation question above, I talked a lot about threat building abilities. Here's a list of non-threat building abilities that you should be familiar with. All of these abilities cost no runes and are off the global cooldown, unless noted. Defensive Cooldowns - Bone Shield, Rune Tap (1 Blood Rune), Icebound Fortitude, Anti-Magic Shield, Vampiric Blood, Dancing Rune Weapon (60 Runic Power) Interrupts - Strangulate (Global Cooldown), Mind Freeze Taunts - Dark Command, Death Grip Resource Generator - Horn of Winter (Global Cooldown), EmpowerRunicWeapon Think about positioning and facing - dragon breath goes forward, you don't want it on your group. Parry only works against attacks coming from your front. Keep HornOfWinter and BoneShield up at all times. Watch out for patrols (everywhere) - also Runners (if you're starting with Outlands dungeons).
STACK_EXCHANGE
What this essentially means is that instead of creating things like user interfaces and strings in code, you create an XML file to hold this information under the ‘res’ project directory.The Eclipse ADT plugin handles actually holding references to all these resources via the R class, which is re-generated whenever a new resource is defined or the project is rebuild and is held under the ‘gen’ project directory. For example, say I had defined a simple Activity layout called “maingui” and tried to reference this in my Activity start up code like so: – The first thing you might notice is that the file under your Eclipse project is missing.I even tried to update eclipse ADT Build: v22.6.2-1085508 but does not update maybe because is portable as is in the office) package com.example.last; import android.support.v7 I am running Eclipse 4.2.2 Juno on Windows 64-bit for development on Android SDK 17 with ADT. Just today, I cleaned a working project, only to find that the file would no longer generate. Creating a new "drawables" folder causes Eclipse to throw an error because it would technically lead to a duplicate class - hence the little red cross on your "drawables" folder. You should move all your images into one of these pre-existing folders then delete the drawables folder you created. Under Eclipse, go to: – to import your R resource into your project for easy referencing. However, this isn’t the R file including in your project package.R" statement, XML errors, corruption requiring cleaning and rebuilding, and checking the Android SDK in Project - Java Build Path / Libraries. The only remaining possibility within Gray's list seems to be that either my or Android is broken, so I have included them to make sure I didn't miss any errors. If so follow the below Right click on your project goto properties. Check the error console in your IDE to figure out what's specifically wrong.My main layout: First check if there are any errors in your resource file's. Common problems are: In addition to what Reghunandan said, I just had the same issue after updating to SDK 22 with Eclipse Indigo Win x64.This problem has a very divergent list of possible causes.User Gray, in response to the thread located here, listed a set of articles, all addressing different possible causes. Also goto android sdk manager and check that you have the android sdk build tools installed.He says: And then proceeds to list articles pertaining to different causes of this problem, the links to which I am unable to include, but can be found in the question linked to above. This many not be necessary but make sure you have android build tools installed.
OPCFW_CODE
Record indicators in non-stationary random variables Assume $X_1,...,X_n$ are independent, but not identically distributed continuous RVs. I am interested in the record indicators $R_j = \mathbf{1}_{X_j > max(X_1,...,X_{j-1})}$. According to Nevzorov (Records: Mathematical Theory) these $R_j$ are not independent. But he does neither give a proof or an intuition. (Note: In the stationary case, $P(R_j = 1) = \frac{1}{j}$ and the $R_j$ are independent.) Is there a simple counterexample where one can e.g. compute $P(R_j = 1)$ and $P(R_j = 1 | R_i = 1)$ explicitly and see how the independence is violated in the non-stationary case? Edit: I should have written continuous RVs because otherwise also the iid statement is not necessarily true. I edited above. Take $X_i$ uniformly distributed on $\{1,\dots,i\}$. Then $\{R_2=1\}=\{X_2=2\}$ hence $\mathbb P(R_2=1)=1/2$. Moreover, $$\{R_3=1\}=\{X_3>\max\{X_1,X_2\}\}=\{X_3>X_2\}=\{X_3=3\}\cup(\{X_3=2\}\cap\{X_2=1\})$$ hence $\mathbb P(R_3=1)=\mathbb P(X_3=3)+\mathbb P(\{X_3=2\}\cap\{X_2=1\})= \frac 13\left(1+\frac 12\right)=\frac 12$. Finally, $$ \mathbb P(R_3=1,R_2=1)=\mathbb P(X_3>X_2>X_1)=\mathbb P(X_2=2,X_3=3)=\frac 16\neq \frac 12\times\frac 12. $$ In the continuous case, take $X_1,X_2,X_3$ having exponential distribution with respective parameters $\lambda_1,\lambda_2,\lambda_3$. All the probability can be explicitely computed: we find $\mathbb P(R_1)=\mathbb P(X_2>X_1)=\lambda_1/(\lambda_1+\lambda_2)$, $\mathbb P(R_3=1)=\mathbb P(X_3>X_2>X_1)+\mathbb P(X_3>X_1>X_2)$ and $\mathbb P(R_2\cap R_3)=\mathbb P(X_3>X_2>X_1)$. All these probabilities can be computed and at the end, the equality we want to check is $$ \frac 1{\lambda_2+\lambda_3}= \frac 1{\lambda_1+\lambda_2}\left( \frac 1{\lambda_2+\lambda_3}+ \frac 1{\lambda_1+\lambda_3}\right) $$ which does not always take place by letting for example $\lambda_1$ to infinity. To complement the excellent answer by Davide Giraudo, I give a simple intuitive example, and another simple continuous example. Intuitive argument: Suppose that you are looking at temperature data, which is rising over time. Say $X_i=i+\epsilon_i$, where $\epsilon_i$ are independent standard Gaussians. We consider whether $R_1$ and $R_2$ are independent. A priori, the strongest 'competitor' for $X_2$ is $X_1$. Now if $X_1$ is not a record - in spite of the positive trend - this means that $\epsilon_1$ was likely low. This (probably) low value of $\epsilon_1$ makes it more likely that $X_2$ will exceed its strongest competitor $X_1$, too. Moreover, $X_2$ is likely to exceed $X_0$, even if it is a bit larger than usual. So, knowing that $R_1=0$ makes it more likely that $R_2=1$. Here is a similar, simple mathematical argument: Simple mathematical argument: We take $X_0=0$, and $X_1,X_2$ as independent continuous. $$ \mathbb{P}[R_2=1|R_1=0]=\mathbb{P}[X_2>0|X_1<0]=\mathbb{P}[X_2>0]. $$ The last step follows by independence. On the other hand: $$ \mathbb{P}[R_2=1|R_1=1]=\mathbb{P}[X_2>X_1|X_1>0]<\mathbb{P}[X_2>0|X_1>0]=\mathbb{P}[X_2>0] $$
STACK_EXCHANGE
Re: Announcing GNU ethical criteria for code repositories The FSF write: > Today, the FSF and GNU project announced the first version of > criteria for evaluating services that host free software source > code repositories for distribution and collaborative > development. Developed with the leadership of Richard Stallman > and GNU volunteers, the criteria provide a framework for code > repositories to ensure that they respect their users in a manner > consonant with the values of the free software movement, and for > users to hold these crucial institutions accountable. > The criteria emphasize protection of privacy (including > accessibility through the [Tor > network](https://www.torproject.org)), functionality without > compatibility with copyleft licensing and philosophy, and equal > treatment of all users' traffic. > [Published on > gnu.org](https://www.gnu.org/software/repo-criteria.html), the > criteria are directed at services hosting parts of the GNU > operating system, but they're recommended for anyone who wants to > use a service for publicly hosting free source code (and > optionally, executable programs as well). Moving forward, we will > update the criteria in response to technological and social > changes in the landscape of code hosting. I took a look at these and many of these seem to be the kind of things that Debian would care about too. I don't know if we want to adopt some set of principles like this and if so where we would document If we did, I think my personal view would be as follows. Services provided or endorsed by Debian. Including, but not necessarily limited to: - official Debian services; - services which are presently unofficial but intended to become official Debian services; - services hosted on Debian infrastructure; - services recommended by official Debian documentation (including documentation from packaging teams); - services which host official Debian resources including team packaging repos, etc. Server code and all of its dependencies are Free Software (by Debian's definition). [A1; implies most of C0] Ideally, server code is in Debian main. Read-only access available to the public [A+0] except for the kinds of cases where we already make an exception to our principles of Data exportable in a machine-readable format. [A+5] All important functions work without JS. [A0] Any software required to use the service must be in Debian main. No discrimination against classes of users or countries. [C2] Access permitted via Tor. [C3] No odious terms of service conditions. [C4] Sensible recommendations and defaults for licensing; all default and primarily-recommended licence(s) should be GPLv3+-compatible. [~C5] Support for https strongly recommended but not mandatory. [C6] No reporting of site visitors to third parties, so no third-party tracking tags or images. [B1] No per-user tracking of non-logged in users. No cookies or equivalent, except as required for the site to function (eg for login, and recording preferences of anonymous users). (And no stupid cookie popup banners.) [related to B1] Limit logging to what is required for audit and debugging. [Related to A+1, A+2] As accessible as possible [A+3, A+4 are relevant; I'm not qualified to say whether those exact standards are sensible]. (Notes in [ ] are references to the paragraphs in the FSF's criteria
OPCFW_CODE
Dodge, Charles (Malcolm) Dodge, Charles (Malcolm) Dodge, Charles (Malcolm), American composer and teacher; b. Ames, Iowa, June 5, 1942. He studied composition with Hervig and Bezanson at the Univ. of Iowa (B.A., 1964), Milhaud at the Aspen (Colo.) Music School (summer, 1961), and Schuller at the Berkshire Music Center in Tanglewood (summer, 1964), where he also attended seminars given by Berger and Foss. He then studied composition with Chou Wenchung and Luening, electronic music with Ussachevsky, and theory with William J. Mitchell at Columbia Univ. (M.A., 1966; D.M.A., 1970). He was a teacher at Columbia Univ. (1967–69; 1970–77) and Princeton Univ. (1969–70); was assoc. prof. (1977–80) and prof. (1980–95) of music at Brooklyn Coll. and the graduate center of the City Univ. of N.Y. In 1993–94 and again from 1995 he served as a visiting prof, of music at Dartmouth Coll. He was president of the American Composers Alliance (1971–75) and the American Music Center (1979–82). In 1972 and 1975 he held Guggenheim fellowships. In 1974, 1976, 1987, and 1991 he held NEA composer fellowships. With T. Jerse, he publ. Computer Music: Synthesis, Composition, and Performance (N.Y., 1985). Composition in 5 Parts for Cello and Piano (1964); Solos and Combinations for Flute, Clarinet, and Oboe (1964); Folia for Chamber Orch. (1965); Rota for Orch. (1966); Changes for Computer Synthesis (1970); Earth’s Magnetic Field for Computer Synthesis (1970); Speech Songs for Computer-Synthesized Voice (1972); Extensions for Trumpet and Computer Synthesis (1973); The Story of Our Lives for Computer-Synthesized Voice (1974; also for Videotape, 1975); In Celebration for Computer-Synthesized Voice (1975); Palinode for Orch. and Computer Synthesis (1976); Cascando, radio play by Samuel Beckett (1978); Any Resemblance Is Purely Coincidental for Piano and Computer-Synthesized “Caruso Voice” (1980); Han motte henne i parken, radio play by Richard Kostelanetz (1981); He Met Her in the Park, radio play by Richard Kostelanetz (1982); Distribution, Redistribution for Violin, Cello, and Piano (1983); Mingo’s Song for Computer- Synthesized Voice (1983); The Waves for Soprano and Computer Synthesis (1984); Profile for Computer Synthesis (1984); Roundelay for Chorus and Computer Synthesis (1985); A Postcard from the Volcano for Soprano and Computer Synthesis (1986); Song without Words for Computer Synthesis (1986); A Fractal for Wiley for Computer Synthesis (1987); Viola Elegy for Viola and Computer Synthesis (1987); Clarinet Elegy for Bass Clarinet and Computer Synthesis (1988); Wedding Music for Violin and Computer Synthesis (1988); Allemande for Computer Synthesis (1988); The Voice of Binky for Computer Synthesis (1989); Imaginary Narrative for Computer Synthesis (1989); Hoy (In His Memory) for Voice and Computer Synthesis (1990); The Village Child, puppet theater (1992); The One and the Other for Chamber Orch. (1993; Los Angeles, April 11, 1994); Concert Etudes for Violin and Computer Synthesis (N.Y., April 12, 1994). —Nicolas Slonimsky/Laura Kuhn/Dennis McIntire
OPCFW_CODE
After this patch applied: http://patchwork.coreboot.org/patch/2452/ And finnaly added " -p internal:boardenable=force" to the command line. It says VERIFIED, I think everything went well using the MSI 848P Neo-V motherboard. server-pc:/home/melroy# flashrom -p internal:boardenable=force -w A6788IMS.610 flashrom v0.9.3-r1247 on Linux 2.6.26-2-686 (i686), built with libpci 3.0.0, GCC 4.3.2, little endian flashrom is free software, get the source code at http://www.flashrom.org Calibrating delay loop... OK. No coreboot table found. Found chipset "Intel ICH5/ICH5R", enabling flash write... OK. This chipset supports the following protocols: FWH. NOTE: Running an untested board enable procedure. Please report success/failure to email@example.com with your board name and SUCCESS or FAILURE in the subject. Disabling flash write protection for board "MSI MS-6788-040 (848P NeoV)"... OK. Found chip "Winbond W39V040FA" (512 KB, FWH) at physical address 0xfff80000. === This flash part has status UNTESTED for operations: WRITE The test status of this chip may have been updated in the latest development version of flashrom. If you are running the latest development version, please email a report to firstname.lastname@example.org if any of the above operations work correctly for you with this flash part. Please include the flashrom output with the additional -V option for all operations you tested (-V, -Vr, -Vw, -VE), and mention which mainboard or programmer you tested. Please mention your board in the subject line. Thanks for your help! Flash image seems to be a legacy BIOS. Disabling checks. Erasing and writing flash chip... Done. Verifying flash... VERIFIED. The (latest) BIOS can be found at the MSI website. -- Kind regards, Melroy van den Berg sorry for the long delay. Your success report slipped through the cracks. Am 27.12.2010 00:48 schrieb Melroy van den Berg: It says VERIFIED, I think everything went well using the MSI 848P Neo-V motherboard. Thanks for the test, committed in r1327.
OPCFW_CODE
R Imagemagick "skips" annotations and turns monochrome I'm working on a composite graph in R, for which I export a ggplot and then load it into magick for further annotations and compositing another graph. Since alignment is a lot of trial and error, I will keep a version with all "final" annotations and use a second image to play around with. Usually, this works pretty well, but from time to time either or both of two errors will happen: The image turns into black/white (not even grayscale, just hard black/white) Some of the annotations are being "skipped", meaning they just don't show up in the preview I have tested the second error, and the sequence of commands matters. All annotations until a certain point will be applied and all thereafter won't. The only thing that reliably "fixes" the error is restarting the PC. Restarting the R session, clearing the cache, restarting Rstudio remain without effect. Once this error has occurred, it will keep persisting (rerunning the code will produce the exact same result) until a system reboot. A simplified version of what my code does: graph1 <- image_read(path1) graph2 <- image_read(path2) annotated_graph <- graph1 %>% # creating a saved version so I don't have to run all of the code every time image_border(color='white', geometry = '2000x1000') %>% image_annotate("text1", size = 100, color = "black", location = "+1000+4000") %>% image_annotate("text2", size = 100, color = "black", location = "+2000+4000") annotated_graph %>% # opens current "experiment" state in viewer image_annotate("text3", size = 100, color = "black", location = "+3000+4000") %>% image_composite(image_scale(graph2, "1000"), offset = "+1000+1000") To test if the viewer itself might be the problem, I have tested opening it in a browser (to no avail). I also tried opening it in the plot window with the code below, but it just shows the same error (in lower resolution): tiff_file <- tempfile() image_write(annotated_graph, path = tiff_file, format = 'tiff') r <- raster::brick(tiff_file) raster::plotRGB(r) I thought this might be a memory issue, but according to task manager there's several gigabytes of free RAM. I'd be really happy to resolve the issue altogether, but finding a way to restart whatever component needs to be restarted without rebooting the entire system would already be splendid! My specs: Win10 on AMD64, R 4.0.2, magick 2.4.0
STACK_EXCHANGE
import { createAction, isActionOf } from '.'; // fixtures const increment = createAction('INCREMENT'); const decrement = createAction('DECREMENT'); const add = createAction('ADD', (amount: number) => ({ type: 'ADD', payload: amount }), ); const multiply = createAction('MULTIPLY', (amount: number) => ({ type: 'MULTIPLY', payload: amount }), ); const divide = createAction('DIVIDE', (amount: number) => ({ type: 'DIVIDE', payload: amount }), ); describe('isActionOf', () => { // TODO: #3 // should error when missing argument // should error when passed invalid arguments // check object, empty array, primitives it('should succeed on action with correct type', () => { const isActionOfIncrement = isActionOf(increment); expect(isActionOfIncrement(increment())).toBeTruthy(); const isActionOfIncrementOrAdd = isActionOf([increment, add]); expect(isActionOfIncrementOrAdd(increment())).toBeTruthy(); expect(isActionOfIncrementOrAdd(add(2))).toBeTruthy(); }); it('should fail on action with incorrect type', () => { const isActionOfIncrement = isActionOf(increment); expect(isActionOfIncrement(add(2))).toBeFalsy(); const isActionOfIncrement2 = isActionOf([increment]); expect(isActionOfIncrement2(add(2))).toBeFalsy(); }); it('should correctly assert type for EmptyAction', (done) => { const isActionOfIncrement = isActionOf(increment); const isActionOfIncrement2 = isActionOf([increment]); const action = { type: 'FALSE' }; if (isActionOfIncrement(action)) { const a: { type: 'INCREMENT' } = action; } else if (isActionOfIncrement2(action)) { const a: { type: 'INCREMENT' } = action; } else { done(); } }); it('should correctly assert type for FluxStandardAction', (done) => { const isActionOfAdd = isActionOf(add); const isActionOfAdd2 = isActionOf([add]); const action = { type: 'FALSE' }; if (isActionOfAdd(action)) { const a: { type: 'ADD', payload: number } = action; } else if (isActionOfAdd2(action)) { const a: { type: 'ADD', payload: number } = action; } else { done(); } }); it('should correctly assert array with one action', (done) => { const isActionOfOne = isActionOf([increment]); const action = { type: 'FALSE' }; if (isActionOfOne(action)) { switch (action.type) { case 'INCREMENT': { const a: { type: 'INCREMENT' } = action; break; } } } else { done(); } }); it('should correctly assert array with two actions', (done) => { const isActionOfTwo = isActionOf([increment, decrement]); const action = { type: 'FALSE' }; if (isActionOfTwo(action)) { switch (action.type) { case 'INCREMENT': { const a: { type: 'INCREMENT' } = action; break; } case 'DECREMENT': { const a: { type: 'DECREMENT' } = action; break; } } } else { done(); } }); it('should correctly assert array with three actions', (done) => { const isActionOfThree = isActionOf([increment, decrement, add]); const action = { type: 'FALSE' }; if (isActionOfThree(action)) { switch (action.type) { case 'INCREMENT': { const a: { type: 'INCREMENT' } = action; break; } case 'DECREMENT': { const a: { type: 'DECREMENT' } = action; break; } case 'ADD': { const a: { type: 'ADD', payload: number } = action; break; } } } else { done(); } }); it('should correctly assert array with four actions', (done) => { const isActionOfFour = isActionOf([increment, decrement, add, multiply]); const action = { type: 'FALSE' }; if (isActionOfFour(action)) { switch (action.type) { case 'INCREMENT': { const a: { type: 'INCREMENT' } = action; break; } case 'DECREMENT': { const a: { type: 'DECREMENT' } = action; break; } case 'ADD': { const a: { type: 'ADD', payload: number } = action; break; } case 'MULTIPLY': { const a: { type: 'MULTIPLY', payload: number } = action; break; } } } else { done(); } }); it('should correctly assert array with five actions', (done) => { const isActionOfFive = isActionOf([increment, decrement, add, multiply, divide]); const action = { type: 'FALSE' }; if (isActionOfFive(action)) { switch (action.type) { case 'INCREMENT': { const a: { type: 'INCREMENT' } = action; break; } case 'DECREMENT': { const a: { type: 'DECREMENT' } = action; break; } case 'ADD': { const a: { type: 'ADD', payload: number } = action; break; } case 'MULTIPLY': { const a: { type: 'MULTIPLY', payload: number } = action; break; } case 'DIVIDE': { const a: { type: 'DIVIDE', payload: number } = action; break; } } } else { done(); } }); });
STACK_EDU
Solution: Is the computer you are using on the domain. Discover the IP address of the remote PC by opening the Search charm, typing "network and sharing" and clicking "Network and Sharing Center." How To Remotely Access Your home Pc or Laptop Through IP Address on Wireless Network usmanalitoo. find another ip … Using any other computer, tablet, or mobile device, you can remotely view and control your This IP address is a LAN (or Local Area Network) address that the router has assigned to the connected devices. The router retains the WAN IP for itself and shares that Internet connection to all devices connected to it. It does this using a function called NAT (or network address translation). Want to use remote desktop to access your computer from an Android device? Here are the best ways to do so easily. This IP address is a LAN (or Local Area Network) address that the router has assigned to the connected devices. The router retains the WAN IP for itself and shares that Internet connection to all devices connected to it. It does this using a function called NAT (or network address translation). Want to use remote desktop to access your computer from an Android device? Here are the best ways to do so easily. Learn more on how to access your Mac from another location, and enable to remote-control macOS from How to Access Another Computer Using an IP Address | Chron.com. На этой странице вы сможете скачать песню HOW TO CONTROL AND ACCESS ANOTHER COMPUTER USING IP ADRESS размером 6.71 MB и длительностью 5 мин и 6 сек в формате mp3. Steps To Remotely Access Another Windows Computer Using RDP: System Properties Computer Name Hardware Advanced System Protection Remote Remote Assistance Allow Remote Assistance connections to this computer Remote Desktop Choose an option. and then specify who can connect How to control and access another computer using IP Adress How to Connect different class IP's computer in LAN network How to access Unfortunately, this public IP address could change at any time, leaving you without access. So, we actually recommend you also use a service like DynDNS to create a very simple domain Software. How-To. Computers with dynamic IP addresses have problems configuring remote desktops or streaming video. More info on How Do I Access Another Computer Remotely? РЕКОМЕНДУЕМЫЕ: Нажмите здесь, чтобы исправить ошибки Windows и оптимизировать Software for remote access, remote desktop, remote administration and administration - Remote A dynamic IP address at your home means that your IP address is constantly changing and it is not fixed. Who we are Our website address is: . What personal data we collect and why we collect it Comments When visitors leave comments on the site we collect the Learn how to use Remote Desktop Connection to connect to a remote computer. 2. Find the IP Address of a Remote Computer Using Blogs and Websites. This method is for those who have their own blogs or websites. Your IP address is the series of series of numbers that will appear next to IP address under the "LAN Adapter" information in the MS-DOS window. How to Use Microsoft's Remote Desktop Connection | PCMag Connect to Remote PC With a Mobile Device displays panes on the side from which you can start another connection, How to Connect to a Remote Computer Using an IP - Techwalla Each computer on your network has an IP address assigned to it. An IP is the numerical address for the computer and is used to route messages across the How to Access Another Computer on the Same Network on On the computer you want to access remotely, click the Windows logo Start accessing another computer through Server Settings.. This will tell you what your universal IP address is. How to Access a Remote IP Address | It Still Works Remote Desktop Software for Distant PC Access & Computer Remote Desktop Software that allows to Access Remote PC from anywhere, quickly use remote desktop software that displays the desktop of another computer on the IP-address Connection: classical method of connection via IP address. Configure PC to Analyzer using Crossover LAN Configure PC to Analyzer using Crossover LAN. This procedure shows how to directly connect the PC to a PNA, ENA, Select Use the following IP address. In this Video i'm showing to people how to access other client or home computer or laptop via remote desktop through ip address when you will make internet a local area network then you need to access client pc across the ip address and server in this video we using remote desktop Before you can access another computer's desktop, you will need to set up the "host" computer to enable a remote desktop connection, then you can Incoming remote access is only allowed on Professional, Enterprise, and Ultimate versions of Windows. $57.99. Dismiss Notice. how to remotely access another computer. How do I remotely access another computer Windows 10? Enable Remote Desktop for Windows 10 Pro. The RDP feature is disabled by default, and to turn the The remote computer (the computer that i want to connect to) is using dynamic IP and i don't have access to the router connected to that computer. I got some resource , that i can use dynamic dns services like dynu,no-ip etc, but most of these services need me to alter some setting in the router (for Remote Computer Access Text Chatting with partner Video Calling Remote File Transfer Screenshot Capturing Lock computer screen Steps To Remote So how to remotely view your security cameras using the Internet? What to do if you can view the IP camera through your LAN IP address but fail to access it from another computer network or via different WiFi connections?
OPCFW_CODE
Determining Exchange Server 2007 Placement Previous versions of Exchange essentially forced many organizations into deploying servers in sites with greater than a dozen or so users. With the concept of site consolidation in Exchange Server 2007, however, smaller numbers of Exchange servers can service clients in multiple locations, even if they are separated by slow WAN links. For small and medium-sized organizations, this essentially means that one or two servers should suffice for the needs of the organization, with few exceptions. Larger organizations require a larger number of Exchange servers, depending on the number of sites and users. Designing Exchange Server 2007 placement must take into account both administrative group and routing group structure. In addition, Exchange Server 2007 introduces new server role concepts, which should be understood so that the right server can be deployed in the right location. Understanding Exchange Server 2007 Server Roles Exchange Server 2007 introduced the concept of server roles to Exchange terminology. In the past, server functionality was loosely termed, such as referring to an Exchange server as an OWA or front-end server, bridgehead server, or a Mailbox or back-end server. In reality, there was no set terminology that was used for Exchange server roles. Exchange Server 2007, on the other hand, distinctly defines specific roles that a server can hold. Multiple roles can reside on a single server, or multiple servers can have the same role. By standardizing on these roles, it becomes easier to design an Exchange environment by designating specific roles for servers in specific locations. The server roles included in Exchange Server 2007 include the following: - Client access server (CAS)—The CAS role allows for client connections via nonstandard methods such as Outlook Web Access (OWA), Exchange ActiveSync, Post Office Protocol 3 (POP3), and Internet Message Access Protocol (IMAP). CAS servers are the replacement for Exchange 2000/2003 front-end servers and can be load balanced for redundancy purposes. As with the other server roles, the CAS role can coexist with other roles for smaller organizations with a single server, for example. - Edge Transport server—The Edge Transport server role is unique to Exchange 2007, and consists of a standalone server that typically resides in the demilitarized zone (DMZ) of a firewall. This server filters inbound SMTP mail traffic from the Internet for viruses and spam, and then forwards it to internal Hub Transport servers. Edge Transport servers keep a local AD Application Mode (ADAM) instance that is synchronized with the internal AD structure via a mechanism called EdgeSync. This helps to reduce the surface attack area of Exchange. - Hub Transport server—The Hub Transport server role acts as a mail bridgehead for mail sent between servers in one AD site and mail sent to other AD sites. There needs to be at least one Hub Transport server within an AD site that contains a server with the Mailbox role, but there can also be multiple Hub Transport servers to provide for redundancy and load balancing. - Mailbox server—The Mailbox server role is intuitive; it acts as the storehouse for mail data in users' mailboxes and down-level public folders if required. It also directly interacts with Outlook MAPI traffic. All other access methods are proxied through the CAS servers. - Unified Messaging server—The Unified Messaging server role is new in Exchange 2007 and allows a user's Inbox to be used for voice messaging and fax capabilities. Any or all of these roles can be installed on a single server or on multiple servers. For smaller organizations, a single server holding all Exchange roles is sufficient. For larger organizations, a more complex configuration might be required. For more information on designing large and complex Exchange implementations, see Chapter 4. Understanding Environment Sizing Considerations In some cases with very small organizations, the number of users is small enough to warrant the installation of all AD and Exchange Server 2007 components on a single server. This scenario is possible, as long as all necessary components—DNS, a global catalog domain controller, and Exchange Server 2007—are installed on the same hardware. In general, however, it is best to separate AD and Exchange onto separate hardware wherever possible. Identifying Client Access Points At its core, Exchange Server 2007 essentially acts as a storehouse for mailbox data. Access to the mail within the mailboxes can take place through multiple means, some of which might be required by specific services or applications in the environment. A good understanding of what these services are and if and how your design should support them is warranted. Outlining MAPI Client Access with Outlook 2007 The "heavy" client of Outlook, Outlook 2007, has gone through a significant number of changes, both to the look and feel of the application, and to the back-end mail functionality. The look and feel has been streamlined based on Microsoft research and customer feedback. Users of Outlook 2003 might be familiar with most of the layout, whereas users of Outlook 2000 and previous versions might take some getting used to the layout and configuration. On the back end, Outlook 2007 improves the MAPI compression that takes place between an Exchange Server 2007 system and the Outlook 2007 client. The increased compression helps reduce network traffic and improve the overall speed of communications between client and server. In addition to MAPI compression, Outlook 2007 expands upon the Outlook 2003 ability to run in cached mode, which automatically detects slow connections between client and server and adjusts Outlook functionality to match the speed of the link. When a slow link is detected, Outlook can be configured to download only email header information. When emails are opened, the entire email is downloaded, including attachments if necessary. This drastically reduces the amount of bits across the wire that is sent because only those emails that are required are sent across the connection. The Outlook 2007 client is the most effective and full-functioning client for users who are physically located close to an Exchange server. With the enhancements in cached mode functionality, however, Outlook 2007 can also be effectively used in remote locations. When making the decision about which client to deploy as part of a design, you should keep these concepts in mind. Accessing Exchange with Outlook Web Access (OWA) The Outlook Web Access (OWA) client in Exchange Server 2007 has been enhanced and optimized for performance and usability. There is now very little difference between the full function client and OWA. With this in mind, OWA is now an even more efficient client for remote access to the Exchange server. The one major piece of functionality that OWA does not have, but the full Outlook 2007 client does, is offline mail access support. If this is required, the full client should be deployed. Using Exchange ActiveSync (EAS) Exchange ActiveSync (EAS) support in Exchange Server 2007 allows a mobile client, such as a Pocket PC device, to synchronize with the Exchange server, allowing for access to email from a handheld device. EAS also supports Direct Push technology, which allows for instantaneous email delivery to handheld devices running Windows Mobile 5.0 and the Messaging Security and Feature Pack (MSFP). Understanding the Simple Mail Transport Protocol (SMTP) The Simple Mail Transfer Protocol (SMTP) is an industry-standard protocol that is widely used across the Internet for mail delivery. SMTP is built in to Exchange servers and is used by Exchange systems for relaying mail messages from one system to another, which is similar to the way that mail is relayed across SMTP servers on the Internet. Exchange is dependent on SMTP for mail delivery and uses it for internal and external mail access. By default, Exchange Server 2007 uses DNS to route messages destined for the Internet out of the Exchange topology. If, however, a user wants to forward messages to a smarthost before they are transmitted to the Internet, an SMTP connector can be manually set up to enable mail relay out of the Exchange system. SMTP connectors also reduce the risk and load on an Exchange server by off-loading the DNS lookup tasks to the SMTP smarthost. SMTP connectors can be specifically designed in an environment for this type of functionality. Using Outlook Anywhere (Previously Known as RPC over HTTP) One very effective and improved client access method to Exchange Server 2007 is known as Outlook Anywhere. This technology was previously referred to as RPC over HTTP(s) or Outlook over HTTP(s). This technology enables standard Outlook 2007 access across firewalls. The Outlook 2007 client encapsulates Outlook RPC packets into HTTP or HTTPS packets and sends them across standard web ports (80 and 443), where they are then extracted by the Exchange Server 2007 system. This technology enables Outlook to communicate using its standard RPC protocol, but across firewalls and routers that normally do not allow RPC traffic. The potential uses of this protocol are significant because many situations do not require the use of cumbersome VPN clients.
OPCFW_CODE
VB6 Collections/Object References I was wondering if someone could tell what happens with regards to memory when the following happens: Dict = New Dictionary --- Col = New Collection Dict.Add Key, CustomClassOne Dict.Add Key2, CustomClassTwo Dict.Add Key3, CustomClassThree Dict.Remove Key3 At this point is Key3 removed from memory or would I have to Set Dict.Item(Key3) = Nothing to remove it from memory? Set Dict = Nothing '// will this remove All the above added custom class objects? Set Col = Nothing '// Same question as above Ugh VB memory management.... TY for your time, - Austin VB is reference counted. The rules of when an object is released from memory is simple.. it happens when there are no more references to that object. Each time an object goes out of scope (such as the end of a function) its reference count is decreased; which may in turn cause any objects which were referenced by this object to have their reference counts decreases too; and if their reference counts get to 0, they too are released from memory. This is why there is usually no need to set an object's reference to Nothing... that will decrease its reference count, but that will also happen when it goes out of scope. So to answer your question: Dict.Remove Key3 is all that is required to remove CustomClassThree and Key3 from memory (as long as you don't have other references pointing to this object). Set Dict = Nothing will remove everything from memory, but this would happen anyway when it goes out of scope (again assuming there are no other references pointing to the objects it contains). Col doesn't seem to have much to do with the other statements and would be removed from memory when it goes out of scope without needing to set Col = nothing Note: The purpose of setting a reference to nothing is only really useful if you have objects which both have references to each other. Look up circular references for the details It is simplistic to say setting to Nothing isn't useful, because there are many times when you re-use a reference variable before it goes out of scope. A Form-level Collection might be used to hold lists of data or objects that get repopulated and reprocessed for different data MANY times before the Form is unloaded. You may also be "done with" a module-scope Dictionary/Collection LONG BEFORE the Form, Class, etc. is unloaded. Hanging onto all of that data for the life of the module instance doesn't always make sense. There is no simple "no brainer" rule for when to release references. I see your point, but I think its quite rare to have a form level collection which you set to nothing in order to release the memory... Surely you wouldn't have a form level collection, instead you'd declare it in the method body (and hence it would go out of scope). Having a form level collection set to nothing is probably a collection not worth having at all! With both Scripting.Dictionary and Collection instances when the last reference to the object is gone then the object references they hold are released. Whether or not the objects themselves are deallocated depends on whether or not another variable holds a reference to the same object. Think of each reference as a rope holding a rock above an abyss. Until the last rope is cut the rock doesn't drop out of existence. Removing an item from a Dictionary or Collection cuts that one rope.
STACK_EXCHANGE
Reading /dev/mem with Python at max 1MBps, how can I speed it up? I am currently working with Petalinux project which I am trying to read/write data from-to /dev/mem actually connected to 2 BRAM modules. DMABRAM1 = "/amba_pl@0/axi_bram_ctrl@a0000000"; DMABRAM2 = "/amba_pl@0/axi_bram_ctrl@a0004000"; axi_bram_ctrl@a0000000 { xlnx,single-port-bram = <0x01>; xlnx,bram-inst-mode = "EXTERNAL"; compatible = "xlnx,axi-bram-ctrl-4.1"; xlnx,bram-addr-width = <0x0a>; axi_bram_ctrl@a0004000 { xlnx,single-port-bram = <0x01>; xlnx,bram-inst-mode = "EXTERNAL"; compatible = "xlnx,axi-bram-ctrl-4.1"; xlnx,bram-addr-width = <0x0a>; def read_addr(mem, addr, length): global MAP_MASK #which is mmap.PAGESIZE - 1 mem.seek(addr & MAP_MASK) val = 0x0 for i in range(length): val |= mem.read_byte() << (i * 8) return val BRAM_1_BASE = 0xa0000000 f = os.open("/dev/mem", os.O_RDWR | os.O_SYNC) mem = mmap.mmap(f, mmap.PAGESIZE, mmap.MAP_SHARED, mmap.PROT_READ | mmap.PROT_WRITE,offset=BRAM_1_BASE & ~MAP_MASK) #timer starts here.. while BRAM_1_BASE < 0xa0004000: read_addr(mem, BRAM_1_BASE, length=128) BRAM_1_BASE = BRAM_1_BASE + 0x80 #timer ends here.. I tried using threads and coroutines, also changed length and BRAM_BASE increment but the maximum speed I can get is close to 1MBps. I also tested the speed with dd command from dev/mem to dev/zero and got 1.3GBps. I divide 1 by time interval and calculate the multiplication with 16384 to get speed in KBps unit. I strongly believe that I am doing something wrong but can not solve the problem. Thanks.
STACK_EXCHANGE
#pragma once #include "pipeline.h" #include <queue> namespace vortex { class IBuffer { private: std::queue<pipeline_trace_t*> entries_; uint32_t capacity_; public: IBuffer(uint32_t size) : capacity_(size) {} bool empty() const { return entries_.empty(); } bool full() const { return (entries_.size() == capacity_); } pipeline_trace_t* top() const { return entries_.front(); } void push(pipeline_trace_t* trace) { entries_.emplace(trace); } void pop() { return entries_.pop(); } void clear() { std::queue<pipeline_trace_t*> empty; std::swap(entries_, empty ); } }; }
STACK_EDU
Grade distributions are collected using data from the UCLA Registrar’s Office. One of my favorite classes I have ever taken. The information you learn in this class is useful no matter which major you are. It is not difficult to understand and it changes the way you view the world. If you are at all interested in product design, economics, or anything to do with materials, this class is perfect. I cannot recommend this more as an easy GE that is interesting. Probably one of the most unique classes I took at UCLA as an Econ major. I loved Prof. Alex's teaching style - he's a super chill guy, very funny, made use of cool demonstrations and media to supplement the material, and the book for the class was an interesting read. His slides don't have that much writing on them, but as long as you attend class you should be fine, the exams are easy and he clearly emphasizes what to focus on for the exams, as well as reviewing the previous lecture's main points at the start of the next class. He also brought in a guest lecturer who was an expert in his field (Explosives) for a very engaging lecture. I really enjoyed taking this course with Alex! Alex is an engaging lecturer and the course is super relatable to our everyday life! Also, the cheatsheet makes exams easy - memorization is not required. Since enrolling in this course as a freshman in Fall Quarter, it's still one of my favorite classes I've taken to date. In terms of workload, there is not much required work outside of class -- we had one midterm, one final, a Wikipedia project, and an outside reading (not a textbook, though it was said the final would test some knowledge from the book). I found the book, Stuff Matters by Mark Miodownik, to be very interesting and digestible for people of all majors; I've even recommended it to a couple of friends. The Wikipedia project was a group assignment, where you researched a prominent scientist who didn't have a Wikipedia page yet and wrote/designed one for them. For instance, our assigned scientist, Tehshik Yoon, is still active in his research and has a life story and accomplishments that would only be found on UW-Madison and publication websites without Wikipedia. Professor Spokoyny (prefers to be called Alex) allowed us full-page cheat sheets for both the midterm and the final. Thus, even though the class covers a wide variety of topics and examples that may be hard to recall in full, the cheat sheets can be used as a reference to jog one's memory. The material itself (pun unintended) is very understandable, and Alex presents it in an engaging way when he lectures (full disclosure: I am a STEM major, though I hadn't taken any classes at UCLA yet). He uses a lot of real-world examples, often delving into historical events and analyzing society as a whole in order to convey a concept. We also had some cool demos in class every week that we could play around with. I remember his tests being very doable if one pays attention in lecture (attendance counts, by the way) and the averages being quite high. He's more invested in making sure you understand the course material than in making the tests difficult. Lastly, Alex and his TA were very open to helping the students and spending extra time with us to make sure we understood the material and were on track with our assignment. The office hours setting was very casual, and you could ask about anything you wanted to, really. This class is smaller than many GE's; when I took it there were around 30 students. It made it much easier to interact with the professor and TA, to ask questions, and to be engaged during lecture. Overall, I think this is a great class to consider if you have extra units to fulfill or need a physical science GE. It's very laid-back, but you do need to pay attention in lectures and do some practice problems to exceed on the tests. The material is delivered so that it's understandable for students of all majors, and the professor is very open to helping you through it. I'd say the topics were pretty engaging, ranging from empty space to food storage methods to colors. If you get the chance to, I'd say take it! This was one of my favorite classes at UCLA. Professor Spokoyny (Prefers to be called Alex) was super nice and extremely knowledgeable. The class material itself was very interesting and very important to know. There was one midterm, one final, and a twitter assignment. Class is fairly easy if you pay attention. He even prints out the slides for you! Attendance is also taken so make sure to go to class. I would highly recommend this class! I really enjoyed this class. I’m a Spanish Major and just took this class because I needed extra units. The class consists of a midterm, a final, and a Wikipedia project. It’s not an extremely difficult course but you need to pay attention in class because there is no text book or extra materials, so the exams are based off of what he says in lecture. He provides a print-out of the PowerPoint so you can take notes. He also replaces the midterm with the final if you did not do too well, which is nice. He’s really passionate about his work and he makes classes interactive. He also requires you to attend office hours to elaborate on the Wikipedia project, which was helpful. I learned a lot of interesting facts throughout the course. He mostly cares that his students learn. He doesn’t make his exams hard, he just wants to make sure that you are learning the concepts of the course. I think this is probably one of my favorite class I’ve taken at UCLA, coming from a north campus major. Did this review contain... Thank you for the report! We'll look into this shortly.
OPCFW_CODE
The geometry of this touch screen example is taken from reference and consists of three driving electrodes in the XY plane at z = 0 and four sensing electrodes in the XY plane at z = 254 μm. The structure shown in Figure 1 is generated in XFdtd using the built-in geometry tools. One of the lines is first created using a 2D sheet body, and then the other identical lines are created by copying, pasting, and relocating. Once the geometry has been created, the electrodes are set to have a conductivity of 1x10^4 and the dielectric permittivity for the upper, middle, and lower substrates is set to 7. Figure 1: Touch screen geometry with three driving electrodes and four sensing electrodes. The 7x7 capacitance matrix of the touch screen is simulated using the Electrostatic Solver in XF. The seven electrodes are identified by associating a Static Voltage Point with each of them. The simulated SPICE capacitance matrix for the unloaded case is listed in Table 1. Self capacitance values correspond to values along the diagonal while mutual capacitance values are the off diagonal values. Table 1: The unloaded SPICE capacitance matrix (fF). When the screen is loaded, the capacitance will change. In this example, a 1 mm stylus was added and assigned a static voltage of 0V to represent being grounded. The stylus touches the screen above the D2S3 node, as shown in Figure 2. A second electrostatic simulation was run and the loaded SPICE capacitance matrix was computed. The change in capacitance between the loaded and unloaded cases is shown in Tables 2 and 3. In Table 2, the mutual capacitance is highlighted and the location of the stylus can be easily identified in red because it is the only positive change. In Table 3, the self-capacitance values are highlighted. Here, the negative changes indicate the stylus location at electrodes D2 and S3. Figure 2: Grounded stylus at the D2S3 node. Table 2: Change in mutual-capacitance between loaded and unloaded case (fF). Table 3: Change in self-capacitance between loaded and unloaded case (fF). The voltage distribution along D2 for is shown in Figure 3. The grounded stylus has a noticeable impact on the voltage and thus causes the change in capacitance. Figure 3: The voltage distribution along D2 when 1V is applied to the electrode. - T. H. Hwang et al., “A Highly Areas-Efficient Controller for Capacitive Touch Screen Panel Systems,” IEEE Transactions on Consumer Electronics, Vol.56, No.2, pp.1115-1122, May 2010.
OPCFW_CODE
Adds ref and SHA as inputs, and sarif-id as output Merge / deployment checklist [X] Confirm this change is backwards compatible with existing workflows. [X] Confirm the readme has been updated if necessary. [X] Confirm the changelog has been updated if necessary. This PR might be useful for those facing #796 by allowing to specify a ref, but is NOT a direct fix for it. Please include the compiled sources (*.js and *.map files) in this PR. Because of the way actions are run, all build artifacts must be checked into the repo. Hi @aeisenberg! Thanks for your feedback, I've included the compiled sources and fixed most inline suggestions. I am trying to fix the integration tests, in the meantime there is one question remaining in inline suggestions. FYI, Integration tests run in my fork. I had to enable workflows, then I created a draft PR in there just so workflows are executed. This information might be useful for future contributors. All checks are green in my Draft, so as soon as running workflows is approved I think this can go forward. Everything looks good, but the new CI jobs are failing. I'm digging in to try to figure out why. sendStatusReport doesn't respect TEST_MODE like some other functions do, not sure why it is getting called here but it appears that the test doesn't have permissions to do a PUT against https://api.github.com/repos/github/codeql-action/code-scanning/analysis/status sendStatusReport doesn't respect TEST_MODE like some other functions do, not sure why it is getting called here but it appears that the test doesn't have permissions to do a PUT against https://api.github.com/repos/github/codeql-action/code-scanning/analysis/status Should I add a TEST_MODE for sendStatusReport or something? In the nonofked version, I see that the permissions block is more permissive, which explains why it is passing there. Now things are starting to become clear. If I explicitly add security-events: write then the failing workflows will pass. The difference between the failing workflows and the passing ones is that the failing ones use a value for the ref property of the upload status report that is different from that of the PR. Upload status report is an internal API endpoint for telemetry into code scanning. I don't know if it is a bug that the endpoint sometimes accepts requests from user agents that do not have security-events: write permissions. I'm communicating with the team that maintains this part of the API. Regardless, I think our test workflows should have this permission anyway. Here is what I propose: I will create a new PR that will add the security-events: write permission to all of our generated workflows. Can you rebase your PR, @cw-acroteau, on top of the one I will create? And make sure to regenerate the two integration test workflows you created. Also, make sure to fix the merge conflict in CHANGELOG.md. Thanks for helping me find this bug (or at least odd inconsistency) in the action. Can you rebase your PR, @cw-acroteau, on top of the one I will create? And make sure to regenerate the two integration test workflows you created. Also, make sure to fix the merge conflict in CHANGELOG.md. Hi! I just did that, hope everything is fine. I'm waiting to confirm that the worflows will pass, if there's an issue I'll try to fix it. Thanks! Hi again @aeisenberg ! Can you please check if the checks are okay in a copy similar to https://github.com/github/codeql-action/pull/902 please? It fails here because of "RequestError [HttpError]: Resource not accessible by integration", but it might be due to the fork. Oh sorry I did a git rebase, and not changed the base branch. I thought this was what you meant. No problem. I wasn't clear about it. I changed the branch and it all looks good. Yes, the reason why the new workflows are failing is because the security-events permission is always downgraded to read for forks. I don't think it will be possible to get it to pass from a fork. I'm in discussion with the API maintainers to figure out if we should avoid uploading status reports for integration tests. This should allow your integration tests to pass. They are in Europe, so I probably won't hear anything until tomorrow. PRs from forks will always have their security-events permission downgraded to read. This makes sense because we don't want malicious PRs to come around and tamper with the security alerts for refs they are not associated with. There is an exception to this. a PR from a fork can write security events to resources associated with that PR only. This allows the PR to report security concerns about itself without giving it the ability to overwrite other refs. What this means is that using the standard PR workflow, these new ref and sha inputs will not be able to work in forks. We will need to document this accordingly. It looks like there are ways around this using pull_request_target or possibly [Send write tokens to workflows from pull requests[(https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#enabling-workflows-for-private-repository-forks). I haven't tried out either of these solutions yet, but they will probably work. However, there are added security considerations with this. What this means is that we need to rebase on the latest tip of the aeisenberg/permissions in this PR (I'll do that shortly) and update the documentation of the inputs to point out the limitation with forks. This PR can then be merged. Later, we can work on some proper documentation on how to use these inputs in forks. I created a PR in your repo for these changes. If you are happy with them, please merge and then I can merge here. This PR should not have been closed. It should have been retargeted to main. Let me see what happened. @cw-acroteau I'm very sorry, but could you create a new PR targeting main? For some reason when #902 was merged, this PR was closed. Instead it should have been retargeted against main. @cw-acroteau I'm very sorry, but could you create a new PR targeting main? For some reason when #902 was merged, this PR was closed. Instead it should have been retargeted against main. @aeisenberg Please see #904
GITHUB_ARCHIVE
What is the easiest way to upgrade a large C# winforms app to WPF I work on a large C# application (approximately 450,000 lines of code), we constantly have problems with desktop heap and GDI handle leaks. WPF solves these issues, but I don't know what is the best way to upgrade (I expect this is going to take a long time). The application has only a few forms but these can contain many different sets of user-controls which are determined programatically. This is an internal company app so our release cycles are very short (typically 3 week release cycle). Is there some gradual upgrade path or do we have to take the hit in one massive effort? if you are leaking in C#, you'll leak other things in WPF. Fix your code before thinking a new tool will magically fix all your problems. You can start by creating a WPF host. Then you can use the <WindowsFormHost/> control to host your current application. Then, I suggest creating a library of your new controls in WPF. One at a time, you can create the controls (I suggest making them custom controls, not usercontrols). Within the style for each control, you can start with using the <ElementHost/> control to include the "old" windows forms control. Then you can take your time to refactor and recreate each control as complete WPF. I think it will still take an initial effort to create your control wrappers and design a WPF host for the application. I am not sure the size of the application and or the complexity of the user controls, so I'm not sure how much effort that would be for you. Relatively speaking, it is significantly less effort and much faster to get you application up and running in WPF this way. I wouldn't just do that and forget about it though, as you may run into issues with controls overlaying each other (Windows forms does not play well with WPF, especially with transparencies and other visuals) Please update us on the status of this project, or provide more technical information if you would like more specific guidance. Thanks :) Do you use a lot of User controls for the pieces? WPF can host winform controls, so you could piecewise bring in parts into the main form. You can add your user controls to a WPF form. Then translate them as needed to WPF WPF allows you to embed windows forms user controls into a WPF application, which may help you make the transition in smaller steps. Take a look at the WindowsFormsHost class in the WPF documentation. I assume that you are not just looing for an ElementHost to put your vast Winforms app. That is anyway not a real porting to WPF. Consider the answers on this Thread What are the bigger hurdles to overcome migrating from Winforms to WPF?, It will be very helpfull. There is a very interesting white paper on migrating a .NET 2.0 winform application toward WPF, see Evolving toward a .NET 3.5 application Paper abstract: In this paper, I’m going to outline some of the thought processes, decisions and issues we had to face when evolving a Microsoft .NET application from 1.x/2.x to 3.x. I’ll look at how we helped our client to adopt the new technology, and yet still maintained a release schedule acceptable to the business.
STACK_EXCHANGE
|Top Previous Next| User interface > Debugging and executing > Variable-Inspector By the variable-inspector you can look at the contents of the variables of the actual scope of a debugging session. If you are in the debugging mode (not during a look-ahead, see below), you get the window of the variable-inspector by the button or by the according item in the start menu You can either write the name of a special variable into the field of the combo box at the left top of the dialog or select one of the five predefined items: If you choose the item class variables, the source and target information are shown as well as all variables, which are defined on the element page. (The parser itself is denoted as this or (*this).) If you choose the item local variables, the values of all variables are shown, which are in the actual scope. These are the variables passed to the actual production and the variables, which are locally declared in the production. xState (parser state) If you choose the item xState, all elements of the parser state variable are shown, including all sub-expressions of the last recognized token. If you choose the item Plugin variables, all variables of the plugin are shown, especially the source and target specification and the state of the indentation and scope stack. If a DOMDocument has been created, it can be viewed here. Buttons of the toolbar Up in the hierarchy With the Back-button you can go a level higher in the hierarchy of the class elements or finally to a view of all variables of a visibility area (see above). After you have opened the variable-inspector, you have to actualize the value of an eventually already chosen variable by the actualize button. Choice of a single variable by double click If several variables are shown, you can select one of them with the left mouse button on the value side and view their content by the Detail-button or by a double click. So longer texts or the elements of containers or tree structures - like shown below - can be seen. In a tree view nodes can be selected with the right mouse button. Then you can expand or collapse the whole branch, if you chose the corresponding items in the pop-up menu. You also can move the root of the tree to it's parent node. Stay on top If the check box Stay on top is actualized, the variable-inspector remains visible on top of the screen during the complete debugging-session. After each debugging step the content of the selected variable is actualized automatically. If stay on top is not set, there will be no such actualization and the inspector will be deleted from the screen by each step. If the inspector is closed, while the check box is checked, it will be unchecked. If it is a complex variable, as for example the xState-variable, the values of its class elements will be listed. In some cases some properties of the variables will be added to the list of values. As containers (mstrstr, vstr) can contain very much elements, for them only the number of elements is shown. When debugging a look-ahead the variable inspector isn't shown because during a look-ahead no semantic actions are executed. This page belongs to the TextTransformer Documentation |Home Content German|
OPCFW_CODE
“Agents” in the TopDown Engine is a term used to describe any kind of characters, whether they’re playable characters, or enemies, NPCs, etc. There are a few core classes that make these agents work, and that you’ll need to get familiar with if you want to expand and create more character abilities for example. In the meantime, this page aims at presenting the basic concepts and allowing you to quickly create your own character (player controlled or AI based). The TopDown Engine is designed to work for both 2D and 3D projects. The components you’ll use to create characters for one setup or the other will be mostly the same, but some will be specific to 2D or 3D. When you encounter a component or an ability, if it doesn’t have 2D or 3D in its name, it’s safe to assume it’ll work for both. Otherwise, you’ll want to use components with 2D in their name for 2D only, and components with 3D in their name for 3D only. This page goes into more details about the mandatory components of a Character, but here’s a brief rundown. An agent in the TopDown Engine usually has these components : - A Collider : BoxCollider2D or CircleCollider2D in 2D, CharacterController in 3D, this collider’s size is used to determine collisions and where in the world the agent is. - RigidBody2D or 3D : only used to provide basic interactions with standard physics (completely optional) - TopDownController : responsible for collision detection, basic movement (move left / right), gravity, the TopDownController is your character’s motor class. It comes in both 2D and 3D versions, but both share the same methods and logic. Note that the 3D version requires a CharacterController component. - Character : This is the central class that will link all the others. It doesn’t do much in itself, but really acts as a central point. That’s where you define if the player is an AI or player-controlled, where its model and animator are, stuff like that. It’s also the class that will control all character abilities at runtime. - Health : Not mandatory, but in most games your agents will be able to die. The Health component handles damage, health gain/loss, and eventually death. - Character Abilities : So far all the previous components offer lots of possibilities, but don’t really “do” anything visible. That’s what Character Abilities are for. The asset comes packed with more than 15 abilities, from simple ones like CharacterMovement to more complex ones like weapon handling. They’re all optional, and you can pick whichever you want. You can of course also easily create your own abilities to build your very own game. However you create your character, one thing that is really important is to understand how to structure it. You’ll want to separate the logic (the RigidBody, TopDownController, Character abilities, etc) from the visuals (your model / sprite / Spine setup, etc). You can have many more layers (maybe your animator sits on its own layer, etc), but the recommended setup is the one on the image above : a top level with all the logic, and the model nested inside it. Make sure you link the Model to the Character class, same for the animator, from the inspector. Some other classes (Orientation abilities, Health) may also require you to tell them, via their inspector, where the Model or Animator is. How do I create an Agent ? There are many ways you can create a playable or AI character in the TopDown Engine. Here we’ll cover the 3 recommended ones. Note that if you prefer doing differently, as long as it works for you, it’s all fine. The “Autobuild Character” feature allows you to create a new character in a few seconds. Note that after that initial setup you’ll still have to setup animations and all that. Here’s how to proceed : For 3D : - open the MinimalScene3D demo scene (or any scene that meets the minimal requirements - create an empty game object, name it “Test”, position at 0,0.5,0 - under it, add a Cube, center it, scale it to 1,2,1, remove its BoxCollider - on Test, add a Character comp, drag the Cube in its CharacterModel slot - on the Character inspector, press Autobuild Player Character 3D - press play For 2D : - open the Minimal2D demo scene (or any scene that meets the minimal requirements - create an empty game object, name it “Test”, position it in the scene - under it, add a game object, center it, name it “Model”, add a sprite renderer to it and set its sprite - on Test, add a Character comp, drag the sprite in its CharacterModel slot - on the Character inspector, press Autobuild Player Character 2D - press play Of course it’d be the same thing for AIs, except you’d pick Autobuild AI Character 2D/3D. If you went for an AI character, it’s ready to get a Brain and some Actions and Decisions. You can now fine tune the various settings, remove the abilities you’re not interested in for this character, add animations, etc. Or you can leave it like that and start prototyping the rest of your game and levels. Another fast way to create an agent is to find one you like in the demos, and create yours from that. The process for that is quite simple : - Find an agent you like in one of the demos. - Locate its prefab (select the Agent in Scene View, and in its inspector, at the very top right under the prefab’s name and tag there’s a Select button) - Duplicate the prefab (cmd + D) - Rename it to whatever you want your Character to be called - Make the changes you want. Maybe you’ll just want to replace some settings, maybe you’ll want to change the sprite and animations. It’s up to you from there. You can also create a Character from scratch. It’s longer but why not? - Start with an empty gameobject. Ideally you’ll want to separate the Character part from the visual part. The best possible hierarchy has the TopDownController/Collider/Character/Abilities on the top level, and then nests the visual parts (sprite, model, etc). - At the top of the inspector, set the tag to Player if it’s a player character, or to anything you prefer if it’s not. Same thing for the layer. - On your top level object, add a Collider (either 2D or 3D, we recommend a Capsule for 3D characters, a box for 2D ones). Adjust its size to match your sprite/model dimensions. Then add a RigidBody2D or RigidBody component. - Add a TopDownController (2D or 3D) component. Set the various settings (see the class documentation if you need help with that), and make sure to set the various collision masks (platforms, one ways, etc). Add a CharacterController component too if you’re going for 3D. - Add a Character component. Check the various settings to make sure they’re ok with you. - Add the Character Abilities you want (it’s best to use the AddComponent button at the bottom of the inspector for that, and navigate there) - Optionnally, add a Health component, a HealthBar component, etc.
OPCFW_CODE
How to Play This article is currently in the process of a rewrite for StepMania 3.9 and 4.0 CVS, and an user is using the default themes of SM 3.9 or 4 CVS. - Game Start: Begin playing the game. Don't forget to configure your keyboard or dance pad the way you like it, if the defaults aren't your favorite. - Network Play (4.0 CVS/5): Allows you to connect to a StepMania Online server and play over the internet. Details are in the respective article. - Select Game: Allows you to switch between any currently-supported game modes (dance, pump, and para [4.0 CVS/5]). As of April 2nd, 2007, the latest build of 4.0 CVS does not include the information required to play pump or para. - Preferences: Allows you to set up StepMania in various ways. These include: - Config Key/Joy (3.0 only): Allows you to map keyboard or joystick buttons to actions in the game. - Input Options (3.0 only): Go here to change setting specific to input. - Machine Options (3.0 only): Go here to change settings for thing like "difficulty" and "number of songs per play". - Graphic Options (3.0 only): Go here to adjust settings like resolution and texture detail. If StepMania runs slowly on your computer, you'll want to tweak these options. - Appearance Options (3.0 only): Change the appearance of StepMania by switching announcers, themes, and NoteSkins. - Options (3.9 and up): This brings you to a screen where all of the above 3.0 options items are now placed, plus some new ones. - Edit/Sync Songs: Edit existing step patterns, or create new patterns. Also, if your songs are out of sync, you'll want to use this mode to correct them. You cannot create a new stepfile from within StepMania. - Edit Courses (not avalaible in some themes): Allows you to edit courses, but as with the song editor, you cannot make new courses from here. - Exit: Exits the program. - In the upper-left corner is the total number of songs found when StepMania perused your Songs folder on startup. - In the upper-right is your version of StepMania. This time, instead of setting any options, you get to DANCE. Select "Game Start" from the Title Screen menu. Each game has numerous "Styles", or modes of play. The Styles for the game "dance" are: - Single - One player, one pad, 4 panels. - Versus - Two players, each with their own pad, 4 panels per pad. - Double - One player using both pads, 8 panels total. - Couple (not avalaible in certain themes) - Two players, each with their own pad, 4 panels per pad. This mode differs from Versus in that it uses only patterns designed to be interesting for two players (e.g. battles, routines). - Solo (not avalaible in certain themes) - One player, one pad, 6 panels. Other game modes have most of those styles, excluding Solo. The number of panels will vary based on the game type (ie, most pump modes have 5 panels, and pump Double has 10 panels.) After you've selected your Style (and your theme), you choose your initial difficulty setting: - Beginner (3.9 default theme) - very easy patterns for clueless newbies - Light/BASIC (3.9 default theme) - easy patterns for people who have played before - Standard/DIFFICULT (3.9 default theme) - medium patterns for experienced players - Heavy/EXPERT (3.9 default theme) - hard patterns for expert players - Regular (only for SM5 theme and certain themes) - Play a certain number of stages. - Nonstop Course Mode (not avalaible in certain themes) - In this mode, you choose a course of several songs, and play all these songs in a row. - Challenge Course Mode (not avalaible in certain themes) - Similar to Nonstop, except the normal life bar is replaced by a "Battery" life bar. You have 4 lives, and getting a Good, Almost, Miss, NG, or Mine (Shock Arrow) pressed causes you to lose a life. Lose all of your lives and you fail. - Endless (not avalaible in certain themes) - In this mode, select a group of songs, and play them forever until you fail. - Magic Dance/Battle (not avalaible in certain themes) - This mode is always 2-player, as a result, you play against the computer when alone and cannot select this mode if you Double or Solo style. At the top, there is a "tug-of-war" life bar that shows who is winning. There are also two bars near the middle of the screen that indicate what modifier level you are at and how far you have to go to "level up". As you get Greats, Perfects, Marvelouses, and OKs, the bar raises; and falls if you get anything else. The player who "has" most of the life bar at the top in the end wins. On this screen, you will choose one of your song Group Folders (DDR 4thMIX theme, SM 3.9 and SM 4 default themes only). - Choosing "All Music" lets you chose any song in any group. - Choosing any group will limit your choices on the Select Music screen to just the selected group. Now, it's finally time to choose your song! - Pausing briefly on a song will play a brief sample of that song, giving you and idea of whether or not you might want to try it. - The red songs are used for the Extra Stage. - The BPM (Beats Per Minute) for the selected song is listed above the song's banner. Here's a hint, higher BPM means faster, which is usually harder. ;) If the BPM is red, that means the speed of the changes while playing. - In the bottom-left is the song's difficulty rating, shown in number (only for SM 5 default theme and many custom themes). The more number, the harder the song. Aqua text mean you are in Beginner mode, yellow text mean you're in Basic Mode, red text mean Difficult Mode, green text mean Expert Mode, and dark blue text mean Challenge mode. Also, gray text mean Edit mode. Press the Down or Up Twice to change the difficulty. - To change the sort (by group, title, BPM, etc.), hold down the left and right buttons and then press START. While still holding down the left and right buttons, keep pressing START to change the sort type. In 3.9 or newer, jump to change sort, and press Up-Down-Up-Down to enter the sort menu. In Recent DDR Themes (Supernova ownboards) for SM 3.9 and upper versions, holding down the left and right buttons to enter the sort menu. - Immediately after you select a song, push the START (rarely you see the text to enter to Options). If you would like to change Player or Song options, hold down the "Start" or push it twice. Read below for more information on Player and Song options. - Congratulations, you're about to play the game! The first page contains options specific to each player: - Speed - How fast the arrows move up the screen. - Acceleration - How fast the arrows move up the half screen. - Effect - Applies interesting effects to the arrows as they scroll. - Appearance - Changes the way the arrows appear on the screen. - Turn - Ignore this for now. - Insert - Ignore this for now. - Scroll - Change this to reverse to scroll the down instead of up. - Noteskins - Ignore this for now. - Holds - Turn this OFF to disable Frezze Arrows. - Mines - Ignore this for now. - Hide - FIXME - Perspective - FIXME - Steps - The last chance to select the difficulties. - Characters - 3D graphical dancers that appear behind the scrolling steps but in front of the background. Fun to watch for your audience, but a bit distracting for the player. The second page contains options that effect both players: - Life Type - Switch between the standard life bar and the battery life meter. - Bar Drain - Controls how the life bar will drain and refill. - Battery Lives - Changes how many "lives" you get when using the battery life meter. - Fail - What the game will do if you fail. - Assist Tick - Turn on ticks each time you're supposed to step. - Rate - Change the rate at which the song is played, 100% being normal speed. - Auto Adjust - FIXME - Save Scores - FIXME The concept of StepMania is pretty simple. The colored arrows will scroll up from the bottom of the screen toward the top. When a color arrows overlaps the grey arrow at the top of the screen, hit the button on your pad corresponding to the direction of that arrow. When you step, a grade will appear near the center of the screen saying how accurate your step was. - Marvelous timing - white flash - Perfect timing - yellow flash - Great timing - green flash - Good timing - light blue flash - Almost timing - purple flash - Miss - no color. You did not step anywhere near this note. Better luck next time. When you string together Great, Perfect, or Marvelous judgments, it makes a combo. Your current combo is shown in the middle of the screen just below the step judgment. If you get a judgment of Good (depending of the theme) or worse (or mine pressed), your combo counter is reset to 0. Higher combos are better for bragging rights :) (and improving your high score).
OPCFW_CODE
There are different formats for presenting the documents; one of them, which is popular as well appreciated worldwide, is the pdf format, and the pdf (Portable Document Format) is used to save and share documents with other users. Pdf documents are more reliable and can be opened in any operating system. The pdf files are non-editable and can be edited using some pdf readers. In this blog, we will elaborate on how we can merge different pdf files into one file, and the content is as follows. - Using the PDFtk Tool - Using the PDF Arranger Tool Let’s get into the first tool first. Method 1: Merge the PDF Files Using the PDFtk This section utilizes the PDFtk tool to merge PDF files in Linux. Furtherly, the process of its installation is also provided. How to Install the PDFtk on Linux? The installation of the package varies as per the Linux distributions. For Debian/Ubuntu-based Distros: $ sudo apt install pdftk-java The package will be installed in Ubuntu-based Linux distribution. For RPM-based Linux distributions: Download the rpm package and install it through the rpm manager as follows: $ wget https://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/pdftk-2.02-1.el6.x86_64.rpm $ sudo rpm -i <package-name>.rpm How to Use the PDFtk on Linux? To use the PDFtk on Linux, the general usage syntax of the “PDFtk” to convert files is: $ pdftk [file1] [file2] cat output [combined file] Replace the file1 and file2 with the pdf file name you want to merge and then replace combine file with the single file name you suppose to get after merging the files. For example, we will merge our two pdf files: $ pdftk mypdf.pdf mypdf1pdf cat output New_pdf.pdf To confirm the new merge file, list down the contents: Similarly, to merge all the available pdf files into a single pdf file, use the command: $ pdftk *.pdf cat output ALL_COMBINED.pdf It will convert all the pdf files which are having the pdf extension in their names to a single new file. Method 2: Merge the PDF Files Using the PDF Arranger The PDF Arranger is a GUI tool that can be installed on various Linux distributions using the Flatpak Utility. Let’s get into the installation method first and then its usage. How to Install PDF Arranger on Linux? To install PDF Arranger on Linux distributions: $ flatpak install flathub com.github.jeromerobert.pdfarranger Or the following commands can be used to install it on Linux. $ sudo apt install pdfarranger $ sudo dnf install pdfarranger In the next section, we will find out how to use the PDF Arranger in Linux. How to Use the PDF Arranger in Linux? After installing the PDF Arranger in your Linux distribution, launch the application, then click on the “Open” icon to upload the PDF Files: Choose the PDF Files and open them: After selecting the Pages of the PDF File which are supposed to be merged, click on the ”Save as” icon: Save the file with some name: List down the contents to show the new merged pdf file: These are the methods by which the PDF files can be merged on Linux. To merge PDF files on Linux, pdftk, a command-line tool, and PDF Arranger a GUI tool can be installed on Linux. There are other tools, like ImageMagick and PDF Chain, but the popular pdf mergers are discussed in this blog. This article will demonstrate the usage of merging PDF files on Linux.
OPCFW_CODE
M: Ask HN: How do you name your servers? - mashmac2 I've heard (and worked at places with) lots of interesting stories about hostnames within an organization, but I'd love to see if there's some consensus.<p>So, HN, how do you name your servers?<p>To start things off, I've seen servers named for Transformers, One Piece characters, people in the Bible, and philosophers.<p>What are your best server name suggestions? R: weinzierl For anything serious I'd go with a numbering scheme, see [1] for examples and ideas. If it has to be names, the elements of the periodic system seem to be somewhat popular, see [2] for more lists. Apart from that: I named my second computer[3] after a character from a book I was reading at the time. I didn't yet know it was the villain when I chose the name, so the name was not what I intended it to be. I stuck with the convention anyway. It's always from a book I am currently reading or I have read recently and most of the time I choose the villains. Because virtual machines need names as well and I create them faster than I can read books I allowed characters from movies for anything virtual. The nice thing about it is that it connects the machines to a period of my life (when I was reading a certain book or seeing a movie for the first time). [1] http://serverfault.com/questions/17274/ [2] http://namingschemes.com/ [3] My first computer was a C64 and had no name, other than C64:-) R: memset A long time ago, I was fond of a young woman whose family was originally from Denmark. As the ultimate testament to my love, I named my computer 'danish'. Breakfast-themed computer names followed the in-joke: eggs, toast, waffle. Years later, having moved on, I switched to jazz musicians (I play the saxophone.) coltrane, gillespie, parker. R: dcolgan Mine is the setup with One Piece characters as OP mentioned. This also includes my dev computers. Our setup is thus: Prod for my LLC is Luffy since he is the captain. My desktop is Zoro since the desktop is the most powerful computer I have. My laptop is Usopp since I work with it the most and I sort of identify with that character. Another client server is Franky because that client was weird. A client server I setup once is Robin since that name wasn't taken. Each character is sort of vaguely descriptive of the machine they name. Another scheme I worked with once was physics words for movement. The dev machine was impetus, staging was acceleration, and prod was velocity. I once got burned with putting funny filler text on a live production site. Naming servers creatively is a safer way of having fun that is also more subtle. R: pg At Viaweb we used to name them after Tintin characters. R: cedsav Awesome. I hope you didn't have a Thomson for dev and a Thompson for production :-) R: 27182818284 When I can, I enjoy naming them things like "Megahertz" or "my RAM" so that when they crash, people sound silly. "My RAM is down" or "Megahertz isn't working" R: jamescun For StackBlaze servers effectively had 2 available hostnames, the actual logical hostname (<function><id>.<lan>.<datacentre>.net.stackserv.com. e.g. wb1.g1.rbx.net.stackserv.com, db2.g1.stb.net.stackserv.com), then a cname'ed nickname (We used pokémon for the nicknames. pikachu.net.stackserv.com, charmander.net.stackserv.com). R: xyentific Dan Schultz wrote a great article on some naming conventions he uses that was inspired by Starcraft. <http://slifty.com/2011/03/starcraft-network/> R: ahen This is brilliant. R: HarshaThota There used to be a thread on Server Fault[1] about this very topic which had some interesting ideas. We liked the idea of naming them after Mountains but ended up naming our servers after characters/places from BSG. [1] [http://www.stackprinter.com/questions/the-coolest-server- nam...](http://www.stackprinter.com/questions/the-coolest-server-names.html) R: johnmurch At a past job we used named of ski resorts (killington, okemo, snowbird,...). Although I always though it would be interesting to choose a music genre and go big - like 90s rap stars (2pac, jayz, Wu-Tang Clan, ..) Just my My 2 cents - Pick something that has a meaning to the person running it and some what relates to the organization. R: kaolinite Snakes here: mamba, taipan, rattlesnake and viper so far. At work we use British fighter jets. Unfortunately, unlike the US, the British have pretty lame fighter jet names ('puffin', for example). Just pick something with a lot of cool sounding names - makes announcing server changes way more badass. R: joshAg I name my personal computers np-complete problems. so far I've used: knapsack, k-sat, TSP, and ILP. here's the list I use when I need to name a computer: <https://en.wikipedia.org/wiki/List_of_NP-complete_problems> R: div I use names from Alice in Wonderland. whitequeen is my macbook, while my gaming rig is bandersnatch. The wifi id is throughthelookingglass, while the Timecapsule is called thecheshirecat, my iPhone is the dormouse. I've some unnamed devices laying around that I've been meaning to get to :) R: doctorosdeck I name my servers all after the muppets. There's something about typing ssh me@misspiggy that makes me smile. R: caw At a previous job we used scientific words, with the exception that our boss had to be able to spell it at 3am if he got paged. Lots of fun names there, but the best were the HA or failover pairs. Proton/Neutron, Concave/Convex, Diffusion/Effusion. R: p_4lexander Recently I've used the moons of Saturn (there are 60 of them): <https://simple.wikipedia.org/wiki/List_of_Saturns_moons>. They're beautiful and unusual names. R: cju A quote is missing in your URL. <https://simple.wikipedia.org/wiki/List_of_Saturn%27s_moons> is correct. R: godisdad In college: beer names (i.e. shiner-bock) At first job: characters from 'The Simpsons' Later on at first job: elements (i.e. Fluoride) At current gig: something amenable to Chef like component- worker-01.environment01.foo.employer.com R: Zecc Asterix characters: asterix, obelix, panoramix, ... Planets/Roman deities: venus, ceres, apollo, jupiter, ... Greek letters: alpha, gamma, zheta, ... First names of famous scientists: albert, marie, alexander, ... R: tjasko Haha, good ol' Star Wars planets here :p. I usually have them all start with the same letter... <http://starwars.wikia.com/wiki/List_of_planets> R: callmeed 1980's Baseball Players. So far, we have: Mattingly, Dawson, Puckett, Sandberg, Gwynn, Ripken, Saberhagen, Joyner, and Rickey. We also had Gooden, but it had problems and we had to shut it down. R: xm1994 Had a pair of extranet firewalls once... Romulus and Remus <http://en.wikipedia.org/wiki/Romulus_and_Remus> R: adamgray I use ancient Egyptian deities (Anubis, Ra, Osiris, Horus, Isis...) R: tripzilch Every time I named piece of hardware Nyarlathotep, it either died on me or turned weirdly buggy. True story. Don't name your machines after an avatar of the Outer Gods. R: dkuntz2 All of my computers are named after characters from Dune (thus far only the first book). Three of them were named after one character, with multiple names. R: LarryMade2 Paul, MuadDib and Usul? R: s-phi-nl See <https://news.ycombinator.com/item?id=1290106> for a former discussion. R: mryan Lately? ec2-xx-xx-xx-xx.compute-1.amazonaws.com In the past I've gone through Futurama, LoTR, The Simpsons, Saturn's moons and musicians. R: ajtaylor While I use fun names for my personal computers, work servers are always descriptive: mycocache1, mycodb1, mycodb2, mycoweb1, etc. R: nantes LotR characters: laptop is Legolas, brawny web server is Aragorn, dependable media server is Samwise, old laptop was Frodo, etc. R: ShonM Here at Chess.com we name them after Chess grandmasters (go figure). Capablanca, Glek, Krush, and so on. R: denaje Not servers, but here, we've named our database usernames from Futurama characters. R: codemonkeymike As A chem minor, I like anything chemical, Thorium is my most used VPS. R: baconhigh one cluster a few years ago had arcade game names. Pacman, bladerunner, alien storm etc. Another was names of metal bands. Anthrax, slayer, metallica etc. Naming servers is the best :) R: cl8ton We use the cast from Seinfeld (Kramer, Babu, SoupNazi etc...) R: amccloud Lately? compute001-fluffy-switch-nv Rinse and repeat! R: orangethirty Toho monsters (Godzilla, Mothra, etc.). R: gcb0 role-colo-index, e.g. wwwsf27.domain.com i'm boring as hell. R: mchadwick Likewise. Machine names are there to make your life easier. Cute names aren't fun when you have more than a couple. When your brain's as small as mine, you have to minimize your lookups. R: Sharma Gandalf,Isaac,Einstein R: LarryMade2 Guardian and Colossus
HACKER_NEWS
PS4 drastic connection speed drop recently Before I start, YES, I know that there are 101 pages of similar problems and similar error, but no clear steps are given, nor explanations. So: We have had our PS4 for 6 months, and in that time i have occasionally checked internet speeds. Our modems average, via Ookla Speed Test, is ~14Mb/s down and ~750Kb/s up. The PS4 network test (Again I will save myself pain: I know it is unreliable, I simply compare it to itself) usually comes up with ~10Mb/s down and ~650Kb/s up. Now, that was until recently. A few weeks back, a local server went down (Telstra is our ISP, an Australian one for anyone who doesn't know), and we had no internet for 24 hours. Since sometime between then and now, the PS4 speed has dropped to ~2.5Mb/s down although ~700Kb/s up - yes, that has apparently increased. We now get irritatingly persistent lag issues, and while still playable we are constantly reminded by Battlefront of our 'weak connection' in the top right of the screen. However it is important to note that the Ookla Speed Test still gives the same results, so it is our PS4. Notes: During the PlayStation network test, it occasionally says we have NAT type 3, rather than 2 (only rarely) It claims that our router doesn't support IP fragmentation, which it never used to say. We use wireless rather than cable, ethernet isn't available (only one port in the whole house). We're in Australia if you couldn't tell, which is why 14 is actually a great speed for us - Australia has shocking speeds. Attempted Fixes: Changed DNS servers to <IP_ADDRESS> and <IP_ADDRESS>, didn't change anything Changed PS4 MTU size to 1470, incredibly fractional increase in DL speed Changed router MTU to 1500 and 1450, neither helped. Something perhaps important: our 2.4/5Ghz networks were both set to half their maximum bandwidth, but increasing them to full did nothing For the IT Crowd fans, yes I tried turning it off and on again, both the PS4 and router. What I would like is a succinct answer, with (preferably) every possible unmentioned fix in order of likely effectiveness, and why it is done - mostly because every other page seems to be a reddit page which requires an hour of reading with no benefit, and I feel like there should be a good list of options out there for everyone. Thank you in advance. AND if this is the wrong site for this question please just tell me rather than reporting it, I wasn't sure which to use. Have you tried restoring PS4's default factory settings? I don't know if that will help but it definitely helped me when my DS4 refused to connect to my PS4. You could also try moving the PS4 and the router closer together or make a tin foil signal booster for your router. Best of luck to you! The only thing is that my PS4 connects fine to the router, it just seems to be a speed thing specific to the PS4. For what its worth, i'll try the foil signal booster, got a link? sorry, didn't see the link, thanks João Sounds like your internet is being filtered, or you've got some extra management happening at the ISP end. @Frank that sounds like a "there's nothing you can do about it" scenario... Well, you could always call them up and see. They could be doing it accidentally. @JoãoNeves ok the foil bumped it up 2 Mb, but that puts it at 4 not the old 10 - ill make sure that goes in a final answer though @Frank I'll have to do that tomorrow, I'll let you know if something happens, thanks @Frank Ok, so I did a combo of everything: in the processing of fixing a few days ago, I changed the settings mentioned in the question. Now, after tinkering for a few hours, I managed to get 10Mb. I'm going to post it as an answer, but I want this to include ANY answer not just the one that fixed mine, so if you two could comment on my answer any alternative methods you can think of, that would be great, thanks :) @joãoNeves could only notify one user, read above comment ^^ One thing to note is that Speedtest itself is not always reliable. Check out www.fast.com (powered by Netflix, a company that doesn't have any deals with ISPs) or http://speedtest.xfinity.com/ I used to work for an ISP and there are some things that I can suggest assuming that you did not change anything yourself: - for best connection never use Wifi - do a speedtest from a wired machine that is not your PS4 and check the speed is at a minimum the same as you had before the problem if it is not - ensure that nothing has changed on the router on your end (check the config) and - call your ISP/provider and ask them to check if your subscription is still the same, if this is the case ask them to check the connection it is possible that due to the problems they had that your connection is configured at a lower speed One other thing I can recommend that hasn't already been posted is port forwarding. It requires a little network savvyness with your router, but it helps to clear up the channels (ports) that the PS4 needs open to connect to the Internet. http://portforward.com/english/applications/port_forwarding/PS4/ It may not help with your down and up speeds, but by opening up your NAT type, you may connect to games more easily/stop seeing that annoying weak connection message. In my experience PS4's signal reception really sucks, so, my suggestion is to buy a switch and connect it via cable. It's true, the PS4 really isn't that good with connection quality - even at the best of times, the results of its speed test are about 40% lower than my actual internet speed. Unfortunately for me, the house I live in was made years before the internet was ever considered practical, and so there is only 2 ports in the whole house. None of them are near me. I DID end up fixing it up (still not as good as it used to be), it included everything up to an including putting aluminium foil behind it Are there many devices connected to this WIFI network, getting one just for your PS4 might make things better Well, I'm sure it would help, but then I'd have to dish out the money on an entirely separate router just for my PS4. As it is, there are 2.4 and 5Ghz networks. Unfortunately the PS4 can't pick up a the 5g network, otherwise I'd connect it to that. As it is, I've put as many devices as I can on the 5g network and left the PlayStation on the 2.4. It has seemed to help a little. Buying a new one only for this purpose is definitely not worth it, I meant, just try to disconnect as many devices from the network as you can when using the PS4. Ah I see. The only issue with that is that I'm one of four people living in my house, and only 2 of us use the Play Station. I'd never convince the others to disconnect for the sake of video games. But yes, that's a valid solution in most cases Well, you can try replacing PS4's antenna, although i really believe you should rethink about the cable solution. You can get a ethernet cable up to 100m long and if you don't have conduits inside your walls you can just leave it exposed and only plug it when playing. Fair call. If it were up to me, I would run a cable from the PS4 to the Ethernet cable. Except the other people would really not like loose cable linking the walls. It's about 26m to the closest port, which is where the modem is plugged in. The only other port is literally the opposite side of the house, with a home phone plugged in. I'd have to unplug the phone, or else there's no other way. But eventually we will do renovations, and I'm going to have far more ports installed around the place Ideas: Get a Wifi repeater (extends the Wifi signal to be stronger near your PS4) Get a Wifi bridge with better reception than your PS4 - it's similar to a router except reversed - connect your PS4 to the bridge by cable. Get power-line adapters - transfers ethernet signal over the power cabling in your house. Performance depends on the house. Change your network port to <IP_ADDRESS> or something like that and the one below as <IP_ADDRESS> That's Google network speed and its super fast. All downloads will be so much faster. Here's a link: http://community.us.playstation.com/t5/Consoles-Peripherals/How-to-improve-PS4-download-speeds-Works-for-most/td-p/43701250 OP already did this and it didn't work.
STACK_EXCHANGE
Thanks for the help guys. Status update. Changing to the new driver allowed weewx to download all the info for the barometer, indoor temp sensor and anemometer. unfortunately the thermo hygro sensor still does not read. Im guessing this is due to the T-H sensor being set to channel 2, whereas n the driver its as following: elif sensor_id == 1: # Outdoor temperature sensor. record['outTemp'] = temp record['outHumidity'] = humidity record['heatindex'] = heat_index So I guess if i change the line to elif sensor_id == 2: instead of 1, that would fix it? I could add another line elif sensor_id == 1 for Channel 1 just in case, though i think its not necessary since I only have one sensor. Id like to test it out, but unfortunately mid way my SD card has gone belly up :( at least some progress was seen! now i need to go get a new SD card and reinstall everything on my pi On Wednesday, 12 October 2016 19:30:49 UTC+8, mwall wrote: > On Wednesday, October 12, 2016 at 5:05:03 AM UTC-4, shriramvenu wrote: >> Thanks for this. I saw the first bug but was'nt quite sure what to do. I >> guess i just replace wmr200.py in /bin/weewx/drivers? (where /bin is from >> the root directory of my R-pi?) > if you installed using setup.py: > sudo mv /home/weewx/bin/weewx/drivers/wmr300.py > cp ~/Downloads/wmr300.py /home/weewx/bin/weewx/drivers/wmr300.py > if you install from .deb package then the path is > /usr/share/weewx/weewx/drivers instead of /home/weewx/bin/weewx/drivers > Yes I have RapidFire enabled. Before this I was uploading to Weather >> Underground using rapidfire via WeatherDisplay for months using my windows >> PC so I dont think the problem is the WMR200 hardware. Are you saying that >> weewx doesnt properly handle rapidfire for the WMR200? > wmr hardware reports partial packets, i.e., one packet might contain wind > information, another temperature and humditiy. this is different from > hardware that reports full packets, such as vantage, where every packet > contains a value from every sensor. > there is no right or wrong - they are just different architectures. > however, wu rapidfire does not like to receive data from one sensor > without receiving data from *all* sensors. > apparently weather-display does some caching to keep wu rapidfire happy. > weewx does not do any caching. as issue #31 shows, we will probably *not* > do comprehensive caching in weewx, since that leads to other problems > (especially when you work with dynamic data collection systems and a wide > range of extensible sensors). but at some point weewx will probably will > do caching just for wu rapidfire. >> Also should I keep the archive interval at the default 60s or change to >> 300s like advised here? > how often do the sensors update on a wmr200? (check the owner's manual) You received this message because you are subscribed to the Google Groups To unsubscribe from this group and stop receiving emails from it, send an email For more options, visit https://groups.google.com/d/optout.
OPCFW_CODE
How to create users in active directory using script How do I create a new user in Active Directory? How to Create a New Active Directory User Account - Open Active Directory Users and Computers MMC. - Right click the folder where you want to create the new user account, select new and then click user. If you have not created additional organizational units, you can put the new account in the Users folder. - Fill out the fields in the New Object – User window. How do I create multiple users in Active Directory? Create multiple users in Active Directory (AD) - Click Management tab. - Click the Create Bulk Users link under Create Users to invoke the Create Bulk Users wizard. - Select the domain of your choice from the domain drop-down box. - Select a previously created user template. - You have the following options to add users: How do I create ad user in PowerShell? New-ADUser: Creating Active Directory Users with PowerShell - Add an Active Directory user account using the required and additional cmdlet parameters. - Copy an existing AD user object to create a new account using the Instance parameter. - Pair the Import-Csv cmdlet with the New-ADUser cmdlet to create multiple Active Directory user objects using a comma-separated value (CSV) file. How do I create a bulk user in Active Directory using PowerShell? a. Install the PowerShell Active Directory Module - Go to Server Manager. - Click on “Manage” > click on “Add Roles and Features” - Click “Next” until you find “Features” - Go to “Remote Server Administration Tools” > Role Administration Tools > AD DS and AD LDS Tools > enable “Active Directory Module for Windows PowerShell” How do I bulk modify Active Directory user attributes? The AD Bulk User Modify tool uses a CSV file to bulk modify Active Directory user accounts. All you need is the users sAMAccountName and the LDAP attribute you want to modify. Example 1: Bulk Modify Users Office Attribute - Step 1: Setup the CSV File. - Step 2: Run AD Bulk User Modify Tool. - Step 3: Verify the changes. How do I run a PowerShell script? How can I easily execute a PowerShell script? - Browse to the location you stored the ps1-file in File Explorer and choose; File-> Open Windows PowerShell. - Type (part of) the name of the script. - Press TAB to autocomplete then name. Note: Do this even when you typed the name in full. - Press ENTER to execute the script. How do I run a script? You can run a script from a Windows shortcut. - Create a shortcut for Analytics. - Right-click the shortcut and select Properties. - In the Target field, enter the appropriate command line syntax (see above). - Click OK. - Double-click the shortcut to run the script. How do I run a script from command line? Run a batch file - From the start menu: START > RUN c:\path_to_scripts\my_script.cmd, OK. - “c:\path to scripts\my script.cmd“ - Open a new CMD prompt by choosing START > RUN cmd, OK. - From the command line, enter the name of the script and press return. - It is also possible to run batch scripts with the old (Windows 95 style) . Can PowerShell run SQL query? PowerShell features many one-line commands for working with SQL Server, one of which is Invoke-SqlCmd. PowerShell ISE is included in most versions of Windows along with the PowerShell command line. For the examples in this article, we are using PowerShell version 5.
OPCFW_CODE
My name is Dennis im a 25 year IT student from the netherlands. I realy love playing indie games a lot, specialy games alike Terraria. Im making this post on the Ubuntu forums because im looking for some help i stil cant find (this isnt the first forum for me) I usualy ask some people in my gaming-clan or my favorite overclock forum :3 But Thus far nobody has be able to help me to fix my linux related problems. I went to get UBuntu server install, and i just tried it out actualy. That part went oke, except for something binaries......? or some basic folders i forgot what is was called (im sorry i dont remember what this was exactly) BUt i *just* choose one of them at random to contiu the installation. Most of my problem related to Steamcmd , something with actualy downloading the server files onto the machine. I never managed to figure it out and i gave up wich actualy makes me sad Because i dont like to give up on these things (spent my entire weekend on it) So my reason of this post and introduction is to search for somebody who wants and could help me figure all this beginner stuff out. Alright i see u having problems understand me , i have this isseu sometimes duo my native dutch language. But i wil try my best to keep it simple in lines where i am trying to explain something. First off , id like to know where to start. I got a version of Ubuntu Server, with just a terminal. Im very new to this but id love to slowly get used to it in some fun way. This is why i want to host little Dedicated Game Server on a self made Linux Machine. The Pc i have ready to be playing the role of Host / Server =) (I have 2 HDD's into the machine to manualy switch harddrive if i make mistakes or try multiple OS setups) I think for me its best to take a good look at the first step , the OS and how exactly to install it for hosting games? Sinds i had a lot of errors and almost everything i tried typing into the linux terminal to download the Dedicated server files to run it failed -I got Ubuntu Server 12.04 on one hard drive installed and working. (but not 100% certain if installed correct) Sorry i cant be more specific but for now im just looking for a new basis for the Server = Wich OS is best? and how to install it. -Then the second step i think wil be the games i want to host, and how to install and run them. -Games i want to host (Terraria-Starbound-Edge of Space) i dont know wich one is easyiest to start with. Tell me if u stil having trouble understand my questions
OPCFW_CODE
Believe it or not, building a tiny compiler from scratch can be as fun as it is accessible. [James Smith] demonstrates by making a tiny compiler for an extremely simple programming language, and showing off a Here’s what happens with a compiler: human-written code gets compiled into low-level machine code, creating a natively-executable result for a particular processor. [James]’ compiler — created from scratch — makes native x64 Linux ELF binary executables with no dependencies, an experience [James] found both educational and enjoyable. The GitHub repository linked below has everything one needs, but [James] also wrote a book, From Source Code to Machine Code, which he offers for sale to anyone who wants to step through the nitty-gritty. The (very tiny) compiler is on GitHub as The Pretty Laughable Programming Language. It’s tiny, the only data types are integers and pointers, and all it can do is make Linux syscalls — but it’s sufficient to make a program with. Here’s what the code for “Hello world!” looks like before being fed into the compiler: ; the write() syscall: ; ssize_t write(int fd, const void *buf, size_t count); (syscall 1 1 "Hello world!\n" 13) Working at such a low level can be rewarding, but back in the day the first computers actually relied on humans to be compilers. Operators would work with pencil and paper to convert programs into machine code, and you can get a taste of that with a project that re-creates what it was like to program a computer using just a few buttons as inputs. 21 thoughts on “Here’s How To Build A Tiny Compiler From Scratch” My first system was a Motorola D2 kit. It came as a couple of PCBs with lots of components and had to be assembled. It was programmed directly in machine code via a hex keypad and display. I actually wrote my programs in assembly language and hand-assembled them. I have a D2 kit in the attic. It has seen much use. You could add a n RS-232 interface and install the MIKBUG PROM and program the kit from a serial terminal I only get the same error with all examples I tried… bridgescript.com is better :) May I offer “the dragon book”, Aho and Ullman’s “Principles of Compiler Design”. Once you’ve looked at that, Linux has (inherited from Unix) all the tools you need to build your own compiler: lex, yacc, etc. this project clearly comes from the lisp and forth school of compiler design. lex and yacc and a lot of the ‘classic’ stuff in those books is completely irrelevant and unnecessary for this sort of compiler. there’s a lot of neat stuff in those books but almost the entire content of both of those books is almost completely irrelevant to ‘simple’ compilers. there’s some classic stuff you can’t hardly get away from if you want to make a ‘real’ compiler (like register allocation), but i’m gonna say even if you’re going in that direction, i wouldn’t recommend starting with a book. write the simple compiler and tackle the classic questions as they come to you. that’s just my opinion. Unnecessary, maybe, but writing a compiler for a context free grammar with (f)lex and yacc/bison finite state machine is almost trivial. Of course there are more modern tools available for the job as well, e.g. antlr. Or just ask ChatGPT to write you a simple compiler. Ugh. A masochist! Wouldn’t you rather play along with a maestro, than try to rediscover all the rules of symphonic composition on your own? I disagree with your approach, but to each their own. I think my suggestion (and that of the other poster) get you more value, faster. In high school I could code in assembly for both the c64 and Apple iie as well as read Hex. At one stage I remember looking at a project to compile BASIC, but it was easier to just use Machine code. And this was a great way to go at the time, especially since the C64 and Apple iie use the same microprocessor. But today, if you want to do a microcontroller project, it’s nice to not have to care about what the instruction set is, especially if you’re having to upgrade a project from 8-bit to 32-bit, or just adapt to a new architecture for supply chain reasons. I’ve implemented a few language parsers with lex and yacc, and I’ve implemented a few with no tools and recursive descent. I found it way easier to develop and maintain the recursive descent parsers. I was wondering about that. Seems like having to learn the “language” needed to write the specification for a code generator might be more challenging than writing the parser, and just raise the overall complexity of the design process. What I’d suggest is write a translator to convert the language to C, then compile that. It’s a much quicker path to production. Later, if there’s a need, you can write a direct compiler to assembler for the language. This is what happened when Pascal went out of favor – someone came up with a Pascal-to-C translator, and that was the last we heard of Pascal. But I think a better way to go is the other direction. C was a failed concept that just won’t go away, and yet I use it almost exclusively because it’s the best existing solution that is available for every architecture. But this has been a roadblock many times, like in the early days of microcontrollers, when C compilers for them were expensive, and because of the complexity of C, difficult to write. I’d far rather have a simpler syntax that does little more than remove the need to know the instruction set of a given computer, rather than trying to read like English, and then make a translator that converts C to that lower-level language. So a project like this has a great deal of appeal for me. Pascal and C were so similar you could almost use the others compiler with a set of macros. C++ and Object Pascal, the same. Just enough screwy differences to make things complicated. Kind of like Java and C#. Right down to one being clearly better. C a failed concept? No. Just bonkers wrong. Is sharp tool, don’t cut self. Do not attempt to perform bris with C. I sometimes like diving into C in Four Functions when I am in the mood for seeing how a simple compiler fits together. https://github.com/rswier/c4 How about N.Wirth’s Compiler book: https://github.com/tpn/pdfs/blob/master/Compiler%20Construction%20-%20Niklaus%20Wirth%20-%201996%20(CBEAll).pdf Has anyone actually read the book that is sorta-being-advertised here? Coming from the Lisp/Forth world myself, I’m curious, but I want to make sure this is a solidly-written book before I plunk down $29+. One reviewer on Amazon quipped that the author’s DB book was more “a bunch of notes” than a “book.” (But as we know, Amazon reviews are never biased :-). I do want to believe. But this book is so new and self-published, that a second opinion would be very much welcome. Example given in the linked article: (def foo (n) (if (le n 0) (else (+ n (call foo (- n 1)))))) (def foo (n) (if (le n 0) (else (+ n (call foo (- n 1)))) I shouldn’t have to rely on the parenthesis-matching features of a coding-focused text editor to read your code. #facepalm# Stupid WordPress took out all of my indentation. Those reading this far may be interested in this website/course/book. Excellent education in this general area, in my non-professional opinion. https://www.nand2tetris.org/ Please be kind and respectful to help make the comments section excellent. (Comment Policy)
OPCFW_CODE
Data Analytics Course For Beginners – Career Path Last updated on 12th Jul 2020, Blog, General Data Science and Data Analytics are two most trending terminologies of today’s time. Presently, data is more than oil to the industries. Data is collected into raw form and processed according to the requirement of a company and then this data is utilized for the decision making purpose. This process helps the businesses to grow & expand their operations in the market. But, the main question arises – What is the process called? Data Analytics is the answer here. And, Data Analyst and Data Scientist are the ones who perform this process. What is Data Analytics? Data or information is in raw format. The increase in size of the data has lead to a rise in need for carrying out inspection, data cleaning, transformation as well as data modeling to gain insights from the data in order to derive conclusions for better decision making process. This process is known as data analysis. Data Mining is a popular type of data analysis technique to carry out data modeling as well as knowledge discovery that is geared towards predictive purposes. Business Intelligence operations provide various data analysis capabilities that rely on data aggregation as well as focus on the domain expertise of businesses. In Statistical applications, business analytics can be divided into Exploratory Data Analysis (EDA) and Confirmatory Data Analysis (CDA). EDA focuses on discovering new features in the data and CDA focuses on confirming or falsifying existing hypotheses. Predictive Analytics does forecasting or classification by focusing on statistical or structural models while in text analytics, statistical, linguistic and structural techniques are applied to extract and classify information from textual sources, a species of unstructured data. All these are varieties of data analysis. The revolutionising data wave has brought improvements to the overall functionalities in many different ways. There are various emerging requirements for applying advanced analytical techniques to the Big Data spectrum. Now experts can make more accurate and profitable decisions. In the next section of the Data Analytics tutorial, we are going to see the difference between Data Analysis and Data Reporting. - 3 hours of online self-paced learning - Lifetime access to self-paced learning - Industry-recognized course completion certificate - Real-world case studies and examples Introduction to Data Analytics Course Curriculum This Data Analytics for beginners course is ideal for anyone who wishes to learn the fundamentals of data analytics and pursue a career in this growing field. The course also caters to CxO-level and middle management professionals who want to improve their ability to derive business value and ROI from analytics. This Introduction to Data Analytics for Beginners course has been designed for all levels, regardless of prior knowledge of analytics, statistics, or coding. Familiarity with mathematics is helpful for this course. Data Analysis Process Now in the Data Analytics tutorial, we are going to see how data is analyzed step by step. 1. Business Understanding Whenever any requirement occurs, firstly we need to determine the business objective, assess the situation, determine data mining goals and then produce the project plan as per the requirement. Business objectives are defined in this phase. Learn Data Analytics Certification Training with Industry Standard Concepts - Instructor-led Sessions - Real-life Case Studies 2. Data Exploration For the further process, we need to gather initial data, describe and explore data and lastly verify data quality to ensure it contains the data we require. Data collected from the various sources is described in terms of its application and the need for the project in this phase. This is also known as data exploration. This is necessary to verify the quality of data collected. 3. Data Preparation From the data collected in the last step, we need to select data as per the need, clean it, construct it to get useful information and then integrate it all. Finally, we need to format the data to get the appropriate data. Data is selected, cleaned, and integrated into the format finalized for the analysis in this phase. 4. Data Modeling After gathering the data, we perform data modeling on it. For this, we need to select a modeling technique, generate test design, build a model and assess the model built. The data model is build to analyze relationships between various selected objects in the data. Test cases are built for assessing the model and model is tested and implemented on the data in this phase. 5. Data Evaluation Here, we evaluate the results from the last step, review the scope of error, and determine the next steps to perform. We evaluate the results of the test cases and review the scope of errors in this phase. We need to plan the deployment, monitoring and maintenance and produce a final report and review the project. In this phase, we deploy the results of the analysis. This is also known as reviewing the project. The complete process is known as the business analytics process. Skills required to become a Data Analyst Data Analytics Tutorial is incomplete without knowing the necessary skills required for the job of a data analyst. In today’s world, there is an increasing demand for analytical professionals. All the data collected and the models created are of no use if the organization lacks skilled data analysts. A data analyst requires both skills and knowledge for getting good data analytics jobs. To be a successful analyst, a professional requires expertise on the various data analytical tools like R & SAS. He should be able to use these business analytics tools properly and gather the required details. He should also be able to take decisions which are both statistically significant and important to the business. Technical & Business Skills for Data Analytics In this part of the data analytics tutorial, we will discuss the required technical and business skills. Technical skills for data analytics: - Packages and Statistical methods - BI Platform and Data Warehousing - Base design of data - Data Visualization and munging - Reporting methods - Knowledge of Hadoop and MapReduce - Data Mining Business Skills Data analytics: - Effective communication skills - Creative thinking - Industry knowledge - Analytic problem solving
OPCFW_CODE
Welcome! Today we're jumping in to have a peek at a fantastic feature of Showcase that you might not be aware even exists! We're going to look at the more 'interactive' side of Showcase - where you might be entering some kind of information right into the app. Broadly speaking, we're talking about 'forms'. And yes, we've mentioned filling in PDF forms before - still a valid option! - but this is for collecting a simpler set of info to store right within the Showcase website. This is the first in a 3-part blog series detailing the different types of forms you can have inside your showcase, so you can see what one fits your needs perfectly. Today we'll talk about our "Simple Pop Up Forms" - the next posts will be about HTML5 Forms, and then HTML5 Calculators. We can do so many exciting things with these 3 different kinds of forms - data capture, quoting calculators, quizzes and tests (great for tradeshows and staff training) and we've also done forms for incident reporting and uniform ordering. The sheer amount of options available is mind boggling! What our forms can do largely depends on what your needs are - here's our first excellent option. Simple as pie. Our Simple Pop Up Forms are the essential 'basic' forms for data entry. They're quite simple, exactly as they sound. You can customise them to display whatever text you like, depending on your needs. This form option is really great for typical data collection, like personal details and email addresses. Like this example here, you can add up to around 6-7 simple text fields (anymore than that and it gets a bit difficult). If you have the skills to write basic JSON yourself or in-house - or even if you're prepared to spend some time figuring it out - you could give this one a go on your own! The documentation is here. Follow the Leader Lead generation is the game and Simple forms are the name. Or something like that. The point is, using a simple form for lead generation could be crucial to adding more business to your books, or just making that critical connection to your customer base. Showcase runs offline, as you know, but so do forms! If you're gathering data from customers using a simple form (for contact information and the like), it will queue all the data collected, then as soon as you're back in a reliable WiFi connection, it will send it all through. Boom! No fuss. Once you have all your data collected, you can download a CSV file directly from Showcase and import it into your own CRM (like MailChimp) - the simple forms are really living up to their name, huh? Nuts and Bolts So, what can and can't I do with these forms? As I mentioned earlier, they're text box forms essentially. You can totally edit all the fields to display whatever text you like, both the title above the box and the "placeholder" text in the inside. This wee example of a 2 text box form has a larger explanation in the placeholder text, so you can be as in-depth or as short as you need. You can set fields to be either mandatory, or optional. That way you can ensure that you get all the information you actually need, and none of your users can accidentally submit an empty or half finished form. There's also the option (if you're feeling particularly organised) to have an entirely separate showcase just for your forms! You can't add in fancy additions like dropdowns, checkboxes, or your own branding - these are available in our more detailed forms (HTML forms). Which we will chat about in Part 2! Cross the T's and dot the I's So, you've collected all this critical info, and you're dying to follow up on all of your leads. Where does all the data from your forms go? How do you even do anything with it? Is it all just piling up in a stack of paper on my desk?! You'll find it under the Reporting tab. There's a section just for Forms, called..."Forms". Users with Admin capabilities can access all the form data submitted by all users in the workshop; Viewer users can see any form submitted under their own login. Once you're in that Reporting tab, you can filter and search for particular things - i.e by showcase, who submitted, personal details, text, form type, name. You can also use the tabs at the top to filter by particular showcase, alphabetical order etc. You can be super niche and look for a small group of people, or be broad as heck and just grab all of the data! Once you have the details you need, you can export to a CSV file using the button top right. Don't worry, you can download by the filter settings we just talked about, not just all details. Then you can take your handy CSV file and upload into whatever CRM you like! Everything has a purpose We've talked through the capabilities and the functionality of the form, and how to use what you get out of it - but what kind of things can you use these forms for? Here's an example where a simple form was used for a staff meeting. Forms are often used internally and they're especially beneficial in cases like this, when you have staff dispersed over the country. This particular form is used for procedural sign offs - how streamlined! Pop up forms can quite literally be used for anything your heart desires! If you're wanting to implement a form in your showcase, you can follow the documentation mentioned earlier and give it a crack yourself if you're feeling adventurous. We definitely encourage people to dive in and have a go! It might be a good idea to set up a demo showcase to have a play in, so there's no stress about accidentally fluffing around with important things. However, most people will come to us for help, and we're more than happy to discuss your needs. Flick us an email at email@example.com, and let's chat!
OPCFW_CODE
namespace WinKeyHandler { using System; using System.Diagnostics; using System.Runtime.InteropServices; internal class WinKeyPressedEventArgs : EventArgs { public VK Key { get; private set; } public WinKeyPressedEventArgs(VK key) { Key = key; } } internal static class Keyboard { const int HC_ACTION = 0; static IntPtr hHook = IntPtr.Zero; static HookProc hookProc; public static event EventHandler WinKeyPressed = delegate { }; #region P/Invoke declarations delegate IntPtr HookProc(int nCode, IntPtr wParam, [In] IntPtr lParam); [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)] static extern IntPtr SetWindowsHookEx(WH idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId); [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)] [return: MarshalAs(UnmanagedType.Bool)] static extern bool UnhookWindowsHookEx(IntPtr hhk); [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)] static extern IntPtr GetModuleHandle(string lpModuleName); [DllImport("user32.dll")] static extern short GetAsyncKeyState(VK vKey); [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)] static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam); #endregion P/Invoke declarations static Keyboard() { // Keep a reference to prevent // delegate from garbage collection hookProc = OnKeyPressed; } public static void Hook() { using (var process = Process.GetCurrentProcess()) { using (var module = process.MainModule) { IntPtr hMod = GetModuleHandle(module.ModuleName); hHook = SetWindowsHookEx(WH.KEYBOARD_LL, hookProc, hMod, 0); } } } public static void Unhook() { if (hHook != IntPtr.Zero) { UnhookWindowsHookEx(hHook); hHook = IntPtr.Zero; } } private static IntPtr OnKeyPressed(int nCode, IntPtr wParam, IntPtr lParam) { var keyInfo = (KBDLLHOOKSTRUCT)Marshal.PtrToStructure(lParam, typeof(KBDLLHOOKSTRUCT)); if (nCode == HC_ACTION && (WM)wParam == WM.KEYDOWN) { if (IsWinKey()) { WinKeyPressed(null, new WinKeyPressedEventArgs(keyInfo.VkCode)); } // Ignore the key press //return (IntPtr)1; } return CallNextHookEx(hHook, nCode, wParam, lParam); } private static bool IsWinKey() { return GetAsyncKeyState(VK.LWIN) < 0 || GetAsyncKeyState(VK.RWIN) < 0; } } }
STACK_EDU
Acessing system information from frameworks in iOS Developer Library As the title says, I am trying to create an app(personal development) and try to see what kind of system information or user data can be retrieved from phone. For now I am using the stimulator provided. So far what I am able to do is to retrieve information using the Address Book and UIKit Framework such as contact details and System Name/Version etc. Is there any system information/user data that I missed out because I have yet to know of any more frameworks that allows retrieving of any information from the phone/Stimulator. I am not able to test EventKit Framework(the only other framework that I know) due to the fact that I am deploying the app in the stimulator which does not have the required apps. (Will be trying on jailbroken iphone in the later stages). Also, I have yet to find any information of accessing the .sqlitedb /.db/.plist files programmatically instead of using any software tools as I would like to access the files such as messages, phone history through my app that I created. If this is possible, I would also like to know if accessing these .sqlitedb /.db files/.plist is only applicable if I deploy my app in the jailbroken phone /Applications folder which does not have sandbox or is it also applicable in the stimulator itself? Possible duplicate of http://stackoverflow.com/questions/9822031/call-history-sms-history-email-history-in-ios This will be definitely helpful to you to get access the call/sms/email details by reading the .sqlite database. Here is a tutorial. Hi I have came across this before but reading the comments given many said that this is not workable for iOS5 and above? Maybe you can clarify this for me? Is it not workable for unjailbroken phone that has firmware 5 and above but still workable for any firmware of jailbroken phone? You are right. From ios 5, apple didn't gave access to the database. Just to be sure, apple does not allow access to any.db files which are of iOS5 and above, whether jailbroken or not , or only unjailbroken devices that is iOS5 and above? Yes, apple is not allowing for iOS5 and above. http://stackoverflow.com/questions/19654495/what-is-the-path-for-call-history-db-in-ios-6-is-it-same-as-that-of-ios5 As u can see, this person is able to do access .db files in iOS5(not sure about the credibility) and the answer provided mentioned that you can access the .db files if your phone is jailbroken(if i read it correctly). So I am very confused. You can give a try with iOS5, if worked then charm for you and also share your experience here, will be helpful for us. BTW, he mentioned "iOS < 5.0" means less than 5 not working on 5. i was deciphering what he mentioned "(and probably all jailbreacked versions)". i know providing links are discouraged here however these apple and wiki links will be helpful to you. thanks Hi, Thanks for your links. Read through it and I found out about IOKit for hardware information and CoreTelephony for carrier information. Not sure if I missed out any others. Would be great if you could fill me up any that i did not mention. :)
STACK_EXCHANGE
WebRTC audio support We already have WebRTC DataChannel support. This PR extends that to audio channel support, via a bump to WebRTC M71, plus an integration with WebAudio for device input/output on all platforms. This needs more testing, but should support WebRTC audio channels on all platforms. Got a crash on Magic Leap when requesting video: #0 0x0000aaaad2aaba28 in webrtc__ZN7cricket18WebRtcVideoChannelC2EPN6webrtc4CallERKNS_11MediaConfigERKNS_12VideoOptionsEPNS1_19VideoEncoderFactoryEPNS1_19VideoDecoderFactoryE () #1 0x0000aaaad2aab054 in webrtc__ZN7cricket17WebRtcVideoEngine13CreateChannelEPN6webrtc4CallERKNS_11MediaConfigERKNS_12VideoOptionsE () #2 0x0000aaaad2bade44 in webrtc__ZN7cricket14ChannelManager18CreateVideoChannelEPN6webrtc4CallERKNS_11MediaConfigEPNS1_20RtpTransportInternalEPN3rtc6ThreadERKNSt6__ndk112basic_stringIcNSC_11char_traitsIcEENSC_9allocatorIcEEEEbRKNS9_13CryptoOptionsERKNS_12VideoOptionsE () #3 0x0000aaaad2bae934 in webrtc__ZN3rtc21FunctorMessageHandlerIPN7cricket12VideoChannelEZNS1_14ChannelManager18CreateVideoChannelEPN6webrtc4CallERKNS1_11MediaConfigEPNS5_20RtpTransportInternalEPNS_6ThreadERKNSt6__ndk112basic_stringIcNSF_11char_traitsIcEENSF_9allocatorIcEEEEbRKNS_13CryptoOptionsERKNS1_12VideoOptionsEE3$_8E9OnMessageEPNS_7MessageE () #4 0x0000aaaad29b727c in webrtc__ZN3rtc12MessageQueue8DispatchEPNS_7MessageE () #5 0x0000aaaad29bec54 in webrtc__ZN3rtc6Thread22ReceiveSendsFromThreadEPKS0_ () #6 0x0000aaaad29b6a40 in webrtc__ZN3rtc12MessageQueue3GetEPNS_7MessageEib () #7 0x0000aaaad29be988 in webrtc__ZN3rtc6Thread15ProcessMessagesEi () #8 0x0000aaaad29be890 in webrtc__ZN3rtc6Thread6PreRunEPv () #9 0x000040000783e688 in __pthread_start(void*) () from C:\Users\kbiedrzycki\Documents\GitHub\exokit5\build\magicleap\program-device\release_lumin_clang-3.8_aarch64\libc.so #10 0x00004000077f1600 in __start_thread () from C:\Users\kbiedrzycki\Documents\GitHub\exokit5\build\magicleap\program-device\release_lumin_clang-3.8_aarch64\libc.so #11 0x0000000000000000 in ?? () That's not the purpose of this PR to enable it, but we should make it not crash in that case. This should be working reasonably well on Magic Leap now. The other platforms still need a binary rebuild before merging. This PR additionally removes the previous WebAudio (LabSound) ownership of the MediaStream implementation in favor of the one coming from WebRTC. I don't think the WebAudio version was anything more than an shim object, so we can use the WebRTC MediaStream as the new opaque shim. This PR will need a major rebase, though we do want to get it in soon.
GITHUB_ARCHIVE
Re: [PATCH RFC 00/10] RDMA/FS DAX truncate proposal From: Dave Chinner Date: Thu Jun 13 2019 - 13:02:25 EST On Wed, Jun 12, 2019 at 04:30:24PM -0700, Ira Weiny wrote: > On Wed, Jun 12, 2019 at 05:37:53AM -0700, Matthew Wilcox wrote: > > On Sat, Jun 08, 2019 at 10:10:36AM +1000, Dave Chinner wrote: > > > On Fri, Jun 07, 2019 at 11:25:35AM -0700, Ira Weiny wrote: > > > > Are you suggesting that we have something like this from user space? > > > > > > > > fcntl(fd, F_SETLEASE, F_LAYOUT | F_UNBREAKABLE); > > > > > > Rather than "unbreakable", perhaps a clearer description of the > > > policy it entails is "exclusive"? > > > > > > i.e. what we are talking about here is an exclusive lease that > > > prevents other processes from changing the layout. i.e. the > > > mechanism used to guarantee a lease is exclusive is that the layout > > > becomes "unbreakable" at the filesystem level, but the policy we are > > > actually presenting to uses is "exclusive access"... > > That's rather different from the normal meaning of 'exclusive' in the > > context of locks, which is "only one user can have access to this at > > a time". As I understand it, this is rather more like a 'shared' or > > 'read' lock. The filesystem would be the one which wants an exclusive > > lock, so it can modify the mapping of logical to physical blocks. > > The complication being that by default the filesystem has an exclusive > > lock on the mapping, and what we're trying to add is the ability for > > readers to ask the filesystem to give up its exclusive lock. > This is an interesting view... > And after some more thought, exclusive does not seem like a good name for this > because technically F_WRLCK _is_ an exclusive lease... > In addition, the user does not need to take the "exclusive" write lease to be > notified of (broken by) an unexpected truncate. A "read" lease is broken by > truncate. (And "write" leases really don't do anything different WRT the > interaction of the FS and the user app. Write leases control "exclusive" > access between other file descriptors.) I've been assuming that there is only one type of layout lease - there is no use case I've heard of for read/write layout leases, and like you say there is zero difference in behaviour at the filesystem level - they all have to be broken to allow a non-lease truncate to IMO, taking a "read lease" to be able to modify and write to the underlying mapping of a file makes absolutely no sense at all. IOWs, we're talking exaclty about a revokable layout lease vs an exclusive layout lease here, and so read/write really doesn't match the policy or semantics we are trying to provide. > Another thing to consider is that this patch set _allows_ a truncate/hole punch > to proceed _if_ the pages being affected are not actually pinned. So the > unbreakable/exclusive nature of the lease is not absolute. If you're talking about the process that owns the layout lease running the truncate, then that is fine. However, if you are talking about a process that does not own the layout lease being allowed to truncate a file without first breaking the layout lease, then that is fundamentally broken. i.e. If you don't own a layout lease, the layout leases must be broken before the truncate can proceed. If it's an exclusive lease, then you cannot break the lease and the truncate *must fail before it is started*. i.e. the layout lease state must be correctly resolved before we start an operation that may modify a file layout. Determining if we can actually do the truncate based on page state occurs /after/ the lease says the truncate can proceed....
OPCFW_CODE
The diagram above illustrated the steps in handling site fault. When webpage fault happens for the duration of plan execution, the kernel will very first Find the lacking website page on the backing keep (disk). A web page fault happens every time a system make an effort to use a page that's not within the memory, resulting from demand from customers paging will only paged the web pages in the memory when it is needed. Our very capable United states of america assignment writers have confidence in supplying most effective assignment help that help college students to attain academic excellence. NVRTC is a runtime compilation library for CUDA C++. It accepts CUDA C++ resource code in character string sort and results in handles that could be used to get the PTX. The PTX string produced by NVRTC may be loaded by cuModuleLoadData and cuModuleLoadDataEx, and joined with other modules by cuLinkAddData in the CUDA Driver API. procedure Regulate, Linux forks Execution of code by spawned procedures Programming with spawned procedures Linux file system Data files and databases systems: Database structure assignment of the operating system is intriguing, and it retains several learners puzzled. Note that every one the claims built about KETO OS on the website be certain the make use of the words may perhaps, may, and suggests when speaking about the investigation Prüvit says exhibits how exogenous ketones work as neuroprotectors that help cognition, enhance Power, target and concentration, control blood sugar, and—with the purposes of the review concerned Specifically with weight loss—“exogenous ketones alter human body composition.” You will be suitable for referral reward When your Good friend put the order using the very same referral code working with no other reductions soon after thriving payment created by him. AllAssignmentHelp is without doubt one of the Leading assignment help and essay creating services service provider We're based out in the US and help college students across the globe. Our consumer assist and certified tutors differentiate us from your Other folks. The Negative effects of exogenous ketones from KETO OS items are numerous. The corporation admits that “often” folks have “digestive upset,” and that use in their product or service “could be a little bit dehydrating,” which ends up in mineral decline, which equals exhaustion, headache, diarrhea and constipation. Complexity of windows databases style and design is large. It is out from the capacity of the rookie to grasp the structure. Consequently, our knowledgeable Personal computer science tutors can support you with such ideas. You'll find three different types of algorithm can be used to hundreds This system anywhere the memory Place is unused, which happens to be to start with suit, finest healthy and worst healthy. The dimensions of each and every course of action is different, for that reason check these guys out when the procedures is been swapped out and in, there will be a numerous holes while in the memory since UNIX is employing variable partitioning. An operating system (OS) may be described like a program program that provide a channel to speak concerning Components and Software. The hypothesis ... See total definition software package audit A software program audit is an inner or exterior evaluate of the software package software to check its top quality, progress or adherence to plans, ... See comprehensive definition
OPCFW_CODE
""" Access to files containing sequence data in 'twobit' format. """ from collections.abc import Mapping from struct import ( calcsize, unpack, ) from typing import ( BinaryIO, Dict, List, Tuple, ) from . import _twobit TWOBIT_MAGIC_NUMBER = 0x1A412743 TWOBIT_MAGIC_NUMBER_SWAP = 0x4327411A TWOBIT_MAGIC_SIZE = 4 TWOBIT_VERSION = 0 class TwoBitSequence: masked_block_sizes: List masked_block_starts: List n_block_sizes: List n_block_starts: List def __init__(self, tbf, header_offset=None): self.tbf = tbf self.header_offset = header_offset self.sequence_offset = None self.size = None self.loaded = False def __getitem__(self, slice): start, stop, stride = slice.indices(self.size) assert stride == 1, "Striding in slices not supported" if stop - start < 1: return "" return _twobit.read(self.tbf.file, self, start, stop, self.tbf.do_mask) def __len__(self): return self.size def get(self, start, end): # Trim start / stop if start < 0: start = 0 if end > self.size: end = self.size out_size = end - start if out_size < 1: raise Exception(f"end before start ({start},{end})") # Find position of packed portion dna = _twobit.read(self.tbf.file, self, start, end, self.tbf.do_mask) # Return return dna class TwoBitFile(Mapping): def __init__(self, file: BinaryIO, do_mask: bool = True): self.do_mask = do_mask # Read magic and determine byte order self.byte_order = ">" strng = file.read(TWOBIT_MAGIC_SIZE) magic = unpack(">L", strng)[0] if magic != TWOBIT_MAGIC_NUMBER: if magic == TWOBIT_MAGIC_NUMBER_SWAP: self.byte_order = "<" else: raise Exception("Not a NIB file") self.magic = magic self.file = file # Read version self.version = self.read("L") if self.version != TWOBIT_VERSION: raise Exception("File is version '%d' but I only know about '%d'" % (self.version, TWOBIT_VERSION)) # Number of sequences in file self.seq_count = self.read("L") # Header contains some reserved space self.reserved = self.read("L") # Read index of sequence names to offsets index: Dict[str, TwoBitSequence] = dict() for _ in range(self.seq_count): name = self.read_p_string() offset = self.read("L") index[name] = TwoBitSequence(self, offset) self.index = index def __getitem__(self, name: str) -> TwoBitSequence: seq = self.index[name] if not seq.loaded: self.load_sequence(name) return seq def __iter__(self): return iter(self.index.keys()) def __len__(self) -> int: return len(self.index) def load_sequence(self, name: str) -> None: seq = self.index[name] # Seek to start of sequence block self.file.seek(seq.header_offset) # Size of sequence seq.size = self.read("L") # Read N and masked block regions seq.n_block_starts, seq.n_block_sizes = self.read_block_coords() seq.masked_block_starts, seq.masked_block_sizes = self.read_block_coords() # Reserved self.read("L") # Save start of actualt sequence seq.sequence_offset = self.file.tell() # Mark as loaded seq.loaded = True def read_block_coords(self) -> Tuple[list, list]: block_count = self.read("L") if block_count == 0: return [], [] starts = self.read(str(block_count) + "L", untuple=False) sizes = self.read(str(block_count) + "L", untuple=False) return list(starts), list(sizes) def read(self, pattern: str, untuple: bool = True): rval = unpack(self.byte_order + pattern, self.file.read(calcsize(self.byte_order + pattern))) if untuple and len(rval) == 1: return rval[0] return rval def read_p_string(self) -> str: """ Read a length-prefixed string """ length = self.read("B") return self.file.read(length).decode()
STACK_EDU
Hey Team! In this Google CS First Sports activity, you will build an All-Star Passing Drill while you learn about the computer science concept called sensing. Sensing in computer science is when the computer takes in information, then uses it to make decisions. This is just like what you do in real life! People find out about the world by collecting information through their senses, then they react and make decisions based on that information. When you feel hot stove, you pull your hand away. If you hear the music of an ice cream truck, you run towards it. I know I would. Computers and robots also sense things in the world around them, and react to what they detect. Each year, students and researchers gather at an event called RoboCup to play soccer - with robots! To play soccer, the robots have to sense the locations of their teammates, the ball, and the goal, and then react accordingly. Check out this video that shows a member of one of the RoboCup teams, the Spelbots, demonstrating how their robot finds and moves towards the ball. So the robot is tracking the ball and walking towards it. As you can see, its head moves when the ball moves to be able to track it. And it moves backwards so that it can be in its range of vision, and then it moves forwards when it gets to a certain point to know to walk towards the ball. And this is very helpful in our robocup competition because as we said, the goal of the competition is to play soccer, and essentially, we would like to be the world champions by 2050, so that's basically the research portion of us studying and trying to see what we could do to better enhance the robots to play soccer as best as possible. In 2009, the Spelbots tied for first place in the RoboCup humanoid soccer championship! What researchers learn in RoboCup helps them to build other machines that better sense the world around them. The vehicle should be able to navigate the track autonomously. So a robotic driver at work, and then it comes to a stop. The same computer science concepts practiced in robocup are also used in the self-driving car. Robocup robots need to sense the ball, goalposts, and react to the other players. The self-driving car senses and reacts to the road, stoplights, and pedestrians. In the future, self-driving cars may help blind people get around and reduce the risk of automobile fatalities. The “sensing” ability computer scientists like you will give them will help improve and save lives. You lose your timing in life. Everything takes you much longer. There are some places that you cannot go, there are some things that you really cannot do. Where this would change my life is to give me the independence and the flexibility to go the places I both want to go and need to go when I need to do those things. In this activity, you'll program a project to sense when buttons on the keyboard are pressed, when sprites touch each other, and when the sprites touch the edge of the screen. The starter project gives you a passer, a receiver, and a ball or puck. Choose one of these four starter projects: soccer, basketball, football, or hockey. Click on the starter project link for the sport of your choice. Click remix, and sign in to Scratch. - Open the starter project. - Click remix. - Sign in to Scratch. - "Silhouette hand" by SimonWaldherr (https://commons.wikimedia.org/wiki/File:Silhouette_hand.svg) -- Licensed by Creative Commons Attribution 3.0 Unported (https://creativecommons.org/licenses/by/3.0/legalcode) - "SpelBots: A Program Overview" by Spelman College (https://www.youtube.com/watch?v=9dEwxEV5LP0) -- Licensed by Creative Commons Attribution 3.0 Unported (https://creativecommons.org/licenses/by/3.0/legalcode) -- Video trimmed to needed length - "Snobots RoboCup Qualification 2014" by Chris I-B (https://www.youtube.com/watch?v=nv2zCJQlw7E) -- Licensed by Creative Commons Attribution 3.0 Unported (https://creativecommons.org/licenses/by/3.0/legalcode) -- Video trimmed to needed length, audio removed - "Walking Robot LAURON V - Search and Rescue" by FZIchannel (https://www.youtube.com/watch?v=BlHgdi_k21g) -- Licensed by Creative Commons Attribution 3.0 Unported (https://creativecommons.org/licenses/by/3.0/legalcode) -- Video trimmed to needed length, audio removed
OPCFW_CODE
GUI Style Guidelines m (Category changes: -Information, -Openmoko, +Application developer) |Line 77:||Line 77:| Latest revision as of 08:00, 19 July 2009 Openmoko is a platform targeted at small screen devices. This means that many of the usual desktop paradigms of windows and menus do not apply very well due to the limited space. Different form factors mean that displays on these devices are often at different orientations and aspect ratios. The Openmoko platform tries to address some of these differences by providing a framework in which application authors do not need to be too concerned about the final layout of their application. Top Level Overview Openmoko applications are designed as a number of "pages", which may have a number of relationships between them. Each page contains one task, such as selecting a contact or viewing a calendar. Layout Abstraction Pages that are not affected by changes in other pages are known as Primary pages. These work independently of any other pages. Pages which are affected by changes in a primary page are known as secondary pages. For example, in a contacts application, the primary page would be the list of contacts. Secondary pages would be the pages that display information about the selected contact. Every page has a label, icon and content associated with it. This is used to identify and display the page when necessary. The relationships between pages are described in a model created when the application is started. The application then instantiates a further object which acts as the view and controller to the model. This deals with creating the initial layout of the application. Neo1973 Layout On this form factor, the layout is portrait and constrained by a very small screen size. To accommodate this, all pages are full screen and the target area for buttons must be as large as possible. Borders and spacing between widgets is kept to a minimum to ensure best use of available screen estate. - 1) Toolbar -- Additional actions related to the current page. - 2) Filter/Search -- Filtering options for the current page. - 3) Pages Navigation -- Method to switch between pages. - 4) Title -- The window title is not part of the application itself. It is embedded into the main panel and is automatically set to the current applications name. It also serves as a quick way to navigate between open applications. Switching between pages is achieved by a series of tabs laid horizontally across the bottom of the screen. Each tab contains an icon depicting the purpose of the page it is attached to. Toolbars appear at the top of the screen, with tool buttons expanded to fill the space available. This ensures maximum target hit area. There should be no more than four items in a toolbar. The filter/search bar is an optional component, composed of three widgets. A toggle button switches between filter (combo box) and search (entry box). Typing in the search box should re-filter the data after each keypress. MokoSearchBar is a convenience widget available that implements the above logic. It is part of libmokoui2. Input Considerations For devices which require on screen keyboards, the keyboard will automatically appear whenever a widget that requires key input is focused. Touch Screen The touch screen should be used for single click (tap) and drag options only. A tap and hold will activate button three on the mouse ("right" click). The "double click" action is strongly discouraged. General Application Guidelines Data Persistence All applications that manipulate data should aim to follow the "instant apply" model so that there is no need for the user to explicitly save any data entered. State Persistence Applications should save their state if possible between sessions. This might include current view details or "unsaved" data. Using GTK+ and libmokoui GTK+ is a C library that uses GObject for pseudo object orientation. This allows it to be very portable and flexible. General Guidelines Openmoko code mostly uses the c99 standard for C. Spacing,widgets sizes and fonts must not be hard coded into Openmoko applications. Openmoko is a framework for small screen devices, which may include anything as small as QVGA (320x240) to 800x600. Therefore, for applications to work on these different resolutions, programs must not hard code anything to do with the specific appearance of widgets. Programming Guidelines Most on screen elements such as buttons and entry boxes are sub classed from the GtkWidget base class. The normal practice is to cast up from this class. For this reason, all the creator functions return GtkWidget rather than the class of which they are creating. GTK+ has many macros that check the type and then cast for you, for example, GTK_TREE_VIEW(foo) will check the pointer "foo" is a GtkTreeView and then cast the pointer to a GtkTreeView pointer. English • العربية • Български • Česky • Dansk • Deutsch • Esperanto • Eesti • Español • فارسی • Suomi • Français • עברית • Magyar • Italiano • 한국어 • Nederlands • Norsk (bokmål) • Polski • Português • Română • Русский • Svenska • Slovenčina • Українська • 中文(中国大陆) • 中文(台灣) • Euskara • Català
OPCFW_CODE
Back in 1987, when David Aha was still a Ph.D. student in UCI’s Department of Computer Science, he had an idea. “My plan was to provide a location where datasets — and descriptions of them — could be shared with researchers studying supervised learning,” recalls Aha, now the director of the Navy Center for Applied Research in AI (NCARAI) at the Naval Research Laboratory. He started with a small number of datasets gathered by fellow Ph.D. student Jeff Schlimmer and then waited to publicize the repository until it had at least 25 datasets. “Once it caught on,” he says, “it became clear that the collection had to live on with the dedicated help of subsequent UCI student librarians, and they’ve been outstanding.” Indeed, the collection has lived on, with various faculty and Ph.D. students passing the baton to make sure the collection has remained up and running. By the time the current librarians — Ph.D. students Casey Graff and Dheeru Dua — took over, the UCI Machine Learning Repository had 469 datasets, representing a variety of applications domains, from physical and social sciences to business and engineering. This publicly accessible archive has been a tremendous resource for empirical and methodological research in machine learning for decades. In fact, it has had more than 38,000 citations since 1998, rendering it one of the most highly cited “references” across all of computer science. Yet with the growing number of machine learning (ML) research papers, algorithms and datasets, it is becoming increasingly difficult to track the latest performance numbers for a particular dataset, identify suitable datasets for a given task, or replicate the results of an algorithm run on a particular dataset. To address this issue, Computer Science Professors Sameer Singh and Padhraic Smyth in the Donald Bren School of Information and Computer Sciences (ICS), along with Philip Papadopoulos, Director of UCI’s Research Cyberinfrastructure Center (RCIC), have planned a “next-generation” upgrade. The trio was recently awarded $1.8 million for their NSF grant, “Machine Learning Democratization via a Linked, Annotated Repository of Datasets.” “It is quite an important grant, combining research, computational infrastructure and community outreach for machine learning,” says Singh, the principal investigator. The goal is to enhance the current Repository with rich metadata, links to research papers, and automated extraction and presentation of metadata and performance data. The new version will also provide systematic support for reproducible science by letting users validate empirical ML results on testbed datasets. “ICS is known for its work in AI and emphasis on ML,” says Smyth, “and I don’t think I am boasting when I say that pretty much everyone in AI, all over the world, knows about the UCI Repository.” As outlined in the grant proposal, the Repository had an estimated 24 million visitors in 2018, with 2 million dataset downloads from 723 unique web addresses and from 119 different countries and territories, ranging from Botswana to Fiji to Greenland. Smyth himself started using the Repository long before he came to UCI. “I was a researcher in machine learning at JPL at the time [it first started] and remember being very happy to find the Repository and be able to download datasets — and documentation — for my research,” he says. As noted in the grant abstract, the existing Repository “directly impacts tens of thousands of ML researchers and students by providing a standard and widely cited set of testbed datasets to support both research and education.” The proposed improvements will support broader and more systematic and reproducible evaluations of ML algorithms, leading to robust advances better calibrated for success in real-world environments, helping in areas ranging from climate science to personalized medicine. “I wish UCI well on continuing to provide this important service,” says Aha, “and encourage its continued growth, not just in the number of datasets, but in broad support of empirical research in ML.” — Shani Murray
OPCFW_CODE
Why am I getting this serialization exception when using a pre-generated XmlSerializer dll? Our wsdl is very large, and it causes our SoapHttpClientProtocol to take a very long time to initialize. As a solution, it looks like a previous developer used something similar to this solution to create a dll of the pre-generated XML serialization code. However when using the wsdl we get exception an exception when trying to use inherited types. For example, here is an extremely simplified version of my web service and base class : namespace MW { [System.CodeDom.Compiler.GeneratedCodeAttribute("wsdl", "2.0.50727.42")] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Web.Services.WebServiceBindingAttribute(Name="MyWebServicePortBinding", Namespace="myNamespace")] [System.Xml.Serialization.XmlSerializerAssemblyAttribute(AssemblyName = "MyWebService.XmlSerializers")] public partial class MyWebService: System.Web.Services.Protocols.SoapHttpClientProtocol { ... } // XmlInclude attribute removed via script // [System.Xml.Serialization.XmlIncludeAttribute(typeof(MyChildClass))] [System.CodeDom.Compiler.GeneratedCodeAttribute("wsdl", "2.0.50727.42")] [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(Namespace="myNamespace")] public abstract partial class MyBaseClass { } [System.CodeDom.Compiler.GeneratedCodeAttribute("wsdl", "2.0.50727.42")] [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(Namespace="myNamespace")] public partial class MyChildClass : MyBaseClass { } } Results in an exception of this when trying to call a method that accepts a MyBaseClass as a parameter There is an error in XML document The specified type was not recognized: name='MyChildClass', namespace='myNamespace' The solution of course is to keep the [System.Xml.Serialization.XmlIncludeAttribute(typeof(MyChildClass))] attributes in our class instead of removing them, but over time the number of these has grown to the point that it is again impacting the initialization time of our web service object, and I am wondering if there is an alternate solution. I'm no expert at how msbuild or the compiler works, and haven't been able to find anything similar so far with Google Searches. I think if this were a common issue, it would be easy to find a solution. So now I am wondering if perhaps the previous developer that wrote our build scripts missed something. Our build scripts run this set of commands in this order : Generate the webservice from wsdl : wsdl.exe <webService> Build webservice with all serializers : msbuild BuildWebservice1.xml Run custom script to remove [XmlInclude] attributes from generated webservice cs class file Build webservice again without serializer attributes : msbuild BuildWebservice2.xml The only difference between the two xml files for msbuild I think is the <GenerateSerializationAssemblies> attribute being set to On in the first and Off in the second, and the second includes a reference to the first. <Reference Include="MyWebService.XmlSerializers, Version=<IP_ADDRESS>348, Culture=neutral, PublicKeyToken=2401953c7c666e82, processorArchitecture=MSIL"> <SpecificVersion>False</SpecificVersion> <HintPath>Serializers\MyWebService.XmlSerializers.dll</HintPath> </Reference> Is there something I am missing for using a pre-generated serialization assembly with inherited types? XmlSerializer suuuuuuucks. Have you/can you use one of the more modern serializers, like the NetDataContractSerializer? @Will I'm not sure, but probably not at this time. Our webservice is massive and I have no idea how much work it would involve to change the serializer. I think the powers-that-be would prefer to have a slower startup time than to rewrite anything. You could try to expose an interface instead of an abstract base class, and use this interface as parameter : it's just an idea as other... I'm not sure that resolve your problem
STACK_EXCHANGE
Hi Support, We're trying Import Users from CSV. For the most Users it works fine, but for Users with Umlaut (ö,ü,ä, è, é,à, etc) the Import Service makes Symole? Is there a way Import Users with Umlaut in their Name? Thanks for your Help. Because before "Create User" a manager has to approve this action, I can not use the "Show Create Mailbox form after creation" option. So I want to create a ... Rule a powershell script will run to create the mailbox automatically. Is this possible? Remco Hi, is there any way to bypass SSO and get directly to the Loginpage when a machine is not joined to the domain? Reason why I'm asking is, in the last months ... machines not connected to the domain to go directly to the Adaxes Login form. Best regards Ingemar hello- we use change auditor to monitor changes within the environment. It has been brought to my attention, that password reset are being recorded as me changing passwords, ... why is reporting incorrectly? and where would it be pulling from? Please advise. I want to send an e-mail when an operation fails (e.g. creation of a user), which can be done as an After Create User business rule. Is there a ... %adm-OperationDescription% which includes the errorlog I can include in de body text? Kind regards, Remco hey- I'm not sure if I'm going in the right direction as far as troubleshooting but when the HD try to edit a user after filling in the required fields, we get an ... locked but not sure why and how that would relate to the error message we are receiving. Hello again, I am planning the upgrade to 2013.2, but I would like to do it to a new server. The documentation is about using the same server (http://www.adaxes.com ... would like to have the current one migrated (SQL Settings). Thank you very much in advance howdy! is there a nicer way to view changes within the system besides the logging option in the console or the .db3 file on the system? Thanks! It is possible to automatically create the logon name based on eg %firstname% and %lastname%, however I want to achieve the following: logon name = first letter FirstName, first ... account where that field is used for. How to achieve this? With regards, Remco hello- when clicking on 'unlock account'...by default it brings up locked accounts right? and if I do a search, it will only list locked accounts? Hi, I was was wondering if I could request a new feature for the next incarnation of adaxes... Renaming a computer account from the Console - We have some successes ... the possibility of integration of such a facility in the next version? Many thanks, Jay what would be the reason notifications to the respective party are getting multiple emails? I have a termination custom command that will disable acct, reset pwd, move acct ... stating they are getting multiple emails and on the home screen it says "saving". hello- We are utilizing the Helpdesk console and we're unable to add to groups. Access is denied. I granted access for write member property on group objects as per doc, but getting access denied. pls assist. need help so I set up the LDAP filter to display OU with the name ''External' by doing (name=External*). I also need to have another OU displayed, but no matter what I ... different name show? for instance the other OU is OU=Ops,OU=Resource, DC=domain, DC=com I'm trying to build a simple scheduled task but having no success: If the 'Account Expires' property is less than or equal to '%datetime,+30d%' AND the ' ... of precedence? And can statements be grouped as in: statement1 AND (statement2 OR statement3)? Hi, First of all, what a great product! However, I do have some questions, I tried searching for the answers but I couldn't find any relevant information. How do I ... is there an option to change the error message on the login-page? Best regards, Tommi We are struggeling with a bit slow response from the webinterface. Standing on the Home page then clicking the Search tab, it takes about 5 seconds until it loads. ... in the release notes about performance enhancements for the web interface in version 2012.2) Hi- Is it by design when viewing managed objects via Self Service, it list the user by display name? Can that view be change to include maybe full name? Thanks! Hi, I created a distribution group via adaxes web interface. And Established e-mail address for the group. And "Automaticaly update e-mail addresses based on e-mail address policy" ... work fine. I make a mistake but I can not find it. Any suggestion. Thanks Hello Support, I'm looking for a solution for the following problem: After creating a user, we send a welcome email with the login(login and password) to ... can insert the autogenerated Password automaticly into the Welcomemail or a SMS? Thanks for helping
OPCFW_CODE
I met just such a task: string str1 = "\ U0010FADE"; string str2 = "\ U0000FADE"; Console.WriteLine (str1.Length); Console.WriteLine (str2.Length); As it turned out, terminals 2 and 1. What is going on here? I know only lowercase \ u , for which should follow the 4 hexadecimal digits. In MSDN for char \ U is not listed, it is logical – the result is clearly not For strings – there is mention, but still do not understand: The escape code ddddis a four-digit number) represents the Unicode character U + dddd. Eight-digit Unicode escape codes are also recognized: Elsewhere says that they need to form a surrogate pairs, but also without further explanation: \ Uxxxxxxxx– Unicode escape sequence for character with hex value xxxxxxxx(for generating surrogates) So does the \ U and why in the second row to be not a surrogate pair, but only one character? I tried to run on ideone , but something derived characters are not those codes, which are specified in the source code. While this may be a shoal of ideone. Answer 1, Authority 100% The information in the documentation is absolutely correct. Syntax \ Udddddddd simply includes Unicode character string constant with dddddddd code. This symbol can be a surrogate pair and hold two code units of UTF-16, but it can also be an ordinary character, holding a single code unit. 7.4.2 Unicode character escape sequences A Unicode escape sequence represents a Unicode code point. Unicode escape sequences are processed in identifiers (§7.4.3), character literals (§18.104.22.168), and regular string literals (§22.214.171.124). A Unicode escape sequence is not processed in any other location (for example, to form an operator, punctuator, or keyword). unicode-escape-sequence :: \ U hex-digit hex-digit hex-digit hex-digit \ U hex-digit hex-digit hex-digit hex-digit hex-digit hex-digit hex-digit hex-digit A Unicode character escape sequence represents the single Unicode code point formed by the hexadecimal number following the “\ u” or “\ U” characters. Since C # uses a 16-bit encoding of Unicode code points in character and string values, a Unicode code point in the range U + 10000 to U + 10FFFF is represented using two Unicode surrogate code units. unicode code points above U + FFFF are not permitted in character literals. Unicode code points above U + 10FFFF are invalid and are not supported. In the first case, the value of the code position over U + 10000, so it is represented by two code units. In the second case – the less so one. In other words, write \ U0000FADE is equivalent to \ uFADE , not \ u0000 \ uFADE , as it might seem at first glance (the last really consists of two code units).
OPCFW_CODE
from itertools import chain, repeat, zip_longest from operator import itemgetter from typing import ( TYPE_CHECKING, Any, Dict, ItemsView, Iterable, Iterator, List, MutableSequence, Sequence, Tuple, Union, overload, ) from funcy import reraise if TYPE_CHECKING: from dvc.ui.table import CellT class Column(List["CellT"]): pass def with_value(value, default): return default if value is None else value class TabularData(MutableSequence[Sequence["CellT"]]): def __init__(self, columns: Sequence[str], fill_value: str = ""): self._columns: Dict[str, Column] = {name: Column() for name in columns} self._keys: List[str] = list(columns) self._fill_value = fill_value @property def columns(self) -> List[Column]: return list(map(self.column, self.keys())) def column(self, name: str) -> Column: return self._columns[name] def items(self) -> ItemsView[str, Column]: projection = {k: self.column(k) for k in self.keys()} return projection.items() def keys(self) -> List[str]: return self._keys def _iter_col_row( self, row: Sequence["CellT"] ) -> Iterator[Tuple["CellT", Column]]: for val, col in zip_longest(row, self.columns): if col is None: break yield with_value(val, self._fill_value), col def append(self, value: Sequence["CellT"]) -> None: for val, col in self._iter_col_row(value): col.append(val) def extend(self, values: Iterable[Sequence["CellT"]]) -> None: for row in values: self.append(row) def insert(self, index: int, value: Sequence["CellT"]) -> None: for val, col in self._iter_col_row(value): col.insert(index, val) def __iter__(self) -> Iterator[List["CellT"]]: return map(list, zip(*self.columns)) def __getattr__(self, item: str) -> Column: with reraise(KeyError, AttributeError): return self.column(item) def __getitem__(self, item: Union[int, slice]): func = itemgetter(item) it = map(func, self.columns) if isinstance(item, slice): it = map(list, zip(*it)) return list(it) @overload def __setitem__(self, item: int, value: Sequence["CellT"]) -> None: ... @overload def __setitem__( self, item: slice, value: Iterable[Sequence["CellT"]] ) -> None: ... def __setitem__(self, item, value) -> None: it = value if isinstance(item, slice): n = len(self.columns) normalized_rows = ( chain(val, repeat(self._fill_value, n - len(val))) for val in value ) # we need to transpose those rows into columnar format # as we work in terms of column-based arrays it = zip(*normalized_rows) for i, col in self._iter_col_row(it): col[item] = i def __delitem__(self, item: Union[int, slice]) -> None: for col in self.columns: del col[item] def __len__(self) -> int: return len(self.columns[0]) @property def shape(self) -> Tuple[int, int]: return len(self.columns), len(self) def drop(self, *col_names: str) -> None: for col_name in col_names: self._keys.remove(col_name) self._columns.pop(col_name) def rename(self, from_col_name: str, to_col_name: str) -> None: self._columns[to_col_name] = self._columns.pop(from_col_name) self._keys[self._keys.index(from_col_name)] = to_col_name def project(self, *col_names: str) -> None: self.drop(*(set(self._keys) - set(col_names))) self._keys = list(col_names) def to_csv(self) -> str: import csv from io import StringIO buff = StringIO() writer = csv.writer(buff) writer.writerow(self.keys()) for row in self: writer.writerow(row) return buff.getvalue() def add_column(self, name: str) -> None: self._columns[name] = Column([self._fill_value] * len(self)) self._keys.append(name) def row_from_dict(self, d: Dict[str, "CellT"]) -> None: keys = self.keys() for key in d: if key not in keys: self.add_column(key) row: List["CellT"] = [ with_value(d.get(key), self._fill_value) for key in self.keys() ] self.append(row) def render(self, **kwargs: Any): from dvc.ui import ui ui.table(self, headers=self.keys(), **kwargs) def as_dict(self): keys = self.keys() return [dict(zip(keys, row)) for row in self]
STACK_EDU
import axios from 'axios' function _effector (name='') { const f = function () { return {...f.eff, name} } const p = { init: function (callback) {this.eff.init = callback; return this}, ready: function (callback) {this.eff.ready = callback; return this}, done: function (callback) {this.eff.done = callback; return this}, cancel: function (callback) {this.eff.cancel = callback; return this}, fail: function (callback) {this.eff.fail = callback; return this}, use: function (callback) {this.eff.resolver = callback; return this}, activate: function (callback) {this.eff.activator = callback; return this}, when: function (callback) {this.watch = callback; return this}, } Object.setPrototypeOf(p, Function) Object.setPrototypeOf(f, p) f.eff = {} return f } class EffectorFunc extends Function { constructor (name='', eff={}) { const f = function (...params) { return f._effData.use.apply(this, params) // console.log('f', f._effData, params) } Object.setPrototypeOf(f, new.target.prototype) f._effName = name f._effData = eff return f } use (fn) { return new EffectorFunc(this._effName, {...this._effData, use: fn}) } watch (event, callback) { const watchers = (this._effData.watchers || []).splice(0) watchers.push({event, callback}) return new EffectorFunc(this._effName, {...this._effData, watchers}) } get ready () { return '@'+this._effName } get done () { return '@'+this._effName+':done' } get fail () { return '@'+this._effName+':fail' } get finals () { return {['@'+this._effName+':done']: true} } } function effector (name) { return new EffectorFunc(name) } // const load = effector('load').use(() => { // return axios.get('https://conduit.productionready.io/api/articles') // .then(response => response.data) // }) // // console.log(load('x')) // export function getArticles (watch) { // const eff = function () { // return { // resolver: () => { // return axios.get('https://conduit.productionready.io/api/articles') // .then(response => response.data) // }, // done: (data, effect) => { // effect.source.set(data.articles) // } // } // } // eff.watch = watch || ((evt) => evt.name == 'init') // return eff // } // // export function getTags (watch) { // const eff = function () { // return { // resolver: () => { // return axios.get('https://conduit.productionready.io/api/tags') // .then(response => response.data) // }, // done: (data, effect) => { // effect.source.set(data.tags) // } // } // } // eff.watch = watch || ((evt) => evt.name == 'init') // return eff // } const getArticles = effector('fetch_articles') getArticles.ByTag = getArticles .use((tag) => { return axios.get('https://conduit.productionready.io/api/articles?tag='+tag) .then(response => response.data) }) getArticles.All = getArticles .use((tag) => { return axios.get('https://conduit.productionready.io/api/articles') .then(response => response.data) }) getArticles.ByAuthor = getArticles .use((author) => { return axios.get('https://conduit.productionready.io/api/articles?author='+author) .then(response => response.data) }) getArticles.Favorited = getArticles .use((username) => { return axios.get('https://conduit.productionready.io/api/articles?favorited='+username) .then(response => response.data) }) export {getArticles} export const getTags = effector('fetch_tags') .use(() => { return axios.get('https://conduit.productionready.io/api/tags') .then(response => response.data) }) export const getArticle = effector('fetch_article') .use((slug) => { return axios.get('https://conduit.productionready.io/api/articles/'+slug) .then(response => response.data) }) export const getComments = effector('fetch_comments') .use(slug => { return axios.get('https://conduit.productionready.io/api/articles/'+slug+'/comments') .then(response => response.data) }) export const getProfile = effector('fetch_profile') .use(username => { return axios.get('https://conduit.productionready.io/api/profiles/'+username) .then(response => response.data) }) export const sendRegister = effector('send_register') .use(data => { return axios.post('https://conduit.productionready.io/api/users', {user: data}) .then(response => response.data) .catch(err => {throw err.response.data}) }) export const sendLogin = effector('send_login') .use(data => { return axios.post('https://conduit.productionready.io/api/users/login', {user: data}) .then(response => response.data) .catch(err => {throw err.response.data}) }) export const getUser = effector('fetch_user') .use(token => { return axios.get('https://conduit.productionready.io/api/user', {headers: {Authorization: 'Token '+token}}) .then(response => response.data) }) //export {getArticles, getTags}
STACK_EDU
Specifies the per-thread area buffer size (in bytes). Append k or K, to specify the scale in KB, m or M to specify the scale in MB, g or G to specify the scale in GB. Will not deploy applications that use this option to override a class in rt.jar, since this violates the JRE binary code license. The java command supports an array of alternatives that can be divided into the following types: The discharge argument specifies possibly the exact Model string, or a summary of Model strings and ranges divided by spaces. A Model string is definitely the developer designation of your version selection in the subsequent form: one.x.0_u (exactly where x is the major Variation amount, and u is the update Model variety). A Edition selection is built up of the version string accompanied by a moreover sign (+) to designate this version or afterwards, or perhaps a Portion of a Model string accompanied by an asterisk (*) to designate any Edition string using a matching prefix. Specifies the overall volume of Main memory (in bytes) utilized for information retention. Append k or K, to specify the scale in KB, m or M to specify the size in MB, g or G to specify the size in GB. By default, the dimensions is about to 462848 bytes. In order to customize no matter whether groovy evaluates your item to true or Fake put into action the asBoolean() method: Make a one list of classes used by all the apps that will share the shared archive file. The default worth is about to five hundred KB. The initial code cache measurement need to be not fewer than the technique's negligible memory web page dimensions. The subsequent case in point demonstrates ways to established the Preliminary code cache sizing to 32 KB: Permits tracing from the loader constraints recording. By default, this selection is disabled and loader constraints recording isn't traced. *** generate a recursive method that has one particular parameter, a personality, c. This 1 parameter is probably the figures ‘A’ via ‘Z’. The method will use recursion to print out a pattern of characters as follows: Don't deploy applications that use this feature to override a class in rt.jar, mainly because this violates the JRE binary code license. The -Xbatch flag disables track record compilation so that compilation of all techniques proceeds as a foreground activity until eventually completed. Using the as key word is simply feasible In case you have a static reference to a class, like in the following code: Specifies the most dimension (in bytes) of the information chunks inside a recording. Append k or K, to specify the dimensions in KB, m or M to specify the size in MB, g or G to specify the size in GB. visit homepage By default, the check utmost sizing of knowledge chunks is ready to twelve MB.
OPCFW_CODE
/* * Copyright (C) 2017 20102122476. * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, * MA 02110-1301 USA */ package modelo.lectura; import basededatos.BaseDatosOracle; import static basededatos.Secuencia.nextVal; import java.sql.ResultSet; import java.sql.SQLException; import java.util.ArrayList; /** * * @author 20102122476 */ public class Sector { private long id; private String det; /** * Constructor de la clase Sector */ public Sector() { } /** * Constructor de la clase Sector * @param id id del sector */ public Sector(long id) { this.id = id; } /** * Constructor de la clase Sector * @param id id del sector * @param det nombre del sector */ public Sector(String det) { this.det = det; } /** * Constructor de la clase Sector * @param id id del sector * @param det nombre del sector */ public Sector(long id, String det) { this.id = id; this.det = det; } /** * Funcion que retorna el id del Sector * @return id del sector */ public long getId() { return this.id; } /** * Métopdo que recibe el id del Sector * @param id id del sector */ public void setId(long id) { this.id = id; } /** * Funcion que retorna el nombre del Sector * @return nombre del sector */ public String getDet() { return this.det; } /** * Métopdo que recibe el nombre del Sector * @param det nombre del sector */ public void setDet(String det) { this.det = det; } /** * Función utilizada para insetar los sectores * @return 0 en caso de no insertado y otro numero en caso contrario * @throws SQLException en caso de excepcion */ public int insertar() throws SQLException { long secuencia = nextVal("SECTORES_SEQ"); System.err.println("intente con: " + secuencia); BaseDatosOracle basededatos; basededatos = BaseDatosOracle.getInstance(); String sql; int ejecucion; sql = "INSERT INTO SECTORES (ID,DET) VALUES (?,?)"; basededatos.conectar(); basededatos.prepararSql(sql); basededatos.asignarParametro(1, secuencia); basededatos.asignarParametro(2, getDet()); ejecucion = basededatos.ejecutar(); basededatos.cerrarSentencia(); return ejecucion; } /** * Función utilizada para actualizar los sectores * @return 0 en caso de no actualizar y otro numero en caso contrario * @throws SQLException en caso de excepcion */ public int actualizar() throws SQLException { BaseDatosOracle basededatos; basededatos = BaseDatosOracle.getInstance(); String sql; int ejecucion; sql = "UPDATE SECTORES SET DET = ? WHERE ID = ?"; basededatos.conectar(); basededatos.prepararSql(sql); basededatos.asignarParametro(1, getDet()); basededatos.asignarParametro(2, getId()); ejecucion = basededatos.ejecutar(); basededatos.cerrarSentencia(); return ejecucion; } /** * Función utilizada para eliminar los sectores * @return 0 en caso de no eliminar y otro numero en caso contrario * @throws SQLException en caso de excepcion */ public int eliminar() throws SQLException { BaseDatosOracle basededatos; basededatos = BaseDatosOracle.getInstance(); String sql; int ejecucion; sql = "DELETE FROM SECTORES WHERE ID = ?"; basededatos.conectar(); basededatos.prepararSql(sql); basededatos.asignarParametro(1, getId()); ejecucion = basededatos.ejecutar(); basededatos.cerrarSentencia(); return ejecucion; } /** * Función utilizada para listar los sectores * @return ArrayList * @throws SQLException en caso de excepcion */ public static ArrayList<Sector> listar() throws SQLException { ArrayList<Sector> datos = new ArrayList<>(); BaseDatosOracle basededatos; ResultSet cursor; String sql; basededatos = BaseDatosOracle.getInstance(); sql = "SELECT ID, DET FROM SECTORES"; basededatos.conectar(); basededatos.prepararSql(sql); cursor = basededatos.ejecutarQuery(); datos.clear(); while (cursor.next()) { datos.add(new Sector( cursor.getLong("ID"), cursor.getString("DET") ) ); } return datos; } /** * Función buscar un Sector dependiendo * @param id id del sector * @return objeto Sector * @throws SQLException en caso de excepcion */ public static Sector listar(long id) throws SQLException { Sector dato = null; BaseDatosOracle basededatos; ResultSet cursor; String sql; basededatos = BaseDatosOracle.getInstance(); sql = "SELECT DET FROM SECTORES WHERE ID = ?"; basededatos.conectar(); basededatos.prepararSql(sql); basededatos.asignarParametro(1, id); cursor = basededatos.ejecutarQuery(); if (cursor.next()) { dato = new Sector(id, cursor.getString("DET")); } return dato; } /** * Función buscar un Sector dependiendo * @param det nombre del sector * @return objeto Sector * @throws SQLException en caso de excepcion */ public static ArrayList<Sector> listar(String det) throws SQLException { ArrayList<Sector> datos = new ArrayList<>(); BaseDatosOracle bd = BaseDatosOracle.getInstance(); String sql = "SELECT ID, DET FROM SECTORES WHERE DET LIKE ?"; bd.conectar(); bd.prepararSql(sql); bd.asignarParametro(1, "%" + det + "%"); ResultSet reg = bd.ejecutarQuery(); datos.clear(); while (reg.next()) { datos.add(new Sector(reg.getLong("ID"), reg.getString("DET"))); } return datos; } /** * Función utilizada para saber si un sector existe o no * @param det nombre del sector * @return true o false segun la condicion * @throws SQLException en caso de excepcion */ public static boolean existe(String det) throws SQLException { ArrayList<Sector> datos = new ArrayList<>(); BaseDatosOracle bd = BaseDatosOracle.getInstance(); bd.conectar(); bd.prepararSql("SELECT ID, DET FROM SECTORES WHERE DET LIKE ?"); bd.asignarParametro(1, det); ResultSet reg = bd.ejecutarQuery(); datos.clear(); return reg.next(); } /** * UTILIZADO PARA EL RE-ASIGNAR SECTOR EN EL JFRAME LECTORES * ME VALIDA SI EL USUARIO YA FUE ASIGNADO O NO * @param sector id del sector * @return true o false segun la condicion * @throws SQLException en caso de excepcion */ public static boolean estaAsignado(long sector) throws SQLException { BaseDatosOracle bd = BaseDatosOracle.getInstance(); bd.conectar(); bd.prepararSql("SELECT USUARIOS.ID FROM USUARIOS WHERE SECTOR_ID = ?"); bd.asignarParametro(1, sector); ResultSet reg = bd.ejecutarQuery(); return reg.next(); } }
STACK_EDU
import UIKit import CoreBluetooth class RemoteLedPeripheral : NSObject, CBPeripheralManagerDelegate { // MARK: Peripheral properties // Advertized name let advertisingName = "LedRemote" // Device identifier let peripheralIdentifier = "8f68d89b-448c-4b14-aa9a-f8de6d8a4753" // MARK: GATT Profile // Service UUID let serviceUuid = CBUUID(string: "00001815-0000-1000-8000-00805f9b34fb") // Characteristic UUIDs let commandCharacteristicUuid = CBUUID(string: "00002a56-0000-1000-8000-00805f9b34fb") let responseCharacteristicUuid = CBUUID(string: "00002a57-0000-1000-8000-00805f9b34fb") // Command Characteristic var commandCharacteristic:CBMutableCharacteristic! // Response Characteristic var responseCharacteristic:CBMutableCharacteristic! // the size of a Characteristic let commandCharacteristicLength = 2 let responseCharacteristicLength = 2 // MARK: Commands // Data Positions let bleCommandFooterPosition = 1; let bleCommandDataPosition = 0; // Command flag let bleCommandFooter:UInt8 = 1; // LED State let bleCommandLedOn:UInt8 = 1; let bleCommandLedOff:UInt8 = 2; // MARK: Response // Data Positions let bleResponseFooterPosition = 1; let bleResponseDataPosition = 0; // Response Types let bleResponseErrorFooter:UInt8 = 0; let bleResponseConfirmationFooter:UInt8 = 1; // LED States let bleResponseLedError:UInt8 = 0; let bleResponseLedOn:UInt8 = 1; let bleResponseLedOff:UInt8 = 2 // MARK: Peripheral State // Peripheral Manager var peripheralManager:CBPeripheralManager! // Connected Central var central:CBCentral! // delegate var delegate:RemoteLedPeripheralDelegate! /** Initialize BlePeripheral with a corresponding Peripheral - Parameters: - delegate: The BlePeripheralDelegate - peripheral: The discovered Peripheral */ init(delegate: RemoteLedPeripheralDelegate?) { super.init() // empty dispatch queue let dispatchQueue:DispatchQueue! = nil // Build Advertising options let options:[String : Any] = [ CBPeripheralManagerOptionShowPowerAlertKey: true, // Peripheral unique identifier CBPeripheralManagerOptionRestoreIdentifierKey: peripheralIdentifier ] peripheralManager = CBPeripheralManager( delegate: self, queue: dispatchQueue, options: options) self.delegate = delegate } /** Stop advertising, shut down the Peripheral */ func stop() { peripheralManager.stopAdvertising() } /** Start Bluetooth Advertising. This must be after building the GATT profile */ func startAdvertising() { let serviceUuids = [serviceUuid] let advertisementData:[String: Any] = [ CBAdvertisementDataLocalNameKey: advertisingName, CBAdvertisementDataServiceUUIDsKey: serviceUuids ] peripheralManager.startAdvertising(advertisementData) } /** Build Gatt Profile. This must be done after Bluetooth Radio has turned on */ func buildGattProfile() { let service = CBMutableService(type: serviceUuid, primary: true) var rProperties = CBCharacteristicProperties.read rProperties.formUnion(CBCharacteristicProperties.notify) var rPermissions = CBAttributePermissions.writeable rPermissions.formUnion(CBAttributePermissions.readable) responseCharacteristic = CBMutableCharacteristic( type: responseCharacteristicUuid, properties: rProperties, value: nil, permissions: rPermissions) let cProperties = CBCharacteristicProperties.write let cPermissions = CBAttributePermissions.writeable commandCharacteristic = CBMutableCharacteristic( type: commandCharacteristicUuid, properties: cProperties, value: nil, permissions: cPermissions) service.characteristics = [ responseCharacteristic, commandCharacteristic ] peripheralManager.add(service) } /** Make sense of the incoming byte array as a command */ func processCommand(bleCommandValue: [UInt8]) { if bleCommandValue[bleCommandFooterPosition] == bleCommandFooter { print ("Command found") switch (bleCommandValue[bleCommandDataPosition]) { case bleCommandLedOn: print("Turning LED on") setLedState(ledState: true) case bleCommandLedOff: print("Turning LED off") setLedState(ledState: false) default: print("Unknown command value") } } } /** Turn Camera Flash on as an LED - Parameters: - ledState: *true* for on, *false* for off */ func setLedState(ledState: Bool) { if ledState { sendBleResponse(ledState: bleResponseLedOn) } else { sendBleResponse(ledState: bleResponseLedOff) } delegate?.remoteLedPeripheral?(ledStateChangedTo: ledState) } /** Send a formatted response out via a Bluetooth Characteristic - Parameters - ledState: one of bleResponseLedOn or bleResponseLedOff */ func sendBleResponse(ledState: UInt8) { var responseArray = [UInt8]( repeating: 0, count: responseCharacteristicLength) responseArray[bleResponseFooterPosition] = bleResponseConfirmationFooter responseArray[bleResponseDataPosition] = ledState let value = Data(bytes: responseArray) responseCharacteristic.value = value peripheralManager.updateValue( value, for: responseCharacteristic, onSubscribedCentrals: nil) } // MARK: CBPeripheralManagerDelegate /** Peripheral will become active */ func peripheralManager( _ peripheral: CBPeripheralManager, willRestoreState dict: [String : Any]) { print("restoring peripheral state") } /** Peripheral added a new Service */ func peripheralManager( _ peripheral: CBPeripheralManager, didAdd service: CBService, error: Error?) { print("added service to peripheral") if error != nil { print(error.debugDescription) } } /** Peripheral started advertising */ func peripheralManagerDidStartAdvertising( _ peripheral: CBPeripheralManager, error: Error?) { if error != nil { print ("Error advertising peripheral") print(error.debugDescription) } self.peripheralManager = peripheral delegate?.remoteLedPeripheral?(startedAdvertising: error) } /** Connected Central requested to read from a Characteristic */ func peripheralManager( _ peripheral: CBPeripheralManager, didReceiveRead request: CBATTRequest) { let characteristic = request.characteristic if (characteristic.uuid == responseCharacteristic.uuid) { if let value = characteristic.value { //let stringValue = String(data: value, encoding: .utf8)! if request.offset > value.count { peripheralManager.respond( to: request, withResult: CBATTError.invalidOffset) return } let range = Range(uncheckedBounds: ( lower: request.offset, upper: value.count - request.offset)) request.value = value.subdata(in: range) peripheral.respond( to: request, withResult: CBATTError.success) } } } /** Connected Central requested to write to a Characteristic */ func peripheralManager( _ peripheral: CBPeripheralManager, didReceiveWrite requests: [CBATTRequest]) { for request in requests { peripheral.respond(to: request, withResult: CBATTError.success) print("new request") if let value = request.value { let bleCommandValue = [UInt8](value) processCommand(bleCommandValue: bleCommandValue) } } } /** Connected Central subscribed to a Characteristic */ func peripheralManager( _ peripheral: CBPeripheralManager, central: CBCentral, didSubscribeTo characteristic: CBCharacteristic) { self.central = central } /** Connected Central unsubscribed from a Characteristic */ func peripheralManager( _ peripheral: CBPeripheralManager, central: CBCentral, didUnsubscribeFrom characteristic: CBCharacteristic) { self.central = central } /** Peripheral is about to notify subscribers of changes to a Characteristic */ func peripheralManagerIsReady( toUpdateSubscribers peripheral: CBPeripheralManager) { print("Peripheral about to update subscribers") } /** Bluetooth Radio state changed */ func peripheralManagerDidUpdateState( _ peripheral: CBPeripheralManager) { peripheralManager = peripheral switch peripheral.state { case CBManagerState.poweredOn: buildGattProfile() startAdvertising() default: break } delegate?.remoteLedPeripheral?(stateChanged: peripheral.state) } }
STACK_EDU
The Linux Kernel • About 6 million lines of code • Controls memory and process management. The Linux Kernel Linux & the Kernel Kernel Version Numbering: 184.108.40.206 • A is the version • B is the major revision • C is the minor revision • D are security and bug fixes • 2.6.39 was the last minor revision before 3.0 is released. Linux Boot Sequence • After the BIOS/UEFI instructions are given to start the OS, – A compressed version of the kernel is loaded into the first megabyte of ram. – The complete kernel is expanded and loaded – The kernel starts • Checks memory • Probes for hardware Kernel Hardware Probe • Some drivers are compiled into the kernel other drivers are loaded when the kernel starts. • Everything recognized by the os has a file node associated with it. The kernel will establish nodes for each required device. Starting the kernel • Find and set up swap (virtual memory) • A hardware probe is done to determine what drivers should be initialized. • Read /etc/fstab to mount the root file system. • Mount other devices • Start a program called • Kernel passes control to init. • Create tables for memory management. • Dummy processes are started. These processes do not have PIDs, and cannot be killed: – Swap – Paging virtual memory – IO activity – Other (example: managing threads) • Swapper • vhand, kflushd, kpiod, mdrecoveryd • Find and mount the other partitions starting with the swap partition space. – The /etc/fstab • /dev/root • /proc –A virtual file-system (in memory) used for managing processes. This allows loadable runtime modules to be inserted and removed as needed. • /dev/pts – A VFS used for managing devices • The first non-dummy process, PID = 1. • First process to run and is always running. • The kernel passes process management to init. • Init reads its configuration file: /etc/inittab • Directly or indirectly spawns all other processes until shutdown. • The basis of all Linux systems. • Run perpetually in the background waiting for input. – Services – Daemons – Shells – utilities init:Services & Daemons • Examples: • login prompt • X-Window server • keyboard input functionality • firewall • and all classic server programs; e-mail, DNS, FTP, telnet, ssh, etc. the runlevel is determined by inittab: 0. Halt the system 1. Enter single-user mode (no networking) 2. Multiuser mode, without NFS. 3. Full multiuser mode 4. Unused 5. Same as runlevel 3 but with X-windows. • Additional programs and services are started when reads the contents of inittab. • Each line has the same syntax: Id:runlevels:action:process • Id A unique sequence of 1-4 characters to identify the line in the file (a macro). • Runlevels are any combination of numbers from 0-6. If blank it implies all run levels. • Action- what should happen. • Process – the actual process to run. • Respawn – restart whenever it terminates. • Wait – don’t continue until it completes. • Once – will not wait for completion. • boot – run at boot time. • Bootwait – start at boot time and wait for completion before continuing. • ondemand, initdefault, sysinit, powerwait, powerfail, powerkwait, ctrlaltdel. • Read /etc/fstab • Read /etc/inittab • Run /etc/rc.sysinit • Run /etc/rc • Run the K* and S* startup scripts. inittab and sysinit • The file rc.sysinit is a script run once at boot time to set up services specific to the computer system: – Hostname – Networking – Etc. • They keep track of the large number of services to be managed. • The main rc script is /etc/rc.d/rc • Responsible for calling the appropriate scripts in the correct order for each run level. • There is an additional directory for each different runlevel: 0-6. • Each directory has the scripts needed to run the specified boot level. • The files in these subdirectories have two classifications: kill and start. Kill and Start Services. • The letter K is the prefix to all services to be killed. • The letter S is the prefix to all the services to be Started. • The K & S prefixed names are links to the actual service scripts located in /etc/rc.d/init.d Order of services • A number follows the S or K prefix. • The number determines the order of execution. • The rc file accesses the correct run-level directory and executes the files in numerical order. • K files are first, then S files. K and S scripts • Every script in the /etc/rc.d/init.d directory is run with one of two options: – stop if the prefix is K – start if the prefix is S /etc/rc.d/init.d/xfs start /etc/rc/d/init.d/sound stop • If a service need be installed at boot time, you can edit the /etc/rc.d/rc.local file. • Or add functionality by adding a script to the /etc/rc.d/init.d directory. Contents of Custom RC script • A description of the scripts purpose. • Verify the program really exists before attempting to start it. • Parse for start and stop command line options. • Put your script in the /etc/rc.d/init.d directory. Link the New RC script • Add new service script to the /etc/rc.d/init.d directory. • cd to the correct run level directories, cd /etc/rc.d/rc5.d • Add the link ln –s ../init.d/mynewservice S98mynewservice • You can also add a service by modifying the /etc/rc.d/rc.local file. • init – process id 1. • inetd – Traditional Unix service. • xinetd – Linux version of inetd. Used to manage some services: xinetd.org/faq.html • syslogd – logs system messages to /var • cron – used to start utilities at a pre described time. • The parent process of all processes. • Started by the kernel at boot time. • If a process dies before all of its children complete, then the children inherit init as the parent (PPID = 1). • Controls the run level with /etc/inittab. inetd & xinetd • Daemons – independent background processes that poll for events. • Events sent to the daemons determine how the daemon behaves at any given time. • Pre-Linux tool. • The supervisor of network server-related processes. • Rather than running many daemons taking up memory, initd polls for daemon requests. When a particular daemon gets an event, initd will activate the appropriate daemon. • Linux implementation of inetd. • A list of the services currently offered by xinetd are in /etc/xinetd.d directory. • With programs/services disconnected from a terminal, where does their standard output go? • Syslogd routes the output of services to text files. • Most log files are in /var/log • Works in a heterogeneous environment. • /sbin/syslogd • /etc/syslogd.conf • man syslogd • Syntax of file involves – Facility: mail, kern, daemon, lpr, other services – Priority: emerg, alert, warning, err, notice, etc. – Log file *.emerg @loghost,childrja,root Emergency messages are sent to machine loghost and to the console session of childrja and root. • Used to schedule programs to run. • Cron wakes up once every minute and checks all the crontab files on the system. • If an entry in one of the crontab files matches the date and time, then the designated process is run. • Crontab file syntax: • Min Hr Day Month DayofWk command
OPCFW_CODE
|Global X, CC BY 2.0, flickr| I think that the two main findings, as summarized by Farrell, are partially unsurprising, but also significant, especially as they reflect the nature of the mechanisms of social media feedback. Up/downvotes are the bluntest possible feedback indicators - no nuance, pure affirmation/rejection - and as such, lend themselves quite excellently to overinterpretation. My tweet gets 35 favorites, and I feel personally validated - irrationally, I know, but I still feel it. My comment on a Chronicle article gets 35 downvotes, and I feel hurt - again, irrationally, but the reaction is there. Ceteris paribus, that mechanism alone would tend to trigger, I think, a strong challenge reaction. What motivates you more to take the time and energy to engage in public discourse - the prospect of competing with contrary views, or the prospect of preaching to the choir? Now mix in the finer-grained feedback mechanism of comments. At first glance, you'd think these would have a mitigating effect on the blunt-impact up/downvotes. But then consider how poorly, overall, we communicate in the written word, especially in online contexts. Comments can be no more nuanced than a "like"; they often reflect a respondent's own agendas and priorities, which are orthogonal to any consideration of the original poster at all; and they are often embedded within a pre-existing conversational structure which tends to promote a spiraling escalation of reaction. I think that Wong is right - the problem, such as it is, is that his position tends to get borne out over timeframes that are too long for individuals. Sure, redditors may trend toward higher-level discourse under Wong's laissez-faire model - over generations, maybe. And if trolls can game such a system in the short-term to predominate and push out the non-trolls, that long-term gameplan is a dead strategy. But beyond the explicit point of the article, I was really interested in considering the idea of reddit as a government. The criteria for identifying geopolitically-relevant entities have varied over time, right? Family-based, early on; geography-based, mostly; sometimes ideology-based (hello, Roman Catholic Church!); nowadays, increasingly corporation-based. As online spaces continue to merge into our meatspace in socially, economically, and politically salient ways, why would we imagine that online communities would be exempted ex hypothesi from geopolitical relevance? (for fun, Cory Doctorow, "When Sysadmins Ruled the Earth" 2006). (for serious, interesting conversations around open-source governance - Doug Rushkoff's Open Source Democracy and spin-off/inspired projects like airesis.us, wematter.com, efficasync, the Shadow Parliament Project, and the broader "libre culture" conversation).
OPCFW_CODE
We had an interesting Skype migration, where we needed to migrate 10,000 numbers from BT to Microsoft Phone System for Skype for Business. With BT you have to port the entire block in one go, so we had no way of porting numbers in batches as users migrated over to Skype for Business. So, we needed a way for users to continue to be able to receive calls as they migrated onto S4B, but whilst the numbers were still with BT. Since calls were still coming into an old PBX, the initial thought was to program the PBX to forward numbers as and when users were migrated. However, since we were migrating about 100 users a day, this would basically be a full time job to program all the forwarders on the PBX, and would also be very prone to error, since the old PBX had to be programmed via a terminal application with no scripting support. However, we realised that users were already used to forwarding their own numbers to their mobile phones, so they can forward their own calls in this way using their desk phone (a feature of the old PBX). What we therefore did was allocate 10,000 temporary local numbers in Skype for Business, and then ask each user to forward on their old desk phone to their temporary number as they get migrated. Once this is done, users receive all their calls on their old number using S4B (either client or Polycom desk phone), and can make outbound calls (which doesn’t show their number anyway). This made the whole migration process a lot easier, and placed the onus on the user to set up a simple call forward, less prone to error and easy for them to correct if they make a mistake. The final job will be one off number port, at which point we will map all the ported numbers to the users using a PowerShell script, and job done! How to connect PowerShell to the internet with authenticated proxy servers. PowerShell won’t update help, or let you connect to online repositories, without configuring it to work with your corporate web proxy proxies. Whilst it will use system Internet settings, which will work for unauthenticaed proxies, you will have do some changes manually if your proxy requires authentication. If you try and use an online command such as update-help, you will get an error like this: PS C:\WINDOWS\system32> update-help update-help : Failed to update Help for the module(s)'ActiveDirectory, AppBackgroundTask, AppLocker, AppvClient, Appx, AssignedAccess, BestPractices, BitLocker, BitsTransfer, BranchCache, CimCmdlets, ClusterAwareUpdating, ….Unable to connect to Help content. The server on which Help content is stored might not be available. Verify that the server is available, or wait until the server is back online, and then try the command again. This is a workaround in order to migrate mailboxes to Office 365 which have special characters. You may run into an issue migrating from on premise Exchange to Office 365, where some accounts fail due to illegal characters in email addresses or UPNs. Having spoken to Microsoft about this, the official line is that no special characters are supported, therefore you should remove them if at all possible. However, we found during a large migration that in some cases they can be retained. Note that the Microsoft article linked below does not mention UPNs, however we found that special characters in UPNs will cause users and shared mailbox migrations to fail. The result of our testing was as follows: Distribution Lists don’t require any changes and will still work User accounts and shared mailboxes need any illegal characters removing from UPN, mail and primary SMTP address in proxyaddresses attribute, but these can be added back in as SMTP aliases, and should then be retained once the account is migrated. Note: I would suggest that special characters are simply removed, rather than replaced with anything else. There are different types of characters, so removing them works for anything. It is not required to change the Display Name in the GAL, only the email address and whilst that is what people will see if they reply to an email, and they will still receive emails on the old address. This will be very difficult to manage if we have different rules for different areas of the business, different characters etc. You can follow the process below to make changes to affected accounts in your on premise Active Directory before they can be migrated. To be clear, the process is as follows (for users and shared mailboxes): Remove any special characters from UPN, mail, and primary SMTP address Add back the primary SMTP address as an alias (if still required) All other aliases and attributes can remain as-is, including display name. Display name: R&D Team Mailbox – leave as-is UPN: Change R&DTeamMailbox to RDTeamMailbox Change Primary SMTP: From R&DTeamMailbox.com to RDTeamMailbox@domain.com (remove &) Add back the primary address as an alias: R&DTeamMailbox.com
OPCFW_CODE
Lync and Exchange will let you customise and import new recordings for various greetings, voice announcements, messages, etc. Here are the respective requirements for the audio file formats. The source is hyperlinked or referenced beneath each item. Yes, some of this is a cut and paste job. Before You Start Just taking any old audio file and converting it as recommended/required below isn’t necessarily going to deliver an ideal or acceptable outcome. You might end up with sibilants (“ess-ing”) or quiet patches. Any commercial audio production house will be familiar with the tricks for phone production, but if you’re doing it yourself: - convert it to mono - compress the hell out of it. Dynamic range isn’t your friend for a music source or recording – and it can actively work against you if people can’t hear the quiet bits and hang up thinking they’ve been cut off (I kid you not) - consider the use of a male voice in spoken word recordings, because their tone is going to generally be lower and more suited to the narrow bandwidth of a phone call - for music on hold, realise that the file plays from the start each time and take advantage of that with embedded voice messages. Also make sure that the end and the start of the file go well together when it loops at the end of the file. Having said that, also try and avoid embedding messages that might aggravate a caller who’s on an extended hold. You don’t want them counting the number of times they looped through - use silence appropriately. Depending upon the use, you’ll need to either strip any silence from the start and end, or ADD some: - you’ll generally want hold music to start with a PUNCH or a fast build - remember to leave perhaps 500mS of silence before messages to make sure the speech-path has been established before the message commences - silence can be used to pad out the end of an Auto-Attendant or RGS menu to give the user more time to respond before Exchange or the RGS repeats itself or takes a default action - if multiple sampling rates are acceptable (e.g. Lync MOH, RGS), experiment with a few different ones. (Read more on this below, under MOH) - if you’re in AU or NZ, buy a copy of Australian Standard AS/NZS 4263:2003 – “Interactive voice response systems – User-interface – Dual tone multi frequency (DTMF) signalling”. It explains how you should phrase your Auto-Attendants and RGS messages, how many entries per level, how many levels, and that you should always say “zero” not “oh”. At $80 it’s a bit rich, but there’s a point to having standards, and I’m quite the consistency freak… - LISTEN to the recording when you’re done – and on a lousy set of tiny speakers, not your best concert monitors Lync Music On Hold – MOH I can’t find a published standard from Microsoft for the Client-side MOH. Ken suggests a CBR or VBR WMA, mono or stereo, with a bitrate of up to 192kbps. Don’t lose sight of the fact that you’re playing audio down a phone line, so there’s no point using your best broadcast-quality recordings. To reduce down-sampling effort and the resulting nasty scratchy audio artefacts I suggest a low, constant bit rate in mono. To prove this point, we took our reference track (Pet Shop Boys – Left To My Own Devices) from the CD and converted it 3 ways: - 44kHz sampling rate, 16bit, mono, saved as a 320kbps WMA. The file size was 19M! - 44kHz sampling rate, 16bit, mono, saved as a 32kbps WMA. This was under 2M. - 16kHz sampling rate, 16bit, mono, saved as a 16kbps WMA. This was just on 1M. When we placed our test call on hold with the 320k file, CPU use on my quad-core x64 machine spiked to 30%, but quickly settled down to around 5%. The audio at the other end (a nearby CX600) sounded quite good. (Our test is a little tainted because this is going to be using wideband audio). The 32kbps file resulted in a peak of 20% at the start, and the audio had a noticeable and displeasing level of artefacts. We weren’t surprised to see the 16k file had the lowest CPU demand, but WERE pleased that the audio sounded much better than the file with twice as many samples. I’m guessing that in down-sampling the file, Lync’s simply throwing away ‘redundant’ samples, and it shows. Once you’ve chosen and distributed the file, don’t forget to enable it for the users. EVEN IF YOU’RE NOT using a custom file, make sure you specify the location otherwise the users will be able to change it to their own – and that’s only going to end in laughter. Followed by tears: set-csclientpolicy -identity global -enableclientmusiconhold $true –MusicOnHoldAudioFile "C:\Program Files (x86)\Microsoft Lync\Media\FunkyMohMusicFile.wma" set-csclientpolicy -identity global -enableclientmusiconhold $true –MusicOnHoldAudioFile "C:\Program Files (x86)\Microsoft Lync\Media\DefaultHold.wma" Lync Call Park The Call Park application supports only Windows Media Audio (WMA) files for music on hold. The recommended format is Media Audio 9, 44 kHz, 16 bits, Mono, CBR, 32 kbps. The converted file plays over the phone only at 16 kHz, even if it was recorded at 44 kHz. Lync Announcements and Response Groups The Announcement and Response Group applications support Wave (WAV) and WMA files for music and announcements. Announcements have the same audio file requirements as response group workflows. You can use Windows Media audio (.wma) file format or wave (.wav) file format for unassigned number Announcements or for Response Group messages, on-hold music, or interactive voice response (IVR) questions. Call Park, Response Group, and Announcement applications require that Windows Media Format Runtime is installed on Front End Servers running Windows Server 2008 R2 and Windows Server 2008. The Windows Media Format Runtime is required for Windows Media Audio (WMA) files that these applications play for announcements and music. The Windows Media Format Runtime is installed automatically when you run Setup, but it might require you to restart the computer. Therefore, we recommend that you install the runtime before you run Setup. You can install Windows Media Format Runtime from the command line, as follows. %systemroot%\system32\dism.exe /online /add-package /packagepath:%windir%\servicing\Packages\Microsoft-Windows-Media-Format-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.mum /ignorecheck All wave files must meet the following requirements - 8-bit or 16-bit file - Linear pulse code modulation (LPCM), A-Law, or mu-Law format - Mono or stereo - 4MB or less For the best performance of wave files, a 16 kHz, mono, 16-bit Wave file is recommended. [Lync CHM. Topic Last Modified: 2010-11-08] Lync Dial-In Conferencing Greetings Lync Server 2010 does not support customisation of voice prompts and music for dial-in conferencing. However, if you have a strong business need that requires you to change the default audio files, see Microsoft Knowledge Base article 961177, “How to customize voice prompts or music files for dial-in audio conferencing in Microsoft Office Communications Server 2007 R2,” available at http://go.microsoft.com/fwlink/?LinkId=179684 Conferencing Attendant and Conferencing Announcement have the following requirements for music on hold, recorded name, and audio prompt files: - Windows Media Audio (WMA) file format - 16-bit mono - 48 kbps 2-pass CBR (constant bit rate) - Speech level at -24DB [Lync CHM. Topic Last Modified: 2011-05-15 and TechNet] Exchange 2007 UM – Greetings Exchange Server 2007 voice prompts need to be in WAV format and the following attributes: - Linear PCM (16 bit/sample) - 8 kilohertz (kHz) “If you do not use this specific format for the .wav file, an error will be generated stating that the source file is in an unsupported format. Although an error is generated, the error will not appear in Event Viewer.” (That would be too easy!) Here’s a useful link pointing out how to manage custom audio prompts in Exchange 2007. Exchange 2010 UM – Greetings Thankfully the same format as 2007! Here’s a link to customising greetings in Exchange 2010.
OPCFW_CODE
Works on iPhoneX, iPhone8, iPhone8 Plus and iPad Pro and all the devices below! This already updated to iOS11 hit game will make your users go crazy… in a good way Just take a look at the original The Impossible Game, one of the most super addictive games on the market. It’s a best selling Xbox Live Indie Game with a simple yet exciting gameplay. As on Flappy Bird, it seems extremely easy to play but be sure you will easily fail (it’s an impossible game!). You must smash and avoid objects with your square by jumping on a fast moving space. Game becomes more and more exciting as you go ahead. Watch the video here. The Impossible Game – Deluxe Edition is available on the App Store. Download it and take it for a spin: Check out one of our clients awesome reskin: https://itunes.apple.com/us/app/fudg-fosters-unexpectedly/id912229038?ls=1&mt=8 The Impossible Game – Deluxe Edition is much more than the original. We have added our renown Magnetization System that you’ve seen in Flappy Blueprint. * as of v.5.0.0 only AdMob is supported Setup and Reskin Rewritten the whole code to support iOS11, Swift 4, Xcode 9 and iPhoneX 4.3.3 (27.12.2016) – CRITICAL - Fixed “obj_msgSend” error (Files changed: OALSuspendHandler.m lines 156-157) 4.3.2 – (20.10.2016) – CRITICAL - Fixed “Going to Settings” not working in iOS10 (Files changed: Info.plist – removed ‘prefs’ URL Type; RootViewControllerInterface.h; RootViewControllerInterface.m; MainMenu.m) - Updated to Xcode Suggested Settings 4.3.1 – (16.09.2016) – CRITICAL - Fixed iCloud issue: user was forced to Sign in into iCloud to use the app. Now the user may choose to sign in into iCloud. (Files changed: RootViewVontrollerInterface.m -> lines 205 – 241) 4.3.0 – (08.06.2016) – CRITICAL - Updated iCloud functions from ‘fetchRecordWithID:’ to not use the recordID and use ‘performQuery’ instead (file changed: RootViewControllerInterface.m) - Added support for user to sign in later to iCloud (files changed: MainMenu.ccb,MainMenu.m, AppDelegate.m, RootViewControllerInterface.m, Info.plist) IMPORTANT: MAJOR CHANGES HAVE BEEN MADE !!! YOU WILL HAVE TO START YOUR UPDATE WITH THIS NEW VERSION !!! 4.2.0 – (14.05.2016) – CRITICAL - Added iCloud support to ensure restoring of the user process and to avoid rejection from the Apple Review Team because of Guideline 10.6: Your app uses intermediary currency to purchase items that function as non-consumable products but does not include a restore mechanism. Users restore transactions to maintain access to content that they’ve already purchased. SEE THE “How to Set Up iCloud” PDF TO SET UP YOUR CODE – (files changed: AppDelegate.m, GameOver.m, ShopBoosts.m, ShopCoins.m, RootViewControllerInterface.h/m; added files: TheImpossibleGame.entitlements, CloudKit.framework) 4.1.0 – (14.12.2015) – OPTIONAL - Added Fyber (files changed: Info.plist, Defaults.h, AppDelegate.m, TheImpossibleGameSettings.plist ; added files: fyber-sdk-lib, FyberCocos2dHelper.h/.m, CoreLocation.framework and CoreTelephony.framework) 4.0.1 – (05.12.2015) – CRITICAL - Disabled Bitcode (modified Project and Targe / Build Settings / Enable Bitcode) - Fixed elements going outside the screen on iPhone 6 Plus (files changed: all the .ccb files in SpriteBuilder; MainMenu.m; Shop.m; GameOver.m; Gameplay.m)
OPCFW_CODE
/** * */ package io.ddf; import io.ddf.content.Schema; import io.ddf.exception.DDFException; import java.io.Serializable; import java.lang.reflect.Array; import java.lang.reflect.ParameterizedType; /** * A one-dimensional array of values of the same type, e.g., Integer or Double or String. * <p/> * We implement a Vector as simply a reference to a column in a DDF. The DDF may have a single column, or multiple * columns. * <p/> * The column is referenced by name. * <p/> * TODO: Vector operations */ public class Vector<T> implements Serializable { /** * TODO: test this * * @return */ @SuppressWarnings("unchecked") protected Class<T> getParameterizedType() { Class<T> clazz = (Class<T>) ((ParameterizedType) this.getClass().getGenericSuperclass()) .getActualTypeArguments()[0]; return clazz; } /** * Instantiate a new Vector based on an existing DDF, given a column name. The column name is not verified for * correctness; any errors would only show up on actual usage. * * @param theDDF * @param theColumnName */ public Vector(DDF theDDF, String theColumnName) { this.initialize(theDDF, theColumnName); } /** * Instantiate a new Vector with the given T array. Uses the default engine. * * @param data * @param theColumnName * @throws DDFException */ public Vector(String theColumnName, T[] data) throws DDFException { this.initialize(theColumnName, data, null); } /** * Instantiate a new Vector with the given T array. Uses the default engine. * * @param data * @param theColumnName * @param engineName * @throws DDFException */ public Vector(String theColumnName, T[] data, String engineName) throws DDFException { this.initialize(theColumnName, data, engineName); } private void initialize(String name, T[] data, String engineName) throws DDFException { if (data == null || data.length == 0) throw new DDFException("Cannot initialize a null or zero-length Vector"); DDF newDDF = DDFManager.get(DDFManager.EngineType.fromString(engineName)) // .newDDF(null, (Object) data, new Class[] { Array.class, this.getParameterizedType() },name, // new Schema(name, String.format("%s %s", name, this.getParameterizedType().getSimpleName()))); this.initialize(newDDF, name); } private void initialize(DDF theDDF, String name) { this.setDDF(theDDF); this.setDDFColumnName(name); } /** * The DDF that contains this vector */ private transient DDF mDDF; /** * The name of the DDF column we are pointing to */ private String mDDFColumnName; /** * @return the mDDF */ public DDF getDDF() { return mDDF; } /** * @param mDDF the mDDF to set */ public void setDDF(DDF mDDF) { this.mDDF = mDDF; } /** * @return the mDDFColumnName */ public String getDDFColumnName() { return mDDFColumnName; } /** * @param mDDFColumnName the mDDFColumnName to set */ public void setDDFColumnName(String mDDFColumnName) { this.mDDFColumnName = mDDFColumnName; } // @SuppressWarnings("unchecked") // public Iterator<T> iterator() { // return (Iterator<T>) this.getDDF().getElementIterator(this.getDDFColumnName()); // } }
STACK_EDU
package Model; import java.util.ArrayDeque; import java.util.Deque; import java.util.Iterator; import Calculatrice.App; /** * Classe abstraite pour les pile de valeur * @param <T> Type de donnée contenu dans la pile */ public abstract class Stack<T> implements StackBehavior<T>{ /** * Pile de valeur */ private final Deque<T> STACK; /** * Pile de l'historique */ private final Deque<String> HIST; /** * Pile de valeur et de variable */ private final VariableList<T> VARIABLE_LIST; /** * Constructeur de la pile */ protected Stack() { STACK = new ArrayDeque<>(); HIST = new ArrayDeque<>(); VARIABLE_LIST = new VariableList<>(); } /** * Vide la pile de valeur */ public final void clear(){ this.STACK.clear(); this.VARIABLE_LIST.clear(); this.HIST.clear(); } /** * Ajoute une valeur dans la pile * @param element Valeur ajoutée */ public final void push(T element){ this.STACK.push(element); } /** * Ajoute une valeur dans la pile HIST * @param s Valeur ajoutée */ public final void pushHist(String s){ this.HIST.push(s); } /** * La taille de la pile * @return Un entier représentant la taille */ public final int size(){ return this.STACK.size(); } /** * La taille de l'historique * @return Un entier représentant la taille de l'historique */ public final int sizeHisto(){ return this.HIST.size(); } /** * Retourne l'élément au sommet de la pile * @return Un élément au sommet de la pile */ public final T peek(){ return this.STACK.peek(); } /** * Retire et retourne l'élément au sommet de la pile * @return Un élément au sommet de la pile */ public final T pop(){ return this.STACK.pop(); } /** * Affichage de la pile */ public final void print(){ Iterator<T> value = this.STACK.iterator(); int count = this.STACK.size()-1; while (value.hasNext()) { System.out.println("\t pile("+count+"): " + value.next()); count--; } } /** * Affiche la liste des variables */ public final void printVariable(){this.VARIABLE_LIST.print();} /** * Obtient la valeur d'une variable * @param s la Variable * @return Une valeur correspondant a la variable */ public final T getValueFromVariable(String s){ return this.VARIABLE_LIST.getValueFromVariable(s); } /** * Retourne true si variable est dans la liste des variables false sinon * @param variable la Variable * @return Retourne true si variable est dans la liste des variables false sinon */ public final boolean varContain(String variable){ try{ if(this.VARIABLE_LIST.getValueFromVariable(variable) != null){ return true; } } catch(Exception e){ return false; } return false; } /** * Crée une variable avec sa valeur attribuée ou la remplace * @param var nom de la variable * @param value Valeur de la variable */ public final void replaceAddValue(String var, T value) { this.VARIABLE_LIST.replaceAddValue(var, value); } //========================================================== //============FINALEMENT INUTILISE========================== /* public final Iterator<T> iterator(){ return this.STACK.iterator(); } public final Iterator<T> descendingIterator(){ return this.STACK.descendingIterator(); } public final Iterator<String> iteratorHist(){ return this.HIST.iterator(); } */ //========================================================== //========================================================== /** * Crée un interator à la fin de HIST * @return l'iterator de HIST */ public final Iterator<String> descendingIteratorHist(){ return this.HIST.descendingIterator(); } /** * verifie si s est de la forme (x) * @param s Une chaîne de caractère * @return true si s est de la forme (x) */ public final String containsP(String s){ if(s.contains("(") && s.contains(")")) { if (s.charAt(0) == '(') { String r = s.substring(1); if (r.charAt(r.length() - 1) == ')') { return r.substring(0, r.length() - 1); } } } return "Error"; } /** * Retourne la valeur de la valeur de l'index pile(x) de STACk * @param s la Variable * @return la valeur de la valeur de l'index pile(x) de STACK */ public final String pile(String s) { String res = s.replace("pile",""); String res3 = containsP(res); if (res3.equals("") || !App.containOnlyNumber(res3)){ System.out.println("\t erreur: " + s + " n'existe pas " ); return s; } if(Integer.parseInt(res3) > this.STACK.size()-1 || (-1 * Integer.parseInt(res3)) > (this.STACK.size() + 1) ){ System.out.println("\t erreur: " + s + " n'existe pas " ); return s; } if (Integer.parseInt(res3) < 0) { Iterator<T> iterator = this.STACK.iterator(); int i = 0; while (i != (Integer.parseInt(res3) * (-1) - 1)) { iterator.next(); i++; } return String.valueOf(iterator.next()); } if(Integer.parseInt(res3) >= 0 ){ Iterator<T> iterator = this.STACK.descendingIterator(); int i = 0; while (i != Integer.parseInt(res3)) { iterator.next(); i++; } return String.valueOf(iterator.next()); } return "ERROR"; } /** * Retourne la valeur de la valeur de l'index hist(x) de HIST * @param s la Variable * @return la valeur de la valeur de l'index hist(x) de HIST */ public final String hist(String s) { String res = s.replace("hist",""); String res3 = containsP(res); if (res3.equals("") || ! App.containOnlyNumber(res3)){ System.out.println("\t erreur: " + s + " n'existe pas " ); return s; } if(Integer.parseInt(res3) > this.HIST.size()-1 || (-1 * Integer.parseInt(res3)) > (this.HIST.size() + 1) ){ System.out.println("\t erreur: " + s + " n'existe pas " ); return s; } if (Integer.parseInt(res3) < 0) { Iterator<String> iterator = this.HIST.iterator(); int i = 0; while (i != Integer.parseInt(res3) * (-1) -1) { iterator.next(); i++; } return String.valueOf(iterator.next()); } if(Integer.parseInt(res3) >= 0 ){ Iterator<String> iterator = this.HIST.descendingIterator(); int i = 0; while (i != (Integer.parseInt(res3))) { iterator.next(); i++; } return String.valueOf(iterator.next()); } return "ERROR"; } /** * Retourne un string de la forme x = STAcK.pop() * @param s la Variable * @return Retourne un string de la forme x = y */ public final String varE(String s){ if(this.STACK.size() == 0){ System.out.println("\t erreur la pile est vide"); return s; } String variable = s.replace("!",""); String value = String.valueOf(pop()); String trimmedValue = value.replace(" ",""); return variable + " = " + trimmedValue; } /** * Renvoie la valeur de s dans la list des Variables * @param s la Variable * @return Renvoie la valeur de s dans la list des Variables */ public final String varI(String s){ String variable = s.replace("?",""); try{ if(this.VARIABLE_LIST.getValueFromVariable(variable) != null){ String value = String.valueOf(this.VARIABLE_LIST.getValueFromVariable(variable)); return value.replace(" ",""); } } catch(Exception e){ System.out.println("\t "+ variable +" n'a pas de valeur"); return s; } return s; } /** * Appel la bonne fonction selon si s contient "hist" ou "pile" * @param s la Variable * @return Renvoie un string de hist(s) ou pile(s) */ public final String hp (String s){ if (s.contains("hist")){ s = hist(s); } if(s.contains("pile")){ s = String.valueOf(pile(s)); } return s; } /** * Appel la bonne fonction selon si s contient "!" ou "?" * @param s la Variable * @return Renvoie un string de varE(s) ou varI(s) */ public final String var (String s){ if (s.contains("!")){ s = varE(s); } if (s.contains("?")){ s = varI(s); } return s; } }
STACK_EDU
Simplanova team attended a workshop in Microsoft Lithuania office on May 24th in Vilnius, Lithuania. We would like to share the most interesting information we have heard about Microsoft Dynamics 365 Business Central. - Dynamics 365 BC localization availability. - Reports. Microsoft 365 Business Central will have less standard reports (300+). Other reports will be in PowerBI which will be available for D365BC users for FREE. - Licensing. There will be three different licensing options: cloud (available from April 2nd, 2018), on-premise and hosted. Microsoft will be offering a Dynamics 365 Business Central hosted in a private cloud on Azure. Licensing conditions of privately hosted version will be different than the on-premise version. What’s more: cloud version will have more features, especially in integrations, than the on-premise version. A lot more details were promissed to be presented in Microsoft Inspire partner event on July 15-19 this summer in Las Vegas, USA. Do not miss it! Simplanova team will be there, so we would be happy to meet. - W1 version availability. W1 version of D365BC cloud will be released on the 1st of July, 2018 and will be available on demand only. What does that mean for you? Only a selected number of countries will be able to use W1 version. Microsoft will release Worldwide version in waves for other countries, which are not on the list (see image above). If you’re not on the list – be patient or go to Microsoft with a business case. - User Discounts. Users will be offered a 40% discount for NAV transition to Dynamics 365 BC. - Unsuported NAV version users. 52% of all NAV users from total customer base are still using older versions than 2015. These numbers will be higher very soon because in 2020 NAV 2015 will not be supported. - WEB service. Everything what is published as web service in Dynamics 365 BC is available in CDS (Common Data Service). - WEB client personalization. Choose, move, add fields to page, resize and etc. all personalization process is intuitive and easy to make. - GDPR. To meet the GDPR requirements in Dynamics 365 BC you need to open all tables and all fields in the program and choose date sensitivity mode. When you first do this the data sensitivity mode will be set up automatically by Microsoft and does not meet GDPR requirements. Client needs to check information with the partner and clasify the information according to the client will. Nobody can come directly to table in Dynamics 365 BC and take data from there. This function meets GDPR requirements. More documentation about GDPR can be found here: https://servicetrust.microsoft.com/ViewPage/TrustDocuments You can read more about GDPR, NAV Data Encryption in Roberto Stefanetti NAV Blog here. - Image analyzer. If you attach images to the contact card, system automatically detects the age and gender of the contact. This feature can be enabled by downloading Image Analyzer from app source for FREE. If you run Image Analyzer in Items, it can recognize item category, type or condition. - Cortana intelligence as Azure machine learning. Microsoft is trying to use artificial intelligence in Dynamics 365 BC also. For example system can predict cash flow information from previous (historical) data. - How to configure Cash flow machine learning? Open cash flow setup. Choose Cortana Intelligence and put API url. How to get API URL? Open Cortana Intelligence gallery – open the studio – press run – deploy the web service – then copy API url you need. You can use Azure Machine learning gallery to create your own Machine Learning model for predictions. - Preconfigured excel reports. Now there are only 7 available excel reports in Dynamics 365 BC. Report preview is now available in Dynamics 365 BC with copy, download, print features. - Visual Studio and App – Designer. When you save your extension in Dynamics 365 BC you can download the source file and it opens in Visual Studio. You can continue developing it. Visual Studio and App – Designer are connected, for example you can use Visual Studio code and preview what you did in App Designer or vise versus. - Number ranges to create extensions in Dynamics 365 BC. We are sharing this table from Kauffmann blog. Simplanova is specializing in Dynamics NAV upgrades and offers full service of customizations migration to Extensions or Dynamics NAV Extensions Training Course. If you have the NAV Upgrade project with constant migration to Extensions, fill-in the form below and we will provide you with a full solution for upgrading to Dynamics 365 Business Central.
OPCFW_CODE
To answer the question regarding random hardware failure rates, AMD products need to be analyzed for random hardware failure susceptibility and hardware failure rates. For components to qualify for functional safety application, there is a requirement for the system to detect a failure. This detection method can be implemented by the system integrator (for example, using a redundant device) or can be implemented in the device (for example, parity) by designers. The system integrator (customer) is required to analyze the quality of the diagnostic elements with respect to their ability to detect and report a fault. For business reasons, integrating diagnostics on chip and the associated analysis provides value to customers. This value and the associated analysis drives datapath analysis. For the most part, AMD products are designed to cover multiple use cases or out of context. In all applications, their intended use case is processing data either by hardware implemented datapaths or by using instructions to drive CPUs to process data and make decisions. The key to datapath analysis is understanding that data processing can be broken down into these four actions or first principles: - Data transformation - Data analysis - Data transportation - Data storage The IP designer uses these four first principles to determine what diagnostic is required to detect unintended or dangerous operation of the component for functional safety applications. In the case of data transformation or data analysis (for example, a floating-point operation fails or a compare zero fails), if the failure modes are transient in nature, then temporal diversity, that is, executing the operation twice using the same hardware and comparing the results, provides acceptable diagnostic coverage. If the failure modes are permanent in nature, then redundancy or executing the same operation using different hardware and comparing the result, is the required diagnostic. In the case of data transportation where a data packet is corrupted when moving from one location to another, for both transient and permanent failure modes, check bits are used to validate the data packet. There are numerous check bit techniques from cyclic redundancy check to forward error correction where the error in a data packet can be detected and, in some cases, corrected. In the case of data storage where data is at rest, devices holding data are exposed to single event upsets for a much longer period of time. This exposure is a unique problem in data processing. For both transient and permanent errors check, bits are used to validate the correct address of that data and sometimes to correct the stored data. For transient errors, patrolling data (reading, correcting, and updating) is used to mitigate error accumulation. Datapath analysis is valuable because the designer does not need to have in-depth knowledge of the functional safety standard to generate the required dataset used by the system integrator. The focus of the designer moves from concerns regarding standards implementation issues to functional design plus diagnostic coverage. This systematic approach is used to break down a design into a set of elements (buses and functions), based on the first principles, which follows the flow of data from top down recursively through a hierarchy.
OPCFW_CODE
What is Software Testing? is quite interesting question. This term software testing getting boom now a days. Many of you might have these questions like what is software testing? , how do they do it? Software testing is really needed? , how they test the software? And many more. If you googled what is software testing, you might have seen many statement and many definitions like its test technique, its black box, its white box, it’s a process which check completeness,correctness, quality and security etc. these statements,these definitions will not clear the concept of the software testing, especially for those people who want to choose software testing as a career. So, in this article I will try to clear the concept of software testing, how do they do it and if you are planning to start the career as a software tester. After reading this article you will get clear idea of software testing. Software testing is done to ensure that all the functionality and the features of the software is working as per the clients requirements in simple word software testing is an activity to check that the software is bug free. For example, there is a person who want its own e-commerce site like amazon, eBay etc. So he will contact any software company and he will explains his all the requirement like he wants customer login, customer can add their favorite product to cart, customer should able to pay via online banking, credit debit card etc.To the software company. So Software Company will write down all clients requirement and according to that requirement Software developer develop the e-commerce site. Then that developed software comes to Software Tester who test all the functionalities and feature means software testers ensure that all the functions and feature of software is working fine with all the possible conditions and all the requirement which is given by the client is included in software and working properly. But the main question is how does the Software Tester test the software? There are different technique, different ways to test the software. Basically Testing starts with understanding the software requirement. First tester understand all the requirement of the software and its output. Then he writes the test cases to test each requirement. For example, suppose one of the requirement of client is, he wants that the customer should able to add product to cart or bucket. So to test this functionality of software tester will write following test cases. - Verify ability of user to add the product to cart. - Verify the ability of the user to add multiple product to the cart/bucket. - Verify the ability of the user to add different product from different categories. - Verify the description, price and quantity of added product in the cart - User should not be able to add duplicate product in the cart - User should able to remove the product from cart - User should able the increase quantity of the cart product - And many more Just imagine to test a single add to cart functionality testers have to write so many test cases. It will help to find all major bugs/ errors present in the software. You might have one more question in your mind that do software testers have to write test cases all the time. The answer is, it depends on the testing technique they are using. In many testing techniques they have to write the test cases but in Adhoc Testing testers do not need to write the test cases.but to use Adhoc Testing technique tester should have lots of knowledge about the software. This type of testing technique is useful before the releasing of the software.There are almost 150+ software testing techniques present. There are lots of example available in history which shows the important of proper software testing. Software bugs can cause money loss and even loss of life. some of the incidence is shown below. NASA’s Mars Climate Orbiter was lost due to an issue in interfaces. The communication with the spacecraft was lost as the spacecraft went into orbital insertion. The issue was Ground based software was producing data in the units of pound-seconds instead of producing data in newton-seconds. The cost of the mission was almost $330 million. This is very huge loss of money which could have been prevented if they had found this bug during testing phase. There are many such a incident happened , one of those is once Chine Airlines Airbus A300 had crashed on 26 April 1994 due to the software bug which killed 264 peoples present on the plane. Such an incident shows us that how important Software Testing is. I Hope You liked my article.
OPCFW_CODE
St. Paul’s School is pleased to announce that it has been recognised as an Apple Distinguished School for 2021–2024 for its innovative and consistent use of Apple technology in learning and teaching. We believe that technology brings unmeasurable benefits to pupils as we prepare them to be global citizens and 21st century learners. We are proud to have a digital learning strategy through which technology is used to support our curricular goals. Coding at St. Paul's From the early years, pupils start using bee bots to learn commands and how to apply them in everyday life. They also work on MIT’s Scratch program to build games, animations and quizzes using blocks. Pupils finish Prep 2 already having a good understanding of the logic behind programming and are able to use commands. They then start to work with functions and loops in two ludic environments called Tynker and The Foos. These form part of the 'Get Started with Code' programme in 'Everyone Can Code' by Apple. Pupils use their iPad to work on both apps and use a learning portfolio to register their progress and receive teacher’s feedback. This continues in the senior years when they move on to use the iPad app Swift Playgrounds to work on the more advanced Learn to Code part of the Everyone Can Code programme. Pupils start to use 'SWIFT' programming language instead of blocks, and are now learning much more complex concepts, but still in a ludic and fun environment. Pupils spend their time in lessons engaged in the activities. They are challenged to solve an issue using computational thinking and the many concepts they have learned combined. Learning visual programming and the use of iPad Based on Apple Education's "Get Started with Code 1 and 2" book, our pupils were able to understand the language of computers and develop their programming skills. In class, they learned concepts such as sequencing, debugging, functions and conditionals. These are practiced both plugged and unplugged. During the process, they were able to reflect, share ideas and solve problems. Finally, learners performed exercises on the code.org website using their iPad, in which they developed their own programmes and games. We use pupil's Swift knowledge to go further, and we learn to compare this to other programming languages such as python and scratch. Programming project with CodeSpark Academy During the Computing lessons, pupils explored programming concepts such as loops, events, conditionals, variables and boolean logic. In order to practice with their iPad, pupils used the CodeSpark Academy app to develop electronic games. During the process, they used game design strategies and drafted, designed, tested the game to correct possible errors, and finally shared their creations with classmates. Coding with Swift Playground During Computing lessons, students learn about programming using the Swift Playground language. This project started two years ago with the Learn to Code 1 book. The class worked both individually and collaboratively to pass the various application levels. In both situations, learners’ motivation, engagement and commitment were fundamental for the project's success. Byte became an honorary member of the computing lessons and it also helped us when we started comparing Swift language to other programming languages. Swift app and the use of drone Parrot As pupils finish book 1 of Learn to Code, we introduce them to the Parrot drone to explore their knowledge of the Swift language. Our aim is to have pupils using what they learned to control the drone. In the beginning, they follow the instructions of the challenges proposed by the app and thereafter some pupils tried to create their own challenge, not always achieving the required result. This was a very rich learning moment because they had to find the error in their code and correct it. The use of Answers app Learning different ways to represent an algorithm, our pupils had a challenge that was divided into three stages. The first one was to write pseudocode; the second was to use a flowchart to illustrate the digit, and the third, and most important for computer classes, was to transfer the learned content into a programming language. For this, students used the Answers app, which is part of the Learn to Code programme. Pupils were delighted to see their pseudocode become a programme. Learners were also encouraged to explore the app and expand their knowledge by building their own code. Storytelling using the Photon Edu While working on the topic The Stories We Tell, the PP3 pupils used the Photon Robot to retell one of the stories they had worked on. The children used an iPod Touch and the application, Photon EDU, to practise programming the Photon robot and later built a story map out of 3-D materials on the ground. They then programmed the robot to visit each part of the story and retold the story to their classmates. Green Screen Project with 5-6 years old pupils While working on the topic Living Things, the children had the opportunity to research and record their own documentary-style projects. This group chose the emperor penguin and researched and recorded everything independently. They used the iPad and the application Green Screen by Do Ink to include a background that was appropriate for the artic setting. Book Creator App Pupils in Prep 5 had to create a narrative based on the theme of freedom. This was also the theme of the topic and they gained much of their inspiration from the core text Rooftoppers, by Katherine Rundell. Having written their texts, pupils used their iPad and the Book Creator application to display and share their work. This was also joined with an autobiography that the children wrote in Portuguese as they were reading an autobiography about Malala Yousafzai. In the process, they developed their book design skills, thinking creatively about how to capture and keep their reader’s attention throughout the texts they wrote. I like to have the iPad because I’m able to research when doing a research activity (Junior School pupil) Strip Designer App When learning how to use the features of speech in narrative, pupils formed groups to create dramatic still-frames to capture various moments from a chapter of The Iron Man. Using pictures of their still-frames, pupils used their iPad and the Strip Designer App to add speech and thought bubbles to capture the characters' dialogue and thoughts. This strip was used in the following lesson to write a short narrative, focusing on the features of speech. iMovie and GarageBand skills In PSHE (personal, social, health and economic) lessons, our pupils develop the knowledge, skills and attributes they need to manage their lives, now and in the future. During the pandemic, in one PSHE lessons, our pupils were challenged to create a video to engage our community and keep everyone connected and positive despite the challenging times. They created the concept, the plot, recorded it and edited the video using their devices and the iMovie app. In the Music Department, pupils use GarageBand to create music or podcasts. I love using iMovie to edit my videos and projects and use other apps to connect with friends and teachers (Junior School pupil)
OPCFW_CODE
Altcademy.com is an online bootcamp teaching software development and other computer-related topics. The program is designed for people with no prior experience in the field and offers a variety of tracks to choose from, including front-end web development, back-end web development, iOS development, and Android development. The platform provides users with the opportunity to learn new skills in a convenient, affordable, and stress-free environment. Altcademy.com also offers certification for users who complete their courses successfully. At Altcademy, coding experts strive to provide the best coding education possible in order to help individuals succeed in the tech industry. Our online Altcademy courses are tailored to be most relevant to today’s market, and bootcamp graduates are held to the highest standards. The reps of the courses work closely with the business startup partners and advisors from Silicon Valley to ensure that their curriculum is current and of the highest quality. The only measure of success is the success of online students, and they do everything they can to help them reach their full potential in python, data science, react native, or web development in general. If you’re looking for an affordable and intensive way to learn coding, Altcademy is the place for you. Their bootcamp prices start at just $299, making their courses some of the most affordable in the industry. In addition to their low prices, Altcademy offers a wide range of courses that cover a variety of programming languages. Whether you’re a beginner who wants to learn how to code or an experienced developer who wants to add a new language to your repertoire, Altcademy has a course for you. Altcademy bootcamps are significantly less expensive than the average coding bootcamp, with Altcademy cost ranging from $1,090 to $1,670. In 2021, the average Altcademy price of a coding bootcamp was about $13,580. The Altcademy bootcamp is an intensive three-month program that helps students learn the ins and outs of web development. Graduates of the bootcamp are equipped with the skills they need to start a career in web development. Since the program is so comprehensive, graduates are able to hit the ground running in their new careers. They know how to code, design, and build websites, and they have the confidence to take on any challenge. If you’re looking to jumpstart your career in web development, then the Altcademy bootcamp is the perfect option for you. The program is intense but rewarding, and it will give you the skills you need to succeed in this rapidly growing industry, including finding job in the companies like Slack or Rouge. Altcademy Cities to Find Expert Courses Altcademy bootcamp cities are some of the most exciting and innovative in the world. With locations in San Francisco, Buenos Aires, New York, Sao Paulo and London, students have access to some of the best resources and brightest minds in the industry. Whether you choose Altcademy Sao Paulo or any other location, the curriculum is constantly updated to reflect the latest advancements in the field, and the team of instructors is comprised of seasoned professionals who are passionate about teaching. Altcademy Offered Programs The Altcademy coding bootcamp offers a variety of programs that can equip you with the skills you need to be successful in the tech industry. Their 12-week program is designed to help students learn the basics of web development, and their 8-week program is perfect for students who want to learn how to code or become a successful full stack web developer. In addition to its full-time programs, Altcademy also offers part-time and online programs that are perfect for busy professionals. How Does It Work? Pick Altcademy online program. There are many different online programs that can help you become a professional coder. These programs are designed by industry experts and structured to provide you with all the knowledge and skills you need to succeed in this field. - The course curriculum provides you with the tools to understand fundamental concepts and apply them in practical ways. The content is constantly updated, so you can keep learning new things throughout your lifetime. - Enjoy mentoring. Altcademy web development, Altcademy Python, Altcademy data science bootcamp, Altcademy GitHub, or any other course offers private 1-on-1 office hours, instant ask and chat, and a Q&A database to help you sustain a healthy learning pace. Our response rates are fast so you can get the help you need in time. - Get career guidance. When you’re done with the studying part, the reps of the company will help you find a job that’s the best fit for you by teaching you how to research the job market, review and improve your resume, use social media to network with potential employers, and prepare for technical interviews. Scholarships Offered by Altcademy Altcademy is excited to offer bootcamp scholarships for Data Science, Web Development, and iOS Development courses. This is a fantastic opportunity to learn new programming skills and become a part of the Altcademy community. The scholarships are open to anyone who is interested in learning to code, regardless of experience level. If you are accepted into the bootcamp, they will provide you with a scholarship that covers the cost of tuition. Job Guarantee from Altcademy At Altcademy, they take their job guarantee very seriously. Altcademy for business or student guarantees 100% satisfaction with your experience there, and that you feel confident in your new skills whether you mastered Altcademy full stack, GitHub, web analytics, or any other issue. If for any reason you are not happy with your experience, they will refund your tuition. They want you to be able to walk away from Altcademy feeling confident in your new abilities, and knowing that you have a job waiting for you. Customer Care Support Based on the Reddit Altcademy reviews, one of the great things about Altcademy is their customer support. They have a team of experts who are available 24/7 to help students with any questions or problems they may have. The team is always quick to respond and is more than happy to help students resolve any issue they may have. If you’re looking for an affordable and high-quality education in the cryptocurrency and blockchain space, Altcademy is definitely worth considering. Their customer support team is second to none, and they will do everything they can to help you succeed. Customer Reviews of Altcademy Bootcamp The Altcademy bootcamp customer feedback have been unanimously positive. Students appreciate the one-on-one time with their mentor, the intimate class size, and the focus on real-world projects. Many students have found new career opportunities as a result of the bootcamp, and all would recommend it to others. The Altcademy bootcamp is an intensive, three-month course that covers everything from the basics of coding to developing and launching a website. The program is designed for people with little or no experience in web development, and it provides students with all the skills they need to start their own online business. In addition to teaching coding, the bootcamp also covers topics like search engine optimization (SEO), social media marketing, and email marketing. Students who complete the program will have a strong foundation in digital marketing and will be ready to start their own online business. If you’re interested in learning more about the Altcademy bootcamp, do not hesitate to visit the website today. What is Altcademy acceptance rate? AltSchool’s bootcamp has a high acceptance rate of 99% due to the rigorous and selective process that is undertaken by the admissions team. The admissions team looks for qualities such as grit, curiosity, and creativity in their applicants. Is Altcademy worth it? Altcademy bootcamp is a great way to learn new coding skills in a relatively short period of time. It is definitely worth it. The instructors are knowledgeable and the curriculum is well-designed. The best part is that you can apply what you learned in class immediately to your job. Is Altcademy legit? Altcademy is a top-tier bootcamp with an excellent reputation. It is one of the most respected coding schools in the world. Are there any refunds? If you are not satisfied with the bootcamp for any reason, you will receive a full refund. This policy is in place to ensure that you are 100% satisfied with your experience at Altcademy. Are there any guarantees? At Altcademy, they take their job training students very seriously. That’s why they offer a money-back guarantee on their bootcamp courses. If you don’t find a job within six months of graduating, they’ll refund your entire tuition. They’re that confident in their ability to help you start your career in tech.
OPCFW_CODE
You might be interested to read about ssdeep, a "context-triggered piecewise hash" (CTPH) used as a content-agnostic form of fuzzy hashing. Ssdeep builds hashes of pieces of a file so that similarity can be determined; say one character changes between two otherwise identical files. The checksum of the changed file parts will differ, but the checksums of all other parts will not, so the files are deemed to be highly similar. You're essentially trying to do this but without intending to use it to measure file similarity. I am under the impression that as long as you keep entire hashes (do not truncate them) and the segments you hash are large enough to make collisions rare (maybe 512 bytes?) then you will have a sufficient level of data integrity. It is theoretically possible that you'll have more integrity since you have a longer hash length, but there are so many areas that you have to be especially careful about in your implementation that I would not at all recommend this. That said, you specified three sha256 hashes, including one for the whole file. As long as you're matching all three, this should be, at its weakest, as good as sha256 alone. It's likely stronger still, but you'll (probably) be just as susceptible as a single sha256 when it comes to theoretical SHA-2 vulnerabilities, so you might as well go for another algorithm such as SHA-3 or even (since you still have the full file's sha256) something faster like MD5. You can also consider storing the byte size. 256 bit SHA-2 should be sufficient for anything unless you're worried about the distant future. If that's the case, you can't take anything for granted, but I'd go with SHA2-512 and SHA3-512 and the exact file size. If you just want more speed and aren't concerned with being attacked (i.e. you're just worried about data integrity from faulty hardware and/or crappy networks), you can start with just the file size, then calculate MD5 and SHA1 concurrently (two separate processes, one read of the file). I still wouldn't mess with cutting up a file unless you wanted to use ssdeep (which appears to use MD5 for its pieces by default). Perhaps an attractive balance of speed and integrity could be to check the file size, then the MD5 of the first 5MB (or the whole file if under 5MB), then the real integrity check, e.g. sha256 or sha3-512. It should* be harder to create a collision and faster to detect failures (stop on the first failure) while being only negligibly slower than just the last check (it took me 0.003s to calculate the MD5 hash of a random 5MB test file). (* I'm expert in neither cryptoanalysis nor cryptographic checksums: this is not authoritative.) I'm tempted to say that if an attack isn't suspected, you'd be fine with both file size and MD5 (you'd probably be fine with just MD5, though file size would allow you to fail faster and provide slightly better data integrity).
OPCFW_CODE
Tsz Wo \ 2013-10-25, 01:19 I agree with what Nicholas is saying. Feature branch merge votes are similar to traditional review-commit process. That means the code should be ready, and pass the Jenkins build tests. Also similar to regular patches where one describes what changes the patch brings, having an updated design document (with change history for the updates made to the design) helps people understand the design. Updated user document for the feature helps end users to understand how the feature works and helps them participate in testing. Finally, as expected by every patch, we should have unit tests and also details of manual tests done (with test plan for a feature). This helps people participate in the voting/review more effectively. There can be exceptions to the above list and some of the work could be deferred. That could be captured in the jira, with the plan of how that work gets done later. In such cases in the past we had deferred merging the feature from trunk to a release branch until the work was completed in trunk. Granted some of the feature readiness activity can be done during voting. But I fail to understand why expediting a feature that takes months to build should try to optimize a week. Why not finish the requirements we have for every patch for feature branch also? On Thu, Oct 24, 2013 at 6:19 PM, Tsz Wo (Nicholas), Sze < [EMAIL PROTECTED]> wrote: > No. In the past, committers would merge a branch once the merge vote had > been passed even there were problems in the branch. Below is my > understanding of merge vote. > Feature branch merge votes is the same as the traditional code > review-commit process except that it requires three +1's and it happens in > the mailing list. For review-commit, we +1 on the patch. If we think that > the patch needs to be changed, we should ask the contributor posting a new > patch before +1. This is not strictly enforced. In some cases, committers > may say something like "+1 once X and Y have been fixed". In some worse > cases, a committer may has committed a patch without +1 and then his friend > committer will say "I mean +1 by my previous comment but I forgot to post > it. Sorry, here is my +1." > For merge vote, it should be considered that a big patch is generated by > the diff from the branch over trunk. Then, committers vote on the big > patch in the mailing list. As review-commit, if the patch need to be > changed, committers should not +1 on it. Unfortunately, it is generally > hard to review big patches and it is relatively easy to sneak bad code in. > Both review-commit and merge vote are similar to voting on release > candidates -- we vote on the bits of the candidate but neither vote on an > idea nor a plan. (Of course, there are other types of vote for voting on a > > On Thursday, October 24, 2013 5:09 PM, Tsz Wo Sze <[EMAIL PROTECTED]> > > > No. In the past, committers would merge a branch once the merge vote > had been > > passed even there were problems in the branch. Below is my > understanding of > > merge vote. > > Feature branch merge votes is the same as the traditional code > > review-commit process except that it requires three +1's and it happens > > the mailing list. For review-commit, we +1 on the patch. If we think > > that the patch needs to be changed, we should ask the contributor > > posting a new patch before +1. This is not strictly enforced. In some > > cases, committers may say something like "+1 once X and Y have been > > fixed". In some worse cases, a committer may has committed a patch > > without +1 and then his friend committer will say "I mean +1 by my > > previous comment but I forgot to post it. Sorry, here is my +1." > > For merge vote, it should be considered that a big patch is > > generated by the diff from the branch over trunk. Then, committers vote > > the big patch in the mailing list. As review-commit, if the patch need NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You. Doug Cutting 2013-10-25, 17:18 Vinod Kumar Vavilapalli 2013-10-25, 17:56 Doug Cutting 2013-10-25, 19:04 Chris Nauroth 2013-11-07, 17:53 Chris Nauroth 2013-10-24, 20:40 Aaron T. Myers 2013-10-24, 21:07 Chris Nauroth 2013-10-24, 21:51 Doug Cutting 2013-10-24, 22:46 Aaron T. Myers 2013-10-25, 06:51
OPCFW_CODE
Mutt Over SSH, but What About Attachments? If you've proposed an article for Linux Journal and haven't heard back yet, one reason is sometimes our rejection letters get a little long. Here's an example. I'm going to have to reject your Mutt article. Yes, it's cool to be able to get to your mail on any server easily by ssh, and yes, it's cool to be able to specify the application of your choice to view MIME attachments. But what about combining the two? You can't expect people to stop sending you attachments just because you're out of town. The whole “Mutt plus viewers” concept breaks down when you're reading mail in your account on a remote server—maybe a really remote server separated from you by a slow line—and you get an attachment. This happens to me all the time, especially with images for articles. Your choices are basically save the image and scp it back to your desktop machine, bounce it to an alternate account that's set up to use a GUI mail client on localhost or run an X-based image viewer over the ssh connection. All three are dumb. The last is the easiest if your connection is fast, but you have to send the whole X protocol instead of only the compressed image, and it can be slow, especially for PDFs. So, how do you efficiently handle attachments in a remote Mutt? I really want to know, because the more photos I get, the more I think about this. I like the “Only give your email address to literary UNIX greybeards who prefer text and maybe ASCII art” method, but here's a second-best choice. Important: you should not try this unless you control and trust both the system where you're running Mutt and your local system. Do this and it's possible that a bad person on the system running Mutt will see your attachments. A better version would protect your attachments from hostile users at both ends. First, set up an ssh tunnel. I put this in my .ssh/config: ProtocolKeepAlives 30 Host mail Protocol 2 EscapeChar none ForwardX11 yes PasswordAuthentication no LocalForward 8088 localhost:8088 The important part is the LocalForward line; when I establish an ssh connection to the host “mail”, I get an ssh tunnel from localhost's port 8088 to port 8088 on mail. You also need to have ForwardX11 on, since this method does use X for one little thing. Second, which program do we know that can handle any MIME type we're interested in? Our Mozilla web browser. But it's sitting on our laps, back on the client. That's fine; we'll point it at http://localhost:8088/, which is tunneled to “mail” where the attachment is. So we've got Mozilla making a secure connection to the host where the attachment is, but don't we need to write a web server to serve up the attachment? Sure. Here's a script suitable for demonstrating the idea, but it will need a little work to be useful on multi-user systems. #!/bin/sh # spewtomoz.sh # You shouldn't install this dumb script for all users because it only # uses one pipe and one port. Have it pick a better name for the pipe, # and set the port from an environment variable. TEMP="/tmp/spewtomoz" PORT=8088 rm -f $TEMP mkfifo --mode=600 $TEMP # netcat is the fun part of this script. # -l: listen for an incoming connection # -q 1: wait 1s after EOF and quit # -s 127.0.0.1 only use the lo interface # -p $PORT use $PORT netcat -l -q 1 -s 127.0.0.1 -p $PORT < $TEMP &> /dev/null & # send the HTTP headers, followed by a blank line. echo "HTTP/1.1 200 OK" >> $TEMP echo -n "Content-type: " >> $TEMP file -bni $1 2> /dev/null >> $TEMP echo >> $TEMP # Get started sending the file... cat $1 >> $TEMP & # Wait a second and tell the user's Mozilla, wherever it is, to start # viewing the file. This works over the X protocol. # (the date is to blow the cache and may not be necessary) sleep 1 && gnome-moz-remote http://localhost:$PORT/`date +%s` # end spewtomoz.sh All that remains is to make Mutt use this script as the handler for MIME attachments. That means we're going to have to give Mutt an alternate mailcap file. Put this in your .muttrc file: And put this in ~/.mutt-mailcap text/*; spewtomoz.sh %s application/*; spewtomoz.sh %s image/*; spewtomoz.sh %s audio/*; spewtomoz.sh %s Now, when you restart Mutt and select an attachment, it will come up in your Mozilla browser, if Mozilla understands the attachment's MIME type; if not, Mozilla will start a helper application, such as Abiword or xpdf. I'm not totally satisfied with this method of handling attachments, and I haven't tried it over a very slow connection, but it's better than the alternatives I've seen. For example, if you get an interlaced image, it should display while loading, saving you some time. A good Mutt article would have to handle this issue. I would welcome any suggestions for a better way. Don Marti is editor in chief of Linux Journal.
OPCFW_CODE
promptForChanges should flush stdin before printing the prompt and Scanln During a long operation, it's possible that the user presses some keys accidentally. If promptForChanges doesn't flush the stdin buffers before printing the prompt and Scanln, it's quite possible that it reads the inputs before the user has even see the prompt and abort/proceed incorrectly, which is quite frustrating. should also audit the code for similar problems. Thank you @minux for pointing this out, I agree. I shall jump on this when I get home in about two hours or so. @minux sorry for the late reply. I actually worked on patching this up but realized that Golang at least from my experimentation doesn't expose methods for draining stdin unbuffered. The methods I could find were buffered so wouldn't cover the case before someone hit enter. I tried different forms and hacks then thought of doing it the C-style with select's FD_ISSET with a timeout or even getc with an fd that was dup'd using O_NONBLOCK, but I figured this is way too much hackery yet I could ask you a Golang core developer for help on how you solve. If you can would you mind helping out with this or giving some hints? Thank you. There is no portable way to do this. On Linux, you can use golang.org/x/sys/unix package to do that. For example. unix.Syscall(unix.SYS_IOCTL, 0, unix.TCFLSH, 0) to do that. (The first 0 is the file descriptor for stdin, assuming it's a tty.) The problem is, however, even on Unix, the constant is not consistently named. You can use cgo for this task, which will be portable across unix (I don't think windows allow you input when the program is not asking for input, so we don't need to consider windows) Or we can just use a real terminal handling package. A some demo (for unix only): package main /* #include <termios.h> #include <unistd.h> void flushtty() { tcflush(0, TCIFLUSH); } */ import "C" import ( "fmt" "time" "golang.org/x/sys/unix" ) func main() { fmt.Println("long operation...") time.Sleep(5 * time.Second) fmt.Print("Y/N? ") if false { unix.Syscall(unix.SYS_IOCTL, 0, unix.TCFLSH, 0) // linux only } else { C.flushtty() } var s string fmt.Scanln(&s) fmt.Printf("%q\n", s) } Aha nice. Thank you! I had also considered the termios solution and I have done such in C but never brought that idea over to Go. We aren't worrying too much about Windows, this project is mainly for the *NIXes at least for now. If you'd like to submit this in a PR, that'd be appreciated plus it gives you the contributor credit for the solution. If not, I understand how busy you are :) Otherwise the branch that I started for this is at https://github.com/odeke-em/drive/tree/flush-stdin-before-prompt, and the first cut at it was at https://github.com/odeke-em/drive/commit/dfb410958cb17ffff879869018cfee9dabab9ffc I'm happy to send a PR. do you like the cgo version? Cool, thanks. Yes in deed, I like the cgo version and it will do the job. Actually would you mind instead switching this to the real terminal handling version since this should also work for OSX? Am off to bed as it is past midnight here but I'll catch you in a bit. The cgo version is tested on Linux/Darwin/NetBSD and Solaris. Oh okay, for sure go ahead. Thanks for working on this @minux I don't know why this is "closed". This is still an issue in Go 1.5.1. Will it be fixed in 1.5.2? @akevinbailey What is the issue that you think should be fixed in Go?
GITHUB_ARCHIVE
The recent DCMI (Dublin Core Metadata Initiative) conference in Porto was collocated with the International Conference on Theory and Practice of Digital Libraries (TPDL), which gave the programme a rich mix of theory and practice. Many of the discussed topics had specific resonance with our current work at Jisc. Therefore, the conference was a great opportunity to discover the latest progress in these areas. The full conference proceedings are now available; some sessions are also documented with copies of presentation slides. In this blog post, we report on three sessions that focused on particularly relevant themes: identify management using identifiers, expressing rights and licenses in human and machine friendly ways, and data management planning. Image from DCMI website Expressing rights in research data Slides from the Special Session Delivered by Antoine Isaac, (Mark Matienzo, Michael Steidl) This special session – Lightweight rights modeling and linked data publication for online cultural heritage – offered key insights into how the cultural heritage sector deals with the human and machine readable expression of rights for digital resources. The parallels with the research data landscape are interesting. In the cultural heritage sector, the rights issue is complex and involves simplifying semantically and legally complex rights statements into something more intelligible and consistent, a process that has resulted in the excellent rightsstatements.org. Perhaps the opposite is true in research data management, where often data is published with an attached licence that may or may not be machine readable, but contains not much other information around the rights, rights holders or conditions of access around the data. Even if some of this information is unavailable or inapplicable, the idea of creating meaningful, human readable statements about the conditions of use for data also has its place. Work being carried out with Jisc by the University of Glasgow has gathered the perceptions of the UK academic sector about this challenging area, and is working on a set of outputs that can help both depositors and users of data better understand the opportunities and limitations offered by various licences. The attendance, engagement level, and standard of feedback being collected by the University of Glasgow from the three workshops so far clearly shows that this an area of interest and anxiety for many in the sector. Drawing inspiration from examples of innovation and establishment of best practice in a related sector seems to be appropriate first step. Image from Twitter @taxobob PIDs in Dublin Core Slides from the session Co-ordinated by Paul Walk This session had the aim of considering how to provide Persistent Identifiers in metadata that uses Dublin Core vocabulary. This topic is of particular interest to Jisc, as the consortium lead working with the UK ORCID consortium member institutions. Through hackdays and other activity, the need to include ORCID IDs in metadata exposed by institutional repositories to downstream services has been highlighted. For example the eprints Advance ORCID plugin, which was released in 2018, includes the ORCID ID in a number of metadata exports. To date there had not been a clear recommendation or community agreed convention, so a meeting to reach community consensus was very welcome. In the run up to the session the community was invited to submit use cases using Github. The submitted user stories can be viewed as issues. The requirements gathered in this way were then summed up in a paper prior to the session (also available in the Github repository). Jisc also submitted a user story describing requirements, based on the ORCID UK hackday community input. The session considered a number of solutions against the requirements, whilst striving to stay faithful to some basic principles important to DCMI (e.g. simplicity, semantic interoperability). The proposed solution that the session agreed on will now be discussed more widely, with a suggestion to take it to PIDapalooza 2019. A report from the session is also being shared. In a nutshell, the solution proposes white-space-separated multiple identifiers (preferably http URIs) included in an id attribute within existing Dublin Core elements. Image from DMP workshop collaborative notes document Machine-actionable Data Management Plans Documentation from the session Delivered by Tomasz Miksa (SBA Research) and João Cardoso (INESC-ID) A third interesting workshop focussed on the idea of machine-actionable Data Management Plans (DMPs) and was delivered by Tomasz Miksa (SBA Research) and João Cardoso (INESC-ID). It built on the RDA working group for DMP Common Standards. In the past few years, it has become a common funding requirement for researchers to report which data their projects create, how they address potential sensitivity issues (e.g. due to privacy or other confidentiality constraints), and how they plan to manage and maintain data. Services such as the DCC’s DMP Online have become popular because they help users to write DMPs which meet the specific requirements of various funders. However, the current implementation of DMPs as static documents also limits their utility – and possibly even contributes to their perception as just another administrative tasks. Instead, the vision presented at the workshop was that, in a world of Open Science, machine-actionable DMPs should become rich interactive resources, which help to facilitate and optimise data management throughout the entire research life cycle. Tomasz and João acknowledged that a human-centric narrative – and thus some form of document – will still be needed in that future. However, DMP documents would only be generated as a by-product of an advanced DMP system which could integrate directly with e.g. the data storage systems of institutions, reporting systems for finance/procurement offices, and the reporting back-ends of funders. What became clear during the workshop is that the implementation of DMPs depends on three interrelated pre-conditions: 1) a clear DMP policy (which formulates clear DMP requirements); 2) well-structured workflows (which reflect policy requirements and enable integrate with the system’s technical environment); and 3) a concise but comprehensive data model. With regards to the last two aspects, the workshop included two group exercises and discussions. In the first group exercise, we discussed the potentials of modelling automated workflows, which would guide researchers through the various stages of formulating a DMP. This includes for example the specification of the type of data, storage requirements, licence, metadata standards, and repository deposit (you can find a handout here). One of the great potentials identified by participants was that a machine-actionable DMP workflows could directly integrate with various other systems such as repositories, data analytics tools, and possibly even financial auditing systems. This could for example facilitate the selection of DMP-compliant repositories or data formats and potentially even produce data on the cost of storage. But these interesting use cases also pose substantive practical challenges: Naturally, the integration and interoperability with a variety of technical systems is one. A more basic question is how the automated workflows of machine-actionable DMPs would integrate with other processes, such as ethics approval procedures, at institutions. Today’s DMPs document decisions taken via such “traditional” processes. Accordingly, there is an open question on how these processes would work with machine-actionable DMP systems which actively steer decision-making – instead of just documenting it. The second group exercise focused on the proposed common data model for machine-actionable DMPs. The model aims to capture DMP-relevant data in five areas: Administrative Roles and Responsibilities; Data; Infrastructure; Security, Privacy, and Access Control; and Policies, Legal and Ethical Aspects. Workshop participants discussed various options to reuse existing vocabularies and entities, such as Dublin Core format options, as well as entire ontologies, such as the PROV-O ontology for provenance information. The main challenge, which requires ongoing work, is to identify and formulate general concepts that can reflect a variety of different use cases and cross-domain requirements. The documentation of access controls is one area where the data model could draw from the vocabulary used by RDSS, which defines five different access control levels: “open”, “safe-guarded” (requiring login), “restricted” (requiring submission of information on intended use), “controlled” (access control by dedicated body, such as data access committee), and “closed”. With Jisc ORCID services already in operation, and with RDSS ready to launch in November, it is vital that we understand the latest thinking in related areas. In particular, we seek to include such emerging trends in the technology and services that we offer. Particularly in relation to identifiers and machine-actionability of policy documents (such as DMPs) this work has international scope, covered through our involvement in forums and projects such as RDA and EOSC. Let us know in the comments how these different areas impact on your day to day work with researchers and policy and strategy planning at your institutions.
OPCFW_CODE
Got exactly the same problem two weeks ago on my laptop. Suddenly, the Aero desktop has gone. The problem was an automatic driver update which did not If you use a driver which fails in any way to identificate successfully as a WDDM-compatible graphics driver, Vista thinks, that there is no capability to display Aero. Installing the newest official version of the Catalyst driver definetly solves the problem. It complies with Aero (I assume your graphics card is able to display Aero content) and it does NO automatic update like some OK. That's all the technical stuff. You say you are a newbie - so I will tell you what to do to get it work in a simple way ;-) 1. Go to www.ati.com - Download newest Catalyst drivers 2. Uninstall your old drivers (Control Panel/Software/Ati Display Drivers - if there is no such entry, skip this step) 3. Install the downloaded drivers, reboot and redo the performance 4. If all went good - the score for your graphics card should now be around 2.0 (ATi Radeon 9600 - got that from your second screenshot -> chipset 5. If your score is still at 1.0, you may think about reinstalling your system because a non-WDDM-driver may still be present in Windows 6. If your score is slightly above 1.0 but Aero still doesn't work - you should think about getting a new graphics card (Radeon 9600 is the ultimate minimum to run Aero - so if you actually enjoy glass, you should get a Feel free to write back if there is any further problem ;-) "rock_lee_96" <email@example.com> schrieb im Newsbeitrag > Hi there, since about... 4 days I've switched from XP to Home Premium > 32bit and I must say I'm very proud. > Anyways I went to the microsoft site and saw the key combo "start+tab" > that makes the cool "Flip 3D" effect, but today I started up my PC and > the "Experiance Index" showed up unrated. And with that my Aero was > gone. No more glass FX no more nothing. It's all lost. > I pressed the "Refresh" button ontop of the "Experiance Index" windows > and even if the scan finishes, I'm still left unrated. > With that I can't activate the Aero, because even clicking on the theme > and pressing OK leaves it with the basic theme... > There is no way I can rate my experiance, because I have some devices > with drivers that CAN'T run on vista, and the installed drivers must be > put into compatibility mode for XP SP2... > Anyway, that shouldn't be the problem, I have a plugged in camera too, > but It didn't seem to bother the experiance index when I plugged it > Anyways, the drivers of my printer (that don't run on vista) aren't > working so well even in compatibility mode, so I may uninstall them, I > have other Print & Scan programs that work on vista, but If i uninstall > the drivers will it work? > I also tried to unplug all the devices that don't have a driver yet, > and refreshed the index but no effect-it's still a stupid Black & White > icon that says 1.0.... > Please help because I really want my Aero back
OPCFW_CODE
IMPROVING PERFORMANCE OF UNIT COMMITMENT AND ECONOMIC DISPATCH FORMULATIONS Modeling tools demand more efficient and accurate tools to support decision making for resource scheduling in large complex power systems. Market operation problems are solved in power systems to determine when to startup or shutdown generating units, and to decide how to dispatch these generating units to meet system demand and spinning reserve requirements. These decisions are optimized minimizing overall operations cost or maximizing social welfare while under generation constraints like production limits, ramping limits, minimum up times, minimum down times, and system constraints like line limits, voltage limits and angle limits. Generation scheduling problems are solved by Independent System Operators (ISO) in electricity markets, using market participants bids and offers to maximize social welfare. System operators ensure system reliability by utilizing Security Constrained optimization models, namely a unit commitment and economic dispatch model that is extended to include transmission network constraints. Simulating real world operations more accurately or efficiently can result in better decision making tools. Description of updates to AMES Significant changes were made to better represent a model of the wholesale power market. These changes will be released as a version update, in the form of AMES (V4.0)1 . The following is a summary of the description of the changes from AMES (V2.06). The full comprehensive list of changes in AMES (V4.0) is described as part of the documentation . 1. AMES (V2.06) employs a DCOPFJ algorithm to solve for optimal dispatches in a Day- Ahead Market (DAM). AMES (V4.0) incorporates a new DAM interface. This DAM interface allows switching between different solvers. The current solvers supported are: (a) Deterministic and Stochastic SCUC using Pyomo (b) Deterministic SCUC using PSST and Pyomo 2. AMES (V4.0) also incorporates hourly Real-Time Market (RTM) interface. This RTM interface allows switching between different solvers. The current solvers supported are: (a) Deterministic SCED using Pyomo (b) Deterministic SCED using PSST and Pyomo 3. AMES (V4.0) allows the user to define the following parameters (a) Rolling Horizon Period (b) Number of pieces in for a Piecewise Linear Cost Curve Other changes include an improved test case file format reader that includes support for reading of load scenarios, and improved visualization of input and output data. The DAM interface includes an operation that solves a Security Constrained Unit Commitment (SCUC) formulation of the power system. The detailed formulation is listed in and in the chapter above. The SCUC formulation described in the previous chapter is implemented in Pyomo . Pyomo is a Python-based, open-source optimization modeling language with a diverse set of optimization capabilities. This formulation can be solved as a stochastic or deterministic formulation using the direct Pyomo solver. The PSST solver interface currently supports deterministic formulations; stochastic formulations are planned to be supported in the future.
OPCFW_CODE
|Online Judge||Problem Set||Authors||Online Contests||User| In the mountains of Geodesia, geodes are found. These 'hollow' stones contain cavities with crystal formations around them. The beautifully colored crystals can be sold for a high price to people both inside and outside of Geodesia. The problem with geodes is that one cannot see from the outside of a given rock whether it is a geode or not. Out of every thousand rocks found in Geodesia only some are geodes. In order to find out whether a given rock is in fact a geode, one can try to consider its density where m represents its mass and V its volume. Since a geode contains empty space, a rock contaning one is expected to have a lower density than other rocks. If the density of a rock is too high, it would be a waste of time and effort to further investigate it. But how could one determine the density of each of the enormous number of rocks found in the mountains of Geodesia? Minig crafts collect rocks, which they put on mobile conveyer belts. Weighing the rocks automatically at a fast rate is no problem, but how to quickly determine their volume? Measuring e.g. the volume displacement of a liquid(like H2O), where each rock has to be put in and taken out individually is a time consuming process. So why not try to estimate the volume just by looking at the rock? Along the conveyer belt, two cameras are placed, perpendicular to each other. Of every rock that passes, they each take a picture. These pictures are sent to a computer for processing. The computer calculates from these pictures an upper bound for the volume of the rock, Vmax. A lower bound for the density of the rock is then given by This lower bound can then be used to reject rocks that are certainly too heavy to be geodes. The computer uses a relatively simple trick for estimating the maximal volume. It considers the picures taken as silhouettes, as parallel projections of a rock onto a planes. Since the two cameras are placed perpendicular to each other, the projection planes are also perpendicular to each other. In other words, choose a coordinate system in three dimensional space as follows: z runs along the vertical direction and x and y run along the perpendicular horizontal directions in which the two cameras are placed. Let S ∈ R3 represent a rock. Then the cameras yield the two parallel projections on the xz and yz plane respectively. See Figure 1 for an example. Now a curious property of the rocks found in Geodesia makes the measurement of the volume easier: they are all convex1. Furthermore, the projections can be considered as polygons. So first each of the two pictures taken of a rock is converted into a convex polygon by an image recognition program. You are to write a program that will do the second step: compute from these two polygons Vmax: the maximum volume of any rock having exactly these two polygon-shaped projections. The first line of input contains a single number:the number of test cases to follow. Each test case has the following format: For every test case in the input, the output should contain a single number on a single line: Vmax, the maximum volume of any object having the projections given in the input, rounded in the usual way to two decimals behind the decimal point, A round-off error of 0.01 is permitted in your answer. 1 5 1 0 1 -1 -1 0 -1 1 0 1 5 -1 1 0 1 1 1 1 -1 -1 -1 1.A set S(in R2, R3, or in fact in any vector space you like) is convex if any straight line segment connecting any two points a, b ∈ S is again completely within S. Formally this means that for all a,b ∈ S and all x ∈ [0, 1], also xa + (1-x)b ∈ S. [Submit] [Go Back] [Status] [Discuss] Home Page Go Back To top All Rights Reserved 2003-2013 Ying Fuchen,Xu Pengcheng,Xie Di Any problem, Please Contact Administrator
OPCFW_CODE
Running the HPC Pack Head Node in a Failover Cluster Applies To: Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2 This section provides information about running HPC Pack in a failover cluster, and it describes the failover process. For connections to a head node that is configured in the context of a failover cluster, do not use the name of a physical server. Use the name of the clustered head node (file server) that appears in Failover Cluster Manager. To manage the starting and stopping of any service or resource that is configured within the failover cluster, use Failover Cluster Manager (not Server Manager or HPC Cluster Manager). You can see a list of these services and resources in Failover Cluster Manager by clicking the clustered instance of the head node. For more information about managing a failover cluster by using Failover Cluster Manager, see the TechNet Library documentation for Windows Server. The failover process When a failover cluster server within an HPC cluster fails, the specific services that are supported by that server begin to run on another server in that failover cluster. The steps in failing over are as follows: Detection: A failure is detected. Failover: The head node fails over to another server in the failover cluster. Client reconnect: Following a failure, clients reconnect. For the head node, this means that job scheduler clients reconnect to the HPC Job Scheduler Service on the server that is now the head node. The actual location of the service (on a server in the failover cluster) does not matter, because it appears to the clients under one consistent name offered by the failover cluster. Management clients will retry until they can reconnect to the HPC Management Service. Failure detection in a failover cluster The servers in a failover cluster monitor one another through periodic network signals, called heartbeats. If a server misses five heartbeats by default, communication with that server is considered to have failed. You can use Failover Cluster Manager to configure the thresholds at which a server is considered to have failed. You can also configure failover and failback settings in Failover Cluster Manager, but we recommend that you prevent failback unless you have a specific reason to allow it. By definition, failback causes the head node to return to running on a preferred physical server when possible. However, failback also causes a brief interruption in service. Preventing failback therefore decreases interruptions in service. Failover Clustering also monitors some of the services (for example, the HPC Job Scheduler Service on the head node) to ensure that they are running. For detailed information about which services are monitored, see the tables that are at the end of the following topics: The head node and one or more WCF broker nodes in failover clusters: See the table at the end of Overview of Microsoft HPC Pack and SOA in Failover Clusters. Only the head node in a failover cluster: See the table at the end of Overview of Configuring the HPC Pack Head Node for Failover.
OPCFW_CODE
simple web page code html HTML5 drag and drop HTML5 web workers HTML5 server sent events. HTML Cheat Sheet - A simple, quick reference list of basic HTML tags, codes and attributes.- Perhaps check out where to start or what is html first :-) Basic HTML Structure.Create a link to another page or website. 45 Simple Business Card Designs with Clean Typography. Website Templates. 35 Free HTML5 CSS3 Checkout Forms.Flat UI Login Form. A clean template with free html,css using minimal code and design for a website login page. Simple HTML Code for Web Page via. Website Templates HTML-Codes via.Continue with more related things such basic html codes, simple html code example and basic html codes. Ill be posting the code for creating this simple web page in HTML/CSS. There are lots of ways to create web pages using already coded programmes. These lessons will teach you how to use the underlying HyperText Markup Language - HTML.Write a simple web page. Copy out exactly the HTML below, using a WP program such as Notepad. In this video i have showed how to create a simple web page using a simple htmlcode. For agencies, business websites, landing pages and personal use, this free simple HTML website template covers you with all the basics. Still, you get a cool feature here and there to spice up your web design with extra entertainment. Hypertext Markup Language (HTML) is the most common language used to create documents on the World Wide Web.Below is an example of a very simple page: This is the code used to make the page Some of the names I have seen for these simple HTML editors are Mininote Tab, Note Tab Pro, HTML Coder Pro, Agile HTML Editor and a dozen other names.Just place the HTML code below onto your own Web page to create a link to this tutorial. Publishing your sample code describes how to get your simple sample code online with minimum effort. How the web works.Adding vector graphics to the Web. Responsive images. Assessment: Mozilla splash page. HTML tables. Generate Pascal and/or C code starting from a simple HTML-like file. You insert then the output in your program and with a simple call to a function youll see on the screen the linked HTML!HTML Web Pages Generator - John Klineaaaa. HTML for a simple "Hello World" web page. 5. Paragraphs tag.paragraphs, block quotes, and address blocks in HTML. 19. In this section we will give you very simple example of a HTML page. After reading this page you should be able to create simple HTML pages for your website.The above code displays "Welcome to my website." message on web page. Learn how to code a simple HTML page with HTML the widely used Markup Language for the Web. Designing and creating a simple website is easy. Even beginners can learn web designing without putting in too much effort and time. Code Playground. Discuss. Top Learners. Blog.Dark. Light. HTML/CSS/JS jQuery C C C Java Python 3 PHP Ruby Kotlin. SHARE. Codecademy is the easiest way to learn how to code. Its interactive, fun, and you can do it with your friends.Setting up our HTML. Your first website with CSS. Adding our style sheet. The site structure.To kick us off, why dont you try and change the title of our page in the title tag? Taking this opportunity we would like to show you how to code simple but really cool template from PSD.We will take as an example only the home page, in the archive it is called index. html.Infographic: A work for web designers and developers. Redirect from an HTML page. 6237. Why does HTML think chucknorris is a color?Web Applications. Ask Ubuntu. Webmasters. This simple website project. This is for the beginners in HTML and CSS.Download and use it for free. Dont forget to comment . Login Page with Bootstrap 4. Submitted byDo you have source code, articles, tutorials, web links, and books to share? You can write your own content here. Ive been doing some research on the web and downloaded some HTML5 templates so I could see their code.Simple HTML and CSS website. 3. Simple landing page markup with basic HTML. 6. Do you want to create a simple HTML website? If yes, read this complete and easy to understand guide on HTML.So, how do you see the HTML source code of any web page? Its really simple. So, what does a person need to create a web page? A simple text editor, and a web browser.This allows web designers to save time by not writing repetitive HTML code for multiple documents they can simply "include" the code HTML as a "static" frame. source code for simple web form. cv web page html source codes.A small popup HTML colour picker. Suitable for web developers who manually edit HTML source code. A Simple Sample Web Page. By Sheldon Brown. Demonstrating a few HTML features.This page shows on the left as it appears in your browser, and the corresponding HTML code appears on the right. To create a simple web page, the first step is to learn a few HTML tags.Here are some basic HTML tags that show how a web page is created. For a fast start, just copy n paste this code into a text editor - save as index.html. Web Sitesi Tasarm HTML Projects for 30 - 250. Need someone to help me create a page similar to synergynye.com where I want toDaha fazlasn grn: website pages design, simple website design html code, simple website design code, touching, html 5 pages, html code update website The process is very simple. For the most part, you need to learn the basics of HTML tagging and structure.The HTML in the above step lets web browsers, servers, and the Internet know that the code is an HTML document (a web page). HTML Tips HTML Codes Web Development.If youre looking for some HTML codes to spice up your website, youve come to the right place. You will find a variety of HTML tags and tips to assist you below. Copy and Paste this code directly into your HTML web page.CHANGE THE TWO LINES BELOW emailto "youyourdomain.com" emailsubject " website html form submissions" Signup Login Form Template Widget for your websites makes your website or web application signin, signup page looks Flat look. This widget is designed using web technologies such as HTML5, and CSS3. Search Results. Building simple web pages. Novice.Open the aboutme.html file. It contains only a little bit of HTML code to get you started, but you will write the rest yourself. Step 2: Make a page about yourself. This is the first lesson of the Mini Web App course, which walks you through the creation of a simple web application, covering HTML and CSSIn the HTML code above, you can see an example of two different tags: , designating the main header or title of a page, and
OPCFW_CODE
Because of the way linear bus cabling is laid out, this type of cabling is simple. The bus topology is very reliable, because if any node on the bus network fails, the bus itself is NOT affected, and the remaining nodes can continue to operate without interruption. Many of the low cost LANs use a bus topology and twisted-pair wire cabling. Figure 6-2.A bus network topology. A disadvantage of the bus topology is that generally there must be a minimum distance between workstations to avoid signal interference. Another disadvantage is that the nodes must compete with each other for the use of the bus. Simultaneous transmissions by more than one node are NOT permitted. This problem, however, can be solved by using one of several types of systems designed to control access to the bus. They are collision detection, collision avoidance, and token passing, which we will cover shortly. Also, there is no easy way for the network administrator to run diagnostics on the entire network. Finally, the bus network can be easily compromised by an unauthorized network user, since all messages are sent along a common data highway. For this reason, it is difficult to maintain In a star network, each component is connected directly to the central computer or network server, as shown in figure 6-3. Only one cable is required from the central computer to each PCs network interface card to tie that workstation to the LAN. The star is one of the earliest types of network topologies. It uses the same approach to sending and receiving messages as our phone system. Just as a telephone call from one person to another is handled by a central switching station, all messages must go through the central computer or network server that controls the flow of data. New workstations can be easily added to the network without interrupting other nodes. This is one of the advantages of the star topology. Another advantage of star topology is that the network administrator can give selected nodes a higher priority status than others. The central computer looks for signals from these higher priority workstations before recognizing other nodes. The star topology also permits centralized diagnostics (troubleshooting) of all functions. It can do this because all messages must first go through the central computer. This can prove invaluable in making sure that network security has not been breached. The main disadvantage of the star topology is its reliance on the central computer for performing almost all the functions of the network. When the central computer fails, all nodes also stop functioning, resulting in failure of the entire network. Figure 6-3.A star network topology.
OPCFW_CODE
[Feature Request]: Add Support Multiple NUT Servers Enhancement of issue I like to be able to point to different NUT Servers Solution On the HOST definition on the docker compose file, add the option to add multiple hosts separated by coma "," Alternatives I like to avoid to run multiple instances and separate dashboards when I have more than 1 NUT instance server Additional Context I like to be able to monitor multiple locations or potential customers from a single Docker instance (Centralized). I'm in the same boat. I have 12 of these UPSs around my house all connected to various devices and 8 NUT servers, so it'd be nice to have a way to view them all. Oh, yes please. I only have 4 but would love to unify the view somewhere other than Home Assistant. BTW, good start! Oh, yes please. I only have 4 but would love to unify the view somewhere other than Home Assistant. BTW, good start! Interesting you note Home Assistant. I uninstalled PeaNUT and went to using Home Assistant since it is the only thing that supports multiple NUT servers. It's very manual to create the dashboard and set everything up though. Also, getting it to send emails when your power is out isn't that simple. It's so much manual work. I would also find this a useful feature. I have several UPS, each with a RPi zero dedicated for the nut server. Same here, would be useful as number of networked UPS is growing with time. Same request from my side. I've installed peaNUT for each UPS that I have but I'm not getting the same information as I get on NUT in homeassistant. All peanut containers give the same value. Not sure why ? I would love this feature. Please implement connecting to multiple NUT servers. Would someone be able to try out my latest test version? brandawg93/peanut:test It should enable multi server support! Simply go to the settings page to add more servers. Just tried the brandawg93/peanut:test image, it works great with at least two NUT servers. Thanks I just tested it as well, I can add all four of my NUT servers without issues. Two of them are on non-standard ports also. I'm not sure what your current plans are, but being able to set custom names for each server would be useful. The NUT server built into the Synology NAS Diskstation doesn't let you change any info. At least not easily. @Semicolon7645 I would say that is more of a NUT server side issue than peanut. But there should be configuration files that allow you to set a description. They may not be easy to get to, but I’m sure they’re there somewhere. 😅 The way the it is handled is that the UI scripts rewrite the NUT config files when you hit apply. So if I manually edit the config it will get overwritten the next time the UI writes new settings values. I could edit the scripts, but they will get overwritten when Synology updates their software. It's certainly not an optional situation. Oh. That config is different. Take a look at the config folder in this repo. That’s what it should look like. Those are the files I use for testing. Released in v4.0.0!
GITHUB_ARCHIVE
To be honest with you guys, this is the topic or area which is close to the hearts of almost all of the Scrum practitioners and the most widely discussed among the Scrum teams. Specifically, in my case, I just love taking up this topic with the people I interview. Let me take you to journey on how it got originated and what it is all about. As per Scrum Alliance “Scrum falls within “Agile,” which is the umbrella term for several types of approaches to getting any complex, innovative scope of work done. The concept is to break large projects into smaller stages, reviewing and adapting along the way.” As per the survey was done by version one, scrum is the most popular framework being used globally. Scrum is a lightweight process framework for agile development and it distinguishes from other agile processes by explicit ideas and practices, divided into the three categories of Roles, Artifacts, and Time Boxes. Let’s focus on the artifacts part for now and dive into further details on its components. An artifact is a concrete by-product created in the course of product development. Scrum artifacts characterize work or value in numerous methods that are beneficial in providing transparency and prospects for inspection and adaptation. In Scrum, artifacts are “information radiators” which serve to encapsulate the shared understanding of the team at a certain point in time. Now that we have understood the definition of it, let’s dive further and check most important scrum artifacts that are adding value to scrum. To make it easy, a product backlog is a list of all the things that are required in the product. It is an ordered list of all features, functions, requirements, enhancements, and fixes that institute the modifications to be made to the product in upcoming releases, as in the details below: A product backlog is a dynamic entity and hence it keeps evolving, for the teams, it is a live unit. To keep this product backlog healthy, the product owner has to make sure that the below items are in place and being closely monitored. It’s the Product Owners responsibility to build up a stack of item s in the backlog and prioritize them as per the business goals and the global approach. The product backlog is a dynamic list of items and as we call it in agile, it is ‘live document’ that should be frequently updated based on changing project requirements all the way through development. A product roadmap is a high-level pictorial synopsis that plots out the vision and path of your product. A product roadmap connects the why and what behind what you’re building. It’s a guiding tactical document as well as a plan for executing the approach. It can get the internal participants in alignment and assist in the discussion of options and situation forecasting. Building a product roadmap aids in communicating the path and advancement to the teams and the stakeholders. This document consists of the high-level initiatives and the plan for accomplishing the work that supports the product strategy. Crafting a product roadmap should be a continuous process throughout the lifecycle of a product. One should gather requirements and features from a variety of sources, including customers, partners, sales, support, management, engineering, operations, and product management. It is up to the product management team to prioritize incoming ideas to make sure the roadmap aligns against the business goals. The sprint backlog consists of all the stories or items the team committed for a particular sprint. It is a commitment from the scrum team to the stakeholders for a sprint. During the sprint planning meeting, the scrum team pulls the highest priority items from the product backlog which are usually in the form of stories. They discuss the final acceptance and zero in on the points to be allocated for the story. During this ceremony the team also creates tasks which are required to complete the stories, they drill down to the lowest level of details so that nothing is missed, to ensure quality. According to the Scrum Guide, the Sprint Goal is an objective to be met by the Sprint through the implementation of part of the Product Backlog. The Sprint Goal helps provide focus on an objective we want to achieve and allows the flexibility to negotiate the work to achieve that objective. The burndown is a chart that shows how quickly the team is burning the efforts to reach the completion. It shows the total effort against the amount of work we deliver each iteration. The x-axis in the chart shows the time in days and y-axis represents the total hours of work estimated in a sprint. Some of the teams use story points in the y-axis instead of total estimated efforts. Both the approaches are fine as long as the team understands the idea behind it. Its purpose is to enable that the sprint commitment is on the track to deliver the expected solution within the timeline (sprint). “A product vision represents the core essence of its product or product line. It also sets the direction for where a product is headed or the end state for what a product will deliver in the future.” – Aha! Your product vision should not be a plan that shows how to reach your goal. Instead, you should keep the product vision and the product strategy – the path towards the goal – separate. This enables to change your strategy while staying grounded in your vision. (This is called to pivot in Lean Startup.) Once the scrum team has started working on the sprint backlog, there is a need to track the progress so that there are no surprises in the end. I have witnessed many teams who initially start the sprint with a lot of zeal and positivity but end up feeling frustrated due to impediments or roadblocks, they feel the hindrance in their work and hence start cribbing on the end results. It is really important to monitor the sprint progress, there are different techniques a team can use. “The Product Increment is the sum of all Product Backlog items completed during a Sprint and all previous Sprints. At the end of a Sprint, the new Product Increment must be in a usable condition and meet the Scrum Team's Definition of Done.” – Wibas. Ownership of the product increments should belong to the release engineers in most organizations and should be fully available to the product owner. The definition of done is created by the scrum teams to make an agreement with and with the stakeholders on the getting the stories accepted. It also ensures the quality of work to be delivered by the end of the sprint. The components of “definition of done” differ from team to team. From the above discussion, we now understand the artifacts that add value to the Scrum process. The artifacts from the base for Scrum implementation, effective use of these stated artifacts can actually help in improving the product and its delivery, and most the quality. I mentioned quality because you can define your definition of done in such a manner that it focuses on quality, even the way a user story is created is a big contributor to the quality angle. You can learn more about Scrum artifacts through our Scrum Tutorial. Your email address will not be published. Required fields are marked *
OPCFW_CODE
SRCDS when left running for extended periods of time will use more and more system resources, this is especially true for very poorly coded mods or when you have lots of plugins loaded. When this happens, your SRCDS server will start to lag really bad and your entire server computer will slow to a crawl.Daemon can solve this by monitoring and restarting SRCDS before it reaches these danger levels. Reconfiguring the FireDaemon Pro SRCDS Service First you need to configure your SRCDS service to shutdown FireDaemon Pro if it crashes. The "Upon Program Exit" option in the "Settings" tab should set to "Ignore" or "Shutdown FireDaemon". Install and Configure daeMON - Download and install the latest version of FireDaemon Pro. - Download daeMON and unzip it into the directory of your choice (eg. C:\Program Files\daeMON). - Make a note of the short name of the SRCDS service you want to monitor (eg. srcds). - Start the FireDaemon Pro Service Manager from the Start/Programs menu or desktop icon. Click on the Create A New Service Definition button in the Toolbar or press Ctrl+N. Fill out the panel as per the screen shot below (adjust your paths and parameters to suite) - you can use the TAB or SHIFT+TAB keys to move between fields: The working directory is where you have placed the daeMON. Note the Parameters list. This is the list of services (in startup order) that you want to monitor. Each service short name is prefixed with a -s. If you want to change the monitoring facility frequency use the -f flag.The following settings are good in most cases: High CPU load threshold (as a percentage): 75 High memory threshold (in MB): 1024 Monitoring frequency (in seconds): 60 Ignore high CPU/Memory for X monitored intervals (in minutes): 5 - You should really change the desktop interaction flag as it's not necessary to see what's going on (but you can leave it on if you are curious): If you want to capture the debug log output of the utility then you can enable it as per the Output Capture section in the screenshot below: - Alternately, you can make the daeMON dependent on the services it is monitoring. For example: - Then click the Install (tick) button and daeMON should be installed and monitoring your service. Should you accidentally shutdown a service or a process dies, then all the services that you specified in the Parameters list will be restarted. If you want to monitor multiple SRCDS servers, then you should create a new daeMON for each service rather than add another -s flag. The reason you should do this is because daeMON will restart every server if just one of them uses too much resources. Setting up a separate daeMON for each SRCDS server will prevent this from happening. Don't worry about the CPU/Ram usage for having lots of daeMON services; one daeMON uses around 4MB of RAM and under 1% CPU. To find out how to receive email alerts when SRCDS restarts go here.
OPCFW_CODE
Customers with the EVOK Hosted Exchange service and who have not assigned the DNS management of their domain to EVOK should enter the following information in the DNS zone of their domain in order for the Hosted Exchange service to function properly. In general, the DNS settings are accessible from the administration interface (Control panel) of your web host. You can check the compliance of your domain settings with EVOK Hosted Exchange with the following tool: https://evok.academweb.tech/dnscheck This tool also allows you to know on which servers (and indirectly with which provider or host your DNS are managed) These records define the servers for receiving and processing messages from your domain. They must be configured with equal priority (10) Add the following servers with priority 10: This registration is required in order to allow Outook and other clients or devices to automatically obtain the configurations of the Exchange servers. Also, some important features of Microsoft Exchange such as out of office messages are based on this recording. If your DNS administration interface does not allow you to create SRV type records, please contact the technical support of your DNS service provider and ask them to add the record for you Create an SRV type record in which you replace domain.suffix with the following line (or fields according to the table below) _autodiscover._tcpTTL IN SRV 0 0 443 exchange.evok.ch. This registration allows anti-spam systems to check that messages sent with your domain name actually come from our servers and help improve security and reduce fraudulent or unsolicited messages (spam). If the SPF record is set incorrectly, some servers may refuse to relay emails. Create a TXT type record with the following content: v=spf1 a mx include:evok.ch -all You can test your SPF registration at the following link: https://evok.academweb.tech/dnscheck If you also send emails from servers other than EVOK’s servers (for example if you have a website that is not hosted by EVOK and from which you send emails with your domain name , you must also allow this server in the SPF record otherwise messages sent from this server may be refused by recipients. Some web services like Salesforce, MailChimp or others require specific SPF settings. Please refer to their respective documentation) Some email clients will try to access the domain (https://mydomain.ch) in https to retrieve an autodiscover file. If there is a type A @ (domain root) record and the SSL certificate returned to that address is not valid, a certificate error may be displayed. If you have a website and you want to use the domain without www to access it (eg http://mondomain.ch), it is important to activate SSL and that the certificate is also valid for this name. If this is not possible, the @ type A record should be deleted to remove the certificate errors. If your website is also hosted by EVOK, you can activate a free SSL certificate in your site’s administration console (https://admin.evok.ch) or ask us to do it for you. You can test that the certificate corresponds to the domain on the following link: https://evok.academweb.tech/dnscheck
OPCFW_CODE
Error running full crawl in hybrid search – “AzurePlugin was not able to get Tenant Info from configuration server”. As this error appears on a clean installed environment, the issue appears to be a bug in SP2016 which MS is yet to provide a solution. If you need to configure SharePoint 2016 Hyrbrid Search, please refer here for detailed steps. To run a full crawl, you need to have a search service configured with content source pointing to on-prem. The documentation for setting up Hybrid Search also contains the Windows PowerShell script (OnBoard-HybridSearch.ps1) required for enabling server-to-server authentication Run the PowerShell script and provide the SharePoint Online site url at the prompt along with authentication of a site collection Admin. If connection is successful, the script will proceed and you should be presented with an output similar to the screen below. In this print screen you can see the Tenant ID, Authentication Realm and the connected endpoint address. At this point on-boarding is completed. After that you need to create a content source and run full crawl However, while attempting to run full crawl we are getting an error in the Crawl log An unexpected error occurred in the Azure plugin. This item will be retried in the next incremental crawl. ( AzureException AzurePlugin was not able to get Tenant Info from configuration server; SearchID = C2564792-BE82-56B6-C815-369B729EBC93 ) After googling, I came to know this can happen for proxy setting (http://www.spjeff.com/2016/12/12/fixed-azureplugin-was-not-able-to-get-tenant-info-from-configuration-server-cssa/) Unfortunately, since I was not using any proxy I had to find an answer myself. I had to looked in to the ULS to get more clue and what I found stunned me. AzureServiceProxy::GetCerts caught AggregateException: Unable to connect to the remote server AzureServiceProxy::GetCerts: Failed to get encryption certificates from cert server * for realm *, documents will be send unencrypted (if unecrypted submit is allowed) AzureServiceProxy::GetAzureTenantInfo caught AggregateException: Unable to connect to the remote server, unable to get ServiceProperties, submit is blocked AzureServiceProxy caught Exception: *** Microsoft.Office.Server.Search.AzureSearchService.AzureException: AzurePlugin was not able to get Tenant Info from configuration server at Microsoft.Office.Server.Search.AzureSearchService.AzureServiceProxy.GetAzureTenantInfo(String portalURL, String realm, String& returnPropertyValue, String propertyName) at Microsoft.Office.Server.Search.AzureSearchService.AzureServiceProxy.SubmitDocuments(String azureServiceLocation, String authRealm, String SPOServiceTenantID, String SearchContentService_ContentFarmId, String portalURL, String testId, String encryptionCert, Boolean allowUnencryptedSubmit, sSubmitDocument documents, sDocumentResult& results, sAzureRequestInfo& RequestInfo) *** I looked in the permission and found that Default content access account doesn’t not have read permission on web application. I manually provided read permission but did not work. I again run On-boarding script after adding the permission manually and crawl the content, everything worked. Here is the crawl history which is showing that previously failed all contents have been successfully crawled. I have verified the same from crawl database too Now all on-premise contents are available in O365.
OPCFW_CODE
Names are important as they imply meaning, and in many experiences such as Microsoft Teams, names serve as a key navigational aid for end users in locating the correct Team quickly and easily. However, with many organizations, it is difficult – if not impossible to enforce consistent Microsoft Teams naming standards, even if Team and Group creation is limited to a small number of individuals. While this enforcement cannot be completely overcome without a tool like Orchestry, the first step is to define a single or set of consistent naming standards that will enable better adoption and success when it comes to its usage of Microsoft Teams. Why is Microsoft Teams Naming Important? The name of a Team and its Channels is hugely important as it serves as the primary mechanism by which users can currently navigate and browse their list of workplaces in the Teams Application. Studies in information-seeking behavior tell us that things like a Team’s labels are used by the human brain to quickly assess and estimate (via scanning) what this label represents. The method by which users decide which link to select is often referred to as information scent, which can be understood as the “user’s imperfect estimate of the value that source will deliver” – which in our case is whether the team they open is in fact the team they are looking for. As described by the Nielsen Norman group, the user considers a link based on its label and any contextual information available to them. Within Microsoft Teams, this boils down to two things: the Team Name and the Team Logo – not a lot to go on. This underlines how essential good naming can be. Without consistent Microsoft Teams naming standards, this process becomes taxing and inaccurate as end users have no reliable “memory” from which to draw to assist in accurately identifying whether certain information is the information they are seeking. What are the Consequence of a Bad Team Name? The consequences of poor or inconsistent naming boil down to two main problems: findability and redundancy. A variety of studies have shown that these are huge problems in the digital workplace: - Findability: Multiple studies have shown that users spend a huge part of their day (some suggesting as much as 2.5 hours a day) simply searching for information. In the context of Teams, especially in organizations with a high volume of teams Workspaces, bad naming can seriously hamper the findability of teams which creates friction and irritation for end users trying to jump quickly between tasks. - Redundant Effort: Poor naming can also lead to duplication of teams and effort, as end users who cannot find the object of their query will quickly abandon a search, sometimes creating a new space for this information and replicating content that may already exist elsewhere. Not only does this problem occur in Microsoft Teams, but the same name used when creating the Team is also carried throughout the Microsoft 365 ecosystem and applied to several related objects (e.g., SharePoint Site, Email address, etc.) which essentially serves to multiply the problems shared above. Microsoft Teams Naming Considerations Adding consistent prefixes to the beginning of Microsoft Teams team names can be a useful way to add organization, structure and consistency to your teams. Because our eyes have a tendency to scan left to right, a prefix can be valuable as it creates a column of essential information down the left-hand side of the Teams experience. - Prefixes can be useful but do not make these overly long as they can lead to the Team name being cut off. Generally, limit yourself to acronyms or prefixes no longer than 12 characters. - While emojis can be tempting to utilize, keep in mind that these can cause issues for searchability and are not supported in all the areas where a Team’s name gets applied. Spaces are proven to make names more scannable while improving overall readability for end users, which further aids with finding the right name in Microsoft Team. They should, however, also be used with some thought, especially when considering prefixes and suffixes you may choose to implement. - When using prefixes or suffixes, we are now combining different “components” into the name, and it is typically helpful to aid users in differentiating the delineation between these segments. One way to do this is to keep spaces within a team’s Workspace name but use another delimiter (such as a dash or underscore) for the prefix or suffix. This allows the brain to quickly assess the team’s category from the team’s name. An understandable response to remedying Teams’ findability is to add more detail to the team Name, ultimately adding more length to each name – but this can lead to other problems. Microsoft Teams only allows a certain team name length before it becomes truncated (trimmed). The length available depends on where the Team name is being shown: - Pinned View: 45 characters - Normal View: 34 characters - Keep your team names to under 30 characters as a rule to ensure they are fully visible Naming Conventions in Action - Project Teams and Sites - PRJ-[Project ID]-[Descriptive Name] e.g., PRJ-84719-Eclipse Renovation - Department Teams and Sites - DEPT-[Department Name] e.g., DEPT-Human Resources - DEPT-[Classification Acronym]-[Department Name] e.g., DEPT-SENS-Human Resources - Regional Department Teams - DEPT-[Region]-[Department Name] e.g., DEPT-US-Human Resources - Department Sub-Teams - [Department Acronym]-[Descriptive Name] e.g., HR-Total Benefits - Guest Access Enabled Teams and Sites - EXT-[Descriptive Name] e.g., EXT-Partner Hub - Guest-[Descriptive Name] e.g., Guest-Partner Hub - +G_[Descriptive Name] e.g., +G_Partner Hub - [Descriptive Name]_+G e.g., Partner Hub_+G - Public Teams - Public-[Descriptive Name] e.g., Public-Chess Club - PUB-[Descriptive Name] e.g., PUB-Chess Club - P_[Descriptive Name] e.g., P_Chess Club - [Descriptive Name]_P e.g., Chess Club_P - Internal vs External Client/Partner Teams - INT-[Client Name] e.g., INT_Tesla - [Client Name] e.g., Tesla Start Improving Microsoft Teams Naming There are a number of things that this article discusses that we recommend your organization consider: - Institute a consistent Microsoft Teams naming policy for common Workspace types - Avoid using names that are over 30 characters long and get truncated - Avoid the use of certain words like “Test”, “Demo”, “Temporary”, People’s Names, or Acronyms that are poorly understood - Put in place processes to ensure users have ways to discover Workspaces that already exist before the same information is recreated elsewhere How Orchestry Can Help The challenge with any of the Microsoft Teams naming best practices is that they are difficult to enforce consistently without a great deal of manual human intervention. A complete solution like Orchestry can help go well beyond what is available out of the box (or with additional licenses like Azure P1) by: 1. Automating the creation of Teams using established naming conventions: These naming conventions can vary based on the type of workspace being created, and may include both static text (e.g., a Workspace type prefix like “PRJ”), as well as dynamic values such as metadata selected (e.g., a region like “Asia”) when requesting the Workspace: 2. Preventing the creation of redundant teams: One way this can be achieved is via a Workspace name validation which searches other workspaces with an identical or similar name, and makes intelligent suggestions to the requestee. This can additionally be supported with a transparent Workspace Directory which allows users to search through the entire catalog of Teams and SharePoint Sites within the organization before they make a new request: 3. Blocking undesirable words: This can be achieved by configuring the system to avoid the use of certain words that may cause confusion, or are likely to represent workspaces that should not be created: Simplify Microsoft Teams Naming to Enhance User Experience Effective navigation with Microsoft Teams is vital, especially in large or complex organizations where individuals are likely to have a long list of teams to which they belong. Unfortunately, there are a very limited set of options available to help users identify and find (logo and team name) the teams they are seeking, which makes Microsoft Teams naming especially important. Without effective Microsoft Teams naming conventions and tools in place to avoid duplication, users are likely to struggle finding what they need and may even duplicate work that already exists. Solutions like Orchestry can help by automating these challenging processes, enforcing governance rules, and ultimately enabling a more consistent and organized Teams environment. Get a FREE trial of Orchestry today and see the incredible core capabilities in action!
OPCFW_CODE
import unittest from gospellibrary.catalogs import CatalogDB from gospellibrary.item_packages import ItemPackage import bs4 import requests from cachecontrol import CacheControl from cachecontrol.caches import FileCache session = CacheControl(requests.session(), cache=FileCache('.gospellibrarycache')) class Test(unittest.TestCase): def test_para_html(self): item = CatalogDB(session=session).item(uri='/scriptures/bofm') item_package = ItemPackage(item_id=item['id'], item_version=item['version'], session=session) p = bs4.BeautifulSoup(item_package.html(subitem_uri='/scriptures/bofm/1-ne/11', paragraph_id='p17'), 'lxml').p del p['data-aid'] actual = str(p) expected = '<p class="verse" id="p17"><span class="verse-number">17 </span>And I said unto him: I know that he loveth his children; nevertheless, I do not know the meaning of all things.</p>' self.assertEqual(actual, expected) def test_subitem_html(self): item = CatalogDB(session=session).item(uri='/scriptures/ot') item_package = ItemPackage(item_id=item['id'], item_version=item['version'], session=session) doc = bs4.BeautifulSoup(item_package.html(subitem_uri='/scriptures/ot/ps/117'), 'lxml') self.assertEqual('<p class="title-number" data-aid="128444354" id="title_number1">Psalm 117</p>', str(doc.find(id='title_number1'))) self.assertEqual('<p class="verse" data-aid="128444356" id="p1"><span class="verse-number">1 </span>O praise the <span class="deity-name"><span class="small-caps">Lord</span></span>, all ye nations: praise him, all ye people.</p>', str(doc.find(id='p1'))) def test_subitems(self): item = CatalogDB(session=session).item(uri='/manual/family-finances') item_package = ItemPackage(item_id=item['id'], item_version=item['version'], session=session) subitems = item_package.subitems() self.assertEqual(len(subitems), 1) self.assertEqual(subitems[0]['id'], '128026696') self.assertEqual(subitems[0]['uri'], '/manual/family-finances/family-finances') self.assertEqual(subitems[0]['title'], 'Prepare Every Needful Thing: Family Finances') def test_subitem(self): item = CatalogDB(session=session).item(uri='/manual/family-finances') item_package = ItemPackage(item_id=item['id'], item_version=item['version'], session=session) subitem = item_package.subitem(uri='/manual/family-finances/family-finances') self.assertEqual(subitem['id'], '128026696') self.assertEqual(subitem['uri'], '/manual/family-finances/family-finances') self.assertEqual(subitem['title'], 'Prepare Every Needful Thing: Family Finances') def test_related_audio_items(self): item = CatalogDB(session=session).item(uri='/manual/new-testament-stories') item_package = ItemPackage(item_id=item['id'], item_version=item['version'], session=session) subitem = item_package.subitem(uri='/manual/new-testament-stories/chapter-36-jesus-tells-three-parables') subitem_id = subitem['id'] related_audio_items = item_package.related_audio_items(subitem_id) self.assertEqual(len(related_audio_items), 1) self.assertEqual(related_audio_items[0]['id'], 28) self.assertEqual(related_audio_items[0]['subitem_id'], subitem_id) self.assertEqual(related_audio_items[0]['media_url'], 'https://media2.ldscdn.org/assets/scripture-and-lesson-support/new-testament-stories/2010-11-370-chapter-36-jesus-tells-three-parables-complete-256k-eng.mp3') def test_related_video_items(self): item = CatalogDB(session=session).item(uri='/manual/new-testament-stories') item_package = ItemPackage(item_id=item['id'], item_version=item['version'], session=session) subitem_id = 128125781 related_video_items = item_package.related_video_items(subitem_id) self.assertEqual(len(related_video_items), 1) self.assertEqual(related_video_items[0]['id'], 36) self.assertEqual(related_video_items[0]['subitem_id'], str(subitem_id)) self.assertEqual(related_video_items[0]['video_id'], '1288181761001') self.assertEqual(related_video_items[0]['poster_url'], 'https://mediasrv.lds.org/media-services/GA/videoStill/1288181761001') self.assertEqual(related_video_items[0]['title'], 'Chapter 35: The Good Samaritan') def test_package_related_content_items(self): item = CatalogDB(session=session).item(uri='/scriptures/ot') item_package = ItemPackage(item_id=item['id'], item_version=item['version'], session=session) subitem = item_package.subitem(uri='/scriptures/ot/ps/117') subitem_id = subitem['id'] related_content_items = item_package.related_content_items(subitem_id) self.assertEqual(len(related_content_items), 1) self.assertEqual(related_content_items[0]['id'], 11429) self.assertEqual(related_content_items[0]['subitem_id'], subitem_id) self.assertEqual(related_content_items[0]['word_offset'], 11) self.assertEqual(related_content_items[0]['byte_location'], 2658) self.assertEqual(related_content_items[0]['origin_id'], 'p2') self.assertEqual(related_content_items[0]['ref_id'], 'note2a') self.assertEqual(related_content_items[0]['label_html'], '2<em>a</em>') self.assertEqual(related_content_items[0]['content_html'], '<p data-aid="128444358" id="note2a_p1"><a class="scripture-ref" href="gospellibrary://content/scriptures/dc-testament/dc/84?verse=45#p45">D&amp;C 84:45</a>; <a class="scripture-ref" href="gospellibrary://content/scriptures/dc-testament/dc/93?verse=24#p24">93:24</a>.</p>')
STACK_EDU
You don’t need a CompSci, math, technology, or in fact, any degree to succeed as a developer. This profession is roomy enough to accommodate many education levels and backgrounds. Let’s back that up with some stats. The StackOverflow 2016 Developer Survey received responses from 50,000 developers. They found that 56% of respondents did not have a degree in CompSci or a related field. Don’t most job postings specify degree requirements? Even if a developer doesn’t have a degree in math, science, or other tech field, what about job postings? Don’t most employers ask for a degree of some kind? Again, StackOverflow tackled this question. They searched through 4,499 job listings on their site and found that 61% didn’t mention a degree requirement at all. The other 39% listed a degree either as a prerequisite or as a preference. So if a CompSci degree isn’t required, then what is? It comes down to one thing: skills. You won’t get a job as a developer if you can’t show you possess coding skills. How do you prove to an employer that you possess coding and software development skills? What do employers consider when hiring? There are four areas that employers consider when scoping out new hires. Depending on the specific job, they seek accomplishments in one or more areas. - Education: This can come in the form of a tech-related BA or Associates degree, another type of BA or Associates degree, or compressed education received in boot camps or other non-degreed training courses. - Experience: Employers consider how long you’ve been working in a particular field, subject area, or task type. - Portfolio: Can you show work completed for a current or previous employer? Can you point to projects you’ve completed on your own outside of employment? Do you have a showcase website that demonstrates your skills as a coder? - Certification: There are a number of tech certifications that employers recognize. Obtaining one of these certifications is a stamp of approval that you have coding and software development skills. How can I learn the developer skills that employers want? There’s a skills gap in the tech profession. Employers need developers but have a hard time finding enough people trained to in those skills. That’s why boot camps and other training organizations like Zip Code Wilmington exist. The curriculum of our boot camp was designed with direct input from our corporate partners in an effort to bridge that skills gap. Using that input, we ensure our students are trained in the very skills employers seek in their entry-level developers. This focus has delivered results for our boot camp attendees. Consider these numbers: - 88% of our enrolled students graduate - 93% of our graduates accept paid roles within 3 months of graduation - Average annual earnings of placed graduates before Zip Code Wilmington: $30,173. After Zip Code Wilmington: $63,071. People from all walks of life and backgrounds have successfully completed a Zip Code Wilmington boot camp. Here are just a few of their stories. Chris was once a social worker, and now with development skills under his belt he refers to himself as a writer and creator. Watch his story. Katie was a pharmacist and a student at University of Wilmington. She was attracted by the fact that she could learn coding in 3 months. Watch Katie describe her experience. Tyrell transitioned from warehouse worker to software developer for a Fortune 500 company. Listen to Tyrell’s story. What about you? Is it time to write your own Zip Code Wilmington story? Learn more about our Boot Camp and apply today.
OPCFW_CODE
I have a PA-200. Internal net is 192.168.0.0/24 eth1/2 , inside L3 interface (default gw) - 192.168.0.254 One external ip address is using for outside inteface, eth1/1. For connection to Internet I typically use pair inside-outside with: 1. NAT : dynamic-ip-and-port to outside interface address nat-rule 2. Security policy "allow from inside to outside , any dest address" Now, I need provide access for FTP-Server. I created DMZ interface eth1/3 - 172.16.0.254/24 FTP-Server is directly-connected to DMZ-port and has ip 172.16.0.1 For this scenario i created 1. NAT : bi-directional between FTP-Server and Outside . 2. Security policy for Outside and DMZ. What I can do as next step for provide connection between inside and DMZ? I create security policy allow inside-dmz (to 172.16.0.0/24) and dmz-inside (to 192.168.0.0/24) If i do ping 172.16.0.1 from 192.168.0.1 than i see that all packets are matching to first NAT-rule "inside to outside" and that is wrong way. What is wrong in my steps? How I can exclude traffic from "default "NAT-rule. Althought I tryed create no-nat rule than it do not work. Thank for help. your default NAT rule must be wrong my guess is that you didn't add zones properly and one or all zone fields contain an ANY your nat rules should be FROM internet TO internet DO destination translation to FTP-internal (don't do bidirectional) FROM lan & dmz TO internet DO dynamicIP/port sourcenat this way your connection from 192.168 to 172.16 can never hit a NAT rule and you'll be good to go Try the following: - eth1/2 IP address 192.168.0.254/24 - eth1/3 IP address 172.16.0.254/24 - Create a service group object with all the ports for the FTP service. [svc-group-ftp] - FTP server's IP 172.16.0.1/24 and gateway 172.16.0.254 - Security rule to allow desired traffic from inside to DMZ, and from DMZ to inside. This one you have already. - Source NAT rule dynamic-ip-and-port from DMZ to outside eth1/1 to enable the FTP server to access the internet. Same as the one you have for inside zone but for DMZ. You can also keep the one you have and add the DMZ zone in the src zones. - Destination NAT rule for outside traffic to your DMZ src zone outside, dst zone outside, dst interface eth1/1, service [svc-group-ftp], src address any, dst address [your-public-ip], src transation none, dst transation static ip 172.16.0.1 - Security rule to allow outside traffic to DMZ FTP server src zone outside, src address any, dst zone dmz, dst address [your-public-ip], application any, service [svc-group-ftp] Click Accept as Solution to acknowledge that the answer to your question has been provided. The button appears next to the replies on topics you’ve started. The member who gave the solution and all future visitors to this topic will appreciate it! These simple actions take just seconds of your time, but go a long way in showing appreciation for community members and the LIVEcommunity as a whole! The LIVEcommunity thanks you for your participation!
OPCFW_CODE
Position at Funding Circle At Funding Circle there’s a mission that drives everything we do. Around the world, we want to help small businesses get the finance they need to thrive, creating thousands of jobs as they drive the economy forward currently lending 3.5 Billion a year. With teams in London, San Francisco, Amsterdam and Berlin, we’re fusing finance and tech to create new opportunities for those who want to achieve more. Being a high growth FinTech “scale-up”, it’ll be no surprise to hear Tech Recruitment is high priority here. We are looking for a strong Talent Acquisition Partner to join the team and build effective relationships across our product and engineering teams, advise hiring managers and interviewers on recruiting best practices and collaborate with business leaders on all matters of hiring strategy, team fit and offer negotiation. We are a passionate group whose bread and butter is learning new technologies and fostering a collaborative and inclusive environment - we’re looking for partners in crime who feel the same. - Experienced working in a hyper growth company and know how to come up with creative sourcing techniques to attract awesome talent. - You don’t rely on agencies, searching job boards or a database to find candidates (we don’t use these here). - You know how to collaborate with tech leaders and advise them on hiring strategy, while managing expectations and keeping them up to date on progress. - You have excellent verbal and written communication skills and know how to keep it short, sweet, and to-the-point when communicating with senior leadership. - You have experience establishing and building effective working relationships across all levels of the business, including C-level executives. - You’ve got a keen interest in staying in the know with Tech. - You know your way around different sourcing channels like Github, Connectifier, Lusha, Stackoverflow, Twitter, etc. - You love the challenge of hiring elusive candidates, coach hiring managers to improve the offer acceptance rate, and full-cycle recruiting in it’s entirety. - You’ll work diligently to maintain data integrity in our ATS and report on recruitment data and metrics to ensure scalability of our recruitment strategies. - You’re super self-driven, can deliver and have a “get stuff done” attitude. - You believe in people and the power of conversation. No hiding behind emails with candidates! - You can uncover technical aptitude and are able to distinguish between good and average Engineers. - You enjoy being part of a team and believe in standing together: We’re a small, fast-growing team, and everyone needs to pull for each other, share best practices to make the entire organization better. - Finally, you can have a bit of fun with us along with way. Recruitment is a tough job and we have ambitious hiring goals. It’s not always easy but we’re on an incredible journey and #LiveTheAdventure together. Why Join Us? Happy employees are productive employees, that’s why we offer a hearty benefits package. From learning and development and commuter stipends, to a competitive salary, equity, and health benefits, we’ve got you covered! That being said, have you heard about what we're doing?! Our mission is what really motivates us to come to work each day: - We're supporting small business, the engine of economic growth. - We're helping facilitate higher yields for investors and lower interest rates for borrowers. - We can fund loans extremely quickly, all online! - We have a clear competitive advantage globally in areas like domain expertise and regulatory processes. Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.Funding Circle provides equal employment opportunity to all individuals regardless of their race, age, creed, color, religion, national origin or ancestry, sex, gender, disability, veteran status, genetic information, sexual orientation, gender identity or expression, pregnancy, or any other characteristic protected by state, federal or local law.
OPCFW_CODE