From: Christoph Koegl (yahoo_at_[hidden]) Date: 2001-05-29 10:57:56

First, I duly pay credit to all Boost contributors out there; you make my programming life so much more enjoyable every day, thank you very much for that. I have some remarks regarding the Rational Number library.

1. The employed algorithms for addition and for multiplication of rationals (which incidentally work for any quotient field of an integral domain) are attributed to Nickolay Mladenov by the author (Paul Moore). To give credit where credit is due, it should be noted that the exact same algorithm was published by Peter Henrici (author of some well known higher calculus textbooks) in 1956. The exact reference is (in an abbreviated style):

Henrici, P. (1956): A subroutine for computations with rational numbers. J. ACM

I think a related note should be added to the source code. The above paper is the earliest article (to my knowledge) on explicitly representing rationals (or rather quotient fields) in software.

2. Interface/Utility functions: The documentation of the gcd and lcm functions fails to exactly define their semantics. Rather than stating that "the" greatest common divisor resp. "the" least common multiple of n and m is computed, it should state which one of the possible GCDs/LCMs is chosen. Note that, e.g., for 12 and -32 (taken to be integers) both 4 and -4 are greatest common divisors ("greatest" in this respect has nothing to do with -4 < 4 in the canonical ordering of the integers). As given, the gcd function requires an ordering < on the underlying integer type and uses it in some way to single out one of the GCDs. To be really useful the documentation needs to state something along the lines of "we employ the canonical Euclidean algorithm for GCD computation" and "GCDs computed by gcd are never less than zero (wrt. < and 0 of the underlying integer type)".

The documentation also fails to mention the behavior in the important singular cases gcd(0,0) and lcm(0,0). It wisely chooses the somewhat standard conventions of gcd(0,0) = 0 and lcm(0,0) = 0. But note that 0 and 0 neither have a greatest common divisor nor a least common multiple according to the mathematical definitions of these notions.

3. Some of the informal descriptions of the performance characteristics of the various operations can be made more precise without being chattier. The documentation could state performance bounds in terms of operation counts of the underlying integer type. Paul Moore notes that his remarks are based on the current implementation and therefore subject to change, but for many of the operations he chose a (sometimes obvious, sometimes rather nonobvious) optimal implementation in terms of operations of the underlying integer type. So I propose the following changes (or perhaps clarifying additions?) to the documentation:

(*) Increment and decrement operations are essentially as cheap as addition and subtraction on the underlying integer type.
    Increment and decrement operations perform at most one addition resp. subtraction of the underlying integer type.

(*) (In)equality comparison is essentially as cheap as the same operation on the underlying integer type.
    (In)equality comparisons perform at most two (in)equality comparisons of the underlying integer type.
    [Yes, I know that in fact inequality is reduced to equality.]

(*) The gcd operation is essentially a repeated modulus operation. The only other significant operations are construction, assignment, and comparison against zero of IntType values. These latter operations are assumed to be trivial in comparison with the modulus operation.
    The gcd operation never performs more than 5 * log( k ) + 1 modulo operations of the underlying integer type, where k is the maximum of the absolute values of the arguments (assuming not both are 0, in which case no modulo operations are performed).
    [The above is a crude adaptation of a well known bound on the number of divisions performed by the Euclidean algorithm, see e.g. D. Knuth, TAoCP/2.]
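To make the proposed wording concrete, here is a minimal sketch of a Euclidean gcd following the conventions suggested above (non-negative result, gcd(0,0) = 0). It only illustrates the documented semantics, and is not the Boost rational library's actual code:

    // Sketch only: illustrates the proposed documentation semantics, not
    // Boost's implementation. IntType stands for the underlying integer type.
    template <typename IntType>
    IntType gcd_sketch(IntType n, IntType m)
    {
        // Normalize to non-negative values so the result is never less
        // than zero (wrt. < and 0 of the underlying integer type).
        if (n < IntType(0)) n = -n;
        if (m < IntType(0)) m = -m;
        // Canonical Euclidean algorithm: one modulus operation per iteration.
        while (m != IntType(0)) {
            IntType r = n % m;
            n = m;
            m = r;
        }
        return n;  // gcd(n, 0) == n, hence gcd(0, 0) == 0 by this convention
    }

For example, gcd_sketch(12, -32) yields 4, never -4, which is exactly the guarantee the documentation should spell out.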
(*) The lcm operation is essentially a gcd, plus a couple of multiplications and divisions.
    The lcm operation performs exactly as many modulo operations as the gcd operation on the same arguments, plus at most one division and one multiplication of the underlying integer type.

(*) The addition and subtraction operations are complex. They will require approximately two gcd operations, 3 divisions, 3 multiplications and an addition on the underlying integer type.
    The addition and subtraction operations are performed using Henrici's algorithm. They therefore use (at most) two gcd operations, four divisions, three multiplications, and one addition resp. subtraction of the underlying integer type.
    [Note: I wrote "at most" because one could take advantage of some special cases in the implementation, such as when arguments or intermediate results are 0 or gcd's are 1. But explicit treatment of these special cases would possibly slow down the typical cases (by way of additional comparisons), so benefits are unlikely to result from this "overall".]

Note that as written the documentation of the comparison operations also talks about (in)equality comparisons, which are already mentioned earlier. The comparison operations, as implemented and as documented, make use of special cases by investing some comparisons against IntType(0). This could also be done with addition/subtraction and multiplication/division, as stated above. Paul, do you have any measurements or usage profiles that suggested your choice of not making use of special cases in the field operations?

That's all for now, and (again) thanks for some incredibly brilliant libraries!

--
================================================================================
Christoph Koegl, Dept. of Computer Science, University of Kaiserslautern
E-Mail: christoph_at_[hidden] WWW: http://www.familie-koegl.de/
--------------------------------------------------------------------------------
There are no stupid questions, but there are a LOT of inquisitive idiots.
--------------------------------------------------------------------------------
Will my preferred hardware support QoS / traffic shaping?

I have posted this elsewhere but did not get any responses, so now I am at the source! I am an IT consultant and I usually get my small clients to use these: However, today I got my first client that will be using VOIP phones, so I will need to provide QoS services to them. I consulted section 16.3 (page 335) in the pfSense handbook and it lists all of the NICs that support that. However, all I know is that the little ALIX board has three Via VT6105M 10/100 cards. Will those support QoS? Here is the board's info: Here's the URL on the board: Here's the PC Engines system board manual: Any help is greatly appreciated!

Can't directly help you with your question about the NICs, but consider this. QoS is only as good as the network it's on. If your client is getting/making calls over the internet, then QoS will not help, since you won't be able to apply QoS to your internet connection unless the ISP is able to apply QoS also. You lose all control over the calls after they leave your gateway (your pfSense box with QoS). If you're using an Asterisk-based distro, then check out the forums on www.pbxinaflash.com They're really helpful there. Search there for QoS. Depending on the amount of users, internal VLANs may help you more. Just my $.02.

valnar: "If your client is getting/making calls over the internet, then QoS will not help since you won't be able to apply QoS to your internet connection unless the ISP is able to apply QoS also." Not necessarily. You can control TCP flows with buffering and randomly drop inbound TCP packets to make room for UDP flows. This uses TCP's slow-start mechanism to your advantage. This can all be done at the remote site without any cooperation at the head-end or the Internet in general.

chpalmer: "QoS is only as good as the network it's on." And to add: VOIP is only as good as the network it's on. I've seen a few VOIP implementations fail due to the fact that companies thought they could just add a layer to their network without considering the realistic needs of their VOIP infrastructure overall.

I've run one of those with 4 IP phones and the QoS was fine; the connection speed was 16/2 and there were no issues.

Thanks for everyone's help. I understand the outside QoS issue. The vendor of the phones recommends DSL lines for their dedicated bandwidth. The QoS is there to stop video, email downloads, or computer updates from plugging up the internet pipe. I went ahead and ordered four of them, so I will know in the next few weeks how it goes. I have never set up pfSense to work with a PPPoE connection, but I know it's supported and I will learn. The book details it.
Unlimited force in SP?

06-06-2002, 04:21 PM
Is this possible? I notice when mind controlling Dessann or Tavion, both of them have totally unlimited force... so it's gotta be possible somehow. Any ideas?

06-06-2002, 06:01 PM
I don't think it is, as the single player source hasn't been released.

06-06-2002, 07:19 PM
I believe it is called cheating; browse the other boards for your answer.

06-06-2002, 09:13 PM
There isn't a cheat for unlimited force use; the only thing you can do is type or bind "give force" or "give all" in the console to replenish your force power.

06-06-2002, 09:51 PM
Every map has a script called either "start", "<mapname>_start", or "start_level", that configures Kyle's force powers, weapons, and health/shield/ammo/force status. You could use the scripts that came with the second SDK tools package, open them in BehavEd, assign all of those parameters to whatever you like, compile them as IBI files, and save them in a PK3 in your base folder. It's not as simple as just pulling down the console and typing "setforceall 3 or 5" and "give all", but it's a more permanent solution.

06-09-2002, 06:46 AM
There's a cheat called: Using it will refill your meter. I can't find a way to make unlimited force, but that cheat should tide you over.

06-22-2002, 12:29 PM
I remember seeing somewhere (I don't remember where) a way to make a script that can be toggled. Now say I have a cfg file with the give force command. Now all that must be done is have the script be toggled on; it will repeat until toggled off. So long as cheats are enabled, this should work. There are four kinds of commands, in a sense:
1) Those which just happen [ weapon 4 ]
2) Those which are toggled on for the length of a key press and are toggled off when it is released [ +speed ]
3) Those which are toggled on and then toggled off by their opposite command [ +force_grip and -force_grip ] [note that if this is the only command bound to a key it acts like 2 [ +force_grip ], otherwise it needs its opposite [ +force_grip; victory; -force_grip ]]
4) Those which are toggled on and off [ toggle cl_run toggles whether running or walking ]
So if the cheat give force was in a script of the fourth group, it would just keep being entered (just like if you bound it to a key and taped the key down), since the command fits into the first group. Do I make any sense? (Probably not.) Hope this helps.

06-22-2002, 06:58 PM
Just as a suggestion, I've never tried it myself; there's a section at the bottom of the weapons.dat that defines the max amount of ammo for each weapon you can hold (and I think how much you start off with). It also includes "force" as an ammo type. I thought, when I looked at it, that theoretically, if you were to set the "ammo_max" to 999, or whatever, you'd have practically the same thing as infinite force power. Again, just a thought; may be worth a try.

06-22-2002, 08:34 PM
And did you try it?

06-24-2002, 11:08 AM
Try binding "wait; give force" to your force key.
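For anyone following along, the bind suggestions from this thread would go into a config file along these lines (the key choices and filename are just examples, and cheats must be enabled first):

// autoexec.cfg -- sketch of the binds discussed above.
// Enable cheats in SP first (e.g. "helpusobi 1" in the console).
bind F "give force"          // refill the force meter on demand
bind G "wait; give force"    // the variant suggested in the last post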
Unless you've been hiding under a rock for the last few years, the growth of machine learning and its predominance in the technological world will not come as a surprise to you. From the point of view of a developer, this change has unfolded in many forms: data scientists are now among the most sought-after (and well-paid) roles in the industry, deep learning frameworks have multiplied exponentially, and companies struggle every day with the consequences of deploying and maintaining data-driven systems that have a huge reliance on data quality. Deep learning today is so ubiquitous that the director of AI at Tesla refers to it as the new "software 2.0".

You might not realise, however, that while AI was changing the software industry, it was software itself that was quickly changing the world of AI, by fuelling its adoption by society and, more importantly, changing the scope of what was feasible in practice. To say something extreme, I would like to argue here that what remains from the AI boom, once all hype is accounted for, is nothing more than a great story of how software, and good software at that, can actually change the world.

How software sparked it all

As you might know by now, behind all the talk on deep learning and AI there is a (relatively) old idea, going by the name of "artificial neural networks". Neural networks, as they are used today, are not that different from a function in a programming language: they take an input (generally a vector of numbers), and they provide an output, such as the probability that the input belongs to a given class or not. Differently from a programming function, however, neural networks have a large number of parameters (generally called "weights", in accordance with the biological terminology) that can be adapted by a training algorithm depending on the error they make. Without getting too technical, suffice it to say here that this process of adaptation requires an operation called "backpropagation", which computes in a quantitative way the change to be applied to every weight.

While backpropagation is rather straightforward from a mathematical point of view, its implementation (and, more importantly, its efficient implementation) is rarely so. This is probably one of the reasons why, over several decades, neural networks have repeatedly sparked the curiosity of the AI community, only to be later abandoned in favour of other (sometimes easier) methods.

The latest iteration of this hype cycle started around 2006, when several groups of researchers tried again to reignite the interest in neural networks, inspired by the huge availability of data and computing power compared to previous years. While the research world is not always famous for its attention to software development, this time it was realised from the outset that, for everything to work, things needed to be different. Working on the theoretical aspects of the revival of their field, many researchers decided to work equally hard on the software part. One of the main results of this effort was Theano, a small Python library dedicated to making backpropagation automatic and highly efficient. In terms of research software, Theano was a rarity: immediately open sourced on GitHub, heavily documented, easy to use (at least compared to the alternatives), with an extensive community of users on StackOverflow.
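To give a feel for what "computing the change to be applied to every weight" means in practice, here is a deliberately tiny, from-scratch sketch of backpropagation on a two-layer network. This is for intuition only; it is not how Theano worked internally (Theano built a symbolic graph and derived gradients from it automatically):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)       # hidden-layer weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)       # output-layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5                                             # learning rate

for step in range(5000):
    # Forward pass: input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the output error back
    # through the layers to obtain one gradient per weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Apply the quantitative change to every weight.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # after training, typically close to [[0], [1], [1], [0]]
```

Writing (and debugging) those few gradient lines by hand is exactly the error-prone work that Theano automated, for networks with millions of weights rather than a few dozen.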
Amid growing computing power and the first practical successes of deep learning, Theano became a catalyst for a small revolution: in just a short time, the number of people who were able to implement neural networks by using it grew exponentially, with more than a thousand forks of the original repo in a few years.

From neural networks to differentiable computers

In a sense, Theano was the cause of its own demise; once people understood the power of making these ideas accessible to anyone, libraries based on the same concept started to multiply wildly. Some, such as Keras, were originally built on top of Theano itself, while others, such as TensorFlow, came from huge IT companies such as Google and Facebook open sourcing their own efforts. As a result, Theano quickly became obsolete, with development officially stopping in 2017. The closing statement announcing the coming shutdown accurately summarises the change that happened over just a few years:

The [deep learning] ecosystem has been evolving quickly, and has now reached a healthy state: open-source software is the norm; a variety of frameworks are available, satisfying needs spanning from exploring novel ideas to deploying them into production; and strong industrial players are backing different software stacks in a stimulating competition.

Which can be further summarised by saying that today's revolution in AI is as much a victory of ideas as it is a victory of good software practices. Software was instrumental in making these ideas accessible to everyone: primarily researchers who were not experts in neural networks, but also small companies, makers, and developers from all over the world. The best legacy of Theano, in my opinion, is found in the common tagline "democratising AI", which today has become the slogan of many IT companies, from Google to Microsoft and NVIDIA (see "On the Myth of AI Democratization").

There is a way in which Theano (and everything that was to come) was for deep learning what object-oriented programming has been for software development. It made writing code for neural networks simple and, more importantly, modular. It freed researchers to think and experiment at a higher level of abstraction, with neural networks that were orders of magnitude more complex than anything done before. While neural networks started from a loose biological inspiration (hence the name), today they are more suited to the mentality of a programmer than a biologist, with a design inspired less by biology than by modularity and hierarchy.

Two short examples will suffice to clarify this analogy. First, consider the case of generative adversarial networks, a framework for generating things (e.g. new pictures of cats from a database of known photos). GANs were proposed in 2014 and quickly became one of the major breakthroughs in modern deep learning, inspiring a variety of other works and ideas, with applications ranging from image translation to cybersecurity. Fundamentally, they are composed of two neural networks interacting to obtain the final result, and they are the absolute brainchild of this software revolution. There is nothing remotely resembling biology in them, but their modular formulation is such that their implementation in most deep learning frameworks is a breeze (making them work well, on the other hand, is an entirely different matter).

[Image caption: Ian J. Goodfellow, now at Google Brain, one of the creative minds behind GANs and many other deep learning ideas.]
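The modularity argument is easy to make concrete. In the toy sketch below (a generic illustration, not the API of any particular framework), layers are objects that can be freely composed, and a "network" is just a stack of parts; this is the programmer's mindset described above:

```python
import numpy as np

class Dense:
    """A fully connected layer: one reusable, composable part."""
    def __init__(self, n_in, n_out):
        self.W = np.random.randn(n_in, n_out) * 0.1
        self.b = np.zeros(n_out)
    def __call__(self, x):
        return x @ self.W + self.b

class ReLU:
    """A nonlinearity, packaged as another interchangeable part."""
    def __call__(self, x):
        return np.maximum(x, 0)

class Sequential:
    """A network as a mere composition of parts."""
    def __init__(self, *layers):
        self.layers = layers
    def __call__(self, x):
        for layer in self.layers:   # output of one part feeds the next
            x = layer(x)
        return x

model = Sequential(Dense(2, 16), ReLU(), Dense(16, 1))
print(model(np.ones((4, 2))).shape)   # -> (4, 1)
```

A GAN, in this view, is simply two such stacks (a generator and a discriminator) wired against each other, which is why frameworks make the construction, if not the training, "a breeze".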
Another example is the neural differentiable computer (whose artistic rendition opens this piece): an attempt to provide neural networks with a form of long-term "memory" that is still coherent with the idea of backpropagation. Everything in it, from the idea to its name, conveys the new cognitive mindset with which deep learning researchers are now equipped.

The power of software

The relation between academic research and good software has always been, at best, a problematic one (with some notable exceptions). Researchers typically do not have enough incentives or competence to write code that goes beyond mere usability. Deep learning is a marvellous tale of what happens when the two work together. It is difficult to imagine such a boom in the use of deep learning had it not been backed by powerful (and simple) libraries. At the same time, the power of these libraries is directly reflected in how researchers and experts are thinking about their very topic, which some are even proposing to rebrand as "differentiable programming". Irrespective of whether artificial intelligence will continue in this direction and make good on all its promises, the last few years will remain a testament to the power that good software has in shaping the world and the minds of people.
Algebra 2 Course Policy

Overview: Algebra is fundamental; in Algebra 2, students will develop a wide variety of mathematical tools that they'll use in every subsequent math class. Students will develop an extensive vocabulary of mathematical terms, symbols and functions; students will also learn how to manipulate and solve various kinds of equations, including those involving polynomial, exponential, and trigonometric functions. Algebra 2 is a two-semester required college preparatory course. Students will learn through participation in lessons, class discussion, group work, and other enrichment activities.

Semester 1: Linear Functions; Quadratic Functions; Quadratic Equations and Complex Numbers; Polynomial Functions; Rational Exponents and Radical Functions

Semester 2: Exponential and Logarithmic Functions; Sequences and Series; Trigonometric Ratios and Functions; Probability; Data Analysis and Statistics.

Class Materials: Pencils and erasers, notebook (I suggest, but do not insist on, a math-only notebook with graph paper), scientific calculator (this CANNOT be your phone or your computer, neither of which will be allowed on tests, and also CANNOT be a graphing calculator), and your BYOD computer. Your school-issued Big Ideas math textbook can remain at home. You will not be excused from class to retrieve materials from your locker; chronic failure to bring materials will jeopardize your grade. If you face difficulty in acquiring these class materials, please see me in private.

Classroom Expectations: These fall into three categories:
- Be prepared. Note: this section includes the Late Homework Policy.
- Be on time! Repeated tardies will result in administrative action, in accordance with SRVHS policy. It's also disrespectful of the time of your fellow students (see below).
- Bring daily class materials every day! See "Class Materials" above.
- Do your homework! Learning math requires doing math; new skills and concepts must be cemented on a daily basis. In addition, many test problems will closely parallel homework problems. I strongly suggest that you initially try to do homework on your own, in a room with no distractions, before seeking help. Most homework will be assigned online through the Big Ideas website and will be due when the bell rings to start the next day's class period.
- Late Homework Policy: In the case of an excused absence, missed classwork and homework will receive full credit if turned in within two days (although you are strongly encouraged to turn it in earlier, so as to keep up with the class). Otherwise, late classwork and homework will receive half credit if (a) complete and (b) turned in before the next unit test. For the SRVUSD Homework Policy, please see the side link.
- Be present. You should be in class, and paying attention.
- Please come to class! Repeated absences make it harder to understand the material. If you miss class, it's your responsibility to find the assignments you missed on School Loop and make them up promptly (see Late Homework Policy above). Similarly, if you miss a test, it's your responsibility to find me and schedule a make-up test.
- Please make your phone silent, and put it away for the duration of class. Your computer should also be away except when computers are being used for classwork.
- No headphones.
- Stay on task. Side conversations are distracting, and math requires your whole concentration!
- Be respectful. Be kind. Be the good people I know you can be.
- “To get respect, give respect.” Act respectfully towards your teacher and fellow students, with the expectation that they will act respectfully towards you. - Act with common courtesy and common sense. Assume good intentions in others. - Respect the ideas of others. This is especially important, as we will be exploring mathematical ideas every day. If you disagree with somebody else’s idea, say so politely. - In accordance with district policy, harassment in or out of the classroom will not be tolerated. This includes (but is not limited to) harassment based on race, ethnicity, religion, gender, or sexual orientation. For the SRVUSD Anti-Harassment Policy, please see the side link. Tests: Quizzes and tests will be frequent, and a major component of the grade (see below). You should prepare for tests by reviewing notes, study guides, and homework problems. You all know proper test-taking behavior; if you cheat on a test, you’ll receive a grade of “0”, which will severely drop your semester grade. - Your final grade will be based on: homework and classwork (10%); class participation (5%); quizzes and tests (70%); and exams (15%). This grading policy is in close alignment with the other teachers of this subject at SRVHS. - Your final grade is a reflection of your work throughout the ENTIRE semester. If you care about your final grade, the time to start worrying about it is NOW. If you have a C three-fourths of the way through the semester, it’s an uphill climb just to finish with a B. - No individualized extra credit will be given; please don’t ask me if you can “do extra work” to bring up your grade. - I will update your grade on School Loop periodically. You are responsible for checking it, and bringing any errors (respectfully) to my attention.
The Mech Touch – Chapter 3218: Clash of Prides

"Thanks, friend. Could we communicate now?" she asked her mech.

"That's easier said than done." Ves quietly muttered. "These hardheads don't appear to be letting up soon."

"And then what? Let my own mech take advantage of my lack of defense?"

"The next time, you and I need to sit down together and have a good talk on how to shape the personality of the living mechs we design." Gloriana insisted as she continued to glance at the data readouts. "Your living mechs have become more powerful, and that's good, but it's like raising a son without active parenting. If we don't keep an eye on our child, he may grow up to become a delinquent!"

"Will I have to carve a hole in this new expert mech in order to pull Orfan out of her cockpit?!" Venerable Tusa asked as his Dark Zephyr hovered closest to the out-of-control expert mech.

"I didn't think that something like this would happen!" Ves defended himself. "In nearly every instance, my living mechs are happy to be used by their pilots. They are programmed to never treat their clients as hostile. This conditioning should be even stronger when it comes to expert mechs. They're designed to be used by one pilot only, to the exclusion of everyone else. It doesn't make sense why the Vanguard Project is able to set aside its programming!"

"Don't move unless ordered!" Ves replied over the communication channel. "You're going to deal disastrous damage to the Vanguard Project and Venerable Orfan's trust if you forcibly pull them apart. This situation is not unsalvageable. The threat isn't too great right now, so let the situation play out. This struggle is taking a lot out of both of them. They can't keep up this confrontation forever."

Though Ves knew that it was better to let the expert pilot and expert mech come to terms on their own, it didn't seem likely that this would ever happen. He would rather take the risk of intervening than allow this already precarious situation to explode.

"Maybe… I should just take a leap of faith."

First, the entire expeditionary fleet moved away from the potentially dangerous expert mech. It was quite troublesome to instruct every ship in the fleet to move without any adequate warning or preparation, but nobody kicked up a fuss this time.

"Venerable Orfan! Wake up and stop trying to take over your expert mech! You're not supposed to treat your living mech in this fashion."

Making mechs alive was a double-edged sword. While these self-aware and self-thinking mechs granted a lot of benefits to mech pilots, there was always a possibility that things could go down another route.

Despite her objections, Venerable Orfan was willing to test Ves' proposition. This fight had been going on for a while now, and it had already drained a lot of her mental energy. She wasn't able to keep up her defense for long anyway, so why not give this other solution a chance?

Although her analogy sounded rather goofy, it helped Ves gain some insight. Gloriana approached this problem from the perspective of a parent raising a child. Maybe that was a good way to put this issue into context.

At present, the Vanguard Project's resonance shield grew darker while its shape contorted into a spiked ball. Its limbs jerked uncontrollably while its flight system began to release bursts of thrust that sent it floating in random directions.

For better or worse, the expert mech and expert pilot had to come to an accord on their own. The only way for them to put down their rivalry and work together was to accept one another as equals.

When Ves and the others heard what Venerable Orfan shouted against her own expert mech, they wanted to palm their faces.

Ves tried his best to figure out a solution. "I think the expert mech isn't trusting you because you aren't trusting it either. It is directly connected to your body and mind. You can't hide your true feelings towards it as long as you are interfacing with it. What you need to do is be the better person and offer reconciliation."

This was quite an embarrassing occasion after all! It reflected poorly on Ves and the other designers that they developed an expert mech that couldn't even get along with its own intended expert pilot.

Ves had no choice but to head to the control room and observe the situation play out from a healthy distance.

Gloriana did not have kind words for this failure. "This is your mistake, Ves. I don't think I have ever seen expert mechs turning against their own pilots until today. Only you could bungle this up, because your mech is alive. Why haven't you accounted for this possibility?!"

"Be careful about its orientation! Don't turn the expert mech around. If it ever does a combat charge for whatever reason, then don't let it blow open a hole straight into my precious factory ship!"

Her strength faded as she tried her best to communicate her willingness to compromise and work with the Vanguard Project.

"I… am not… going to let my own mech call the shots! I'm the pilot here! Who the hell do you think you are?! I will never let myself become a laughing stock of the galactic mech community! If you think you can turn me into the first expert pilot who is being piloted by her expert mech, then think again!"

The only reason why Ves hadn't set up his guard against this possibility so far was because it had never really happened. The Devil Tiger was the mech with the highest chance of turning against its client, but his mother had hijacked his first masterwork mech before he could see his experimental plan come to fruition.

"Idiot!" Ves cursed.
Developers are a reserved bunch who work behind the scenes, but in essence the work they do can make or break a project. A cryptocurrency or blockchain project is a business in itself, and developers play a crucial role in distinguishing a business from competitors and helping it become more competitive. Lucid, through their proposal, seeks to refurbish the developer ecosystem, as they realize that developers' efforts help improve clients' experiences, bring more feature-rich and innovative products to market, and make setups safer, more productive, and more efficient.

Lucid in a Nutshell

Their serialization-lib abstracts the complexity that comes with building Cardano transactions: balancing transactions, coin selection, calculating fees and script costs, attaching datums, reference scripts, serialization, and more. This abstraction ensures they can provide a friendlier path, allowing developers to focus purely on the development of the dApp. We can say that Lucid helps bring value to the developer space by making it friendlier and easier for developers to build transactions, create dApps, and interact with Cardano.

Cardano's Developer Ecosystem

For some time now, developers have been shying away from Cardano because it lacks a friendly and easy-to-use library for building transactions, interacting with Plutus smart contracts, and building dApps. Learning from past experiences, Cardano will get over this hurdle. Last year, developers on the Cardano chain faced concurrency issues, where it was difficult for multiple agents to interact with the same smart contract at the same time. A solution to this was found through aggregating multiple interactions to settle on the same state.

Building dApps on Cardano, light wallets to be specific, is a bit cumbersome because they cannot verify smart contract execution, as the network lacks a library for smart contract execution validation that can run in the browser. This makes life hard for developers and users, as they are not protected should someone maliciously try to access their light wallets. Hardware wallets, too, face a design flaw of not running smart contracts, since they have very limited memory and computation capacity. This means that the solution to what ails the Cardano developer ecosystem should come from an approach that helps developers develop the easy way.

The Lucid Team

The Lucid team is led by Alessandro Konrad, whose role is that of creator and developer of the Lucid library. Looking at Konrad's GitHub profile, you can see his previous and current contributions to the Cardano ecosystem, including SpaceBudz, Nami Wallet, and Berry Pool, among others, all of which are open source and widely adopted by the community. Jenny Brito will be joining the Lucid team as the Administration and Logistics lead.

Funding Budget Breakdown

Alessandro Konrad: Lucid library creator, architect, and project developer. Hourly rate: $60/hr. Total hours: 680. Total in USD: $40,800.
- Initial library (Research, Coding, Testing, and Implementation) – 380hrs – $22,800
- Deno integration (Research, Coding, Testing, and Implementation) – 60hrs – $3,600
- Vasil integration (Research, Coding, Testing, and Implementation) – 180hrs – $10,800
- Library expansions (Research, Coding, Testing, and Implementation) – 60hrs – $3,600
Total: 680hrs – $40,800

Jenny Brito (Administration and Logistics): Continuously assisting Alessandro with administration, communication, logistics, and documentation. Hourly rate: $20/hr. Total hours: 200. Total in USD: $4,000.
- Research (Processes and developments) – 80hrs – $1,600
- Documentation (Redacting, filing, and proof-reading) – 70hrs – $1,400
- Data analysis and feedback (Redacting, filing, and proof-reading) – 40hrs – $800
- Project Catalyst related engagements – 10hrs – $200
Total: 200hrs – $4,000

Budget total: $44,800

A Reprieve for Developers

Developers have been using libraries and frameworks for a long time now, not only to save time but also to reuse the solutions to problems that other developers have already figured out. Lucid's proposal provides a reprieve to Cardano developers with their serialization-lib that abstracts the complexity that comes with building Cardano transactions. This allows developers to focus on creating unique features for their dApps without wasting time. The Lucid libraries are designed to help developers improve the performance and efficiency of their dApp development process.
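For a taste of the abstraction described above, here is roughly what building and submitting a simple payment looks like with Lucid, based on the project's published examples (the endpoint, project ID, key, and address values are placeholders):

```typescript
import { Blockfrost, Lucid } from "https://deno.land/x/lucid/mod.ts";

// Initialize Lucid against a provider (Blockfrost here).
const lucid = await Lucid.new(
  new Blockfrost("https://cardano-mainnet.blockfrost.io/api/v0", "<projectId>"),
  "Mainnet",
);

// Select a wallet; a private key is one of several supported options.
lucid.selectWalletFromPrivateKey("<privateKey>");

// Build, sign, and submit a payment. Balancing, fee calculation, and coin
// selection -- the complexity the serialization-lib abstracts away -- happen
// inside .complete().
const tx = await lucid
  .newTx()
  .payToAddress("addr...", { lovelace: 5000000n })
  .complete();

const signedTx = await tx.sign().complete();
const txHash = await signedTx.submit();
console.log(txHash);
```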
Game Terrain Database Model

I am developing a game for the web. The map of this game will be a minimum of 2000km by 2000km. I want to be able to encode elevation and terrain type at some level of granularity - 100m x 100m, for example. For a 2000km by 2000km map, storing this information in 100m x 100m buckets would mean 20000 by 20000 elements, or a total of 400,000,000 records in a database. Is there some other way of storing this type of information?

MORE INFORMATION: The map itself will not ever be displayed in its entirety. Units will be moved on the map in a turn based fashion and the players will get feedback on where they are located and what the local area looks like. Terrain will dictate speed and prohibition of movement. I guess I am trying to say that the map will be used for the game and not necessarily for graphical or display purposes.

I would treat it differently, by separating terrain type and elevation. Terrain type, I assume, does not change as rapidly as elevation - there are probably sectors of the same type of terrain that stretch over much longer than the lowest level of granularity. I would map those sectors into database records or some kind of hash table, depending on performance, memory and other requirements. Elevation, I would assume, is semi-continuous, as it changes gradually for the most part. I would try to map the values into sets of continuous functions (different sets between parts that are not continuous, as in a sudden change in elevation). For any set of coordinates for which the terrain is the same elevation or can be described by a simple function, you just need to define the range this function covers. This should greatly reduce the amount of information you need to record to describe the elevation at each point in the terrain. So basically I would break down the map into different sectors composed of (x,y) ranges, once for terrain type and once for terrain elevation, and build a hash table for each which can return the appropriate value as needed.

It depends on how you want to generate your terrain. For example, you could procedurally generate it all (using interpolation of a low resolution terrain/height map - stored as two "bitmaps" - with random interpolation seeded from the xy coords to ensure that terrain didn't morph), and use minimal storage. If you wanted areas of terrain that were completely defined, you could store these separately and use them where appropriate, randomly generating the rest. If you want completely defined terrain, then you're going to need to look into some kind of compression/streaming technique to only pull terrain you are currently interested in.

That will be an awful lot of information no matter which way you look at it. 400,000,000 grid cells will take their toll. I see two ways of getting around this. Firstly, since it is a web-based game, you might be able to get a server with a decently sized HDD and store the 400M records in it just as you would normally, or more likely create some sort of your own storage mechanism for efficiency. Then you would only have to devise a way to access the data efficiently, which could be done by taking into account the fact that you doubtfully will need to use it all at once. ;) The other way would be some kind of compression. You have to be careful with this, though. Most out-of-the-box compression algorithms won't allow you to decompress an arbitrary location in the stream. Perhaps your terrain data has some patterns in it you can use? I doubt it will be completely random. More likely I predict large areas with the same data. Perhaps those can be encoded as such?

I think the usual solution is to break your domain up into "tiles" of manageable sizes. You'll have to add a little bit of logic to load the appropriate tiles at any given time, but it's not too bad. You shouldn't need to access all that info at once - even if each 100m x 100m bucket occupied a single pixel on the screen, no screen I know of could show 20k x 20k pixels at once. Also, I wouldn't use a database - look into height mapping - effectively using a black & white image whose pixel values represent heights. Good luck!

If you want the kind of granularity that you are looking for, then there is no obvious way around it. You could try a 2-dimensional wavelet transform, but that's pretty complex. Something like a Fourier transform would do quite nicely. Plus, you probably wouldn't go about storing the terrain with a one-record-per-piece-of-land approach; it makes more sense to have some sort of database field which can store an encoded matrix.
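A small sketch of two of the ideas from the answers above: break the 20,000 x 20,000 grid into fixed-size tiles loaded on demand, and run-length encode each tile, since large areas tend to share the same terrain value. Names and sizes here are illustrative, not a prescription:

```python
TILE = 500  # tile side length, in 100m cells

def encode_tile(cells):
    """Run-length encode a flat list of cell values."""
    runs = []
    for value in cells:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

class TerrainStore:
    def __init__(self):
        self.tiles = {}   # (tile_x, tile_y) -> RLE runs; could live in a DB

    def set_tile(self, tx, ty, cells):
        self.tiles[(tx, ty)] = encode_tile(cells)

    def terrain_at(self, x, y):
        """Look up the terrain value for grid cell (x, y)."""
        tx, ty = x // TILE, y // TILE
        runs = self.tiles[(tx, ty)]       # in practice: fetch/cache on demand
        index = (y % TILE) * TILE + (x % TILE)
        for value, count in runs:         # walk the runs to the target index
            if index < count:
                return value
            index -= count

# A uniform "plains" tile compresses to a single run instead of 250,000 cells.
store = TerrainStore()
store.set_tile(0, 0, ["plains"] * (TILE * TILE))
print(store.terrain_at(123, 456))   # -> "plains"
```

Run-length encoding also sidesteps the objection about generic compressors: decoding a single cell only requires walking one tile's runs, not decompressing an entire stream.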
Fey Evolution Merchant – Chapter 430: Indigo Azure Sea Market

"If you don't contact me after that, I'll go to that shallow near-shore sea through the Listening Heron Chamber of Commerce to ask about any news."

When Listen heard Lin Yuan's words, he frowned slightly and pondered.

Listen paused for a moment and then added, "But I know Young Master won't let the Zheng family kill me."

In fact, the Guild Alliance was full of spirit qi professionals, and most of them were from Indigo Azure City. These spirit qi professionals lived in Indigo Azure City all year round, so someone might know some useful information.

Lin Yuan raised his eyebrows and responded, "That's for certain."

On the back of the Platinum Long-Backed Swan, Lin Yuan could already faintly see the outline of Indigo Azure City. This city by the sea was too huge.

After thinking for a while, Listen suddenly smiled and said, "The Zheng family is a veteran faction. As one of the three major factions in Indigo Azure City, they can be said to be superior.

"There is a Guild Alliance branch near that shallow near-shore sea. You might gain something by going there to investigate."

Listen answered Lin Yuan's question, "I can't say what kind of expert the Zheng family will send. It should be either a king-class or emperor-class expert. The king-class expert is a bit more likely."

The Mother of Bloodbath pondered for a moment and said, "Since they're two high-grade dark-type Fantasy Breed feys, their willpower should both be relatively strong."

When Lin Yuan heard Endless Summer say that, he recalled the time when the Mother of Bloodbath had split apart the corpse of that pinnacle Suzerain/Myth II crow without any waste.

Even if Lin Yuan was high in the sky, the area within his sight could only encompass 1/10 or 2/10 of this huge city.

"Many of the ocean spiritual materials and feys there contain rare and peculiar elements. There are always people who buy them, but there are occasions when even Class 4 Creation Masters misjudge.

"Almost all the local factions and people in Indigo Azure City will sell spiritual materials and feys they have gathered in the sea over these ten years at this sea market."

As they continued on their path, they were getting closer and closer to Indigo Azure City. Lin Yuan sensed that the surrounding temperature was significantly higher than when he was in the Royal Capital. The air above the sea also faintly carried a salty ocean breeze.

After saying that, Lin Yuan thought for a moment, took out another tube of dark golden blood, and added, "Take this tube of Diamond dragon blood as well. If the Perfect Dragon's Lips Orchid can't stimulate the dragon-species bloodline in the Lava Iguana's body, this tube of blood will be able to produce an extra effect."

"I'll carefully divide it when we get back to the Royal Capital to see if I can manage the blood with the Blood Law and promote its cellular activity to preserve the willpower within it to the greatest extent."

Before the Zheng family was sure how strong the emperor-class expert who had killed the pinnacle king-class expert really was, they would never let their own emperor-class experts easily take risks.

After Lin Yuan finished talking to Listen, he turned to Endless Summer, who was sitting beside him in a lotus position, and said, "Endless Summer, when Listen returns to the Listening Heron Chamber of Commerce's old mansion, protect him in the dark like you did in the Royal Capital."

Someone who could silently kill a pinnacle king-class expert with a Diamond X/Fantasy IV fey at the very least had emperor-class power. No veteran faction would casually make use of an emperor-class expert.

Also, Listen had sworn an allegiance oath to Lin Yuan and was a part of Lin Yuan's private faction, Sky City. Lin Yuan wouldn't let the Zheng family lay hands on the members of his own faction.

Rather, it was to verify whether the death of the pinnacle king-class expert they had sent had anything to do with Listen. This was frankly a test.
[Slackbuilds-users] ATT: anyone using python3-PyQt5
dave at slackbuilds.org
Thu Apr 23 12:02:21 UTC 2020

On 2020-04-23 11:57, Tim Dickson <dickson.tim at googlemail.com> put forth the proposition:
> I also spotted that qt5-webkit is marked as a dependency of python3-PyQt5.
> If it has been split into python3-PyQt5 and python3-PyQtWebEngine, does that
> change the qt5-webkit dependency of python3-PyQt5 to just qt5, or is the
> webengine different from webkit?
> Regards, Tim

I've just had a test build - python3-PyQt5 and python3-PyQtWebEngine will build without qt5-webkit, but I think it will take some testing to see if each application at the end of the chain would run or not. So at the moment it looks like qt5-webkit could be optional for

> On 23/04/2020 06:18, Dave Woodfall wrote:
> > Hi all,
> > This is a heads-up for anyone using python3-PyQt5. The version we
> > have now has had support for QtWebEngine python bindings split into
> > a separate package - python3-PyQtWebEngine.
> > Here is a list of SlackBuilds that currently depend on python3-PyQt5
> > and that may /possibly/ need python3-PyQtWebEngine.
> > Since python3-PyQt5 is a dependency of python3-PyQtWebEngine, it's a
> > matter of swapping one for the other in REQUIRES.
> > academic/Mnemosyne
> > academic/veusz
> > audio/carla
> > desktop/anki
> > desktop/kolorcontrol
> > development/jupyter-qtconsole
> > games/pybik
> > libraries/QScintilla-qt5
> > multimedia/openshot
> > network/Electrum
> > network/onionshare
> > network/persepolis
> > office/ReText
> > python/pyqode.qt
> > system/fs-uae-arcade
> > system/fs-uae-launcher
> > system/laptop-mode-tools
> > So far, `anki' is the only one that I know that needs it and I've
> > adjusted it in my branch.
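[For anyone following along, the REQUIRES swap described in the quoted message is a one-line change in a build's .info file; a hypothetical before/after for desktop/anki might look like:

# anki.info (before)
REQUIRES="python3-PyQt5"

# anki.info (after) -- python3-PyQtWebEngine already pulls in python3-PyQt5
REQUIRES="python3-PyQtWebEngine"
]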
More people are realizing that beyond the hype, the cloud can be a real game changer. But how that affects each tier of the software stack is, I think, up for some debate. Deciphering the future path of development is something of a mystic art, of which I will attempt to partake.

So why is Google betting on the browser? Even at a time when most couldn't imagine the browser providing a truly interactive experience, Google Maps made its debut and changed everything. But Google has a problem: the browser as a platform can be a very hostile development environment. So it's no surprise Google IO 2010 sessions were flooded with ways to make rich client-side applications on the browser simple and less hostile. Google's prize fighter in this effort is GWT. The GWT compiler abstracts away the browser-specific issues, improves the performance of the client-side application, and encourages code reuse between client and server. GWT attempts to let developers focus on the application experience without all the browser-specific worries that are usually associated with complex rich browser applications.

This shift from avoiding client-side development to adopting it as the primary development strategy has some interesting consequences. First is the movement of the presentation layer from the server side to the client side. Because of this, the HTML template engine is no longer useful to modern browser developers. Also, Ajax moves from being a cute way of creating the appearance of an interactive application to the primary way an application retrieves data from the web head. In this new paradigm, the web head becomes what is essentially an RPC service. To anyone who is paying attention, this shift has huge implications for web frameworks like Ruby on Rails that have built their framework around server-side templates and easy creation of RESTful web services. What this means is that the bulk of the application is no longer on the server side, but on the client side. This is a big win for scalability, as the web head has less to keep track of and is freed to be more stateless, which is just what you want if you want to scale your web application to millions of users. One last thing this paradigm shift means: SOA gains much more relevance in the web application space. A rich client consuming (REST-based) web services is a match made in heaven for SOA.

In the cloud space, PaaS is probably the most underestimated and uninteresting tier to most early adopters of the cloud. Most major customers of cloud services are more interested in figuring out how their current software platforms can utilize the cloud than in developing applications targeting the cloud platform. The fear for cloud adopters is of vendor lock-in. If you develop applications for Google App Engine and then decide you want to use some other cloud vendor, you're stuck. You have developed your application for Google's PaaS, and it's not exactly transferable to other cloud vendors or, for that matter, on-premise. This issue alone is the biggest detractor from PaaS. Cloud vendors are just not seeing a lot of traction in the PaaS space, and I think this is the primary reason. What we need is a PaaS-independent platform developers can target which will work no matter what cloud vendor you choose. The good news is that Google recognizes the PaaS issue and is hoping to capitalize on the success of the Java standards in the cloud.
In the Google IO 2010 keynote, they announced a partnership with VMware to deliver an open development platform that all cloud vendors could implement, allowing developers to target a vendor-neutral PaaS. This could be huge, as a vendor-neutral development platform could bring more companies to the cloud and truly change the way we look at hardware from a development standpoint. No longer are we worried about how our application will scale within the hardware infrastructure. Developers will be free to focus on the client-side user experience, and web services on the server side should just scale as needed thanks to the cloud. One-button deployment to a production system can truly be a reality, lifting a huge burden off change and release teams and developers in general. How PaaS could do this may be the subject of a future post. But for now, I think it's safe to say PaaS has a big future ahead of it.
Need to set python_interpreter for roles other than rebuild_inventory

When running...

quickstart.sh localhost

...the deploy fails with:

TASK [setup/undercloud : Generate ssh configuration] ***************************
Sunday 13 March 2016 10:52:20 -0400 (0:00:00.038) 0:21:41.223 **********
fatal: [host0 -> localhost]: FAILED! => {"changed": true, "failed": true, "msg": "Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!"}

We may want to set ansible_python_interpreter at a higher level... either by restoring the group_vars/all.yml file, or by just passing it on the ansible-playbook command line via -e.

Fix proposed in https://review.gerrithub.io/#/c/266137/

The build image playbooks are never called from quickstart.sh. This change restores the setting for the build image playbooks: https://review.gerrithub.io/266417

I think it might be better to document that we need to pass that in globally if using ansible-playbook directly from a venv.

Either approach works... I (personally) favor "it just works" over "did you remember to set _____ per docs?" I also get that we don't want it cluttered throughout. {shrug}

It's not just a question of clutter. It's the maintenance issue of having to remember to put it in particular places anytime we have a task that is delegated to localhost. One alternative that I looked at earlier but didn't really pursue is dropping the setting in group_vars/all.yml, where it will be picked up as a default value as long as Ansible finds the group_vars directory. If we are always running from inside the tripleo-quickstart directory this is easy.

Ahhhh, I see. I didn't consider the localhost delegated task issue. The group_vars alternative is interesting. I'm still learning Ansible; is that something only accessible if found (it would be in the localhost case... not so much for the remote case)? Hrm... in the scenario where one is running either image building or quickstart... on the local host on purpose... that would be problematic. I see why a "doc it, use -e, and move along" makes sense now ;) Thanks for explaining!

> I'm still learning Ansible; is that something only accessible if found (it would be in the localhost case... not so much for the remote case)?

Group variables, like playbooks, only need to exist on the host where you're running ansible. The group_vars and host_vars directories are located relative to the playbook you're running. And possibly also relative to your current CWD.

And just to be clear: the issue is when we delegate a file: task (or related module) to localhost; the file module is what requires libselinux-python.

Now the errors I was/am hitting make more sense. Thanks very much for taking the time to explain.

https://github.com/redhat-openstack/tripleo-quickstart/commit/5a90503c9a76e51c8afc4caaa2a457c195621b58 resolves this for the quickstart case.
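For reference, the two approaches discussed above look roughly like this (the interpreter path is illustrative):

```yaml
# group_vars/all.yml -- picked up as a default for all hosts (including
# tasks delegated to localhost) whenever Ansible finds the group_vars
# directory relative to the playbook:
ansible_python_interpreter: /usr/bin/python

# Alternatively, pass it globally on the command line instead, e.g.:
#   ansible-playbook playbook.yml -e ansible_python_interpreter=/usr/bin/python
```

Either way, delegated file: tasks then run under the system Python, where the libselinux-python bindings are importable.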
With great power, there must also come --- great responsibility! -- Stan Lee (Narrative in Amazing Fantasy #15)

All around us, in enterprises, schools, hobbies, and at home, we have computers. They are the modern-day work tool. We live and work through these digital boxes. I remember, from my university days, students walking around like mindless drones whenever the university's workstations stopped working. "What should we do now?" It doesn't matter if you're a student or the president: we are all dependent on digital computer systems. And all computer systems are dependent on the system administrator. System administrators together have more power than all international councils combined (after all, some system administrators must be keeping the councils' operations running via their computer systems). There is a reason this shirt from ThinkGeek is funny: it's funny because it's true.

Their job bears a heavy responsibility. It probably contributes to their much-rumored ill temperament that users and crackers continually try to use their systems in ways the systems aren't designed for (I would, however, say this problem is a result of the design and implementation of the software behind the system). Because of constant attacks and exploit attempts, it shouldn't come as a surprise that the recommended way to run a system on a network is with an "open-up-only-if-necessary" approach. By closing ports, system administrators free themselves from attacks that exploit those ports. By opening only necessary ports, the system becomes safer.

But as the system becomes safer from malicious users, normal users must deal with the added burden of not being able to try out or develop new solutions. Users have to restrict themselves to ports the system administrator is familiar with and trusts. The system administrator might open the port for MSN instant messaging or Skype but close the port for Jabber. The Lotus Notes port might be open because that's what Lotus Notes uses, but the IMAP port might be closed (there is also the possibility that the Lotus Notes system administrator doesn't enable IMAP). Any system administrator would become irritated by a user's request to open a port just because they want to try out this or that software. What irritates me is that while they choose to close ports (often ports used by free software, because they don't know that software very well), they still choose closed, proprietary software whose internal workings they know nothing about, and open up the ports for that application (I am also irritated at my system administrator, who opened up the SSH port on a DHCP connection, which meant I continually lost my port access every time I got a new IP address -- but that's another story).

This security measure is still understandable. The whole system is their responsibility. You just don't open up everything and invite people in; that would be a bad judgment call, even though it restricts users' freedom to use the software they want. But it just doesn't seem right that they close ports yet open up access to software they don't know anything about. And they all open up port 80 for HTTP access. When evil-doers know what's always open, why shouldn't they just focus all their effort on exploiting that opportunity? Instead of making exploitation harder for malicious users by deploying a more diverse system, system administrators have just shown them the door. But wait, there is a way to restrict HTTP access: block access to certain sites, i.e.
don't let your users visit malicious sites. This improves security (which is why browsers have begun maintaining lists of sites users shouldn't go to). However, some system administrators decide to use some crappy web content-control software, like WebSense, to further restrict users. Content-control software has been criticized before (e.g. for choosing sides or for false alarms). It is an awful decision that restricts productivity while trying to increase it. This crappy content-control might increase security (probably not that much), but today it is used more to increase productivity and decrease bandwidth use. Users can't stream media; users can't access sites categorized as "a personal blog", or Facebook. This all sounds good. Why would anyone want to visit Facebook when they should be working? That's just bad for productivity (some employees do waste a lot of precious time, and thus productivity, hanging out on Facebook and blog sites, but those sites might also enable them to do things the system administrators can't foresee).

I believe an old way of thinking is getting in the way of a new way of working. We increasingly reach out to peers for information, and peers are increasingly putting information online. Content-control software restricts this access. I have been searching online for solutions to a problem and had to go out of my way to get access to the information Google pointed to (which had the answer I was looking for). Would it have been better to contact a private company who'd have charged my employer an arm and a leg, after all the hassle of contacting them and waiting while they dug up the information (which would probably be the same information that's available online for free, or something worse)? Although I'm not on Facebook, I see the potential in a huge network you can simply ask for help. We shouldn't block these sites; we should help employees use them more efficiently. We're restricting the development of information retrieval and sharing. That's just stupid and irritating and a bloody waste of potential productivity.

But we face an even greater threat from system administrators. I have sat in meetings with heads of IT departments who brag about finally being able to control what software users can use. They brag about stopping users from installing anything except the software they make available. That's just sadistic, and it's nothing to brag about. It's the wrong kind of development. I understand that this is supposed to decrease complexity in the system and make the system as a whole more maintainable. But they are effectively deciding how people should work and what tools to use. They are, again, making it easier for exploiters and attackers to focus on specific products. But far worse is the effect this will have on employees. They are effectively creating worker drones. They are defining how we should work, and by doing so destroying employee creativity and problem solving. This would be kinda (but still not at all) understandable if system administrators were professionals who knew everything about our work and tasks. But they are only system administrators (albeit powerful ones). We hire professionals to do their work the best way possible, but then we also hire system administrators who go ahead and decide how the professionals should work. That's just plain wrong. System administrators have too much control. This is not a bright future.
We're heading toward the digital dark ages faster than I anticipated. System administrators who take away the freedom to choose the best tools are restricting development and productivity. System administrators have a lot on their plate. They have a big responsibility and do all they can to live up to it, but it's just outrageous that they have the power to constrain and restrict work. Information technology has the potential to increase productivity, creativity, and development. So why are we letting it be taken away? System administrators have the power to constrain development by constraining how we do our work. I, for one, can't wait to get my most recent Amazon purchase, Hacking Work: Breaking Stupid Rules for Smart Results. I hope the book will show me that development can't be constrained and that the future is bright.

Image copyright: ThinkGeek
M: The 53,651 meme - sharpshoot http://redeye.firstround.com/2006/05/53651.html

R: jimream
The web is evolving very fast these days. It seems to me that because of our connections and web-browsing habits, people reading this are often disconnected from how "normal" people view the web. Let me assure you, this web 2.0 craze we are seeing is no bubble. What will happen? Well, in a sense, web 2.0 is really interactive web 1.0. The future leaders of the social web will analyze the strengths and weaknesses of today's leaders: Google, Facebook, Myspace, Del.icio.us, Wikipedia, and Craigslist. (Technology increases exponentially: never forget this.) When a website incorporates all the positives of these types of websites into one all-encompassing tool for organizing the unlimited information of the Internet, then we will see something great. We will see an evolution of these tools, and it will not only be something people can use to get more efficiency out of life, it will actually improve people's lives and societies (globally). If you think the effects of social networks and human/computer interaction are amazing, wait until you see the children of these sites. It is *absolutely* not a coincidence that the founders of Reddit played WoW. They realize that the power of a website is directly correlated with the amount of user input into it. The dilemma we entrepreneurs face today is creating systems that encourage maximum participation. How does this tie into the 53,000 theory? None of the big winners, and I am talking about the big ones (facebook, myspace, google, msnpages, orkut -- the sites that lead the world in user participation), succeeded because of blog recommendations. They succeeded because they were better ways of experiencing the Internet, not because some "expert" on TechCrunch told the geeks they were cool products. There are many amazing "web 2.0" websites out there that will never be used by the masses until their friends, not TechCrunch, invite them. On the Internet, the best solution always wins. Humans are economical; they do what is best for themselves. When we create a search/browsing tool that is at the same time more rewarding and fun than myspace/wikipedia/delicious/digg, we will see the whole world adopt this method, the same way the world has adopted Google search, the same way all the "cool" people are on myspace. This website will not only be as "cool" and as fun, it will actually enhance people's lives. This is the future of the web. It is also no coincidence that VCs like the one this post refers to have seen a lot of "like delicious but XXX" or "Digg killer" pitches. This is not just hype; one day it will happen. One day there will be a delicapedieddit that emerges as the new Internet powerhouse.

R: nurall
Another way of looking at this is to realize that beta testers are hard to come by, and one could leverage the already existing pool of beta testers (53,651). Most startups want to improve their pre-money valuation before going to the VC; the most important ingredient for achieving that is a stable system that is the result of successive, relevant iterations of the features. This could well be part of any Web 2.0 company's road map for the first few months after going live. Needless to say, for a VC to be convinced, it is important to have the right kind of users. If there is even a little bit of overlap between the 53,651 web-savvy users and the ideal end-user, it is safe to assume that viral marketing will take care of itself.
A glaring example is Google. Their systems are the way they are thanks to their #1 beta testers: their employees. And one could argue that there are other beta testers at various levels, namely the actual end users. This statistic seems to do more good than harm, if regarded positively. The eventual end users of the system could just be an extension of the 53,651 initial users. Go TechCrunch!!!

R: pg
I think it's reasonable to design for the 53,651. What they use, others will later. The Apple II was designed for the 53,651.

R: greendestiny
As long as the 53,651 are actually interested in using it, not just having a look or thinking about the startup behind it. I imagine that's the problem with traffic from places like TechCrunch.

R: python_kiss
TechCrunch has 351k feed subscribers, not 53,651. I remember when Mashable covered mainstream startups but, in the face of competition, positioned itself towards a smaller niche (covering social networks). Today Mashable has close to 80k subscribers. There is nothing wrong with aiming small.

R: e1ven
They had 53K at the time of writing ;) That article is from nearly a year ago. I think that is part of the key to understanding why the 53K are targeted: anything they're talking about now has a good chance of making it big later on. It's a risk, and it doesn't always (seldom?) pay off, but if I were to guess at the thinking of the many developers doing this, I'd say they're looking at the TechCrunch readers as the type of early adopters they want/need to get them started: the type of people who will get excited about a product and tell their friends. These are the people who will spend a day trying new technologies, whereas most "normal" people only try something once a friend recommends it. Those targeting the TC crowd want to find that friend. -Colin

R: r0b
Yes, they had 53K a year ago, and now they have over 300K. Doesn't that prove the post wrong? Clearly those 53K were a very powerful group...

R: ericwan
I'd say there are two kinds of startups which are unknown to mainstream America. One is the kind the general public has actually used but may not realize is a service provided by a startup company. This includes the likes of Slide or Meebo, whose widgets many more people have put on their blogs/Myspace than have visited the companies' own websites. These startups are fine; they may just need more PR in the mainstream media. The startups that find it really hard to reach the mainstream are the me-too startups, the "social network for XX" or "YouTube crossed with Wikipedia" kind of thing, where users can hardly tell the difference from the sites they have been using.

R: r0b
Check out this graph in an old post by Seth Godin: <http://sethgodin.typepad.com/seths_blog/2005/12/squid_soup_part_2.html> His point is that in order for a new idea or product to catch on, it first needs to be adopted by the "innovators" -- the "geeks". Then, and only then, can it spread to the broader population. If you try to skip the geeks and go straight to the mass market, you will fail. I'm not sure I buy that as a hard-and-fast rule, but I think the concept is generally solid. If a company makes it on TC, it will subsequently and consequently grow beyond TC.

R: Readmore
This is an interesting problem that I've wondered about myself. We all talk about Web 2.0 not being a bubble, but maybe that's just because the 'average joe' doesn't know anything about what's going on.
Other than a few stories about MySpace and YouTube, most people never hear about any Internet startups. How do you cross that divide without spending a lot of money on TV commercials? During the first bubble they were great at getting people's attention but bad at software; it seems like now things are exactly the opposite.

R: zkinion
I think the main point of this posting is not how bad it is to start off with the first adopters, but rather to consider how they will be a lot different from mainstream users, and that what you learn from them (click-through, advertising, new users, etc.) might not be exactly how things will work with the mainstream crowd.

R: python_kiss
All businesses start off as micro-niches. Friendster's demographic was considered a niche just four years ago. Now it is considered "mainstream". The one thing startups risk by aiming small is VC investment. Venture capitalists are reluctant to write a seven-figure check to a startup aiming for the 53,651 audience.

R: benatkin
I think it's a good idea to target the early adopters. If you don't, you risk basing your product on old technology.
change label for CDS coordinates

To avoid confusion about the original meaning (and any other interpretations, correct or incorrect) of "CDS", change the label to "start and end coordinates". Even if "CDS" would be technically correct, enough people don't realize it that changing to something ploddingly unambiguous won't go wrong.

I guess this is from V1? Currently in V2 we just have "II, 1500197-1502095 (1899nt)" with no CDS bit. Would "start and end translation coordinates" be clearer?

Hmm, I still think CDS is the correct way to refer to the coding sequence genome coordinates. The problem is that people (incorrectly, IMHO) are expecting the number 1899nt to represent the translation length. However, it doesn't if it's the start and end of a spliced gene in the genome. The CDS length (or the number we report) is the entire length of the coding sequence with introns in the genome, and I think that is correct. The problem is how to explain this..... Maybe it should be the translated length in nucleotides.

I think the meaning has changed over time. It used to be "from coding DNA sequence", which to me would make the CDS length the start and end of the CDS in the DNA, not the start and end of the edited sequence. The edited sequence seems to be what people expect. So we could report the nucleotide length of the translation in this case....

Genomic location II, 1500197-1502095 (1899nt) start to stop
Genomic location II, 1500197-1502095 (1899nt) plus UTRs

I removed the milestone as I can't imagine it will impede anyone. So here we just need text changes? If so, could you provide suitable text?

@mah11 If we just stick with the current single set of coordinates, its current label ("genomic location") is fine. If you want to show with and without UTRs, the version from Jun 1 above (https://github.com/pombase/website/issues/59#issuecomment-305459606) would do.

Removed discuss label. Just need to add CDS start and end coordinates.

What text should we have for RNA genes and pseudogenes?

> What text should we have for RNA genes and pseudogenes?

III, 2111204-2116520 (5317nt)

Sorry, hit submit too soon. For now I've implemented it like "III, 2111204-2116520 (5317nt)" for genes without a translation, which is what we have at the moment. Is that enough in that case?

> For now I've implemented it like ...

It's on the main site now. Is it OK?

It looks a bit odd, but I think it is clear what it means. Maybe we could say (CDS start/end) and (+UTRs) to make the text shorter. Is my suggestion naff? I think it would be clear to users.

You said it is the "coding sequence genome coordinates", but according to HGVS, +1 is assigned to the translation initiation codon in the CDS. What you are talking about is the "genomic reference sequence"?

It's the genomic location of the CDS, though? I still maintain that CDS was "invented" to describe the genomic location in an EMBL/GenBank file, so it's the coordinates of the coding sequence in whichever sequence you are referring to, i.e. the genome here:

FT   CDS   complement(23589..23978)

Well yeah, but according to the site I linked to, different people use the term differently, which is why I find it confusing.

But what else could it mean in the context we are using it above? I think it's OK to close.
This page contains errata for the Complete Lojban Language resulting from the 4th Grammar Baseline proposal, which makes the PEG grammar the official grammar. While this page documents differences between the CLL and the PEG grammar, it is worded as if the CLL were in error. As this document evolves, a tagging system will be created to disambiguate the cases in which the PEG grammar is in error from the cases in which the CLL is in error.

- Section 18 contains a list of Lojban names for grammatical terms. If the PEG grammar introduces other grammatical terms (e.g., any of those in gerna tecyvla), this section should be updated to include them.
- Section 6, examples 6.15 and 6.17 use the word |.a| in the Lojban, but gloss it as the letteral A. This is because the example, using zei, does not permit something like |cy. zei .abu| to be interpreted as a "c type-of a" lujvo. This has been corrected in the PEG grammar, and the examples should be modified to use .abu, just as the gloss does.
- Section 14, example 14.4, discusses a situation in which the elidable terminator 'ku' is required. This specific example is no longer valid, as the 'ku' can be elided without ambiguity in PEG.
- Section 4 claims that "ba'e" may not have "bu" attached. The "Marking Words" section of the Magic Words document clarifies this, and permits "ba'e bu" to mean "the ba'e letter." "ba'e" should be removed from the list of cmavo not permitted in front of BU.
- Section 10 does not properly describe the way ZOI works, failing to articulate that the token stream is divided into words before looking for delimiters. This issue is extensively discussed at BPFK Section: ZOI, and the CLL needs to be updated to reflect the way the PEG works.
- Section 10 states, on zoi syntax: "Its syntax is .zoi X. text .X., where X is a Lojban word (called the delimiting word) which is separated from the quoted text by pauses, and which is not found in the written text or spoken phoneme stream." In the PEG grammar, zoi-open and zoi-close are any-word, rather than any-lojban-word, permitting non-Lojban words to be used as delimiters.
- Section 13 should be reviewed and obsolete parts completely rewritten. This section in large part describes limitations of the YACC parser. It also needs to be substantially expanded to include more examples.
- Example 13.3 uses the phrase "zo si si si" as a self-erasing example. Magic Words now defines "zo si" to be, grammatically, a single word, shortening this phrase to "zo si si".
- Example 13.5 uses four "si" to erase a zoi phrase. Magic Words now defines a ZOI-clause to be a single word, requiring only a single si to erase it.
- Example 13.6 uses two "si" to erase a zo phrase. Magic Words now defines a ZO-clause to be a single word, requiring only a single si to erase it.
- Section 15 should mention that fa'o can be used as a zoi delimiter, as I believe it enumerates every other case in which fa'o may appear in the grammar and not indicate the end of text.
- Section 15 should mention that fa'o can be quoted by zoi, as it mentions zo and lo'u ... le'u.
- Section 16 needs to be carefully checked against Magic Words and updated accordingly. More detail to follow. This chapter needs to be completely rewritten to provide an overview of the PEG morphology and grammar.
As we all know, working with image resources in .NET is hard. In this particular case, when I have a .resx file in my project I find it difficult to update images, add new icons, and so on, because the editor associated with .resx files is XML-based. So, for example, when I download a project from the Web and the project has a .resx file, I can't edit its images easily. Microsoft SDK v1.1 includes a sample application called ResEditor (<program files>\Microsoft Visual Studio .NET 2003\SDK\v1.1\Samples\Tutorials\resourcesandlocalization\reseditor) for editing resource files, but it has some problems:

1) The Main method doesn't receive parameters; therefore you cannot set it as your default editor for .resx files.
2) The dirty flag for the document is not working well. Before closing, the program always prompts to save, even if you haven't modified the document.
3) You can't edit individual items such as icons or bitmaps.

So I developed a resource editor for .resx integrated into Visual Studio, using VSIPExtras and the ResEditor sample as base code. Basically, the editor was built using the VSIPExtras wizard for a new package, and after that I built the editor around the refactored ResEditor sample.

- Dirty flag management
- You can edit and save an icon or bitmap with the Visual Studio editor and it is updated in the resx editor!!
- If the bitmap has more than 256 colors it is edited with mspaint.
- You can edit ImageLists

Installation (Visual Studio 2003 required):
- Extract the downloaded files to a folder (i.e.: C:\ResEditor).
- If you copy the files to a folder other than C:\ResEditor, edit regEditor.reg and replace C:\\ResEditor with your path. (Use "\\" instead of "\", i.e. "c:\\Program Files\\MyPath".)
- Run regEditor.reg.
- Open the Visual Studio 2003 command prompt and type "devenv /setup".
- Open Visual Studio 2003.
- In the Open File dialog, select a resx file and select Open With (dropdown on the Open button).
- Select ResEditorEx as your resource editor (and, if you want, set it as your default :)

I have updated this post because the links were broken. You can download the new bits and code here.

- This editor depends on the primary interop assemblies of VSIPExtras. VSIPExtras is not released yet, so the primary interop assemblies may change in the next beta or release version.

Improvements to the code are welcome. For instance, I'd like to enhance the editor with undo & redo support, support for editing new types, ...
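To illustrate the path edit described above, the line you change in regEditor.reg might look something like the following sketch. The key path and value name here are hypothetical placeholders (the real ones ship in the downloaded file); the point is only that backslashes must be doubled:

; hypothetical excerpt from regEditor.reg -- key path and value name are
; placeholders; the real entries are in the downloaded file
; before the edit: "EditorPath"="C:\\ResEditor"
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\7.1\Editors\{...}]
"EditorPath"="C:\\Program Files\\MyPath"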
import palettes from './color-palettes';

const hash = {};
const defaultPaletteName = '_default';
const cursor = {};

/**
 * Returns the hash key for this combination of type, key and palette.
 *
 * @param {string} type
 * @param {string} key
 * @param {string} paletteName
 * @return {string}
 */
function getColorHashKey(type, key, paletteName) {
  return `${type}_${key}_${paletteName}`;
}

/**
 * Returns the defined color for this combination of type, key and palette.
 * If no color is defined, returns null.
 *
 * @param {string} type
 * @param {string} key
 * @param {string} paletteName
 * @return {string|null}
 */
function getSavedColor(type, key, paletteName) {
  const hashKey = getColorHashKey(type, key, paletteName);
  if (typeof hash[hashKey] === 'undefined') {
    return null;
  }
  return hash[hashKey];
}

/**
 * Returns true if a color is already defined for this combination of
 * type, key and palette.
 *
 * @param {string} type
 * @param {string} key
 * @param {string} paletteName
 * @return {boolean}
 */
function colorIsSaved(type, key, paletteName) {
  return getSavedColor(type, key, paletteName) !== null;
}

/**
 * Defines the color for this combination of type, key and palette.
 *
 * @param {string} color
 * @param {string} type
 * @param {string} key
 * @param {string} paletteName
 */
function saveColor(color, type, key, paletteName) {
  const hashKey = getColorHashKey(type, key, paletteName);
  hash[hashKey] = color;
}

const Colors = {
  defaultPaletteName,

  /**
   * Returns the CSS color for the type, key and palette.
   *
   * For a specified type and key in a palette, always returns the
   * same color. If no color is yet selected for this combination,
   * use the next color in the palette. If no palette is defined,
   * use the '_default' palette.
   *
   * @param {string} type
   * @param {string} key
   * @param {string} paletteName
   * @return {string} The color as CSS value (ex: #15a8f3)
   */
  get(type, key, paletteName) {
    let color;
    const finalPaletteName = this.ensurePaletteName(paletteName);

    if (colorIsSaved(type, key, finalPaletteName)) {
      color = getSavedColor(type, key, finalPaletteName);
    } else {
      color = this.getNextColor(type, finalPaletteName);
      saveColor(color, type, key, finalPaletteName);
    }

    return color;
  },

  /**
   * If the palette exists, returns the name unchanged. Else returns
   * the default palette name.
   *
   * @param {string} paletteName
   * @return {string}
   */
  ensurePaletteName(paletteName = null) {
    if (paletteName === null
      || typeof paletteName !== 'string'
      || typeof palettes[paletteName] === 'undefined') {
      return defaultPaletteName;
    }
    return paletteName;
  },

  /**
   * Returns the color of the palette for this type (different types
   * may use the same palette, but have different cursors) at its
   * internal cursor, and advances the cursor to the next color. Once
   * the cursor reaches the last color, it returns to the beginning.
   *
   * @param {string} type
   * @param {string} paletteName
   * @return {string}
   */
  getNextColor(type, paletteName) {
    const cursorKey = `${type}_${paletteName}`;
    if (typeof cursor[cursorKey] === 'undefined') {
      cursor[cursorKey] = 0;
    }

    const paletteColors = palettes[paletteName];
    let paletteCursor = cursor[cursorKey];
    if (paletteCursor >= paletteColors.length) {
      paletteCursor = 0;
    }

    const color = paletteColors[paletteCursor];
    cursor[cursorKey] = paletteCursor + 1;
    return color;
  },
};

export default Colors;
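A quick usage sketch of this module (the import path and the 'series' type are assumptions for illustration; the actual colors come from whatever color-palettes defines):

import Colors from './colors'; // assumed path for the module above

// Each (type, key, palette) triple is memoized, so repeated lookups
// for the same key always return the same color.
const revenue = Colors.get('series', 'revenue');   // first color of '_default'
const expenses = Colors.get('series', 'expenses'); // second color of '_default'
console.log(revenue === Colors.get('series', 'revenue')); // true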
No Password - Limited User

Is this safe? Obviously the account would have no privileges whatsoever, and there would certainly be no sudo group. The only privileges would be to run the specified applications. If not, please explain why.

Reply: Please expand that a little bit, I lost my magic glass ball. Other than that, if you can already judge that the question is easy to answer, then why do you have to ask in the first place?

Reply: Just take sensible precautions and you will be fine. I would suggest looking into using chroot to limit their access, just like we do for service users.

Reply: awjans gives good advice. chroot'ed, the risk is probably rather low. A large proportion, probably most, of Linux root exploits require a local user account as a foothold for elevating privileges. So a non-chrooted passwordless user opens up a foothold to try from. Network security can mitigate a lot.

Reply: Setting up a chroot environment is a big *** full of work and I always try to avoid it if not really needed. You could allow the user to log in only through the login screen. When sshd is disabled for the user, he must be physically in front of your terminal to log in. All other scenarios where a user can log in remotely without providing a password are plain stupid by design. As you don't provide any more information, I can only suppose you should take a look at "ACL" (acl dot bestbits dot at).

Reply (original poster): OK, thanks. I was considering a chroot inside a virtual machine with automatic snapshot restore through a shell script. It's just a small home server to run a VPN access server and a P2P client. Although the virtual machine and the host will be running only the bare essential networking services and a tight iptables policy, I would like to run an sshd for the host only. Obviously the limited user account would have no password, but the root account would have a 30-character pass. What kind of privilege escalation attacks can be used on a passwordless limited user account? I'm probably totally missing something major here. The objective is really just to have a server that will boot and run with no user input.

Reply: This is completely nonsense. You need a server that just runs without input (thus no users access it), and you want to have a passwordless user? For what exactly do you need the anonymous user? I ask because I'm rather confident that you are making a mistake at some point of your masterplan, such that you think you need something that you really don't want to have. A user that can access the machine is an open door. The user can execute most common software and potentially get your system owned. Listing the possibilities is hard; just consider that there could be a buffer-overflow bug in some software. Once he has access to a system, the user CAN execute malicious code without any restrictions. The only thing that keeps him out is the login screen. If you open it... well... I think your imagination will help you with the rest.
My point is: I really don't know why I should ever let someone anonymous into my computer. I have no reason to do that. A computer can have an unlimited number of users (assuming unlimited hardware resources) that can log in, and with DSA/RSA signatures you can even authenticate without ever providing any password. On my systems I prevent the login of someone "bad" at all costs. That's why I installed, for example, fail2ban, which monitors the log files of sshd and bans, for a certain amount of time, IP addresses that try to log in and fail more than N times. This little cool thingy can even detect DDoS login attacks, which can in fact defeat your security by probing a large number of passwords in quite short periods.
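For reference, a minimal fail2ban configuration along the lines described above might look like this sketch (the thresholds are illustrative; fail2ban's actual defaults vary by version):

# /etc/fail2ban/jail.local -- minimal sshd jail; values are illustrative
[sshd]
enabled = true
port = ssh
# ban an IP for an hour after 5 failed logins within 10 minutes
maxretry = 5
findtime = 600
bantime = 3600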
Jenkins is a free and open-source automation server written in Java. It can be deployed on a single server or as a distributed application. It is one of the most popular open-source solutions for continuous integration and continuous delivery of software applications.

Continuous integration (CI) is a software development practice that requires developers to integrate their code into the main repository (usually on a daily basis) as early and often as possible in order to detect integration errors, build new features, and provide feedback for all stages of the software life cycle. A platform like Jenkins is a CI framework that can be used online or installed locally on your computer. It provides you with an easy-to-use interface for collaborating with your team members on GitHub, Bitbucket, or other repositories that use Git in order to create continuous integration pipelines.

Continuous delivery (CD) is a software development practice that enables small, frequent releases of software applications and services. It is faster than the traditional approach, which typically involves a single large release every six months. Continuous delivery can include deploying new code every day, every hour, or even several times an hour. The shorter intervals enable flexibility in response to changes in business requirements or underlying technology, while also lowering the cost and risk associated with long periods between releases. A platform like Jenkins is a CD framework that coordinates and manages the different steps required to produce a CD system. The role of Jenkins is not just to build the code, but also to test and deploy it.

Jenkins has a plugin-based architecture that allows extending its basic functionality with self-written plugins, e.g. for source code management or other tasks. The available plugins are listed in the Plugin Manager within Jenkins and can be installed by simply clicking on them. It also has its own REST API, so you can create your own custom tools that integrate with Jenkins without needing to know anything about Jenkins' internal architecture.

You are a sysadmin looking for a solution that will help you save time while deploying small applications to your machines. After looking at some options, you come across Jenkins, which claims to be able to solve all your problems. You dive in, read the documentation, and install Jenkins. The documentation guides you through the installation process, but you cannot quite get it installed correctly. Installing and configuring the necessary software components for a complete, working build system is not as easy as it may sound. That's why we created this step-by-step tutorial on how to install and configure Jenkins on AlmaLinux 8.

In order to install Jenkins on AlmaLinux 8, you will need:
- A 64-bit AlmaLinux 8 machine with a working Internet connection.
- Root access to your server. You can get it by following this guide.
- System requirements: according to the Jenkins official website, a basic installation of Jenkins needs a minimum of 2 GB of RAM. Jenkins requires 50 GB of free disk space for the installation, plus 1 GB of free disk space for each build slave you want to add. In addition, you will need one CPU core and 1 GB of RAM per concurrent build worker that you expect to support.

Updating Your System

Before you get started with installing and configuring Jenkins, you should update your system to the latest available versions of the software packages.
For that, SSH to your server and run the following commands:

sudo dnf check-update && sudo dnf update -y
sudo dnf install epel-release

The epel-release package provides updated packages from the Extras development repository which are not yet part of a major RHEL release. The Extras repo contains packages that are not included in Red Hat's standard set of packages but are nevertheless built for RHEL releases. This includes language packs, support for newer versions of adaptive icons, and other functionality updates.

Installing Java

Java is a programming language based on C. It is considered one of the most popular programming languages because it has been used in much software, such as Android and Google Chrome. Java is a cross-platform programming language that can run autonomous applications on Windows and Linux operating systems, as well as macOS, Solaris, FreeBSD, and other UNIX systems.

Jenkins, at its core, is a Java program that requires you to install the Java Runtime Environment (JRE) and the Java Development Kit (JDK) on your system in order to function properly. This demo will install OpenJDK 11 on the system. OpenJDK is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). It is a development and runtime environment for building applications, microservices, and other server systems that run on the Java virtual machine (JVM). OpenJDK is based on Oracle's Java Development Kit, with Project Jigsaw support, so you can run Jenkins on OpenJDK 11 without compatibility issues.

Run the following command to install OpenJDK 11 on your system.

sudo dnf install java-11-openjdk -y

Once the installation is complete, you can run java -version to check that it is working correctly.

Installing Jenkins

Now that you have Java installed, you're ready to install Jenkins. The AlmaLinux base repository does not include any of the Jenkins packages, so first you will need to add the official repository from its developer. It's the only repository allowed to distribute software packaged for a specific supported distribution; in this case, it's the Jenkins developer's own repository for Red Hat and its derivatives.

Run the following command to import the Jenkins key to the system. This key is a security mechanism used to validate the authenticity of a software package.

sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key

Run the following command to add the Jenkins repository to the system.

cd /etc/yum.repos.d/ && curl -O https://pkg.jenkins.io/redhat-stable/jenkins.repo

Run the sudo dnf makecache command to refresh the metadata cache of all enabled repositories and ensure that the local repository data is up to date. This can be used when updating or installing packages, or if the metadata has been corrupted.

sudo dnf makecache

Run the following command to verify that the Jenkins repository has been added to your system. It shows the current list of repositories, which indicates to the package manager which repositories have been enabled; in other words, it is a way to see which repositories are currently being tracked by dnf.

sudo dnf repolist

Run the following command to install Jenkins on your system.

sudo dnf install -y jenkins

Once the command has finished, run the following command to start the Jenkins service.

sudo systemctl start jenkins

Run the following command to check the status of the Jenkins service.
sudo systemctl status jenkins

Jenkins is a continuous integration service that can monitor executions of repeated jobs, such as building a software project or jobs run by cron. Monitoring the status of the Jenkins service tells you whether it is running as expected, which can be helpful when troubleshooting jobs that do not run successfully. You will see the following output.

Configuring Your Firewall

Jenkins is your automated build server; it helps with continuous integration and deployment for your projects. Jenkins has the ability to allow SSH connections to perform builds and tasks on remote slave nodes. However, you will need to configure your firewall to allow Jenkins access to these servers. You will need to open port 22 (SSH) and optionally port 8080 (web client) for Jenkins to be able to connect to the remote servers your applications reside on. These ports are usually closed by default when using cloud-based virtual machines. If you are setting up Jenkins on your own hardware, you will need to allow these ports through your firewall or router.

Run the following commands to open these ports on your firewall.

sudo firewall-cmd --permanent --zone=public --add-port=22/tcp
sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp

Run the following command to apply your changes. The sudo firewall-cmd --reload command reloads the rules and configurations currently in force, which is how changes made with the firewall-cmd tool are applied to the running system.

sudo firewall-cmd --reload

Finally, run the following command to check that the rules have been added successfully.

sudo firewall-cmd --list-all

Accessing Jenkins Web UI

Now that your Jenkins server is up and running, you will want to access it in a web browser. You can access Jenkins by visiting its IP address on port 8080. For example, if your IP is 192.168.1.100 and the default port is 8080, then navigate to the following address to reach Jenkins: 192.168.1.100:8080

When you try to access the Jenkins web UI, you will get an Unlock Jenkins screen asking you to go to /var/lib/jenkins/secrets/initialAdminPassword, as shown below. This file stores the randomly generated initial admin password, so that someone who can merely reach your Jenkins web UI does not automatically know the password needed for the first login.

Return to your terminal, where you should still be logged in as the root user, and read the file:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

You will see output that contains the password for the Jenkins web UI. Copy and paste it into your favorite editor and save it somewhere on your machine. You can now use this password to access your Jenkins web interface.

On the next screen, select Install suggested plugins. A plugin is nothing more than a directory with some files in it; when you install a plugin, Jenkins detects the directory and enables the features that are specified in those files. When you first install Jenkins, you should choose the option to install the suggested plugins. This option installs all of the plugins that are needed for a basic Jenkins setup. No worries, you can always change or add more plugins later in the Plugins section of the web interface.

On the Create First Admin User screen, provide your username, e-mail address, full name, and password. Click on Save and continue to go to the next screen.
On the next screen, keep the defaults and click on Start using Jenkins. You will be taken to the Jenkins dashboard, as shown below. When you first install Jenkins, it comes with an essentially blank default page; you can see this if you look at the source of the page, as there's nothing there. Over time, as you start adding jobs and plugins, the page will transform into a dashboard that displays visual information about your projects.

In this post, you learned how to install and configure a Jenkins server. This is only the beginning, however; Jenkins is a very valuable automated build environment that can be leveraged in your daily practice. Leave your comments and suggestions in the section below if you have any feedback or questions.
feat: default tracer prototype

Description

With the OTel Ruby API as it currently stands, Tracer is the one-stop shop for everything that a user needs or wants to do with a span. This design is intentional and was chosen for its simplicity. There are currently questions around how interactions with context and the current span should be handled: see https://github.com/open-telemetry/opentelemetry-specification/issues/1019.

This description is ridiculously long, as it assumes little to no familiarity with the Ruby API. If you are familiar with the project, skip to the Changes in this PR section towards the bottom. For folks less familiar with the project, I'll show how the API currently works for a handful of scenarios, and then go on to explain the changes added in this PR and what improvements they bring.

Tracer operations

First, get a handle on a tracer:

# obtain a named tracer
tracer = OpenTelemetry.tracer_provider.tracer('my-app', '0.0.1')

The rest of the examples use this tracer.

Span creation

Start span:

# create and return a span / does not modify context
span = tracer.start_span('a-span')

Start span in current context:

# create a new span, set it in the active context
tracer.in_span('a-span') do |span|
  # execute this block of code in the active context
end

Span and context management

Read the current span.

From the current (implicit) context:

span = tracer.current_span

From an explicit context:

span = tracer.current_span(some_context)

Set span in a context.

Implicit context:

new_context = tracer.context_with_span(span)

Explicit context:

new_context = tracer.context_with_span(span, parent_context: some_context)

Execute block with span in a context:

tracer.with_span(some_span) do |span, context|
  # run this block with span set in the active context
end

Changes in this PR

The Tracer#current_span and Tracer#context_with_span methods do not explicitly depend on state from a tracer instance, so they could be class (static) methods on a tracer. However, this introduces some unnecessary complications to the API. As an alternative solution, this PR introduces an easy-to-access default tracer on the top-level OpenTelemetry module. It's available as OpenTelemetry.tracer and can be used whenever a user needs to access tracer methods but does not have an explicit handle on one. For example, users might want to grab the current_span out of the ether to add an attribute or event:

OpenTelemetry.tracer.current_span&.add_event('an-event', attributes: { 'k1': 'v1', 'k2': 'v2' })

The OpenTelemetry.tracer method is just a delegate to the global tracer provider. When called without parameters, it returns a tracer named "default". It can also take arguments for name and version, which becomes a shortcut for OpenTelemetry.tracer_provider.tracer('tracer-name', 'tracer-version').

Alternatives

Originally I added a default_tracer method that delegates to the global tracer provider, obtains a tracer named 'default', and memoizes and returns the result. It has the benefit of not having to look up the tracer each time it's invoked. I switched to the tracer delegate method as it solves the same use case, is less verbose, and is usable in other scenarios. I think it's worth the tradeoff, but could be convinced otherwise.

https://github.com/open-telemetry/opentelemetry-specification/pull/1063 has merged. It technically allows for what we have in this PR, where current_span and context_with_span are instance methods on a tracer.
It seems like there would be a slight preference for these to be static methods on the Trace module, which we could do. We could also have them in both places, which might be slightly frowned upon. Either way, I can live with any of these options; I'm just looking for opinions. If you have one, let me know and I can get this cleaned up for review. Here's how the various options look from a client perspective:

Static methods on the Trace module:

span = OpenTelemetry::Trace.current_span
span = OpenTelemetry::Trace.current_span(some_context)
new_context = OpenTelemetry::Trace.context_with_span(span)
new_context = OpenTelemetry::Trace.context_with_span(span, parent_context: some_context)
OpenTelemetry::Trace.with_span(span) do |span, context|
  # Run this block with span set in the active context.
end

This is actually pretty good. I'll come back to this later.

Static methods on a class in the Trace module (perhaps named TracingContextUtilities):

span = OpenTelemetry::Trace::TracingContextUtilities.current_span
span = OpenTelemetry::Trace::TracingContextUtilities.current_span(some_context)
new_context = OpenTelemetry::Trace::TracingContextUtilities.context_with_span(span)
new_context = OpenTelemetry::Trace::TracingContextUtilities.context_with_span(span, parent_context: some_context)
OpenTelemetry::Trace::TracingContextUtilities.with_span(span) do |span, context|
  # Run this block with span set in the active context.
end

I think we can all agree this option looks terrible.

Methods on Tracer (using the 'global default' tracer instance):

span = OpenTelemetry.tracer.current_span
span = OpenTelemetry.tracer.current_span(some_context)
new_context = OpenTelemetry.tracer.context_with_span(span)
new_context = OpenTelemetry.tracer.context_with_span(span, parent_context: some_context)
OpenTelemetry.tracer.with_span(span) do |span, context|
  # Run this block with span set in the active context.
end

This requires the same number of characters as the first option (static methods on the Trace module). Superficially it looks pretty good, but not necessarily any better than the first option.

Methods on Tracer (using the instrumentation's tracer instance):

span = tracer.current_span
span = tracer.current_span(some_context)
new_context = tracer.context_with_span(span)
new_context = tracer.context_with_span(span, parent_context: some_context)
tracer.with_span(span) do |span, context|
  # Run this block with span set in the active context.
end

This is obviously the shortest and most pleasing to type and look at. It has some conceptual problems, though. These "methods" are all really simple functions of the explicit or implicit context and an explicit span. They don't create new spans or modify the span provided or returned in any way. The Tracer API's primary purpose is to create spans in the (ahem) context of an InstrumentationLibrary. Methods/functions that manipulate the binding of (implicit or explicit) contexts and spans are really outside of this scope. In fact, the context has a broader scope than an individual tracer instance, and the current_span in the implicit context may (and probably will) have been created by a tracer other than either the global tracer or the tracer the user has in their hands. This is confusing IMO, and suggests the methods above do not belong on Tracer. Another concern is that the existence of a global tracer will lead new users to misuse and misunderstand the API.
In particular, the following pattern will lead to spans not associated with an InstrumentationLibrary:

OpenTelemetry.tracer.in_span('...') do |span, context|
  # Run this block with span set in the active context.
end

Given all that, I'd strongly prefer to move the current_span, context_with_span and with_span methods to be 'static' methods on OpenTelemetry::Trace. This means moving the CURRENT_SPAN_KEY constant to that module as well. I'd also propose removing current_span_key from ContextKeys. From my reading of its uses, we don't need to (and shouldn't) expose that on ContextKeys; people can just use the API on OpenTelemetry::Trace instead. I might be mistaken, but it looks like we can remove the ContextKeys module altogether.

I've implemented the first option in #439. It is obviously a lot more invasive, but I think the result is better and more in the spirit of the spec.

> I've implemented the first option in #439. It is obviously a lot more invasive, but I think the result is better and more in the spirit of the spec.

I agree. I'll go ahead and close this. I guess one thing that isn't in #439 is the default tracer (the OpenTelemetry.tracer delegate). Is this something we're interested in? If so, I can bring this PR back in some form.

> I guess one thing that isn't in #439 is the default tracer (the OpenTelemetry.tracer delegate). Is this something we're interested in? If so, I can bring this PR back in some form.

I don't think that's something we should do. As mentioned above:

> Another concern is that the existence of a global tracer will lead new users to misuse and misunderstand the API.

In particular, the following pattern will lead to spans not associated with an InstrumentationLibrary:

OpenTelemetry.tracer.in_span('...') do |span, context|
  # Run this block with span set in the active context.
end
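For contrast, here is the pattern the discussion steers users toward instead: obtain a named tracer first, so spans carry InstrumentationLibrary information. This is a sketch using the named-tracer API shown earlier; 'my-lib' and '1.0.0' are placeholder values:

# obtain a tracer tied to an InstrumentationLibrary, then create spans with it
tracer = OpenTelemetry.tracer_provider.tracer('my-lib', '1.0.0')
tracer.in_span('do-work') do |span|
  # spans created here are correctly associated with the 'my-lib' library
end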
The GIMP - Creating a background gradient

I have the below CSS code for a web gradient on my page. I would like to make a background image that exactly matches this gradient using GIMP. Does anyone have expertise doing this who might be able to lend some advice? Thanks.

background-image: -webkit-linear-gradient(90deg,
  rgba(51, 51, 51, 1.00) 0.0%,
  rgba(26, 26, 26, 1.00) 50.5%,
  rgba(51, 51, 51, 1.00) 50.7%,
  rgba(77, 77, 77, 1.00) 100.0%
);

GIMP can't parse that directly, although GIMP 2.8 ships with a Python script that can output gradients in this CSS syntax. You could write a Python script to parse CSS gradient syntax into GIMP gradients, and then use such a gradient on an image. Of course that is overkill if you need it just once; I'd recommend creating a new gradient in GIMP and manually editing the saved file (in the ~/.gimp-2.8/gradients folder if you are on *nix; otherwise check for the user gradients folder in the preferences). GIMP's gradient file format is straightforward, a text-only file that goes like:

GIMP Gradient
Name: Untitled
2
0.000000 0.243464 0.486928 0.000000 0.000000 0.000000 1.000000 0.000000 1.000000 0.000000 1.000000 0 0 0 0
0.486928 0.743464 1.000000 0.000000 0.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 0 0 0 0

This is a single gradient with two segments. Each line has the start point, middle point, and end point of the segment, the starting RGBA color, the ending RGBA color, and then four values you needn't worry about; just keep the four zeros at the end (most likely they describe the blend and color type of each segment endpoint, and we want 0).

Those rgba colors correspond to the following HTML notations:

rgba(51, 51, 51, 1.00) - #333333 (Color A)
rgba(26, 26, 26, 1.00) - #1a1a1a (Color B)
rgba(51, 51, 51, 1.00) - #333333 (Color A)
rgba(77, 77, 77, 1.00) - #4d4d4d (Color C)

You could try creating a rectangular image (with height twice the width). Fill the top square half with a gradient from color A to color B, and the bottom square with a gradient from color A to color C. Then set it as your background image with the "repeat" property enabled.
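Putting those pieces together, a hand-written gradient file for the CSS above might look like the following sketch (my own construction following the format just described, not output from GIMP itself: the middle value of each segment is simply its midpoint, and the channel values come from 51/255 ≈ 0.2, 26/255 ≈ 0.102, 77/255 ≈ 0.302):

GIMP Gradient
Name: css-background
3
0.000000 0.252500 0.505000 0.200000 0.200000 0.200000 1.000000 0.101961 0.101961 0.101961 1.000000 0 0 0 0
0.505000 0.506000 0.507000 0.101961 0.101961 0.101961 1.000000 0.200000 0.200000 0.200000 1.000000 0 0 0 0
0.507000 0.753500 1.000000 0.200000 0.200000 0.200000 1.000000 0.301961 0.301961 0.301961 1.000000 0 0 0 0

Saved as css-background.ggr in the gradients folder mentioned above, it should appear in GIMP's gradient list after a refresh.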
Joining a new company can be scary

At the beginning of October 2019, I joined the CodeFund team as a full-stack Ruby on Rails engineer. It has turned out to be one of the greatest decisions I have ever made, but at the time it was one of the hardest and most important decisions I had ever faced.

I started coding in high school, but I didn't always think I would be a programmer. Up until my senior year of college, I thought I was going to work with virtual reality or user experience design. In my pursuit of being a UX designer, I took a graphic design internship at the beginning of my senior year of college. Due to a crazy turn of events, I ended up graduating college and joining the same company I interned at, but as a Ruby on Rails developer. For the next 8 months, I spent most of my free time learning as much about Ruby and Rails as I could. I went to conferences, read books, did tutorials, went on podcasts, and eventually found myself in the world of open source.

Meeting Nate & Eric

I met Nate and Eric through a mutual friend at the end of 2018. In my eyes, Eric was the enthusiastic startup founder, with a drive I truly admired. Nate was the wise old programmer I envisioned levitating on top of a mountain. At the time, Nate was working on a cool library called StimulusReflex. On a sheer whim, I reached out to him one evening and let him know I was interested in the project and wanted some advice on using the library. At the time, I was getting really big into open source and had fallen in love with the ecosystem. I desperately wanted to give back to the community that had already given me so much, and I thought StimulusReflex might be my chance. To my surprise, Nate asked if I wanted to pair program that weekend on creating a demo application with the library. That weekend, and several weekends afterward, Nate let me tag along as we built some cool things with StimulusReflex.

At the same time, I was establishing myself at my then-current company as a capable programmer who was always ready to learn. My company had begun to go through some turnover, and several of the developers I looked up to had moved on to new things. I had always been obsessed with growing my skills and solving new problems, and for the first time I didn't feel challenged anymore. However, my weekend pairing sessions with Nate were filling the void, and I was excited to soak up all of the knowledge that Nate was willing to share with me. After a few weeks, Nate shared that CodeFund was looking to hire a developer and he wanted me to apply.

Cue the imposter syndrome. Fear of failure and crippling perfectionism had plagued me my entire career, which is what motivated me to work hard and be persistent in learning during my off-hours. At first, I dismissed Nate's attempts to get me to apply. I thought he considered me to be much better at programming than I actually was. CodeFund is also a fully remote company, and I was scared of taking that leap after barely being out of college for more than a year. I told him I wasn't interested, but inwardly I was. CodeFund was the company I had always dreamed of joining: a small team, an open-source application, and the amazing goal of funding open source developers, a group I considered myself part of. Working remote, no office drama or politics, and amazing mentors who would help me continue to grow at a time when I felt like I was plateauing.
A long-time mentor told me, when I brought up Nate's recruitment attempts, that he uses a rule of three when considering changing jobs:
- Will it help me grow?
- Will I like the culture more?
- Do I believe in the cause?

Number one was an easy check, as I couldn't stop gushing to my friends about how smart Nate was and how much I was learning from him. Number two is hard to judge without actually being exposed to the culture of the company on a daily basis. I was having trouble at my current company because I had begun to feel like I didn't really fit in, and office politics were beginning to rear their ugly head. I did know that Eric and Nate were passionate, empathetic people and we all got along great. Number three is what ended up being the deciding factor for me. CodeFund was helping to solve a real problem in the open source community, and it was a problem I felt very passionate about. By this point, I had started trying to create open source libraries and to contribute to libraries in any way I could. Being able to help fund maintainers and benefit the community at large made me very excited, whereas my potential for societal benefit at my current position was little to none.

Fast forward a month, and I couldn't be happier that I made the leap and joined CodeFund. Looking back on all of the sleepless nights I spent trying to decide whether to accept the chance that Eric and Nate were offering me, it's become apparent that I was very, very afraid. I was afraid that I wasn't as smart as they thought I was. I was afraid of disappointing my managers at the company I was at. And I was afraid that I wasn't a good enough programmer to help make an impact in the open source community.

Take a chance

If anyone out there is also scared of taking a chance, and of proving the negative thoughts in your head wrong, I would offer you a simple piece of advice: look past all the fears you have of what could go wrong, and instead imagine where you could be if everything went right. Every day, I am grateful that I took the leap and joined CodeFund, and I can't wait to continue helping the community that I care so much about.
By Abhishek Nandy

Develop apps and video games using the Leap Motion sensor. This book starts with a quick introduction to Leap Motion, then covers getting Leap Motion working and setting up a Leap Motion development environment. Leap Motion for Developers also covers the life cycle of how you interact with Leap Motion and the workflow of creating a complete app. You'll see how to use different programming languages for simple and consistent development.

What You Will Learn
- Look at the basics of Leap Motion
- Develop apps for the Leap Motion sensor
- See how different languages work with Leap Motion
- Discover the future of Leap Motion

Who This Book Is For
Students, developers, game developers, and tech enthusiasts.

Read Online or Download Leap Motion for Developers PDF

Best tablets & e-readers books

I was very frustrated with my purchase and was considering writing a review out of frustration. However, when I saw the 5-star reviews from other customers, I couldn't believe my eyes. One review was raving about the code samples (absolutely ridiculous), and then I saw another reader leaving a comment on that review saying that he had been working on the sample code for weeks and still couldn't make it work, and I can relate to that.

For iOS 5 on iPad 2 and iPhone 4/4s: discover hundreds of tips and tricks you can use with your iPad or iPhone to maximize its functionality as you use your iOS 5 mobile device as a powerful communication, organization, and productivity tool, as well as a feature-packed entertainment device. In addition to learning all about the apps that come preinstalled on your iPhone or iPad, you learn about some of the best third-party apps currently available and discover useful strategies for how to best take advantage of them.

This brief considers the various stakeholders in today's mobile device ecosystem, and analyzes why widely-deployed security primitives on mobile device platforms are inaccessible to application developers and end-users. Current proposals for leveraging such primitives are also evaluated, and it is shown that they can indeed improve the security properties available to applications and users, without reducing the properties currently enjoyed by OEMs and network carriers.

Have you ever thought about building games for your cell phone or other wireless devices? Whether you are a first-time wireless Java developer or an experienced professional, Beginning Java™ ME Platform brings exciting wireless and mobile Java application development right to your door and device! Beginning Java™ ME Platform empowers you with the flexibility and power to start building Java applications for your Java-enabled mobile device or cell phone.

- Beginning Facebook Game Apps Development
- Killer Presentations with Your iPad: How to Engage Your Audience and Win More Business with the World's Greatest Gadget (Business Books)
- BlackBerry Storm2 Made Simple: Written for the Storm 9500 and 9530, and the Storm2 9520, 9530, and 9550 (Made Simple (Apress))

Additional info for Leap Motion for Developers

The General tab provides important introductory information about using the Leap Motion sensor SDK.
As the Leap Motion SDK and Leap Motion are being set up, you can check the progress details by recalibrating the device on the Troubleshooting tab, as shown in Figure 2-15 (We can recalibrate the device). To repeat the process, click Recalibrate Device again. The calibration status is complete when you have set up the Leap Motion device for the first time (Figure 2-16).

Microsoft Windows [Version 10.0.10586] (c) 2015 Microsoft Corporation. All rights reserved.
C:\Users\abhis>cd\
C:\>f:
F:\>cd F:\LeapPython
F:\LeapPython>

Next we check the directory structure using dir (Figure 3-20. Checking the files) and locate the .py file. Check the connected and disconnected status from the PC as the program runs to verify the functionality of the app (Figure 3-21. Checking the functionality).

Obtaining Values from the Leap Motion Device

In the next program, everything happens in the onFrame function.
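The book's onFrame program is not reproduced here, but a minimal listener in the same spirit could look like the sketch below. It assumes the classic Leap Motion v2-era Python bindings (a Leap.py module plus native libraries on the path); the callback and class names follow that SDK's Listener API.

import sys
import Leap  # Leap Motion SDK Python bindings (assumed to be on sys.path)

class SampleListener(Leap.Listener):
    def on_connect(self, controller):
        print("Connected")        # device plugged in and recognized

    def on_disconnect(self, controller):
        print("Disconnected")

    def on_frame(self, controller):
        frame = controller.frame()  # latest tracking frame
        print("Frame id: %d, hands: %d" % (frame.id, len(frame.hands)))

def main():
    listener = SampleListener()
    controller = Leap.Controller()
    controller.add_listener(listener)  # start receiving callbacks
    print("Press Enter to quit...")
    sys.stdin.readline()
    controller.remove_listener(listener)

if __name__ == "__main__":
    main()

Running it and plugging/unplugging the device exercises the connected and disconnected states described above.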
// Package hostroute routes the request by hostname.
package hostroute

import (
	"fmt"
	"strings"
	"sync"

	. "github.com/mailgun/vulcan/location"
	. "github.com/mailgun/vulcan/request"
	. "github.com/mailgun/vulcan/route"
)

// HostRouter is a composing router that matches a request by its Host
// header and delegates to an inner router for further matching.
type HostRouter struct {
	routers map[string]Router
	mutex   *sync.Mutex
}

func NewHostRouter() *HostRouter {
	return &HostRouter{
		mutex:   &sync.Mutex{},
		routers: make(map[string]Router),
	}
}

func (h *HostRouter) Route(req Request) (Location, error) {
	h.mutex.Lock()
	defer h.mutex.Unlock()

	// Normalize the host: lowercase it and strip any ":port" suffix.
	hostname := strings.Split(strings.ToLower(req.GetHttpRequest().Host), ":")[0]

	matcher, exists := h.routers[hostname]
	if !exists {
		// No exact match: search for wildcard domains. A "*" label in a
		// registered hostname matches any single label in the request host.
		labels := strings.Split(hostname, ".")
		for key, value := range h.routers {
			keys := strings.Split(key, ".")
			if len(labels) != len(keys) {
				continue
			}
			matches := true
			for i := len(labels) - 1; i >= 0; i-- {
				if keys[i] == "*" {
					continue
				}
				if labels[i] != keys[i] {
					matches = false
					break
				}
			}
			if matches {
				return value.Route(req)
			}
		}
		return nil, nil
	}
	return matcher.Route(req)
}

func (h *HostRouter) SetRouter(hostname string, router Router) error {
	h.mutex.Lock()
	defer h.mutex.Unlock()

	if router == nil {
		return fmt.Errorf("router can not be nil")
	}
	h.routers[hostname] = router
	return nil
}

func (h *HostRouter) GetRouter(hostname string) Router {
	h.mutex.Lock()
	defer h.mutex.Unlock()

	return h.routers[hostname]
}

func (h *HostRouter) RemoveRouter(hostname string) {
	h.mutex.Lock()
	defer h.mutex.Unlock()

	delete(h.routers, hostname)
}
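A hedged usage sketch, assuming code in the same package: apiRouter and catchAll are hypothetical stand-ins for any other vulcan Router implementation; only the HostRouter calls come from the code above.

// apiRouter and catchAll are assumed to be existing vulcan Router values.
hr := NewHostRouter()
if err := hr.SetRouter("api.example.com", apiRouter); err != nil {
	// nil routers are rejected with an error
}
hr.SetRouter("*.example.com", catchAll) // "*" matches exactly one label, so
                                        // this covers foo.example.com but
                                        // not foo.bar.example.com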
A comment in NSS_CMSDecoder_Cancel says there is a leak. Striving to be leak free, I want to call the cancel function in case the S/MIME code did not finish the processing. I thought I should call cancel instead of finish in the cleanup phase. We should check the code to see whether there is really a leak, and whether the same leak happens when calling NSS_CMSDecoder_Finish.

Assigned the bug to Julien. Moved to target milestone 3.8 because the original NSS 3.7 release has been renamed 3.8.

For the record, the function looks like this:
* NSS_CMSDecoder_Cancel - stop decoding in case of error
/* XXXX what about inner decoders? running digests? decryption? */
/* XXXX there's a leak here! */
Chris, do you have any recollection of this issue?

cc'ing Stephane. I think it would be worth spending some time on this issue, to assure there isn't a leak in this area of the code - since a comment in the code says there is most likely one.

Remove target milestone of 3.8, since these bugs didn't get into that release.

I looked at the code and I'm not confident there is a leak. The main risk I see is with the ASN.1 decoder. The calls always use arenas, which are getting freed properly in NSS_CMSMessage_Destroy. Also, the NSS_CMSDecoder_Finish function does even less, since it does not destroy the message, and presumably that function is getting called in success cases. But we don't see leaks with it. That's in theory, at least. To be certain, we may need to actually try it and check whether calling this API causes a leak, with a tool like Purify or Solaris 10's libumem. Perhaps we could modify cmsutil to call this API during the NISCC test suite and observe whether this introduces leaks.

In the fourth quarter of 2003, Julien and I did *extensive* testing of libSMIME for leaks. We used the NISCC test suite, which has over 1.3 MILLION test messages, most of which have errors, so it extensively tests the error paths. There were quite a few leaks when we started this work. There were NONE when we were done. So I think it is quite unlikely that there remains a leak in the code paths that involve the code cited above. I personally think that bugzilla bugs should report observed and confirmed bugs, not conjectured or suspected bugs. The comments in the code quoted above show the author's uncertainty that the error paths were correct, but are not proof nor observation of a leak. Perhaps we should return this bug to UNCONFIRMED state. I personally think the bug is invalid unless/until someone finds an actual leak. Perhaps we should change the code comments quoted above to say something like "The author was not sure that all error paths were adequately handled. This code path should be subjected to additional leak testing." and then resolve this bug invalid.

Mozilla's LXR shows that NSS_CMSDecoder_Cancel is not reachable by any NSS test program. So the NISCC S/MIME tests we ran did not cover this function. (In the entire Mozilla CVS repository, NSS_CMSDecoder_Cancel is only called by PSM's nsCMSDecoder::destructorSafeDestroyNSSReference.) I will propose a couple of things we can do about this.

Created attachment 183056 [details] [diff] [review] Enhance cmsutil to demonstrate proper error handling and use of NSS_CMSDecoder_Cancel

This patch makes cmsutil handle the failures of the NSS_CMSDecoder_XXX functions and call NSS_CMSDecoder_Cancel instead of NSS_CMSDecoder_Finish when NSS_CMSDecoder_Update fails. This patch requires fixing NSS_CMSDecoder_Cancel first.
When NSS_CMSDecoder_Update fails, it calls SEC_ASN1DecoderFinish(p7dcx->dcx) and sets p7dcx->dcx to NULL, so NSS_CMSDecoder_Cancel needs to test p7dcx->dcx before calling SEC_ASN1DecoderFinish(p7dcx->dcx).

Created attachment 183057 [details] [diff] [review] NSS_CMSDecoder_Cancel fix 1

The first way to fix NSS_CMSDecoder_Cancel is to simply define it in terms of the closely related NSS_CMSDecoder_Finish function. Basically we call NSS_CMSDecoder_Finish and throw away the result.

Created attachment 183061 [details] [diff] [review] NSS_CMSDecoder_Cancel fix 2

If we want to let NSS_CMSDecoder_Cancel have its own code, it should be as close to the error path in NSS_CMSDecoder_Update as possible. It definitely needs to test p7dcx->dcx for NULL. I don't think the ordering of SEC_ASN1DecoderFinish and NSS_CMSMessage_Destroy matters, but I reversed the order just to be safe, because p7dcx->dcx contains a reference to p7dcx->cmsg (the destination of the decoding).

Created attachment 183063 [details] [diff] [review] Patch for NSS_CMSMessage_CreateFromDER

This function needs to at least handle NSS_CMSDecoder_Start failure. It may also want to check the return value of NSS_CMSDecoder_Update and call NSS_CMSDecoder_Cancel on failure.

I recommend we do the following for this bug.
1. Take either fix 1 or fix 2 for NSS_CMSDecoder_Cancel.
2. Fix cmsutil.c and NSS_CMSMessage_CreateFromDER the same way. They both need to handle NSS_CMSDecoder_Start failure. Optionally, they should also check the return status of NSS_CMSDecoder_Update and call NSS_CMSDecoder_Cancel on failure.

I checked these patches in to the tip.
Checking in cmd/smimetools/cmsutil.c;
/cvsroot/mozilla/security/nss/cmd/smimetools/cmsutil.c,v <-- cmsutil.c
new revision: 1.53; previous revision: 1.52
Checking in lib/smime/cmsdecode.c;
/cvsroot/mozilla/security/nss/lib/smime/cmsdecode.c,v <-- cmsdecode.c
new revision: 1.9; previous revision: 1.8
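As a rough illustration of the calling pattern these patches establish (a sketch, not the attachment contents: the seven NULL arguments to NSS_CMSDecoder_Start stand in for real callbacks, and decode_der is a made-up function name):

/* Sketch: cancel on Update failure instead of calling Finish. */
#include "cms.h"

NSSCMSMessage *decode_der(const char *buf, unsigned long len)
{
    NSSCMSDecoderContext *dcx =
        NSS_CMSDecoder_Start(NULL, NULL, NULL, NULL, NULL, NULL, NULL);
    if (dcx == NULL)
        return NULL;                    /* handle Start failure */
    if (NSS_CMSDecoder_Update(dcx, buf, len) != SECSuccess) {
        NSS_CMSDecoder_Cancel(dcx);     /* discard the partial message */
        return NULL;
    }
    return NSS_CMSDecoder_Finish(dcx);  /* success path */
}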
<?php
namespace DigitalSplash\Classes\Core\ShippingProvider\TheCourierGuy;

use Curl\Curl;
use DigitalSplash\Classes\Core\ShippingProvider\ShippingProviderInterface;
use DigitalSplash\Classes\Core\ShippingProvider\TheCourierGuy\Models\QuoteContentsModel;
use DigitalSplash\Classes\Core\ShippingProvider\TheCourierGuy\Models\QuoteDetailsModel;
use DigitalSplash\Classes\Database\Logs\ShippingTheCourierGuy AS ShippingTheCourierGuyLog;
use DigitalSplash\Classes\Helpers\Helper;

class TheCourierGuy implements ShippingProviderInterface {
	public string $serviceURL;
	public string $username;
	public string $password;
	public string $token;
	private string $salt;
	public string $fromPostCode;
	public string $toPostCode;
	public string $toTownName;
	public QuoteDetailsModel $detailsModel;
	public array $contentModels;

	public function __construct() {
		if (IS_LIVE_ENV) {
			$this->serviceURL = "http://tcgweb16931.pperfect.com/ecomService/v19/Json/";
			$this->username = "tcg4@ecomm";
			$this->password = "tcgecomm4";
		} else {
			$this->serviceURL = "http://adpdemo.pperfect.com/ecomService/v19/Json/";
			$this->username = "tcg4@ecomm";
			$this->password = "tcgecomm4";
		}

		$this->token = "";
		$this->salt = "";
		$this->fromPostCode = "6730";
		$this->toPostCode = "";
		$this->toTownName = "";
		$this->detailsModel = new QuoteDetailsModel();
		$this->contentModels = [];
	}

	public function RequestQuote(): array {
		$retArr = [];

		$this->SetSalt();
		$this->SetToken();

		$fromPostResult = $this->GetPlacesByPostCode($this->fromPostCode);
		$fromPostCodeArr = $fromPostResult[0] ?? [];

		$toPostResult = $this->GetPlacesByPostCode($this->toPostCode);
		if (count($toPostResult) === 0) {
			$toPostResult = $this->GetPlacesByTownName($this->toTownName);
		}
		$toPostCodeArr = $toPostResult[0] ?? [];

		$this->detailsModel->fromPlace = $fromPostCodeArr["place"] ?? "";
		$this->detailsModel->fromTown = $fromPostCodeArr["town"] ?? "";
		$this->detailsModel->fromPostCode = $fromPostCodeArr["pcode"] ?? "";
		$this->detailsModel->toPlace = $toPostCodeArr["place"] ?? "";
		$this->detailsModel->toTown = $toPostCodeArr["town"] ?? "";
		$this->detailsModel->toPostCode = $toPostCodeArr["pcode"] ?? "";

		$quoteParams = [
			"details" => $this->detailsModel->BuildModel(),
			"contents" => $this->contentModels
		];
		$quoteResponse = $this->MakeCall("Quote", "requestQuote", $quoteParams, $this->token);

		/*
		 * The user then needs to choose the service most desirable to them,
		 * use the "updateService" method to set the desired service, and
		 * finally use "quoteToWaybill" to convert the quote into a
		 * legitimate waybill.
		 */
		if (Helper::ConvertToInt($quoteResponse["errorcode"]) === 0) {
			// We are using the first service returned
			$updateServiceParams = [
				"quoteno" => $quoteResponse["results"][0]["quoteno"],
				"service" => $quoteResponse["results"][0]["rates"][0]["service"]
			];
			$updateResponse = $this->MakeCall("Quote", "updateService", $updateServiceParams, $this->token);
			$retArr = $updateResponse["results"][0] ?? [];
		}

		return $retArr;
	}

	public function AddContentModel(QuoteContentsModel $model) {
		$this->contentModels[] = $model->BuildModel();
	}

	private function GetPlacesByTownName(string $townName="") {
		$retArr = [];
		$params = [];
		if ($townName !== "") {
			$params["name"] = $townName;
		}
		if (count($params) > 0) {
			$response = $this->MakeCall("Quote", "getPlacesByName", $params, $this->token);
			if (isset($response["results"]) && is_array($response["results"])) {
				$retArr = $response["results"];
			}
		}
		return $retArr;
	}

	private function GetPlacesByPostCode(string $postCode="") {
		$retArr = [];
		$params = [];
		if ($postCode !== "") {
			$params["postcode"] = $postCode;
		}
		if (count($params) > 0) {
			$response = $this->MakeCall("Quote", "getPlacesByPostcode", $params, $this->token);
			if (isset($response["results"]) && is_array($response["results"])) {
				$retArr = $response["results"];
			}
		}
		return $retArr;
	}

	private function SetToken(): void {
		$md5pass = md5($this->password . $this->salt);
		$params = [
			"email" => $this->username,
			"password" => $md5pass,
		];
		$response = $this->MakeCall("Auth", "getSecureToken", $params);
		if ($response["errorcode"] == 0) {
			$this->token = $response["results"][0]["token_id"] ?? "";
		}
	}

	private function SetSalt(): void {
		$params = [
			"email" => $this->username
		];
		$response = $this->MakeCall("Auth", "getSalt", $params);
		if ($response["errorcode"] == 0) {
			$this->salt = $response["results"][0]["salt"] ?? "";
		}
	}

	private function MakeCall(string $class, string $method, array $params=[], ?string $token=null) {
		$params = [
			"params" => json_encode($params),
			"method" => $method,
			"class" => $class
		];
		if ($token != null) {
			$params["token_id"] = $token;
		}

		$curl = new Curl();
		$curl->setOpt(CURLOPT_HEADER, false);
		$curl->setOpt(CURLOPT_RETURNTRANSFER, true);
		if (IS_LOCAL_ENV) {
			$curl->setOpt(CURLOPT_SSL_VERIFYHOST, 0);
			$curl->setOpt(CURLOPT_SSL_VERIFYPEER, 0);
		}
		$curl->get($this->serviceURL, $params);

		$response = $curl->response;
		$responseArr = json_decode($response, true);

		// Success only when the transport succeeded AND the API reported no
		// error. (The previous `$curl->error || ... === 0` marked transport
		// errors as SUCCESS because of operator precedence.)
		$status = !$curl->error && Helper::ConvertToInt($responseArr["errorcode"] ?? -1) === 0
			? ShippingTheCourierGuyLog::SUCCESS
			: ShippingTheCourierGuyLog::ERROR;
		unset($curl);

		ShippingTheCourierGuyLog::saveFinal(
			"",
			0,
			$this->serviceURL,
			$params,
			$response,
			$status
		);

		return $responseArr;
	}
}
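A hypothetical usage sketch (the destination values and the $parcel model are made up; QuoteContentsModel's fields are defined elsewhere in the codebase):

// Request a delivery quote to an assumed destination.
$tcg = new TheCourierGuy();
$tcg->toPostCode = "8001";          // hypothetical destination post code
$tcg->toTownName = "Cape Town";     // fallback when the post code yields no places
$parcel = new QuoteContentsModel(); // fields omitted; defined elsewhere
$tcg->AddContentModel($parcel);
$quote = $tcg->RequestQuote();      // empty array on failure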
Thanks, Etel. Hi Etel, I really appreciate the assistance! The config file is ignored, at least for me. Run the following command, included in Git for Windows, to start up the ssh-agent process in PowerShell or the Windows Command Prompt. The Account settings page opens. On Linux, this is a symptom of a permissions problem; permissions should be 700. Type the same passphrase in the Confirm passphrase field. To change the key's contents, you need to delete and re-add the key. The public key is shared and used to encrypt messages. Instead of nano, I should have used the vi text editor. One assumption is that the Windows profile you are using is set up with administrative privileges.

GregB, I look at it like this: any server for which I create a password-less key is as secure as my laptop; it's an extension of the security perimeter of my laptop. After trying it, I noticed that this line was also in what I linked in the previous version of the post. It is actually fairly simple, if you know what to type. Refer to the page for more details. To make this work, you will need to do 2 more steps. You cannot copy the text from the console viewer. Save the private key file and then follow the steps to. A way around this is to simply use symlinks to each individual key file and known hosts, and let config reside on the Linux side. You can use git or hg to connect to Bitbucket. You can give a passphrase for your private key when prompted; this provides another layer of security for your private key. Afterwards Git Gui communicated with GitHub silently - no need to enter any credentials. I think the config file is not having an effect. If you have problems with copy and paste, you can open the file directly with Notepad. If you get an error message with "Permission denied (publickey)", check the page for help. To change the key's contents, you need to delete and re-add the key. There are three slightly different ways proposed in the comments. GitHub won't let you re-use the same ssh key for both accounts, so you need 2 keys. From the save dialog, choose where to save your private key, name the file, and click Save. The command creates your default identity with its public and private keys. While you're in Git Bash, you should mkdir. Now we can just hit the Install button and finish the installation. First of all, thank you! If you've already added keys, you'll see them on this page. Thanks again for all of your help! First we need to generate a key pair. When I exit and re-enter the shell, I'm once again unable to use git. And it provides access to almost all of the CLI tools of Linux. Plus, some systems don't support solutions for remembering a key's password entered by the user, and ask for it each time the key is used. It made my day and fixed the issue with Git, provided that your private key is not password protected. When finished, the output looks similar to that of ssh-keygen. But I found it easy to just add to my ~. However, I'm not persuaded there is a benefit in the git config file. Next we want to put the public key on the remote server. I'm using the standard ssh. Note: comments indicate that this doesn't work in all cases. Also, your comments about the permissions, and which side controls the file permissions, were helpful. The Account settings page opens. The authenticity of host 'ssh. The private key is kept safe and secure on your system and is used to read messages encrypted with the public key. Choose an appropriate option, or select "Download an embedded version of Mercurial for Sourcetree" to use.
I haven't found a solution for this. It may take a minute or two.

Questions and Troubleshooting

How can I have Git remember the passphrase for my key on Windows? When pasting in the key, a newline is often added at the end. This file should have an extension of. After you have the key at that location, Git Bash will recognize the key and use it. Add the key to the ssh-agent: if you don't want to type your password each time you use the key, you'll need to add it to the ssh-agent. Give it a title that describes what machine the key is on, e.g. "work laptop". Use your existing key, or generate a new one. Bitbucket sends you an email to confirm the addition of the key. If you have Notepad++ installed, select Notepad++ and click Next. Is there a way to copy the entire line in the file, even if my console doesn't display it all? If you've already added keys, you'll see them on this page. Important: avoid adding whitespace or new lines into the Key Data field, as they can cause Azure DevOps Services to use an invalid public key. It doesn't matter whether or not you include the email address in the key. Once saved, you cannot change the key. If you have the necessary permissions on the Windows machine, and your policies permit it, I would suggest installing Cygwin, especially considering that you have prior experience with Linux. Click No if you don't have one and want to use Sourcetree to create one.
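Pulling the scattered steps above together, the core commands look like this (a sketch: the ed25519 key type and file names are common defaults, not something the page mandates):

# Generate a key pair (accept the default path, choose a passphrase)
ssh-keygen -t ed25519 -C "you@example.com"

# Start the agent and add your key so the passphrase is remembered
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Copy the *public* key and paste it into Bitbucket/GitHub account settings
cat ~/.ssh/id_ed25519.pub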
I'm quite new at jMonkey and I'm having some problems when loading my models made with Blender. I have some textured models and I am able to load them into jMonkey properly. However, when I execute the game, the textures of the models are correct only if I attach the node containing the model to the rootNode in the initialization methods (initSystem(), initGame()). If I attach the objects containing the models at runtime (in the update() methods), the textures are always changed and the models take the textures from the other spatials of the scene. Then, sometimes when another object is attached, the textures of the previously attached models change to the textures of the new one. This doesn't happen when the objects are attached in the initialization methods, and I can't find the problem. I'm not applying any TextureState directly to the rootNode. I hope I explained myself properly.

I guess you mess that up in your own code. The described functionality works as it should; read through the tutorials to really understand how jME works.

Thank you for your answer normen. I have tried to include an updateRenderState() after attaching each of the objects to the main node, and the problem seems to be solved. I don't really understand why it wasn't working before, since I had already included a line like this at the end of the update method: I thought this should update the state of all the children of rootNode, but it may work in another way. Thanks for developing the jMonkey Engine, it is a great help in game creation.

Oh, you are using jME2. Yeah, that requires you to update all kinds of things all the time… I really suggest going for jME3 directly.

Yes, sorry, I forgot to specify that I am using jME2. I will be using jME3 for sure in my next project, but I tried to change my current project and it seems to need a lot of changes, so I'm going to finish it this way. Thank you!

I would recommend switching to jME3. I myself worked with jME2 until some weeks ago. Then I just tried out jME3 and loved it. It makes your life so much easier and provides you with everything you need. The port may be a bit annoying and take its time, but it's worth it.

I am using jMonkey for the first time for a project at college, and I first discovered version 2. Later on, I tried to move to jME3 because of some features that I wanted to include, but I found it quite annoying, as you have said. Now, I'm just about to finish my project and I don't have time to switch, since the deadline is near. However, I will use jME3 for sure in my next projects.
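For reference, the jME2 pattern being discussed looks roughly like this (a sketch: model is a hypothetical loaded Spatial; updateRenderState() is the actual jME2 call):

// Inside the update loop, after attaching a freshly loaded model:
rootNode.attachChild(model);
model.updateRenderState();    // re-applies render states (textures, etc.)
// or, more broadly:
rootNode.updateRenderState(); // refreshes the whole subtree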
package com.httplogmonitoringtool.models;

import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map.Entry;
import java.util.stream.Collectors;

/**
 * HTTP statistics
 *
 * @author Remi c
 */
public class HTTPStats {

	/** Common HTTP statistics counters */
	private final HashMap<HTTPStatsType, Long> statsValues = new HashMap<>();

	/** All common HTTP status code counters */
	private final HashMap<HTTPStatsStatus, Long> statsStatus = new HashMap<>();

	/** Counts section hits */
	private final HashMap<String, Integer> hitSections = new HashMap<>();

	/** Counts user requests */
	private final HashMap<String, Integer> userCount = new HashMap<>();

	/** Counts remote host requests */
	private final HashMap<String, Integer> remoteHostsCount = new HashMap<>();

	/** Alert average value */
	private int alertAverage = 0;

	/** Most-hit display limitation */
	public final static int MOST_HIT_SECTION_DISPLAYED = 3;

	public HTTPStats() {
		clear();
	}

	/** Increases the common HTTP statistics counter for the given type by one. */
	public void increase(HTTPStatsType type) {
		this.increase(type, 1);
	}

	/**
	 * Increases the common HTTP statistics counter for the given type by value.
	 * Uses long arithmetic; the previous intValue() round-trip truncated
	 * counters above Integer.MAX_VALUE.
	 */
	public void increase(HTTPStatsType type, int value) {
		statsValues.put(type, statsValues.get(type) + value);
	}

	/** Increases the common HTTP status code counter for the given status. */
	public void increase(HTTPStatsStatus status) {
		statsStatus.put(status, statsStatus.get(status) + 1);
	}

	/** Adds a hit to the section hit counter. */
	public void addSection(String section) {
		hitSections.put(section, hitSections.containsKey(section) ? hitSections.get(section) + 1 : 1);
	}

	/** Adds a hit to the user request counter. */
	public void addUser(String user) {
		userCount.put(user, userCount.containsKey(user) ? userCount.get(user) + 1 : 1);
	}

	/** Adds a hit to the remote host request counter. */
	public void addRemoteHost(String remoteHost) {
		remoteHostsCount.put(remoteHost, remoteHostsCount.containsKey(remoteHost) ? remoteHostsCount.get(remoteHost) + 1 : 1);
	}

	/** Clears all stats. */
	public void clear() {
		for (HTTPStatsType type : HTTPStatsType.values()) {
			statsValues.put(type, 0L);
		}
		for (HTTPStatsStatus status : HTTPStatsStatus.values()) {
			statsStatus.put(status, 0L);
		}
		hitSections.clear();
		userCount.clear();
		remoteHostsCount.clear();
		alertAverage = 0;
	}

	/** Clears the section hit counts. */
	public void clearSections() {
		hitSections.clear();
	}

	/** Clears the user request counts. */
	public void clearUsers() {
		userCount.clear();
	}

	/** Clears the remote host request counts. */
	public void clearRemoteHosts() {
		remoteHostsCount.clear();
	}

	/** Clears the specific total content stats value. */
	public void clearTotalContent() {
		statsValues.put(HTTPStatsType.TOTAL_CONTENT, 0L);
	}

	/** {@link #hitSections} */
	public HashMap<String, Integer> getHitSection() {
		return hitSections;
	}

	/** Gets the most hit sections, limited to {@link #MOST_HIT_SECTION_DISPLAYED}. */
	public HashMap<String, Integer> getMostHitSection() {
		int maxCountSectionToDisplay = Math.min(hitSections.size(), MOST_HIT_SECTION_DISPLAYED);
		// sort the section map by most hit, keeping sorted order in the result
		return hitSections.entrySet().stream()
				.sorted(Collections.reverseOrder(Entry.comparingByValue()))
				.limit(maxCountSectionToDisplay)
				.collect(Collectors.toMap(Entry::getKey, Entry::getValue, (e1, e2) -> e1, LinkedHashMap::new));
	}

	/** Gets the most frequent user. */
	public String getTopUser() {
		if (userCount.isEmpty()) {
			return "";
		}
		// sort the user map by highest count and keep the first entry
		return userCount.entrySet().stream()
				.sorted(Collections.reverseOrder(Entry.comparingByValue()))
				.limit(1)
				.collect(Collectors.toMap(Entry::getKey, Entry::getValue, (e1, e2) -> e1, LinkedHashMap::new))
				.keySet().iterator().next();
	}

	/** Gets the most frequent remote host. */
	public String getTopRemoteHost() {
		if (remoteHostsCount.isEmpty()) {
			return "";
		}
		// sort the remote host map by highest count and keep the first entry
		return remoteHostsCount.entrySet().stream()
				.sorted(Collections.reverseOrder(Entry.comparingByValue()))
				.limit(1)
				.collect(Collectors.toMap(Entry::getKey, Entry::getValue, (e1, e2) -> e1, LinkedHashMap::new))
				.keySet().iterator().next();
	}

	/** {@link #statsValues} */
	public HashMap<HTTPStatsType, Long> getStatsValues() {
		return statsValues;
	}

	/** {@link #statsStatus} */
	public HashMap<HTTPStatsStatus, Long> getStatsStatus() {
		return statsStatus;
	}

	/** {@link #alertAverage} */
	public int getAlertAverage() {
		return alertAverage;
	}

	/** {@link #alertAverage} */
	public void setAlertAverage(int alertAverage) {
		this.alertAverage = alertAverage;
	}
}
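A hypothetical usage sketch (TOTAL_CONTENT is the only HTTPStatsType constant visible in the class above; the section paths are made up):

HTTPStats stats = new HTTPStats();
stats.increase(HTTPStatsType.TOTAL_CONTENT, 512); // e.g. bytes of a response
stats.addSection("/api");
stats.addSection("/api");
stats.addSection("/report");
stats.getMostHitSection(); // {"/api"=2, "/report"=1}, capped at 3 entries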
Planning Class Inheritance for Game Objects

This is specifically about the development and planning direction of the game development. I'm creating a fairly basic RPG and am wondering about the approach that I should take. I've been trying to outline it all in a flow chart first, but want to see if this is a good way of going about it. For example, I've created a class called "Items", which currently derives into "Weapons", "Armor" and "Potions". I then derive these classes further into specific types, and I keep going and going, basically as specific as I want to get. Before I get too deep into this (again, I'm still planning), I just want to know if this is a good way of approaching this problem.

Don't forget to think about how these objects will be defined by tools and then loaded and constructed at runtime. Consider "data driven development" a good search term for general practices.

It's not. What if you want to hit with a shield (e.g. "Shield Bash"), i.e. use an "Armor" as a "Weapon", or defend with a sword (e.g. "Parry"), i.e. use a "Weapon" as an "Armor"? You'll end up having a Shield that inherits from both Armor and Weapon. And when you introduce diamond inheritance, you introduce guaranteed headaches. Also, you'll end up with a huge hierarchy, in which you could get lost, and in which you won't want to modify base classes because it could potentially mess up too many other features.

An issue I noted with this kind of hierarchy is when you want a feature on Armor and Weapon, but not on Potion, all of which inherit from Item: durability. To avoid code duplication, you'll write the feature in Item, but you'll also have to add the infrastructure to not use it in specific cases. As time goes by, you'll be adding features in classes where they do not exactly belong, getting lost some more. (This is just an example to give you the idea.)

The favoured approach is to use composition over inheritance. Basically, you add behaviours to your items which drive what they can do. If an "Item" has an "attack" component (which has a "damage dealt" property, for instance), it means that you can use it as a weapon. If the same item has a "defense" component (with a "guard ratio" property, for instance), it means that the item can be used as an "armour". Another research term is component based architecture. This should get you re-thinking your architecture :)

Even a lightweight DSL and a dynamic list of properties in each object, holding the script to run to handle game and system events, would be better than C++ inheritance or even interfaces in other languages. Component architectures are generally the way to go; they do have some inheritance within their designs, but creating discrete properties and then combining them gives you not only a component architecture but also the ability to move to data oriented (not data driven) design. In C++ this can be of even more benefit, as the two go rather well hand in hand when systems only need to update their collections of properties, etc.

As you've probably seen from other answers, there are pros and cons to using a polymorphic approach. The main two approaches to game software design fall into two broad categories:

Polymorphic (inheritance based, or the "is a" approach) objects. OO purists will push this, but you have to contend with the deadly diamond, amongst other potential headaches. That said, it can solve many problems very easily, and all Gang of Four design patterns rely on this kind of thinking to work.
Composition (component based, or the "has a" approach) objects. Each game entity has a collection of components that define what data is relevant to it, and what it can do. Then game subsystems which only care about certain components operate only on the entities which have those components. This prevents a lot of headaches, and completely avoids the deadly diamond.

Consider the following crude example: you want a sword that supplies your character with a built-in shield. How would you do this in an inheritance based architecture? Not easily. In a component (ECS) architecture, it's simple. Create an entity: add a sword graphic component, add a combat damage component, add a shield defence component, add a sword attack animation. Job done: you now have a combined sword and shield, without having to define any new classes. In an OO architecture, this requires you to replicate code from a previously written class, resulting in code bloat and two separate places your shield code needs to be maintained in. If you can get away with only writing code once, do so (see the sketch after these answers). Personally, I would strongly advise you to research Entity Component Systems (a good tutorial is here), and then make a decision from there.

Eh, that's pretty good IMO. Minecraft works that way. I would say that you should keep your item definitions ("this is what a sword does", "this is how potions behave") as single-instance objects, and the things the player sees on the ground and picks up are actually item stacks which hold information about how much damage the item has taken, how many items are in the stack, etc., and just forward any interactivity to the Item reference. This keeps the overhead down.
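To make the composition answer above concrete, here is a minimal sketch in C++ (invented component and field names, no particular engine assumed):

#include <optional>

// Capabilities are data attached to an entity, not base classes.
struct AttackComponent  { float damage; };
struct DefenseComponent { float guardRatio; };

struct Entity {
    std::optional<AttackComponent>  attack;   // present => usable as a weapon
    std::optional<DefenseComponent> defense;  // present => usable as armour
};

int main() {
    // A sword with a built-in shield needs no new class, just both components:
    Entity swordShield{AttackComponent{12.0f}, DefenseComponent{0.3f}};
    if (swordShield.defense) {
        // a "defense" subsystem would process this entity here
    }
}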
How does a "seated chest press" machine differ from a regular bench press?

I have a multi-gym machine very similar to the one shown. It allows me to emulate a bench press from a seated position; I think this exercise is called a seated chest press? I have never done a real bench press and have very limited experience with free weights generally, so in what ways will the two differ? I can imagine safety and psychological differences - you can't drop it on yourself - for a start!

Generally, the main difference between free weights and machines is the use/non-use of stabilizing muscles. When you do any free weight exercise, you have to work to keep the weights on the correct path, whereas machines have fixed paths. So in your case, if you did a proper Bench Press, you can imagine having to make sure the barbell travels the correct path from/to your chest, without traveling too far forward (it lands on your stomach) or back (it lands on your face), or tipping over to one side. With the machine Chest Press, all you have to do is press. If the machine is not uni-lateral (meaning both handles move together), you can even make up for a weak side with the other, promoting imbalances. This makes free weight exercises more complicated and more "functional", and they recruit more muscles. In turn, machine exercises enable you to focus more on the desired muscles. A web search along the lines of "free weights vs machines", or maybe this article, which I only skimmed over, could get you a more in-depth answer for sure.

That all makes sense. I was wondering as well if there is any difference being upright... there are bench-press machines where you lie flat, so would one of those be any different to a seated machine?

I have never seen one of those, but if it's just the same thing rotated 90 degrees, it would not make any difference concerning the chest muscles being worked. It would feel a bit different setup-wise though, since you'd be lying down and planting your feet on the ground. What does make a difference is the angle in which your arms move in relation to your chest / torso. More "upwards" works the upper chest more, whereas more "downwards" works the lower chest more.

You say "bench press...barbell", but a dumbbell bench press is helpful in a couple of ways: it should work the supporting muscles more than with a bar; and (especially if training solo) if coming from resistance machines, where failing doesn't mean a risk of dropping the bar on yourself, it's easier to fail safely with dumbbells. Then you've got them right next to the bench for single arm rows - and using the same weight for both exercises can be about right.

Chris H, I didn't write anything against the DB Bench Press or about the Barbell Bench Press being superior. I simply used the Barbell Bench Press to illustrate one example of Bench Press vs Chest Press. Also, when people speak of the "Bench Press", they usually refer to the barbell version, and this is how it was phrased in the question. Also, since Mr. Boy was talking about dropping it on himself, I assumed he's not speaking about DBs, which, as you pointed out (and where I also agree with you), would be safer in some regards. This was never a debate about barbell vs dumbbell.

@JustinHehli fair enough. I maybe read too much into your sentence that includes "proper Bench Press, ... barbell" (which followed on from very similar wording in the Q). It's certainly possible to drop a DB on yourself bench pressing; it is perhaps more …
My comment was anyway directed as much at the OP as at you, but now I would say it's far more relevant to Mr. Boy than to your answer.
extern crate wasm_bindgen;
extern crate image;

use wasm_bindgen::prelude::*;
use image::GenericImageView;

#[wasm_bindgen]
pub fn greet(name: String) -> String {
    format!("Hi {} san!", name)
}

#[derive(serde::Serialize, serde::Deserialize)]
pub struct FilterParams {
    pub resize_scale: f32,
    pub rotate_angle: u32,
    pub huerotate_angle: i32,
    pub blur_sigma: f32,
    pub brighten_value: i32,
}

fn scale_pixel(pixels: u32, scale: f32) -> u32 {
    (pixels as f32 * scale) as u32
}

#[wasm_bindgen]
pub fn convimg(
    input_img: js_sys::Uint8Array,
    filters_checkbox: &JsValue,
    filter_params_js: &JsValue,
) -> js_sys::Uint8Array {
    let filters_vec: Vec<String> = filters_checkbox.into_serde().unwrap();
    let mut img = image::load_from_memory(input_img.to_vec().as_slice())
        .expect("failed to load image");
    let filter_params: FilterParams = filter_params_js.into_serde().unwrap();
    let (img_width, img_height) = img.dimensions();

    // Apply each requested filter in order; unknown names are passed through.
    for filter_name in filters_vec {
        img = match filter_name.as_str() {
            "resize" => img.resize(
                scale_pixel(img_width, filter_params.resize_scale),
                scale_pixel(img_height, filter_params.resize_scale),
                image::imageops::Triangle,
            ),
            "rotate" => match filter_params.rotate_angle {
                90 => img.rotate90(),
                180 => img.rotate180(),
                270 => img.rotate270(),
                _ => img,
            },
            "grayscale" => img.grayscale(),
            "huerotate" => img.huerotate(filter_params.huerotate_angle),
            "blur" => img.blur(filter_params.blur_sigma),
            "brighten" => img.brighten(filter_params.brighten_value),
            _ => img,
        };
    }

    let mut img_buf = Vec::new();
    img.write_to(&mut img_buf, image::ImageOutputFormat::Png)
        .expect("failed to write image to buffer");
    // Copy the bytes into a fresh JS-owned Uint8Array. The previous
    // `unsafe { Uint8Array::view(...) }` returned a view into `img_buf`,
    // which is freed when this function returns (and invalidated by any
    // later wasm memory growth), leaving JS with a dangling view.
    js_sys::Uint8Array::from(img_buf.as_slice())
}

fn main() {
    println!("{}", greet(String::from("foobar")));
}
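On the JavaScript side, calling this could look like the following sketch. The package path, crate name and image URL are assumptions; wasm-bindgen/wasm-pack generate the actual bindings.

// Hypothetical ES module generated by wasm-pack from the crate above.
import init, { convimg } from "./pkg/image_filters.js";

await init();
const input = new Uint8Array(await (await fetch("photo.png")).arrayBuffer());
const params = { resize_scale: 0.5, rotate_angle: 90, huerotate_angle: 0,
                 blur_sigma: 0.0, brighten_value: 0 };
const png = convimg(input, ["resize", "rotate"], params);
// `png` is a Uint8Array of PNG bytes, usable e.g. via a Blob/object URL.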
How should you begin to learn programming? What language should you pick? What are the long-term, safe bets you should ensure you pick up so you're ready to succeed over the next ten years?

These are the wrong questions to answer. Programming is the valuable, practical skill, but the discipline that enables and supports it is Computer Science. Begin from that position and the lessons you learn will hold not only during the lifetime of the programming languages you choose, but throughout your own life as well.

But Computer Science though, that's hardly the right place for a beginner. Without knowing any programming, how can we appreciate the motivation behind finite automata, Turing completeness, complexity classes, algorithm analysis, recursive design, data types, sorting routines, graph traversals and tree searches?

Again, wrong. Computer Science just needs to be introduced more simply.

Look at this picture. It's a visual cue to aid remembering the core themes central to everything to do with Computer Science. It is a representation of the famous fictional detective Sherlock Holmes, in the style of the painter Piet Mondrian. More on this later though.

Before diving into programming languages, let us consider the lowest possible level of computation: logic gates. While it's possible to explain the semantics of what a logic gate does, programmers don't realise how instinctively they already have most logic gates' truth tables memorised. First things first though – what's a logic gate?

A logic gate is like a black box that performs a calculation. It takes a fixed number of inputs and has a fixed number of outputs. Logic gates are used to express boolean logic expressions, which means they deal only with boolean inputs and outputs, i.e. an input or output value can only be 0 or 1.

One of the most common logic gates is the AND gate. It has two inputs, A and B, and one output X. It works in a similar way to the English word 'and'. For example, consider the following four responses to the question, "Did Alice and Bob go to the cinema last night?"
- "No, neither of them went."
- "No, only Alice went."
- "No, only Bob went."
- "Yes, both Alice and Bob went."

Here, we could model our English sentences using the logic gate if we map the inputs and outputs being 0 or 1 to specific outcomes. So, if A is 0, that means Alice did not go to the cinema last night. If A is 1, that means Alice did go to the cinema last night. The same rules apply for B and whether or not Bob went to the cinema last night. The output value X, of A AND B, is 1 if both Alice and Bob went to the cinema last night, otherwise it's 0. Writing this down purely as zeros and ones is called a truth table:

A B | A AND B
0 0 |    0
0 1 |    0
1 0 |    0
1 1 |    1

While the English sentences used above help try to explain the semantics of what a logic gate is doing, the truth table is brutally logical. A truth table doesn't so much explain what a logic gate is doing, it tells you what it does. This is very handy for those using logic gates when the English language definition might be subtly different. Consider the OR gate. Its structure is identical to the AND gate. It takes two binary inputs A and B, and has one binary output X. It also corresponds to the English word 'or'.
Consider these four responses to the question, "Would you like Apple juice or Blackcurrant juice?"
- "No, I would like neither."
- "Yes, I would like Apple juice."
- "Yes, I would like Blackcurrant juice."
- "Yes, I would like both."

Generally speaking, when someone asks you an 'or'-like question, they only consider three outcomes: the first option, the second, or neither – it rarely means both options. There exists a logic gate that can model what is generally meant when the word 'or' is used in English sentences. This logic gate is called XOR, pronounced "ex-or", which is short for exclusive or. Consider the responses to the question "Would you like Apple juice XOR Blackcurrant juice?"
- "No, I would like neither."
- "Yes, I would like Apple juice."
- "Yes, I would like Blackcurrant juice."
- "No, I'd actually like both, please. If it's not too greedy."

Essentially it means you can exclusively have one option or the other, but not both. For completeness, here are the truth tables for OR and XOR:

A B | A OR B    A B | A XOR B
0 0 |   0       0 0 |    0
0 1 |   1       0 1 |    1
1 0 |   1       1 0 |    1
1 1 |   1       1 1 |    0

Logic and Understanding

This subtlety of understanding is very important to programmers. Computer programs are too complicated for the entire state of all their inputs and outputs to be represented – there are just too many combinations to enumerate. Instead, programmers hold mental models like the English word definitions of AND and OR. When programs don't work as expected though, programmers have to narrow down exactly where the problem is. They have many complex tools to aid them, but ultimately it comes down to the programmer's understanding of logic and the rules being used. In times like this, being able to construct a truth table to verify what the output should be for a given input allows the programmer to logically and methodically discover the location of the fault in a program.

A Binary Adder

Regardless of the programming language you begin with, the required skill set is similar – it's about how you use the available features of the language to solve problems. In the beginning the process will feel difficult, unclear, perhaps even incomprehensible. Practice will help the most. This is stated in advance of the example below in case the train of thought is not obvious and thus hard to follow – don't worry, re-read it again later after you're more familiar with programming and it will be clearer.

The challenge we're about to solve is to create a multi-digit binary adder using only AND, OR and NOT logic gates. The AND and OR have already been seen, and the NOT gate takes just one input and outputs the inverse.

A simple logic gate for adding single-digit binary numbers will require two inputs and two outputs. The inputs A and B can be 0 or 1. The outputs C and S are also either 0 or 1, and together they represent the sum of A and B. Here is the truth table for what the binary adder should create (S stands for Sum, and C stands for Carry):

A B | C S
0 0 | 0 0
0 1 | 0 1
1 0 | 0 1
1 1 | 1 0

Hopefully you can see that the C column can be created by calculating A AND B, and the S column can be created by using A XOR B. Unfortunately, even though we know what an XOR calculation looks like, we don't know how to create it. Every single computer device you access in your daily life – whether a desktop PC, laptop, smartphone, right down to a flashing LED bike light – they're all built with logic gates. And at a sub-microscopic level, it's common that we may implement certain logic gates, such as XOR, using other logic gates that are, for example, cheaper to make.
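If it helps to experiment, the gates seen so far can be tabulated in a few lines of Python (illustrative only; the article itself is language-agnostic):

# Print truth tables for AND, OR and XOR using Python's bitwise operators.
print("A B | AND OR XOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", a & b, " ", a | b, " ", a ^ b)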
We can make an XOR gate by creating an intermediate stage which we'll call A', which is A or B, and B', which is not (A and B):

A B | A' = A or B | B' = not (A and B) | A' and B'
0 0 |      0      |         1          |     0
0 1 |      1      |         1          |     1
1 0 |      1      |         1          |     1
1 1 |      1      |         0          |     0

By performing a couple of stages and combining the result, an XOR gate has been created using two AND gates, one OR, and a NOT. The 'dot' junctions split the A and B inputs so their value can be used twice. If we want an XOR gate in future, we know we can make one using the above combination of logic gates. However, to make things less complicated than they need to be, if we want to use an XOR gate in a new diagram, we'll use this symbol for XOR instead. Now when we see this symbol we don't have to remember how it was made, just that it's an XOR gate.

We can now create a binary adder circuit like so. This circuit is known as a Half Adder. Combining two of them, we can create what's known as a Full Adder. A Full Adder, in addition to the two inputs A and B, also has a third input, which is a Carry input from another adder. Note how the diagram above is simplified to be a box labelled Half Adder below, with the inputs A and B, and the outputs S and C denoted in the corners. Again, because we know what it does, we don't need to be concerned with how it's made when we're using it to build something else.

Here is the truth table of the Full Adder. We've renamed the original C to Cout to denote that it's an output Carry, and the new input is Cin to denote that it's an input from another Full Adder:

A B Cin | S Cout
0 0  0  | 0  0
0 1  0  | 1  0
1 0  0  | 1  0
1 1  0  | 0  1
0 0  1  | 1  0
0 1  1  | 0  1
1 0  1  | 0  1
1 1  1  | 1  1

We can now chain together several Full Adders like so, to create something that can add together binary numbers as large as we want.

What seems like such a simple task, adding numbers together, has been accomplished using two techniques that are central to all areas of Computer Science and programming. It's time to revisit that picture.

Logic and Abstraction

Sherlock Holmes is a fascinating fictional character. Though deeply flawed, his adherence to logic and deduction means that he has a clearer understanding of the rest of the world than those around him. It is his mastery of logic that makes him formidable.

Piet Mondrian was a pioneer of the abstract art movement. Abstract art, without showing a completely accurate or faithful representation of the subject matter, can communicate an idea every bit as rich as a painting from the more exacting classical era. The harsh lines and solid colours above are completely missing from the natural world, and yet we can make out the famous deerstalker hat and pipe of its subject. An abstract representation of the master of logic – a visual cue to the foundation of Computer Science.

Logic. It's about understanding the rules, understanding how something works. It's about being able to predict the outcome because you know the model.

Abstraction. It's about knowing how things fit together, knowing the layout or architecture. Being able to simplify in order to more easily navigate from one place to another.

Logic and Abstraction combine to aid us as humans. When we mentally step through the process of how two multi-digit binary numbers are added together, we don't think about the AND, OR and NOT gates. It's too complicated at that level; there are too many of them. We mentally model the process in terms of Full Adders. We take the complex, label it simply, and lift our understanding up to a higher level.
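The same construction, written as a short Python sketch (the function names are mine, not the article's):

def half_adder(a, b):
    # S = A XOR B, C = A AND B, exactly as in the truth tables above.
    return a ^ b, a & b

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)    # first half adder
    s, c2 = half_adder(s1, cin)  # second half adder
    return s, c1 | c2            # carry out if either stage carried

def add_bits(xs, ys):
    # Ripple-carry addition of two equal-length bit lists (LSB first),
    # mirroring the chain of Full Adders described above.
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]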
Our understanding of logic helps us to understand how each level of abstraction works, and the abstraction itself keeps things simple enough that we can understand not only the model as a whole, but every single part of that model. Solving problems and understanding systems with logic; reducing complexity and conceptualising the whole using abstraction. This is the skill that will stay with programmers always, and the one they should learn first.
require 'dumb_numb_set'
require_relative 'local_storage'

module Melon
  class Paradigm
    include Logit

    def initialize
      @servers = []
      @local = Melon::LocalStorage.new
      @read = DumbNumbSet.new
      @delay = 1

      add_server @local
    end

    # Adds a storage server
    def add_server(server)
      @servers << server
    end

    # Stores a take-only message locally
    def store(message)
      @local.store message
    end

    # Writes a read-only message locally
    def write(message)
      @local.write message
    end

    # Reads a read-only message from any server (including local).
    # Blocks by default if no messages are available.
    #
    # If `block` is set to `false`, `nil` will be returned if no matching
    # messages are found.
    def read(template, block = true)
      loop do
        each_server do |s|
          if res = s.find_unread(template, @read)
            @read << res.id
            return res.message
          end
        end
        if block
          sleep @delay
        else
          break
        end
      end
      nil
    end

    # Takes a take-only message from any server (including local).
    # Blocks by default if no messages are available.
    #
    # If `block` is set to `false`, `nil` will be returned if no matching
    # messages are found.
    def take(template, block = true)
      loop do
        each_server do |s|
          if res = s.find_and_take(template)
            return res.message
          end
        end
        if block
          sleep @delay
        else
          break
        end
      end
      nil
    end

    # Reads all matching unread read-only messages from any server
    # (including local). Blocks by default if no messages are available.
    #
    # If `block` is set to `false`, an empty array will be returned if no
    # matching messages are found.
    def read_all(template, block = true)
      results = []
      loop do
        each_server do |s|
          debug s.class
          results.concat(s.find_all_unread(template, @read))
        end
        if results.empty? and block
          sleep @delay
        else
          break
        end
      end
      @read.merge results.map(&:id)
      results.map(&:message)
    end

    # Takes all matching take-only messages from any server (including
    # local). Blocks by default if no messages are available.
    #
    # If `block` is set to `false`, an empty array will be returned if no
    # matching messages are found.
    def take_all(template, block = true)
      results = []
      loop do
        each_server do |s|
          results.concat(s.take_all(template))
        end
        if results.empty? and block
          sleep @delay
        else
          break
        end
      end
      results.map(&:message)
    end

    private

    # Iterates over each server in random order
    def each_server
      @servers.shuffle.each do |s|
        yield s
      end
    end
  end
end
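A hypothetical usage sketch: how templates match messages is defined by the storage servers (Melon::LocalStorage here), not by this class, so the hash shapes below are pure assumptions.

# Assumes Melon::LocalStorage understands these message/template shapes.
space = Melon::Paradigm.new
space.write(kind: "status", value: "ok")   # read-only message
space.store(kind: "job", value: "resize")  # take-only message

space.read({kind: "status"}, false)  # non-blocking; nil if nothing matches
space.take({kind: "job"})            # blocks, polling every @delay seconds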
In this section of the machine learning playbook, we delve into the critical questions that arise when dealing with the complexities of data. Basically, we tackle the question: what information do you need about the data right from the beginning of your data science project? In case you missed the first article of the series, which focused on the project brief, you can find it here.

The Data Puzzle

Before immersing yourself in the intricacies of planning the methodology or implementation of your project, it is crucial to address a fundamental aspect: the data. Seasoned data scientists with hands-on experience in real world solutions and projects will tell you that it is not the algorithm but the data that makes all the difference. The pivotal role of data cannot be emphasized enough in the realm of developing solutions and executing client projects. Hence, it is very important to ask the right questions in order to connect the dots and proactively anticipate potential data challenges inherent in the field of data science.

There are several forms of data out there, and each has its own set of nuances. In this article, we are going to explore the questions associated with common data types, namely images, text and structured tables. While there are also other forms of data, such as audio and geo-spatial data, we are not going to cover those within the scope of this article. Let's begin with some general concerns that you might have:
- Type of data (text, image, audio)
- Volume of the data (number of rows / columns)
- Source of the data (how was it created, and how is it stored)
- Known issues or challenges associated with the data

In marketing, let's say the client is running campaigns and has tasked you with identifying the channel that drives customer acquisition in order to optimize their advertising budget. To help illustrate the process, let's simulate a conversation to gain a better understanding of the scenario:

Data Science professional: "What is the frequency of campaigns run? How many projects / products are the campaigns run for? What channels are used for the campaigns?"

Client: "We run 10-15 campaigns every week for 3 products and have been doing so for the past 2 years. Sources used include Google ads, Bing ads, Facebook marketing and Instagram influencer marketing."

Let's take a second to dissect the details shared by the client to decide our next line of questioning. It is clear that there are multiple sources of data due to the variety of channels being used. This also implies that they might not have a system to standardize the data and store it in a consolidated manner. Considering the volume of data generated from 10 to 15 campaigns a week over a span of 2 years, it becomes crucial to ascertain what specific data points are being captured.

Data Science professional: "What specific data points are being captured? Where do you store this data? Up until now, how have you been determining the allocation of funds for different channels?"

Client: "We receive orders and invoices through an automated software, while another system is responsible for inventory management. All this data is currently pushed into a SQL database. Our current approach involves evaluating the performance of these campaigns, measured by the number of clicks on the buy button as reported by the different channels, on a weekly basis."

Data Science professional: "How do you establish a connection between the orders and the purchases reported by the channels? Are there any known issues with the data?
How effective has your current strategy been in achieving the desired outcomes?

Client: We are only able to connect it at an aggregate level and cannot pinpoint the exact source for each customer. To elaborate on the previous point: a potential customer views the ad on Google, clicks on it, but then goes elsewhere. Later on, they encounter an ad on Facebook, click on it, and go ahead to make a purchase. Both Google and Facebook will report it as a conversion. Since we update our spending on a weekly basis using consolidated data, there is an opportunity for better optimization based on specific products or campaign types. However, this aspect remains unexplored, and we currently observe variations in our key performance indicators (KPIs) on a weekly basis.

Okay, let's analyze the insights we have gathered:
- There are known issues with the data currently stored in the database.
- Establishing connections between the data poses a challenge.
- Discrepancies are observed in reporting across different channels.
- The client has defined key performance indicators (KPIs) to measure campaign performance and recognizes the potential for improvement.

This serves as an excellent initial step in comprehending the intricacies of the data puzzle. In the next article of this series, we will take a look at the nuances of dealing with three common data types: images, text, and structured tables. Stay tuned for more actionable insights and guidance in our next instalment.

In this post, you have discovered essential elements that form the bedrock of understanding data in any type of data science project. These insights serve as a solid foundation for exploring further factors that are specific to your unique project requirements. By grasping these fundamental aspects, you are better equipped to navigate the intricacies of data and make informed decisions to drive the success of your data science endeavours.

- Data plays a pivotal role in developing solutions and executing client projects in the field of data science. Asking the right questions is crucial to anticipate potential data challenges and connect the dots effectively.
- Different forms of data, such as images, text, and structured tables, have their own nuances and require specific considerations.
- General concerns about data include its type, volume, source, and known issues or challenges.
- Known issues with data quality and variations in key performance indicators highlight the need for improvement.

Data Science Discovery is a step on the path of your data science journey. Please follow us on LinkedIn to stay updated.

About the writers:
- Ujjayant Sinha: Data scientist with professional experience in market research and machine learning in the pharma domain, across computer vision and natural language processing.
- Ankit Gadi: Driven by a knack and passion for data science, coupled with a strong foundation in Operations Research and Statistics, which helped him embark on his data science journey.
/* function name: refresh
   function usage: called after the time scale changes, to redraw the graphs */
function refresh() {
    year_range = getTimeRange();
    for (let cb of year_cb) cb();
    for (let cb of attr_value_cb) cb();
}

/* function name: setTimeSlider
   function usage: sets up the time slider at the bottom of the page; styled in style.css */
function setTimeSlider() {
    $('.range-slider').jRange({
        from: 1980,
        to: 2016,
        step: 1,
        scale: [1980, 1985, 1990, 1995, 2000, 2005, 2010, 2015, 2016],
        format: '%s',
        width: $(window).width() * 0.85, // width of the slider
        showLabels: true,
        isRange: true,
        theme: "theme-blue",
        ondragend: refresh // redraw after the user releases the slider handle
    });
    $('.range-slider').jRange('setValue', '1980, 2016'); // set the initial value for the slider
}

/* function name: getTimeRange
   function usage: returns the start year and end year of the time scale
   as an array: TimeRange = [startYear, endYear] */
function getTimeRange() {
    let TimeRange = d3.select('.range-slider').property('value').split(',').map(d => parseInt(d));
    return TimeRange;
}
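/* The functions above lean on three globals that are presumably defined
   elsewhere in the visualization. A minimal sketch of that assumed wiring
   (names taken from the code above; initial values are guesses): */

var year_range = [1980, 2016]; // current [startYear, endYear]; written by refresh()
var year_cb = [];              // redraw callbacks that depend on the year range
var attr_value_cb = [];        // redraw callbacks that depend on attribute values

$(document).ready(function () {
    setTimeSlider();           // build the slider once the DOM is ready
});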
So, we've all been there. You go to your trusty Grafana, search for some sweet metrics that you implemented and WHAM! Prometheus returns a 503, a trusty way of saying "I'm not ready, and I'm probably going to die soon. And since we're running in Kubernetes, I'm going to die soon, again and again." And you're getting reports from your colleagues that Prometheus is not responding. And you can't ignore them anymore. All right, let's check what's happening to the little guy.

kubectl get pods -n monitoring
prometheus-prometheus-kube-prometheus-prometheus-0   1/2   Running   4   5m

It seems like it's stuck in the running state, where the container is not yet ready. Let's describe the pod to check out what's happening.

State:          Running
  Started:      Wed, 12 Jan 2022 15:12:49 +0100
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
  Started:      Tue, 11 Jan 2022 17:14:41 +0100
  Finished:     Wed, 12 Jan 2022 15:12:47 +0100

So we see that Prometheus is in a running state, waiting for the readiness probe to trigger, probably working on recovering from the Write-Ahead Log (WAL). This could be an issue where Prometheus is recovering from an error or a restart and does not have enough memory to replay everything in the WAL. We could be running into an issue where we set the request/limit memory lower than Prometheus requires, and the OOM killer keeps terminating Prometheus for wanting more memory. For this case, we could give it more memory to see if it recovers. We should also analyze why the Prometheus WAL is getting clogged up. In essence, we want to check what has changed so that we suddenly have a high memory spike in our sweet, sweet environment.

A lot of Prometheus issues revolve around cardinality. Memory spikes that break your deployment? Cardinality. Prometheus dragging its feet like it's Monday after the log4j (the second one, of course) zero-day security breach? Cardinality. Not getting that raise even though you worked hard the past 16 years without wavering? You bet your ass it's cardinality. So, as you can see, much of life's problems can be accredited to cardinality.

In short, the cardinality of your metrics is the number of distinct combinations of label values per metric. For example, if our metric http_request_total had a label for the response code, and let's say we support 8 status codes, our cardinality starts off at 8. For good measure we want to record the HTTP verb for the request. We support GET, POST, PUT and HEAD, which would put the cardinality at 4*8=32. Now, if someone adds a URL to the metric labels (!!VERY BAD IDEA!!, but bear with me now) and we have 2 active pages, we'd have a cardinality of 2*4*8=64. But imagine someone starts scraping your website for potential vulnerabilities. Imagine all the URLs that will appear, most likely only once. The point of this story is: be very mindful of how you use labels and cardinality in Prometheus, since that will indeed have a great impact on your Prometheus performance.

Since this has never happened to me (never-ever), I found the following solution to be handy. Since we can't get Prometheus up and running to utilize PromQL to detect the potential issues, we have to find another way to detect high cardinality. Therefore, we might want to get our hands dirty with kubectl exec -it -n monitoring pods/prometheus-prometheus-kube-prometheus-prometheus-0 -- sh, and run the Prometheus TSDB analysis tool:

/prometheus $ promtool tsdb analyze .

Which produced the result:
> Block ID: 01FT8E8YY4THHZ2S7C3G04GJMG
> Duration: 1h59m59.997s
> Series: 564171
> Label names: 285
> Postings (unique label pairs): 21139
> Postings entries (total label pairs): 6423664
> Highest cardinality metric names:
> 11340 haproxy_server_http_responses_total

We see the potential issue here: the haproxy_server_http_responses_total metric has a super-high cardinality which keeps growing. We need to deal with it so that our Prometheus instance can breathe again. In this particular case, the solution was updating the haproxy. ... or burn it, up to you.
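If the offending exporter cannot be fixed right away, one common stop-gap is to drop the runaway series (or the exploding label) at ingestion time with metric_relabel_configs. A minimal sketch against a standard Prometheus scrape config; the job name here is an assumption:

scrape_configs:
  - job_name: haproxy                  # hypothetical job name
    metric_relabel_configs:
      # Option 1: drop the runaway metric entirely before it hits the TSDB
      - source_labels: [__name__]
        regex: haproxy_server_http_responses_total
        action: drop
      # Option 2 (instead): strip a high-cardinality label across all series
      # - regex: url
      #   action: labeldrop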
# ___________________________________________________________________________ # # Pyomo: Python Optimization Modeling Objects # Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC # Under the terms of Contract DE-NA0003525 with National Technology and # Engineering Solutions of Sandia, LLC, the U.S. Government retains certain # rights in this software. # This software is distributed under the 3-clause BSD License. # ___________________________________________________________________________ # # baa99: Annotated with location of stochastic rhs entries # for use with pysp2smps conversion tool. from pyomo.core import * model = ConcreteModel() # use mutable parameters so that the constraint # right-hand-sides can be updated for each scenario model.d1_rhs = Param(mutable=True) model.d2_rhs = Param(mutable=True) # first-stage variables model.x1 = Var(bounds=(0,217)) model.x2 = Var(bounds=(0,217)) # second-stage variables model.v1 = Var(within=NonNegativeReals) model.v2 = Var(within=NonNegativeReals) model.u1 = Var(within=NonNegativeReals) model.u2 = Var(within=NonNegativeReals) model.w11 = Var(within=NonNegativeReals) model.w12 = Var(within=NonNegativeReals) model.w22 = Var(within=NonNegativeReals) # stage-cost expressions model.FirstStageCost = \ Expression(initialize=(4*model.x1 + 2*model.x2)) model.SecondStageCost = \ Expression(initialize=(-8*model.w11 - 4*model.w12 - 4*model.w22 +\ 0.2*model.v1 + 0.2*model.v2 + 10*model.u1 + 10*model.u2)) # always define the objective as the sum of the stage costs model.obj = Objective(expr=model.FirstStageCost + model.SecondStageCost) # # this model only has second-stage constraints # model.s1 = Constraint(expr=-model.x1 + model.w11 + model.w12 + model.v1 == 0) model.s2 = Constraint(expr=-model.x2 + model.w22 + model.v2 == 0) # # these two constraints have stochastic right-hand-sides # model.d1 = Constraint(expr=model.w11 + model.u1 == model.d1_rhs) model.d2 = Constraint(expr=model.w12 + model.w22 + model.u2 == model.d2_rhs) # # Store the possible table values for the stochastic parameters # on the model. These will be used to either generate an explicit # list of scenarios or to represent the SP implicitly. # model.d1_rhs_table = \ [17.75731865, 32.96224832, 43.68044355, 52.29173734, 59.67893765, 66.27551249, 72.33076402, 78.00434172, 83.40733268, 88.62275117, 93.71693266, 98.74655459, 103.7634931, 108.8187082, 113.9659517, 119.2660233, 124.7925174, 130.6406496, 136.9423425, 143.8948148, 151.8216695, 161.326406, 173.7895514, 194.0396804, 216.3173937] model.d2_rhs_table = \ [5.960319592, 26.21044859, 38.673594, 48.17833053, 56.10518525, 63.05765754, 69.35935045, 75.20748263, 80.73397668, 86.03404828, 91.18129176, 96.2365069, 101.2534454, 106.2830673, 111.3772488, 116.5926673, 121.9956583, 127.669236, 133.7244875, 140.3210624, 147.7082627, 156.3195565, 167.0377517, 182.2426813, 216.3173937]
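# Not part of the original model file, but for illustration: since d1_rhs and
# d2_rhs are mutable, a single deterministic scenario could be evaluated by
# pointing them at one row of the tables above and solving. The solver name
# below is an assumption.
#
# from pyomo.opt import SolverFactory
#
# # pick scenario 0 from the tables stored on the model
# model.d1_rhs.value = model.d1_rhs_table[0]
# model.d2_rhs.value = model.d2_rhs_table[0]
#
# results = SolverFactory('glpk').solve(model)
# print(value(model.obj))  # objective value for this single scenario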
Geosemantic Network of Greater Boston Neighborhoods [oral presentation preferred]
Dmitry Zinoviev, Aleksandra Nenko

Geosemantic networks (GSNs) represent geographical, social, and anthropological aspects of compact spatialized communities, such as urban neighborhoods and metropolitan areas. They have become a crucial tool in defining the spatial borders of cultures or culturally uniform communities and studying their evolution. In general, a GSN is a weighted graph comprised of tag nodes (stems, words, or expressions). Two nodes are connected with an edge if they are somewhat similar, and the edge weight is a measure of the similarity. Each node belongs either to the geographical domain (locations) or the semantic domain (social and anthropological phenomena, e.g., topics discussed by communities). Unlike bipartite networks, GSNs do not inhibit connections between nodes that belong to the same domain. Thus, they have three subsets of edges: homogeneous (geographic and semantic) and heterogeneous (cross-domain). Ample, easily accessible data from major social networking websites make it possible to construct and analyze large geosemantic networks of the size of a metropolitan area. The goal of this study is to explore network neighborhoods and their interactions as defined by each of the edge subsets.

The study is based on the Instagram posts associated with the urban neighborhoods and suburbs of the Metropolitan Boston area. Our dataset consists of ~75,000 first comments (usually from the original posters) made over several weeks in 2019-2020. Sixty-six thousand of the comments have Instagram hashtags that we used to construct a geosemantic network. Two hashtags are connected if they are used together in a significant number of comments. For further study, we kept only those hashtags that appeared in the corpus at least 20 times for geographic tags and 75 times for the other tags. Finally, we eliminated all hashtags related directly to Boston as such (e.g., "#boston"), Instagram as a medium (e.g., "#igers"), and photography techniques (e.g., "#bwphoto").

Both homogeneous subgraphs of the GSN are small and have excellent network community structure. The geographic subgraph has 363 nodes, 3,742 edges, and 19 node clusters that match the traditional Metro Boston neighborhoods, such as East Boston, Dorchester, and the North Shore. The semantic (socio-anthropological) subgraph has 885 nodes, 20,761 edges, and 28 node clusters referring to such topics as real estate, lifestyle, food, alcohol, pets, and small businesses. We estimated the most likely socio-anthropological topics for each geographic neighborhood by cross-tabulating the number of posts that simultaneously refer to each neighborhood and each topic. The cross-tabulation matrix reveals strong preferences for each neighborhood. For example, the conglomerate of small towns to the west of Boston (such as Newton and Waltham) is strongly associated with hair salons and local shopping, while the North Shore (such as Salem and Lynn) is known on Instagram for its #foodies. Finally, we calculated VADER sentiment polarity scores for all posts from each neighborhood. The majority of the neighborhoods evoke moderately positive sentiments, with the notable exceptions of the overly positive Coolidge Corner and Newton, and the overly negative area around Massachusetts General Hospital. At the moment, we cannot find a relationship between preferred semantic topics and general sentiment levels.
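For readers who want to experiment with the construction step, an illustrative sketch (not the authors' code) of building a weighted hashtag co-occurrence graph with networkx, where the edge weight counts the comments in which two tags appear together:

import itertools
import networkx as nx

# toy input: one set of hashtags per Instagram comment
comments = [{"#salem", "#foodies"}, {"#salem", "#foodies", "#lynn"}, {"#newton"}]

G = nx.Graph()
for tags in comments:
    for a, b in itertools.combinations(sorted(tags), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)  # weight = co-occurrence count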
I bought Xojo 2020 in order to allow me to build for ARM if I choose. As an experiment, I restarted from scratch an app I have in mind for iOS - clean sheet, API 2. And I have to confess I am amazed how quickly I have been able to do what took me 8 months a few years back. It's not complete yet; I still find myself banging up against the lack of a 'selectcolor' dialog and of access to iCloud (because users HATE sharing files by dragging them into the shared documents folder). It's hard to find documentation about the diffs between 'Mobile this' and 'iOS that' - I expect this is a push towards shared components (like B4X controls) so that if Android is ever possible, we can share some code. (Oddly the list view is still an iOSListview, not a MobileListView, so I can envisage some issues there.) But overall, I am actually enjoying the coding process for iOS in Xojo. Didn't expect to be saying THAT at the end of 2020.

I updated an older iOS project to 2020 R2 last weekend and the only thing I really had to do was set Simple References to off and fully resolve the old Xojo framework. I'm really just kicking the can down the road a bit, but my immediate problem was solved and the client is happy (for now).

I stopped giving money to Xojo after API 2. However, with the advent of ARM Macs and the new Worker feature, I decided to bite the bullet, downgrade to Desktop from Pro, and buy into 2020r2. Close to 30,000 deprecations in my main project, but with some minor tweaks it just ran. Encouraged by that, I decided to clobber some of those deprecations. After a week and a half of long days, the total is down to under 1,000, all of which are Date vs DateTime. Most difficult was the new database stuff, and I'm still finding bugs, but overall it wasn't so bad. Sadly I can't compile for Apple Silicon since I use XojoScript extensively, but all in all, I am mildly satisfied.

Like prodman, I gave up on Xojo with the advent of API 2 and the drastic change in the way they handle customer interactions. Perhaps in a year or two, after I have switched to the new ARM technology, I will buy the base $99 package again. If I need Windows I can still use 2019r1.1… But never again will I buy any of the larger Xojo packages… They burned that bridge.

If you have and follow naming standards, it can be really easy to update to API 2.1 with RegEx.
Posted by Tom Allen

Upgrading to the newest version of OpenText Web Experience Management (WEM) is almost always a good idea. Beyond the new features offered, each new version includes bug fixes that may not be available in previous versions. The newest versions of WEM have made upgrades easier by allowing new features to be selectively enabled, as well as by productizing many common extensions. But what should you know before moving ahead with the upgrade?

In previous versions of WEM, implementing multilingual support was always an extension. Code had to be developed and a model had to be designed to support the translations. In 8.5, translations of content and channels are supported out of the box.

The old in-context menu has been replaced with the new preview dock. The new interface allows you to drag and drop content onto the screen and build pages intuitively.

OpenText Dynamic Portal Module (DPM) allowed content targeting in previous versions of WEM; this capability has now been expanded so that it no longer requires Portal and can instead use external or internal segment providers.

The URLs in previous versions of WEM were based on content and channel names. The display name of the content or channel would be the name that shows up in your URL; if two items had the same name in the same path, problems would arise. 8.5 introduced a new concept called "canonical URLs". These are based on new virtual fields that are automatically generated for each object in the system and guaranteed to be unique. The field can then be changed at any point afterwards, allowing URL names to differ from display names while avoiding conflicts. The old style of URLs can still be used and is configured on a site-by-site basis. To enable these virtual fields, the content type definitions for your content should be changed to show the virtual field widget.

WEM 8.5 adds support for Apache Solr, providing faster search results, easy extension, and open development standards. While you can still use Autonomy, OpenText Common Search, or a custom search provider in your site, searches inside the content workspaces can only use Solr. This means that 8.5 requires you to install and configure Apache Solr before upgrading, and you must re-index your content post-upgrade.

Translations in WEM 8.5 require the use of canonical URLs. Due to the way that WEM interprets which locale you are in, canonical URLs must be used so that the correct content can be displayed. Tools are included to convert your existing content to the new multilingual model. Using these tools, you can keep your existing content and convert it to the new localization model without manually modifying your entire content library.

The rich text editor included in older versions of WEM was Ephox EditLive, a Java-based editor. This has caused innumerable headaches for content authors and IT alike, due to the requirement of installing Java on every contributor's machine as well as keeping it up to date with the constant flow of security updates from Oracle.

There are two strategies you can take when performing an upgrade: in-place or parallel. In-place upgrades consist of upgrading the software on the same machine. Parallel upgrades consist of installing on a new machine and then migrating your content from the old system to the new system. In-place is the fastest method of upgrading, but it can be risky and requires extended downtime.
If anything goes wrong in the process of the upgrade, it can be hard to roll back to a working version, depending on the method of backup taken. If you are upgrading from a version prior to 8.1, this method also requires you to upgrade to 8.1 before upgrading to 8.5.

Parallel upgrades offer the safety net of a working system to revert back to at any point. This allows you to spend as much time as you like upgrading and testing the new system before making it live. As more companies switch to virtual machines for their server hosting, this is made easier, as you don't have the requirement of buying new physical hardware for the upgrade. Also aiding parallel upgrades is a new tool called OpenText Web Experience Management Motion. It allows you to pick and choose which content to migrate, apply transformations as the content is migrated, and create a repeatable process to keep content in sync between environments.

Upgrading WEM can be a challenging process, but it opens up useful functionality, a cleaner user interface, and plenty of bug fixes. Have questions about upgrading?
The user or administrator has not consented to use the application

I have followed the steps in this tutorial and am getting this error message when I call the WebApi App Service from the SPA App Service in Azure.

Error description: AADSTS65001: The user or administrator has not consented to use the application with ID {my id}. Send an interactive authorization request for this user and resource.

My SPA app has permission to access my Web API and all the correct keys are in there. Is there a missing step in this process?

Hi, I had the same problem. Also, I already edited the manifest file and changed 'oauth2AllowImplicitFlow' to true, but it is still not working.

@onmondo what did you do to resolve?

I would double check that everything in this step was saved correctly for the To Do SPA application in your Azure AD tenant: In the "Permissions to Other Applications" section, click "Add Application." Select "Other" in the "Show" dropdown, and click the upper check mark. Locate & click on the To Go API, and click the bottom check mark to add the application. Select "Access To Go API" from the "Delegated Permissions" dropdown, and save the configuration.

@isaac2004 try hitting "Grant Permission" for your app at https://portal.azure.com. Let me know if that fixes it.

Hitting "Grant Permission" in the AD App under "Required Permissions" indeed solved the problem. Incredibly unintuitive that the permissions are not changed on save. Thanks @danieldobalian

Hi, I am also getting the same error even after hitting "Grant Permission" in the Azure portal:

error_description: "AADSTS65001: The user or administrator has not consented to use the application with ID 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' named 'LSNTestAPP'. Send an interactive authorization request for this user and resource.\r\nTrace ID: ceea736e-5935-4501-b605-e90c83064000\r\nCorrelation ID: f9848b48-7684-4093-aee0-00759ae607cb\r\nTimestamp: 2017-12-21 14:01:51Z", "error_codes": [ 65001 ]

I solved it by following https://docs.microsoft.com/zh-cn/azure/active-directory/active-directory-users-assign-role-azure-portal and setting the 'directory role' to 'global administrator'.

Kindly let me know the process to resolve this error, I need it urgently.

@sharmajiamit1680 Log in as a tenant admin to https://portal.azure.com, open the registration for your app in the portal, go to Settings then Required Permissions, and press the Grant Permissions button. If you are not a tenant admin, you cannot give admin consent.

I am creating an application and adding permissions via Azure PowerShell commands. I have global admin credentials. Is there any way to grant the permissions via PowerShell without any user interaction?

@isaac2004 try hitting "Grant Permission" for your app at https://portal.azure.com. Let me know if that fixes it.

It didn't fix the same issue for me. Any other solution please? I have logged in as tenant admin (global admin) and then granted permissions for the registered app. I see access granted for all permissions. Still, when I call the API, the response that comes back is "error":"invalid_grant","error_description":"AADSTS65001: The user or administrator has not consented to use the application with ID 'some guid' named 'some-app-name'. Send an interactive authorization request for this user and resource." Not sure what the issue would be.

Did you consent to the Web API for all of your tenant, @praveenbattula? It seems that the current version requires the backend app to expose an API and add a scope. Refer to https://github.com/MicrosoftDocs/azure-docs/issues/35843

@jmprieur How to do this (consent)?
Currently I'm playing around with AAD and MSAL-ANGULAR. It worked for some time, but after adding a new app registration I am not asked for consent any more.

For everyone who also cannot find "Grant Permission": switch under "App registrations" to the old portal version using the "App registrations (Legacy)" button. Maybe there is another way - but I haven't found it.

It should be in the "API permissions" tab, above the permissions lists.

Go to your Azure Active Directory blade > App registrations > click on your application. Click on the link next to "Managed application in:". In the overview you will find permissions.

Check the "scope" section in your request body. Go to Azure and add the requested scope to the user.

I'm seeing this issue over and over again. Once solved, an application works. However, when I create a new AAD application I run into this issue again and again. The weird thing is that the user actually consents to the application, yet I see these errors in my backend with the code grant flow nonetheless. The user is not prompted again, but login fails with invalid grant. I am suspecting that the error log is actually misleading.

Hitting "Grant Permission" in the AD App under "Required Permissions" indeed solved the problem. Incredibly unintuitive that the permissions are not changed on save. Thanks @danieldobalian

"The option to Grant admin consent here in the Azure AD admin center is pre-consenting the permissions to the users in the tenant to simplify the exercise. This approach allows the console application to use the resource owner password credential grant, so the user isn't prompted to grant consent to the application, which simplifies the process of obtaining an OAuth access token. You could elect to implement alternative options such as the device code flow to utilize dynamic consent as another option." From Microsoft Learn
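For reference, granting admin consent can also be done non-interactively; a minimal sketch assuming the Azure CLI is installed, you are signed in as a tenant admin, and the IDs below are placeholders:

az login --tenant <tenant-id>
az ad app permission admin-consent --id <application-client-id>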
sql - replace argument not valid

I would like to replace the text of a column in a table. I tried:

select replace([article], '<p>&nbsp;</p>', '') from Articles

update Articles set article = replace(article, '<p>&nbsp;</p>', '') where article like '<p>&nbsp;</p>'

or

UPDATE [AJA].[dbo].[Articles] SET [article] = ' ' WHERE [article] = '<p>&nbsp;</p>' GO

and every time it comes out with the error: argument 1 not valid in replace. What's wrong with it? Thanks for your help

What is the actual SQL Server error message? I can't see that one in SELECT * FROM sys.messages WHERE text LIKE '%argument%not valid in%'

What is the datatype of article? And those statements all do different things - the first should show you the corrected data, but won't update it in the database; the second updates the article column, removing the string you specify, but only on rows where article equals exactly '<p>&nbsp;</p>' (you're missing the wildcard characters in your LIKE statement); the third one sets the article body to a single space where article matches '<p>&nbsp;</p>' exactly.

Is it safe to assume that article is of datatype text? If it is the text datatype, the full error message should tell you the problem, at least on 2008: Argument data type text is invalid for argument 1 of replace function.

Hi, the column article is of type text; the full error message (translated) is: the data type of argument 1 of the replace function is not valid.

I've checked out your problem, verifying with two datatypes:
ntext: while working with ntext, it throws the above error.
varchar(max): while working with varchar(max), it works perfectly.
So, use the varchar(max) datatype while working with HTML tags. If you cannot change your previous type, then cast the column as varchar in the query:
SELECT REPLACE(CAST([article] as VARCHAR(MAX)), '<p>&nbsp;</p>', '') FROM Articles

Good to know :), but I cannot change the design of the table, thanks

Go with the updated answer: cast the column type to varchar while running the query.

You're getting this error because you have the text datatype. With the varchar datatype your query works fine. You need to cast your field from text to varchar in order to use the replace function.

Declare @mytable table ( Article text );
INSERT into @mytable VALUES('<p>&nbsp;</p>');
INSERT into @mytable VALUES('<p>&nbsp;</p>');
INSERT into @mytable VALUES('<p>&nbsp;</p>');
INSERT into @mytable VALUES('<b>&nbsp;</b>');
select replace(cast([article] as VARCHAR(8000)),'<p>&nbsp;</p>','') from @mytable where Article LIKE '<p>&nbsp;</p>'

Of course, this solution will cut off text longer than 8000 chars.

Try this one:
UPDATE Articles SET article = REPLACE(article, '<p>&nbsp;</p>', '')
Work on the other replaces likewise.

This will update all rows, even those where the text doesn't exist. Read @Bridge's comment.

Hi coosal, it doesn't work for me, same error: invalid argument 1
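Putting the answers together, a sketch of an UPDATE that works directly against the text column (assuming SQL Server 2005 or later, and that article values fit in varchar(max)):

-- cast text -> varchar(max) for REPLACE, then back to text for storage
UPDATE [AJA].[dbo].[Articles]
SET [article] = CAST(REPLACE(CAST([article] AS varchar(max)),
                             '<p>&nbsp;</p>', '') AS text)
WHERE [article] LIKE '%<p>&nbsp;</p>%';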
With the introduction of the Experience Database (xDB) in Sitecore 7.5, MongoDB hosts the primary repository of web activity across Sitecore-backed websites. Web visitors, now known as contacts, are captured along with each page view (interactions in xDB) generated in a given browsing session. Much like its predecessor, DMS, the new xDB separates web activity by site. When building upon xDB in a multi-site implementation, being aware of how Sitecore captures and processes this information is essential for a successful multisite configuration.

A Quick Look at Sitecore 7.5 with xDB

Contacts are identified just as they were in Sitecore DMS. A cookie, SC_ANALYTICS_GLOBAL_COOKIE, is created with a Guid uniquely identifying the contact. From there, a contact record is created. This contact will be referenced for the lifetime of the cookie as interactions are recorded against the contact. Site activity is captured as documents within interactions. High-level data points of interactions include:
- Site name
- Pages viewed, with URL and item Guid
- Visit page count total
- Browser type
- Screen resolution
- Geolocation data

While the structure of the data differs from DMS, as we're now storing data as documents, commonality exists between the data points captured in DMS and the new xDB structure. The main takeaway with xDB is Sitecore's ability to find matching contacts and merge them, given a predefined value uniquely identifying visitors. In previous versions, visitors identified by their global session cookie were maintained as unique visitor records within DMS. Upon processing of analytics data during the SessionEnd event in xDB, contacts are merged, creating a single consolidated view of the customer.

Contact merging is useful in two specific areas:
- Multiple browsing sessions for a single site - such as sharing a shopping cart between a session on a PC and transferring that session to a mobile device. For more information on this approach, see Nick Wesselman's series of in-depth Sitecore 7.5 posts.
- Multisite Sitecore implementations - two sites sharing the same membership model, both uniquely identifying contacts in the same way.

A Multisite Example

Suppose we have a multisite implementation, Launch Sitecore and Launch SkyNet (our fictitious Launch Sitecore evil twin). Both sites follow Sitecore's best-practice recommendations for configuration in IIS, while also sharing the MongoDB, collection, and reporting databases. For the purpose of this example, while the two sites share membership, single sign-on is not implemented, requiring the user to identify themselves on both sites. Such a setup will show how the xDB implementation handles contact merging and the importance of a common contact identification strategy shared across all sites.

Browsing Session #1: Launch Sitecore

Browsing the site for the first time results in the creation of a global analytics cookie. If you're familiar with DMS, this works in the same way as in previous versions. The cookie is what xDB will use to tie contacts together for unique browsing sessions. While browsing Launch Sitecore, suppose we log in, recognizing the current browsing session as a single customer within our membership model. At the point at which the user is identified, the contact, who previously was anonymous, is now labelled using the unique identifier. In this example, we're using the username from the extranet domain. Notice how the previous page views (xDB interactions) are now tagged with the contact id of the logged-in user.
The code with which Launch Sitecore programmatically identifies the contact is shown in the sketch below; the line we're most interested in is the Identify call on the current tracking session.

Browsing Session #2: Launch SkyNet

Upon browsing Launch SkyNet, we have a completely different global session Guid in our cookie. To Launch SkyNet, we're anonymous and in no way connected to the user identified in Launch Sitecore. As soon as we log in on Launch SkyNet, using the same logic to uniquely identify the contact (extranet domain username), Sitecore will flush the contact to xDB, updating the interactions with the contact id of the recognized user. Any updates to facets, tags, counters, automation states, and contact attributes will be auto-merged within the MergeContacts pipeline processors.

Key takeaway: contact consolidation occurs at the point at which the current tracking session is identified via the Identify call on the tracker session. Regardless of how many sites you have running through a single instance of Sitecore, xDB processing and contact merging will consolidate contacts while maintaining the page interactions of each site. It is through this process that we're able to maintain a single view of the customer and their lifetime value as seen by the Sitecore Experience Database. Splitting separate Sitecore instances across separate instances of xDB processing will result in only a partial view of the customer and their relative engagement value for each site instance.
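The original code screenshot did not survive, so here is a reconstruction of the identification call as a sketch; the names follow the Sitecore 7.5 analytics API, and the login handler itself is hypothetical:

using Sitecore.Analytics;

// Hypothetical post-login handler: once membership validates the user,
// tie the current anonymous tracking session to a stable identifier.
public void OnLoggedIn(string domainUserName)
{
    if (Tracker.Current != null && Tracker.Current.Session != null)
    {
        // e.g. "extranet\\jdoe" -- the same identifier must be used on
        // every site so that xDB can merge the contacts at session end.
        Tracker.Current.Session.Identify(domainUserName);
    }
}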
When a tester joins an agile team, the tester should inspire the team by enabling it [Laing 2016]. In this implementation, the tester acts as a test coach within the team, in the same way as [Moustier 2019b]. To help with this, a good knowledge of testability and Lean practices is essential; but before achieving excellence, it is probably necessary to let the team acquire the prerequisites that are essential.

To achieve this, [TMMi 2020] provides a scale of progression inherited from a maturity system close to CMMi [CMMi 2010]. It is presented in 5 levels, with, for each level, a gradation of practices from level 1 (the organization is capable of producing something, but with a random level of quality) to level 5, where processes are defined, managed and measured, and constant optimization based on the measurements makes it possible to move towards excellence.

In an equivalent vein, [Rohen-Harel 2010] proposes ATMM (Agile Testing Maturity Model), with a gradation oriented towards the agilization of the different aspects of testing, on 5 levels. As with CMMi, with its generic practices used to institutionalize each practice, ATMM also provides a path to maturity [Rayaprolu 2020]. This kind of system dates back to the 1970s [Mettler 2011] and tries to match the classical learning curve of [Thurstone 1919] by anticipating the needs of the next phase [Gibson 1974], but these systems have some drawbacks.

Moreover, agile culture tends to demonize anything pushed from the top of the organization; one can recall Henrik Kniberg (the 'creator' of the Spotify model) criticizing SAFe as a rehash of an old top-down model named RUP. Although he recognizes the wisdom contained in it [Kniberg 2015], the company's base (the employees) rejects the very idea from the start, which reduces the chances of success. Also, the model regularly cited is rather the Japanese term "Shuhari - 守破離", into which ATMM tries to fit.

Another approach is to provide a catalogue of good practices from which the team can draw. The table proposed by [Rohen-Harel 2010] provides an evolution of each aspect of testing by describing the effects; but since it is difficult to design a catalogue that would contain one half of a practice, each practice in such a catalogue can be characterized as per [Jacobson 2007]. Such a catalogue, however, does not dispense with the notion of interaction between individuals, as proposed in the agile manifesto.

Experimentation generates guides that form frameworks from which principles emerge; this is how Craig Larman developed LeSS [Larman 2017]. In turn, the principles are a foundation for the culture and for practices that tools can implement. As we have seen, a maturity system is a tool to achieve progress in a given direction. The few practices that follow are also complemented by principles that must be observed.

The description that can be found behind the QR code is a generic description, close to a state of the art, with ties to testing aspects and to test automation with Agilitest. The description of each practice as per [Jacobson 2007] should be done within your organization, to adapt it to your culture. The progression of a team is a delicate matter.
Beyond the practices and principles that might be relevant to an organization, two factors compete at the level of each individual, who is subject to two competing models [Argyris 2010] [Moustier 2020]. For example, someone may try to promote pair programming, which has real added value and profitability proven by studies [Gilb 1993] [North 2006] [Shull 2002]; yet the view of a hierarchy that does not understand it could be detrimental to a manager subject to this conflict of reasoning.

Argyris explains this conflict of positions as a form of addiction, where Model I feeds on what it generates, and the first steps out of this rut are to:
1. Work on the traps that appear in everyday life (notably through testimonies)
2. Use education on the problems generated by Model I practices as a way of combating addictive behavior (e.g. the 12 steps of Alcoholics Anonymous)
3. Gradually replace Model I reflexes with Model II reflexes
4. Build a consensus
5. Take action

To do this, the proposed card game can be used for the different stages and to provoke discussion around each of the themes. For steps 4 and 5, this can be done in a workshop. Inapplicable practices or principles (column D) may have several causes.

In addition to the education that may emerge from communities of practice and coffee discussions, Henri Lipmanowicz and Keith McCandless propose a workshop called "15% Solutions". It is part of Liberating Structures and is designed to be scaled up [Lipmanowicz 2014]. Its aim is for everyone to contribute to a general problem, and it can be combined with other LS workshops.

In 1994, James Bach had already pointed out the inadequacy of maturity models such as CMMi [Bach 1994], because they are not built on any theoretical foundation other than the views of experts and the experiences of companies other than yours. In contrast, a simple catalogue of correctly formulated practices should help each individual, then each team, and eventually the organization to find its own way by applying the principle of subsidiarity [Appelo 2010] [Moustier 2020]: "The responsibility for action remains with the individuals until they can no longer perform the task effectively"
5 Best WordPress Plugins That Are Absolutely Essential

Plugins extend WordPress functionality. You have to be a bit of a programmer to add a form to a website without plugins. You have to be a bit of a sysadmin to configure caching. Unless you are, just install a plugin and play with the settings in a user-friendly interface.

Before creating a website using WordPress and installing plugins, you have to install WP on a hosting server. Vepp will help you with that. Vepp is a service for website and server management. You just choose a template and that is it: Vepp will upload WordPress to a hosting server and take care of the technical details.

Every website needs the 5 capabilities we have chosen: page builder, SEO optimisation, analytics, caching, and contact forms. Here is our selection of popular extensions.

1. Best Page Builder Plugin WordPress — Elementor

The default WordPress page editor allows you to add only text and images. Default links, bullets, bold or italic text - that's what we can do there. Not much, right? The free WordPress page builder Elementor allows you to modify standard pages the way you want. Add buttons, sliders, comments, and more. Build the pages from various blocks yourself. Some page builders force users to edit the code; otherwise you won't be able to make much difference. Elementor is different: this tool is even able to process dynamic content. For example, adding the author information to every blog post is not a problem at all, and the styles will remain the same everywhere.

The premium version, Elementor Pro, is $49 a year and allows you to build pages for one whole website, three websites for $99 a year, or a thousand for $199 a year. The premium version also has some extra features and widgets. For instance, a "global widget" is an element which you only set up once, and then add in one click; if you change the settings, they apply everywhere. To get started you may use the knowledge base, video tutorials and community support.

2. Best SEO Plugin WordPress — Yoast SEO

The free Yoast SEO plugin will help with the latest search engine requirements and your website promotion. With this plugin, it is easy to create an XML sitemap, edit titles, meta tags and breadcrumbs, set up internal linking, embed schema.org markup, and more. Yoast SEO will also give you hints on each SEO parameter, which is perfect for beginners.

The premium version, for $90 a year, suggests corresponding key phrases and internal links, and sets up redirects from deleted pages and modified URLs. Technical support is available via email. There is a knowledge base on the extension's website. You may also report issues on the Yoast SEO GitHub.

3. Best Google Analytics WordPress Plugin — GA Dashboard for WP

This plugin helps integrate Google Analytics into your website. You can monitor metrics right from the WordPress admin panel. Sign in to your Analytics account, install the plugin, and the Analytics section will appear. You can monitor the number of users, channels and traffic sources in real-time mode, and view standard and detailed reports. The whole plugin's functionality is free, with an English interface and documentation. You may report issues on the plugin's GitHub.

4. Best WordPress Cache Plugin — WP Super Cache

Caching helps speed up page loading and reduce server load. WP Super Cache creates copies of pages (static HTML or PHP files) and saves them in its cache. When a user requests a page, WordPress doesn't create it from scratch.
It sends the browser a previously saved HTML copy, or quickly assembles the page from PHP files. WP Super Cache is a free and easy-to-understand cache plugin. It compresses the website pages and clears the cache of outdated files. It has detailed documentation in English and community forum support.

5. Best WordPress Form Plugin — Contact Form 7

To make it easy for visitors to contact you, and for you to answer them, there are feedback forms. The WordPress plugin Contact Form 7 allows you to create a feedback form quickly, add fields to it, and modify its appearance. There is an option to add a notification for those who send a message - "It's all right, we got your message!". You can also set up email templates: you will get an email with a message about a typo on your website or a callback request. There are some more settings for advanced users, such as enhancing the functionality with JS code - for example, adding Google Analytics events. Contact Form 7 is a free plugin. It has documentation and a support forum.

+ The Plugin of the Plugins — Jetpack

If you want it all, and now, download Jetpack. This plugin alone replaces 30 different WordPress plugins. Jetpack's basic options are protection and optimization of your website, but there is more: web form options; social media, publications, comments, widgets, and image instruments. You may set up subscriptions, links, a mobile theme, and much more. Most Jetpack options are free, but there are 3 paid tiers as well. You get priority tech support for $33 a year. There are automatic website checks, malicious code cleanups, Google Analytics integration, and more for $105 a year. For $286 a year, you get premium themes, automatic backups, and Elasticsearch integration. You may read the Jetpack blog for news and tutorials. There is a forum and technical support for everyone; however, paid version owners have priority. Jetpack is an excellent plugin, but a heavy one. It suits beginners who want to try many functions while avoiding plugin conflicts.

+ Plugin for Online Stores — WooCommerce

Powerful yet free. There is documentation, tech support for everyone, a forum, and GitHub.

There are many WordPress plugins created by many developers. Sometimes conflicts between plugins, with your theme, or with your WordPress version occur and cause website errors. The plugins mentioned in our article weren't reported to conflict with each other and are generally considered safe choices.
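As a convenience, most of the plugins above can also be installed from the command line; a sketch assuming WP-CLI is available on the server, with the usual wordpress.org slugs (the slugs are assumptions):

wp plugin install elementor wordpress-seo wp-super-cache contact-form-7 --activate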
MVVM Survival Guide for Enterprise Architectures in Silverlight and WPF

I just finished reading this book and I thought I would do a review while it is still fresh in my head. I believe the book has good content and is a great reference for building application architectures using Silverlight or WPF.

Chapter 1 takes us through the various presentation patterns. We are first introduced to a brute-force monolithic approach, and then look at MVC and MVP while evaluating any issues or concerns. Its primary focus is to demonstrate the power of these patterns and enforce separation of concerns (SoC).

Chapter 2 introduces us to MVVM and the benefits it offers due to Silverlight and WPF's rich data binding. The reader is presented with strategies for sharing code between both Silverlight and WPF. I was surprised that there is no mention of Prism or Caliburn.Micro or any other framework besides MVVM Light. I think that MVVM Light is a great framework, but I don't agree that Microsoft leaves you without any direction, as was claimed in the book, when they have excellent guidance using Prism.

Chapter 3 uses the Northwind database as the primary data storage throughout the book. It uses Entity Framework and also introduces unit tests for testing. We are exposed to a new pattern, the Service Locator pattern, and how it interacts with the View and ViewModel.

Chapter 4 goes a step further with our Northwind database and discusses services and persistence ignorance. It basically follows the best practices established by Martin Fowler with regard to using a service layer to remove tight coupling. Transaction Script, Domain-Driven Design, and Service-Oriented Architecture are presented in this chapter as well. We then look at persistence ignorance and custom models, and the pros and cons of each.

Chapter 5 is all about commands and user input dealing with the Northwind database. It does a good job of demonstrating the commanding API supported in both Silverlight and WPF. It also deals with InputBindings such as KeyBinding and MouseBinding. We then discuss attached behaviors. I did not see any true behaviors or actions that are also supported by both Silverlight and WPF, nor did I see any examples of event aggregation, which is very important when trying to communicate across projects or classes.

Chapter 6 handles hierarchical view models and inversion of control. We are presented with master/detail scenarios and also take a look at the ObservableCollection<> object. With IoC, we are presented again with the Service Locator pattern. Next we are presented with StructureMap. I am again surprised that we are not presented with Unity or MEF, as these two come straight from Microsoft. I would also like to have seen others like Ninject, but we are at least presented with IoC best practices. I do like that they introduce the Single Responsibility Principle (SRP) and give the reader a good idea of how it works and how to use it.

Chapter 7 is all about dialogs and MVVM. Dialogs can be a tricky part of an application architecture, in that it is easy to break all the hard work we established by using MVVM. We are presented three options for using dialogs: dialog services, mediators, and attached behaviors. With dialog services, we are presented with data templates and the flexibility they provide, as well as some common conventions that give us one of my favorite patterns, convention over configuration. With mediators, we are finally introduced to event aggregation and messaging.
Finally, with attached behaviors we see a way to control dialogs via XAML.

Chapter 8 is a nice surprise in that it discusses Windows Workflow and building MVVM applications. We discuss several patterns in this chapter, Unit of Work and Memento. Next we are shown how these patterns are supported and implemented using WF, and how they support handling application flow.

Chapter 9 is all about validation and error handling. IDataErrorInfo, INotifyDataErrorInfo, and ValidationSummary are a few of the objects presented in this chapter. We also discuss the Enterprise Library Validation Application Block and how it interacts with both WPF and Silverlight.

Chapter 10 talks about using non-MVVM third-party controls. It deals with strategies for incorporating controls such as the WebBrowser control while still achieving rich data binding using MVVM with these controls. We look again at the power of attached behaviors, and at a technique such as exposing a binding reflector to help with our data binding. Another technique is to use the adapter pattern and leverage inheritance to expose and surface the functionality we want in order to support MVVM as well.

Chapter 11 focuses on MVVM and application performance. For WPF applications, we look at asynchronous bindings. Another area of performance concern with controls is the concept of virtualization and paging. Not a lot is addressed here, but at least the concepts are presented. The BackgroundWorker object is presented to facilitate work being done on a separate thread.

Appendix A claims to evaluate other MVVM libraries, but I only see a list and not much discussion as to the power of, or differences among, them.

Appendix B is titled "Bindings at a Glance", but I really only see a glossary of terms and no really good examples of bindings or real-world examples.

All in all, this seems to be a good book. It doesn't answer all the questions, but it does present you with enough information to give you a good direction as to where you need to go when building out your own architectures.
You might have found that even right after some simple parameter tuning on random forest, we achieved a cross-validation accuracy only a little better than the original logistic regression model. This exercise gives us some very interesting and unique learnings.

Just about every course addresses this, of course, but I'll do it a little differently: I'll use Python to help students hone their vector skills. But first, a quick intro to the topic.

Because tuples are immutable and may not change, they are faster to process compared with lists. Hence, if your list is unlikely to change, you should use tuples rather than lists.

While offering choice in coding methodology, the Python philosophy rejects exuberant syntax (such as that of Perl) in favor of a simpler, less-cluttered grammar. As Alex Martelli put it: "To describe something as 'clever' is not considered a compliment in the Python culture."

Before we deep-dive into problem solving, let's take a step back and understand the fundamentals of Python. As we know, data structures, iteration and conditional constructs form the crux of any language. Loops and iteration complete our four basic programming patterns: loops are the way we tell Python to do something over and over, and the way we build programs that stay with a problem until the problem is solved.

If you try to write all your code from scratch, it's going to be a nightmare and you won't stay on Python for more than two days! But let's not worry about that. Thankfully, there are many libraries with predefined functions which we can directly import into our code to make our life easy.

In parting, I would be remiss not to mention a great resource on all aspects of the open-source project.

(Course review, posted 2 months ago by Anonymous, who partly completed the course:) The course in and of itself isn't _terrible_, but expect to do a lot of searching for outside help on Stack Overflow and the like, as the lectures do not offer anywhere near enough material to solve the problems. This is basically to be expected these days, but the lectures are not really sufficient to cover the material. I personally found it more worthwhile to simply skip the lectures, as they were rather long and did not offer all the necessary information anyway.

Tip: even if you download a ready-made binary for your platform, it makes sense to also download the source.

About this course: This course aims to teach everyone the fundamentals of programming computers using Python. We cover the basics of how one constructs a program from a series of simple instructions in Python. The course has no prerequisites and avoids all but the simplest mathematics.

The "arrow" is another built-in object in VPython. There are really just three important properties of the arrow object.

Thus we see some differences in the median loan amount for each group, and this can be used to impute the missing values. But first, we have to make sure that neither the Self_Employed nor the Education variable has missing values itself.

Like print(), you can write your own custom function, also called a user-defined function. It helps you automate repetitive tasks and call reusable code in an easier way.
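A minimal sketch of the group-wise median imputation mentioned above, using pandas; the column names are illustrative, not from any specific dataset:

import pandas as pd

df = pd.DataFrame({
    "Self_Employed": ["No", "Yes", "No", "Yes", "No"],
    "LoanAmount":    [100, 250, None, 300, 120],
})

# fill missing loan amounts with the median of their Self_Employed group
medians = df.groupby("Self_Employed")["LoanAmount"].transform("median")
df["LoanAmount"] = df["LoanAmount"].fillna(medians)
print(df)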
Wow, has this really been around for 6 years? One would think the vendor's website would be a little more up to date by now. English is lacking in the docs section, and there is no description of hardware requirements other than NIC and RAID. What kind of box does it run on, i.e. processor speed, memory, hard drive space? Is VPN supported in the free version?

Best system on board! Very good client software!

Re: Squid fix?
Not sure if it is connected, but I know that Checkpoint Firewall had problems with the mix of persistent connections + Squid + transparent proxying (Service Pack 7 for Firewall-1 4.0); it was mentioned on the Squid mailing list.

I'm curious about the changelog comment for the release on 04/16/2002 that refers to Squid enhancements to make it work better with sites like eBay. I wasn't aware Squid had any problems working with sites like eBay, but if it does, the Squid developers would certainly like to hear about them. Would you mind expanding on that comment a little?

Re: License response.
> I would like to know how to go about receiving the license for the free
Just send the Securepoint program ID to Securepoint: you get a free registration key back!

License response.
I would like to know how to go about receiving the license for the free version. I will be using this for my home network. I've tried to make contact via e-mail but have not yet received a reply. It has roughly been a week since my first e-mail. Please respond; I would love to use your software. I've also contacted you about non-profit use. Please check your e-mails. Is there a direct e-mail where I can get an answer within a reasonable time frame?

Re: cost / there are no costs
> The Freshmeat post says it's a "firewall and VPN server", but the VPN
> features are only available in the non-free version.
That's not true. You have to read it carefully. For non-commercial use it is free!

Re: cost / there are no costs
> > About $900? Come on.
> Hello IO,
> You have to look at Securepoint Small Business on the website.
> There are no costs. The distribution is free, too. You looked at
> Securepoint Professional. The freshmeat listing does not refer to
> the professional version!
> Regards, Lutz
The Freshmeat post says it's a "firewall and VPN server", but the VPN features are only available in the non-free version. So the point still applies. Fix the entry so that it's not so misleading, please.

> About $900? Come on.
It might be a reasonable amount of money for a company, if it does what it is supposed to do. I have not tried the software, so I have no idea how it works, but if it worked well and saved me from spending a day on it, I would pay for it.

You have to look at Securepoint Small Business on the website. There are no costs. The distribution is free, too. You looked at Securepoint Professional. The freshmeat listing does not refer to the professional version!
I would want either the 2nd or 3rd for you to breed. But I have a feeling that the 3rd one is better. Even if the 2nd betta's tail fin is more accurate for a 180-degree angle, I just have the feeling. Since you're a better betta fishkeeper than me, you should have a feeling of which one you think is best. You decide. Good luck with choosing!!!

Thanks. I have virtually no experience in genetics, so I thought I would ask which one has better form first. I already own the third male as my personal betta pet. He is very aggressive, protective of his bubble nests, a very good bubblenester, and has a great personality. Oh, and the 2nd male is an over-halfmoon, I believe. His fins exceed that D shape.

Male number 2 has the best form/fins out of the 3. Male number 3 looks really nice too but doesn't have good enough branching or sharp caudal edges IMO. I wouldn't really consider male number 1 for a breeding program...

My thoughts were the same on male 1. I really look forward to breeding. If I do get male 2 I'm definitely going to breed him; the person said that they will give male 2 to me, and a sibling female, as long as I provide them with a pair from the spawn. If I do breed male 3, then should I try breeding him with a female that has attributes that will improve the rays, or just not bother with it?

If you were to breed male #3, look for a DT female or one that's DT geno. That will give you better dorsals. Make sure the female comes from a good HM family. A cross like that would give you HMs with better dorsals and with good branching.

I didn't notice the dorsal but I definitely noticed the anal. I'm not quite experienced in breeding, so I didn't exactly know if there was something wrong with breeding him. I don't have any pictures of the females, but for the one you are recommending that I breed, the seller has a sibling female.

The first male, though it's kind of hard to tell because it isn't a straight-on flare shot, has several issues. Tried to highlight them below. The dorsal is a bit long, and has either damage or random extended rays. It should be a bit more forward-facing and broader, too. The caudal has some damage to it, and the anal looks a bit too long, and should also be more forward-facing.

The second male: he's got nice vibrant coloring, but has some issues as well. The branching on the caudal and dorsal is a little bent, could definitely be straighter. The caudal peduncle, where the body meets the caudal, should also be a bit thicker. He's also got some bad scaling towards the front of his body. The anal is a tad long, and the blue pattern isn't continued on the anal fin like the other two fins. Ideally the pattern should be consistent on all three fins.

The third male I believe is the best overall, although his caudal is a bit off balance, the dorsal is just a touch too short, and the anal is a touch too long. Definitely the best out of the 4, though.

The fourth male, the mustard gas, is my least favorite. His anal is very, very long, and his dorsal is short. His ventral fins should be longer, and there are a few branches on the caudal that are bent/wavy. He also has a mild spoon-head. So, my choice would be the blue. Hope I helped. =)

The third boy's anal has the perfect shape in my opinion... I guess he would make a good breeding project if paired to a nicely branched DT geno or DT female... I really like that the 2nd male has better branching and better balance overall, though. I would breed the second male.
Yes he has issues - otherwise the breeder wouldn't sell him :lol: Notice the front ray of his dorsal - straight and firm. This trait is hard to achieve in a few generations and is hard to maintain if not bred correctly. Another trait I like about him is his wide ventral, something not all long fins have. His faults may be genetic but IMO can easily be bred out. Pair him to a 4-ray female with a symmetrical body, preferably with a pointed-edge caudal. She doesn't have to be HM; super DeT should be OK - thus avoid roses. Look for a dorsal that rather stands .... this should be close to impossible. Most important, she must have balanced fins. It would be easier to breed an F1 female back to the father. Make at least 2 batches to secure his traits in F2. Then inbreed F2. You should have something very close to daddy.
Can a broker make it appear they have executed a 'buy to close' trade when in actual fact they simply have taken your position? If you wrote a call option, and the underlying spiked up creating a margin call which was not filled, can your broker make it appear that they executed a 'buy to cover' to close your position but in actual fact just take on your short position? I am led to believe the latter because the open interest on this option was only 1 when the call option was written, and remains at 1 after the supposed 'buy to cover' trade was executed. The option has since gone back well into the money and I am left with a massive hit to the cash balance that was being held in the account.

> If you wrote a call option, and the underlying spiked up creating a margin call - which was not filled, can your broker make it appear that they executed a 'buy to cover' to close your position but in actual fact just take on your short position?

What does "which was not filled" mean? Are you saying that you did not meet the margin call by adding additional funds to your account? If you violate the minimum margin maintenance requirement and you get a margin call, most brokers automatically close the position. That's the corrective action to fix the margin violation, and that's the only issue here. The massive hit to the cash balance in your account is because you messed up via a bad trade. When brokers buy to cover to close your position, they are doing just that. If by some stretch of the imagination the broker took on your position, they would become short the call at the current market price. So if you're implying some sort of conspiracy that benefits the broker, there is none.

> I am led to believe the latter because the open interest on this option was only 1 when the call option was written, and remains at 1 after the supposed 'buy to cover' trade was executed.

Open interest is a lagging indicator. The correct amount is tallied at the end of the day. Furthermore, when options are bought and sold, there are 4 possibilities that affect open interest. If the other side of your trade was Sell To Close, then open interest would decline by one. If the other side of your trade was Sell To Open, then open interest would be unchanged, because the short call has changed hands and is now in someone else's account.

Thank you Bob. Yes, when I said "was not filled" I meant that I did not meet the margin call in time (I was actually on a flight when this call was initiated and accelerated to the point that my position was closed - all in a matter of 1 trading session). I don't know that I would call it a conspiracy, but I certainly feel hard done by now that the position is well in the money. Although, your explanation of the other side being a 'Sell to Open' is likely what may have happened.

@Mash - For future reference, if you're going to chase fat option premium by selling naked options, consider spreads instead so that you have a protective leg in place. It won't prevent losses but it will limit them sharply. IMO, the only exception to that would be the investor who wants to own the stock at a lower price. Otherwise, Spread Em Danno! :->)
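To make the open-interest bookkeeping in the answer concrete, here is a small illustrative sketch (not from the original thread; the function is hypothetical, but the four order-type pairings are the standard ones):

```python
# How a single option trade changes open interest (OI), depending on
# whether each side is opening or closing a position.
def open_interest_change(buyer_action, seller_action):
    """buyer_action / seller_action: 'open' or 'close'."""
    if buyer_action == "open" and seller_action == "open":
        return +1   # a new contract is created
    if buyer_action == "close" and seller_action == "close":
        return -1   # an existing contract is extinguished
    return 0        # the contract changes hands; OI is unchanged

# The asker's forced liquidation was a Buy To Close. If the other side
# was Sell To Open, OI stays at 1, consistent with what was observed.
oi = 1
oi += open_interest_change(buyer_action="close", seller_action="open")
print(oi)  # 1

# If the other side had been Sell To Close, OI would have dropped to 0.
print(1 + open_interest_change("close", "close"))  # 0
```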
But, luckily, this is where Bootstrap comes into the picture. This dashboard just makes sense. Start Bootstrap provides support for issues through their help page. The template has many features like widgets, charts, buttons, forms, tables, cards, icons and 2 pre-built pages. It has an amazing admin dashboard and powerful Bootstrap components. Bootstrap 4 Admin Dashboard: Bootstrap Dashboard is my first free Bootstrap 4 template. The only suggestion is that if the markup could be split across multiple files, it would be easy to pick or drop components. It has the basic Bootstrap components and is easily customizable. Want to create a stunning website of your own? It is light and fresh to the senses. The footer contains social media links like Facebook, Twitter, and Instagram. The design is highly customizable, too. The colors used are bold and striking. The components are beautifully designed and very elegant, with light backgrounds and contrasting, eye-catching colors. Why would you choose to use Bootstrap at all? The theme has a simple testimonials section to make it more credible and authentic. Landing Page: a simple, elegant, and beautifully responsive landing page theme for Bootstrap 4 websites. The Chameleon Bootstrap admin template comes with a starter kit which will help developers get started quickly. Free Agency Bootstrap 4 Theme: this stylish, one-page Bootstrap theme is perfect for agencies and small businesses. And perhaps more importantly, your Bootstrap build will look consistent across all screen resolutions and platforms. It has exceptional typography. The popularity of the framework contributes to the popularity of Bootstrap admin templates. This one has a simple and minimalistic look. It utilizes all of the Bootstrap components in its design and re-styles many commonly used plugins to create a consistent design that can be used as a user interface for backend applications. Visit our website to get to know more about our products. The one-page theme is built with Bootstrap 4 and has a responsive layout. Content sections are also responsive, with images and text, along with the whole theme. A framework for building responsive layouts. Business Casual Bootstrap 4 Theme: the Bell theme takes on multiple purposes for a theme. Star Admin is an easily customizable template with clean and well-commented code. Bell Bootstrap 4 Theme: Bell is a single-page Bootstrap 4 theme. Xtreme Admin Template is based on a modular design, which allows it to be easily customised and built upon. Buttons, forms, tables, charts, icons, sample pages, etc. Even in the default, unchanged layout, from a data analyst's perspective this is a brilliant addition to my everyday workload. Some other features of this theme include a modular design dashboard, seven page templates, unique color options, and a basic data table. Thank you for understanding and respecting the license conditions. However, you cannot redistribute the template or its derivatives, neither for free nor commercially. There are a lot of components included in the user interface, including alerts, buttons, cards, carousel, collapse, icons, modals, tooltips, and popovers. The content sections can be styled with various text stylings. Moreover, it has a simple look with a sophisticated design. Reusable classes to style our content, and many other things as well. The content sections of this theme are also responsive. The clean and modern design complements the beauty of the photos perfectly.
It is specially designed for creatives, small businesses, and other sites. I am currently working on it when I have some time. The theme relies on its clean, minimal, yet professional design. If it's not covered in the documentation, then it is a default Bootstrap 4 feature. For smaller screens, the theme features an off-canvas navigation. It is a multi-purpose template that comes with a clean and modern design, essential sections, and feature-rich elements to launch a complete Bootstrap 4 based site in minutes. Coming Soon Bootstrap 4 Theme: have you ever thought of a template that can act as a landing page while your site is being repaired? This theme does just that. The template has a minimalist design with more emphasis on the functionalities. It is an ideal template to start building admin panels, e-commerce systems, project management systems, etc. The template is built using the Material Design framework from Google, Material Components for the Web. But with data, simple is more stable and makes it easier to tell a story. I had to manually copy-paste portions to adapt it and build the common layout for my Ruby on Rails application. The theme can be styled in any way that you want.
Webpack is known as a bit of a bear. Yet it's used in a large percentage of frontend projects. There is a lot to Webpack, and I won't go into all of it today, but I did want to talk about certain aspects.

I often say that one of the most important pieces of information when debugging your project is knowing what technology is responsible for the error you're seeing. It helps you google more effectively, helps you narrow down what changes might be causing the issue, etc. Thanks to leaky abstractions, understanding whether an issue is Webpack or Node.js is not as obvious as one might think. So let's talk about it!

npm is a package manager. And npm listens to a package.json file to determine what dependencies and versions to install. The result of running npm install lives in your node_modules directory. Insert joke about the size of that directory here. If you've gotten your package name wrong when listing it in package.json, or tried to reference a version that doesn't exist, npm will yell at you when you try to install dependencies. But as long as those things exist, and npm can install them, it doesn't care.

This is where Webpack comes in. Lots of modern tools abstract Webpack configuration away from you. But the goal of Webpack is to bundle resources so a browser can use them. The result is that your dependencies exist as static assets that your code can reference.

Ever seen code like this before?

const React = require('react')

Well, this is where things get a bit confusing. Node.js follows CommonJS conventions and includes require as a built-in function. Webpack supports a number of different specs, including CommonJS. So require is also valid Webpack syntax. However, Webpack's require is more powerful than the same function in Node.js. It uses enhanced-resolve and allows you to reference absolute paths, relative paths and module paths.

Webpack also includes a function called require.resolve. This function takes a module name and returns a string that contains the path to the module. The difference between the two is sometimes confusing, so I wanted to include that callout here.

As mentioned before, Webpack allows for multiple different syntaxes (though it recommends you stay consistent within your project). One of those is ES6. The rough equivalent of require in ES6 is this.

import React from 'react'

Here is where stuff really gets interesting. ES6 and CommonJS are not the same spec! So even though both are valid in Webpack, they often aren't elsewhere in the ecosystem. And since Webpack is bundling lots of different types of files for you, it can be challenging to keep things straight. At this moment, ES6 import syntax is not valid in Node.js. If you want to support it you can use the experimental package esm. This means that files that run server-side, taking advantage of the Node.js runtime, likely need to use CommonJS require statements. When Babel compiles your code, it turns all of your imports into Node.js require statements (not Webpack ones). It's worth noting that Babel output typically needs to be bundled by Webpack, so a bit of a Twilight Zone moment there.

With all of that background it becomes a bit easier to determine where an error like Cannot find module 'react' is coming from. It may appear because it's referencing a dependency you don't have installed in your project. Make sure it's installed, and then make sure you're referencing it properly, no typos! Conversely, you may see that error because Webpack didn't bundle your files where Node expected to find them. Take a look at your file path.
I've spent a fair amount of time debugging these various issues, and the thing I've come to recognize is that error messages go a long way. With so many packages and tools bundling Webpack for us, it's important to make sure the debugging information we get is as helpful as it can be!
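Since the difference between require and require.resolve trips people up, here is a small sketch of how the two behave when run directly under Node.js ('react' stands in for any installed dependency; the printed path will differ per project):

```js
// Plain require: loads the module and returns its exports.
const React = require('react');
console.log(typeof React.createElement); // 'function'

// require.resolve: does NOT load the module; it only runs the resolution
// step and returns the absolute path Node would load from. (Webpack's
// bundled version may return an internal module id instead.)
const reactPath = require.resolve('react');
console.log(reactPath); // e.g. /your/project/node_modules/react/index.js

// Both throw "Cannot find module '...'" if resolution fails, the same
// error you see when a dependency is missing or a file path is wrong.
try {
  require.resolve('not-a-real-package');
} catch (err) {
  console.log(err.code); // 'MODULE_NOT_FOUND'
}
```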
Is 好きじゃねえな人 correct? Would 好きじゃねえな人 be grammatically correct for "people I don't like"? Or does it have to be 嫌いな人?

好きじゃねえ人 without な works (this is not a な-adjective) but is somewhat crude compared to 好きじゃない人. あい to ええ is a somewhat common sound change, but it's associated with men speaking in very informal settings.

Thank you. Can you elaborate on why it's crude?

@towa Because the ~あい → ~ええ change for い-adjectives is a very informal way of speaking. You should never speak like this in situations where you need to convey politeness or seriousness.

I know this question was already answered, but I was hoping to clarify some assumptions made in this example sentence to better explain why they are wrong. Firstly, じゃねえ is a hypercasual form of ではない. This, in turn, is the negative counterpart of である, which literally means "to exist as...". This is the origin of the copula だ, which has the same meaning but is used and translated the way we say "is (something)". I explain all this to now tell you that な, the particle that attaches adjectives to nouns, comes from the similar phrase にある, meaning "to exist being...". Both だ and な connect to adjectives the same way, and are considered by many to be the same phrase in different contexts: だ concludes phrases with na-adjectives, すきだ "it is liked", while な joins the phrase to a noun, 好きな人 "a person who is liked". The second thing I'll point out is that ない is an i-adjective. All i-adjectives inherently mean "to be" already. In this case ない means "to be nonexistent". Using だ or な is redundant after i-adjectives because it results in the phrasing ないだ, "it is being that it is nonexistent". People jump to the extreme conclusion that 'i-adjective + だ' is ungrammatical, but that's not true either. It is grammatically correct, but never used because it sounds repetitive. This is supported by the fact that the polite form of だ, です, IS used after i-adjectives, as a tool to make them polite. So, you do not need to connect 好きじゃねぇ to 人 using な. 好きじゃねぇな人 would sound like "a person who is being that which is not liked."

I understand what you're trying to add here, but it's really just being pedantic about details that aren't relevant to answering the question, and then getting those details mostly wrong. です being used after i-adjectives is a special case, resulting from them not having a more obvious polite form, and it wasn't always considered grammatical. i-adjective + だ is considered simply ungrammatical: の (or ん) is required as a linker, and adding that onto the i-adjective changes the meaning somewhat. And on the flip side of that, です is not used in relative clauses. I suppose one could correctly say 好きじゃないにある人, if you were trying to make that point (and also explain that it's pointlessly wordy). It sounds like something ChatGPT would write...

I suppose I understand your confusion if we're looking at this from a prescriptivist point of view. When I say ungrammatical, I mean sentence structures that are syntactically and meaningfully invalid. I did not say that じゃないだ is grammatical and therefore correct. It's grammatical syntactically but morphologically redundant. の and です appear after i-adjectives because that structure is a syntactically valid phrase, and they introduce additional meaning. だ does not add meaning. It is not because some rule prevents you from putting them after i-adjectives. That's not how communication works. It is true that な and だ developed as phonological contractions of にある and である, but your translations of にある and である as "to exist being..."
and "to exist as..." are wrong. Both にある and である simply mean "am/art/is/are". Besides 似{に} (Continuative form of 似{に}る), 二{に} etc., it makes sense to consider that Japanese language has 2 に and 2 で: 1. に and で which are forms of verb n-; 2. に and で particles (at least で particle developed from aforementioned で form of verb n-, it is possible that に particle also developed from aforementioned に form of verb n-). Verb n- is defective and is missing several forms, which would be needed sometimes, so Japanese language developed periphrastic constructions to express desired meanings. These periphrastic constructions consist of forms of 2 verbs: ni (Continuative of verb n-) or nite / de (Conjunctive/Gerund of verb n-; de is contraction of nite) + some other verb used for conjugation. That second verb is often ar-, but throughout history of Japanese language some other verbs have seen use, in modern times mostly verb gozar-. These periphrastic constructions are syntactically similar to compound verbs. In both cases, the first verb remains in the same form, while the second verb is conjugated for appropriate tense, aspect, voice etc. Semantically, in ni aru, ni imasu, de aru, de gozaru, de saurau → de sɔːrɔː etc. constructions, verbal form ni or de gives meaning "to be" to whole construction, and meaning of the second verb is ignored. Verb ar- is similarly used for forming some forms of adjectives (e.g. past tense: yoku aritaru → yokatta) and now-archaic negative suffix -an-. There is also a different ni aru / de aru, homophonic to ni aru / de aru discussed above, but grammatically very different, and consisting of case particle ni or de + lexical verb aru "to exist, to happen, to occur": noun denoting place + ni + aru ("to exist in some place"); noun denoting place + de + aru ("to happen, to occur in some place").
PaymentButtonContainer is not inflating Android (0.8.1)

Describe the bug
Using com.paypal.checkout.paymentbutton.PaymentButtonContainer with ('com.paypal.checkout:android-sdk:0.8.1'), but it's not rendering; inflating error in Android. How can I use that? Or is there some other approach?

To Reproduce
Steps to reproduce the behavior: add 'com.paypal.checkout:android-sdk:0.8.1', then add

<com.paypal.checkout.paymentbutton.PaymentButtonContainer
    android:id="@+id/payment_button_container"
    android:layout_width="match_parent"
    android:layout_height="100dp"
    app:paypal_button_color="silver"
    app:paypal_button_label="pay"
    app:paypal_button_shape="rectangle"
    app:paypal_button_size="large"
    app:paypal_button_enabled="true">

Hi @HafizAwaiskhan,
Thank you for reporting the issue. Could you please replace the 100dp height with wrap_content? Also, implement the view state listener on the view, see what exception you get, and then report it here so we know the actual reason for it not rendering. For example:

paymentButtonContainer.viewState = PaymentButtonContainerViewState.invoke(
    onLoading = { ->
        Log.d(tag, tag.toString())
    },
    onFinish = { fundingEligibilityState, exception ->
        fundingEligibilityState?.let {
            Log.d(tag, fundingEligibilityState.toString())
        }
        exception?.let {
            Log.d(tag, exception.message.toString())
        }
    }
)

Hi @mahmoud-turki, thank you for the reply.

D/nxoPayPalButton: eligibility status updated: Loading
D/nxoPayPalCreditButton: eligibility status updated: Loading
D/nxoPayLaterButton: eligibility status updated: Loading
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.cybermart.paypalcheckoutintegration/com.cybermart.paypalcheckoutintegration.MainActivity}: android.view.InflateException: Binary XML file line #10 in com.cybermart.paypalcheckoutintegration:layout/activity_main: Binary XML file line #10 in com.cybermart.paypalcheckoutintegration:layout/activity_main: Error inflating class com.paypal.checkout.paymentbutton.PaymentButtonContainer

and your method is not even called, because it crashes at the beginning.

I see; it looks to me like you didn't set the config at the application level: PayPalCheckout.setConfig(checkoutConfig: CheckoutConfig). Could you check whether you have the config set properly at the application level and let me know?

class MainActivity : AppCompatActivity() {
    lateinit var paymentButtonContainer: PaymentButtonContainer
    val TAG = "paypal"

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        paymentButtonContainer = findViewById(R.id.payment_button_container)
        val config = CheckoutConfig(
            application = application,
            clientId = "AY_DPizCxyvnG91I1jHgN4A4TC5KwtFOaEyuIGkdb9Wz638V1t3W5Pnq3oALaFaz0CjYnuXWT2-nOJCa",
            environment = com.paypal.checkout.config.Environment.SANDBOX,
            currencyCode = CurrencyCode.USD,
            userAction = UserAction.PAY_NOW,
            settingsConfig = SettingsConfig(
                loggingEnabled = true
            )
        )
        PayPalCheckout.setConfig(config)
        paymentButtonContainer.paypalButtonEnabled = true
        paymentButtonContainer.setup(
            createOrder = CreateOrder { createOrderActions ->
                val order = Order(
                    intent = OrderIntent.CAPTURE,
                    appContext = AppContext(userAction = UserAction.PAY_NOW),
                    purchaseUnitList = listOf(
                        PurchaseUnit(
                            amount = Amount(currencyCode = CurrencyCode.USD, value = "10.00")
                        )
                    )
                )
                createOrderActions.create(order)
            },
            onApprove = OnApprove { approval ->
                approval.orderActions.capture { captureOrderResult ->
                    Log.i("CaptureOrder", "CaptureOrderResult: $captureOrderResult")
                }
            },
            onCancel = OnCancel {
                Log.d("OnCancel", "Buyer canceled the PayPal experience.")
            },
            onError = OnError { errorInfo ->
                Log.d("OnError", "Error: $errorInfo")
            }
        )
    }
}

Here is my main class! Can you please check what is wrong?

The config should be set at the application level before you start the MainActivity; otherwise you will get an exception.

thank you
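To illustrate that last point, a minimal sketch of what setting the config at the application level could look like. The class name is hypothetical and it must be registered in the manifest; the CheckoutConfig parameters are the ones already shown in this thread:

```kotlin
// Hypothetical Application subclass, registered in AndroidManifest.xml
// via android:name=".MyApplication" on the <application> element.
class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        // Set the config once, before any Activity inflates the button.
        PayPalCheckout.setConfig(
            CheckoutConfig(
                application = this,
                clientId = "YOUR-CLIENT-ID",
                environment = Environment.SANDBOX,
                currencyCode = CurrencyCode.USD,
                userAction = UserAction.PAY_NOW,
                settingsConfig = SettingsConfig(loggingEnabled = true)
            )
        )
    }
}
```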
Support for Linux clients and OpenVPN on R7000 or any other Netgear router

Although the R7000 router has support for Mac and Windows clients when using the Netgear R7000 OpenVPN built-in server, it does not support Linux as a client. See the following link:

Currently I am running the latest "supported" firmware version: V220.127.116.11_1.1.67 as of Nov 10 2015.

There is a great number of Linux users, and not being able to take advantage of the OpenVPN server in the router seems to be a big limitation/oversight. I personally purchased this router because of the fact that it had a built-in OpenVPN server, so that I could connect my Android and Linux devices to my network. As I discovered after purchasing the router, neither of these platforms is supported. It seems that iOS and Android support is coming, but there are no plans to implement Linux. It may be possible to manually configure a Linux client if Netgear would publish how OpenVPN is implemented. I understand that this would not be "supported" by Netgear, but those of us who have some technical skill could possibly implement it and make it work for our needs. Providing information such as, but not limited to, the following would be very useful, since OpenVPN is open source software:

- Tunnel device (TUN/TAP)
- Protocol (UDP/TCP)
- Port number (1194, the official port, or another port defined by Netgear)
- Encryption cipher (none, Blowfish, AES-512/256/192/128 CBC, etc.)
- Hash algorithm (SHA1/256/512, MD4/5, none, etc.)
- TLS cipher (none, AES-128/256 SHA, etc.)
- LZO compression (adaptive, yes/no, none)
- Authority/password usage
- TLS auth key usage?
- PKCS12 key usage?
- Static key usage?
- ns-cert-type server?
- Is access limited to the local network, to the internet only, or to both local and internet?

This post is essentially to ask Netgear to provide the following:
- A Linux client file and instructions on how to use it with the various distributions of Linux.
- Comprehensive documentation on how OpenVPN is implemented in the R7000 router, or any other router that has an OpenVPN server built in.
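For reference, a generic OpenVPN client profile touching the parameters listed above looks like the sketch below. Everything here is a placeholder: the host name, port, cipher, hash, and key options are illustrative defaults, not Netgear's actual (unpublished) settings.

```
# client.ovpn -- illustrative only; Netgear's real values are unknown
client
dev tun                              # tunnel device (TUN vs TAP)
proto udp                            # protocol
remote my-router.example.com 1194    # server address and port
cipher AES-256-CBC                   # encryption cipher
auth SHA1                            # hash algorithm
comp-lzo adaptive                    # LZO compression mode
ca ca.crt                            # certificate authority
cert client.crt                      # client certificate
key client.key                       # client private key
tls-auth ta.key 1                    # optional TLS auth key
ns-cert-type server                  # legacy server-certificate check
```

If Netgear published which of these directives its server expects, a Linux user could assemble exactly this kind of file by hand.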
'''
Creating a toy example to test the kernelised dynamical system code.

We have a system that evolves deterministically through four states:

    state 1 -> state 2/state 3 -> state 4

state 4 is an absorbing state. A time-series belongs to one of two types,
A or B. Both A and B have rare and common variants. There are three
actions available at each stage, a, b, and c. The rewards are as follows:

    state 1, a = -10, b = 5, c = 0
    state 2, a = 5, b = -10, c = 0
    state 3, a = 5 if A, -10 if B; b = 5 if B, -10 if A; c = 0

state 3 persists for one step, so everyone follows 0, 1/2, 3.

We observe the rewards (as observations, just made it easy for me) as well
as an observation that depends only on the time-series' type (the obs mean,
which determines whether we have A or B): obs dim #1 is either 0, 1 or 3 if
type A, 2 or 4, 5 if type B. Designed so that in the training set we see
mostly 0, 1 and 4, 5; if a kernel maps to the nearest it will fail on the
observations because they will be incorrectly mapped.

The output is the sequence of observations, actions, and rewards stored in
the fencepost way.
'''
import numpy as np
import numpy.random as npr
import pickle as pkl  # the original (Python 2) code used cPickle


# create the data set
def create_toy_data(train_data, test_data):
    sequence_count = 250
    sequence_length = 4
    # M = sequence_count * sequence_length

    # where to store things
    state_set = np.zeros((0, 1))
    obs_set = np.zeros((0, 1))
    action_set = np.zeros((0, 1))
    reward_set = np.zeros((0, 1))
    dataindex_set = np.zeros((0, 1))
    optimal_set = np.zeros((0, 1))

    # pre-decide the sequence types
    rare_count = 20
    # initialize all sequences to the same type, e.g. A
    sequence_type = np.zeros(int(sequence_count)) + 1.0
    # make the whole second half of the sequences another type, B
    sequence_type[int(sequence_count) // 2:] = -1.0
    if test_data is True:
        skip = 5
    else:
        skip = 1
    my_start = 0
    for rare_index in range(rare_count):
        my_end = my_start + skip
        sequence_type[int(my_start):int(my_end)] = rare_index + 2
        sequence_type[(-1 * int(my_end + 1)):(-1 * int(my_start + 1))] = \
            -1 * int(rare_index + 2)
        my_start = my_end
    min_obs = np.min(sequence_type)
    obs_mean_list = []
    fenceposts = []

    # set values (the three-stage part is hard-coded!)
    count = 0
    fencecount = 0
    for sequence_iter in range(int(sequence_count)):
        count = count + 1
        obs_mean = sequence_type[sequence_iter]

        # Fill in the first dimension of the observation; since it is
        # dependent only on type, it does not change over time
        obs_sequence = obs_mean - min_obs + np.zeros((int(sequence_length), 1))
        # print(obs_sequence)

        # Fill in the actions, for now chosen randomly
        action_sequence = npr.choice(3, (int(sequence_length), 1))

        # Fill the rewards
        state_sequence = np.zeros((int(sequence_length), 1)) + 0.0
        reward_sequence = np.zeros((int(sequence_length), 1)) + 0.0
        optimal_sequence = np.zeros((int(sequence_length), 1))
        if (sequence_iter > 0):
            fenceposts.append(fencecount - 1)
        else:
            fenceposts.append(fencecount)
        for reward_iter in range(sequence_length):
            fencecount = fencecount + 1
            obs_mean_list.append(obs_mean)
            if reward_iter == 0:
                # at iteration 0, either action 2 or action 1 is optimal
                if action_sequence[int(reward_iter), 0] == 0:
                    reward_sequence[int(reward_iter), 0] = -10.0
                if action_sequence[int(reward_iter), 0] == 1:
                    reward_sequence[int(reward_iter), 0] = 5.0
                if action_sequence[int(reward_iter), 0] == 2:
                    reward_sequence[int(reward_iter), 0] = 5.0
                optimal_sequence[int(reward_iter), 0] = np.random.randint(1, 3)
            if reward_iter == 1 and obs_mean > 0:
                # at iteration 1, if type A then action 1 is optimal
                state_sequence[int(reward_iter), 0] = 1
                if action_sequence[int(reward_iter), 0] == 0:
                    reward_sequence[int(reward_iter), 0] = 0.0
                if action_sequence[int(reward_iter), 0] == 1:
                    reward_sequence[int(reward_iter), 0] = 5.0
                if action_sequence[int(reward_iter), 0] == 2:
                    reward_sequence[int(reward_iter), 0] = -10.0
                optimal_sequence[int(reward_iter), 0] = 1
            if reward_iter == 1 and obs_mean < 0:
                # at iteration 1, if type B then action 2 is optimal
                state_sequence[int(reward_iter), 0] = 2
                if action_sequence[int(reward_iter), 0] == 0:
                    reward_sequence[int(reward_iter), 0] = 0.0
                if action_sequence[int(reward_iter), 0] == 1:
                    reward_sequence[int(reward_iter), 0] = -10.0
                if action_sequence[int(reward_iter), 0] == 2:
                    reward_sequence[int(reward_iter), 0] = 5.0
                optimal_sequence[int(reward_iter), 0] = 2
            if reward_iter > 1:
                # at iteration 2 or more, there is no reward - absorbing state
                state_sequence[int(reward_iter), 0] = 3
                optimal_sequence[int(reward_iter), 0] = np.random.randint(0, 3)

        # So overall the optimal sequence should be 1/2, 1, .... if type A
        # OR 1/2, 2, .... if type B

        # Store
        state_set = np.vstack((state_set, state_sequence))        # M x 1
        obs_set = np.vstack((obs_set, obs_sequence))              # M x 1
        action_set = np.vstack((action_set, action_sequence))    # M x 1
        reward_set = np.vstack((reward_set, reward_sequence))
        dataindex_set = np.vstack(
            (dataindex_set, np.zeros((sequence_length, 1)) + sequence_iter))
        optimal_set = np.vstack((optimal_set, optimal_sequence))
    fenceposts.append(fencecount - 1)

    # Attach the rewards to the observations as additional observations
    # (multiple times, to make the POMDP "stick" to explaining that signal
    # and not the other obs signal; hopefully something that the kernel
    # will pick up)
    reward_value_set, reward_obs_set = np.unique(
        reward_set, return_inverse=True)
    reward_obs_set = np.reshape(reward_obs_set, (reward_obs_set.shape[0], 1))
    obs_set = np.hstack((obs_set, reward_obs_set, reward_obs_set,
                         reward_obs_set, reward_obs_set, reward_obs_set,
                         reward_obs_set))
    optimal_set = optimal_set.flatten()
    data_set = {'state_set': state_set,
                'action_set': action_set,
                'reward_set': reward_set,
                'obs_set': obs_set,
                'dataindex_set': dataindex_set,
                'obs_mean': obs_mean_list,
                'optimal_set': optimal_set,
                }

    # write the dataset dictionary to file
    if (train_data is True):
        print("Creating Training Data")
        # bug fix: the original opened "train_fcpt.p" here as well, so the
        # fencepost dump below immediately overwrote the data set
        f = open("train_data.p", "wb")
        pkl.dump(data_set, f)
        f.close()
        f = open("train_fcpt.p", "wb")
        pkl.dump(fenceposts, f)
        f.close()
    elif (test_data is True):
        print("Creating Test Data")
        f = open("test_data.p", "wb")
        pkl.dump(data_set, f)
        f.close()
        f = open("test_fcpt.p", "wb")
        pkl.dump(fenceposts, f)
        f.close()
    else:
        print("Creating Validation Data")
        f = open("val_data.p", "wb")
        pkl.dump(data_set, f)
        f.close()
        f = open("val_fcpt.p", "wb")
        pkl.dump(fenceposts, f)
        f.close()


def get_longterm_rewards(ids, rewards):
    totals = np.zeros(rewards.shape[0])
    i = 0
    while (i < rewards.shape[0] - 1):
        totals[i] = rewards[i, 0]
        ind = i
        discount = 0.98
        while (ids[ind, 0] == ids[ind + 1, 0]):
            # discount by the number of steps ahead; the original used
            # np.power(discount, i), i.e. the absolute row index, which
            # looks like a bug
            totals[i] = totals[i] + \
                np.power(discount, ind + 1 - i) * rewards[ind + 1, 0]
            ind = ind + 1
            if (ind + 1 == rewards.shape[0]):
                break
        i = i + 1
    totals[rewards.shape[0] - 1] = rewards[rewards.shape[0] - 1, 0]
    return totals


if __name__ == "__main__":
    # Load the data and calculate the long term rewards
    # (run create_toy_data first so these files exist)
    train_data = pkl.load(open('train_data.p', "rb"))
    test_data = pkl.load(open('test_data.p', "rb"))
    val_data = pkl.load(open('val_data.p', "rb"))
    train_ids = train_data['dataindex_set']
    test_ids = test_data['dataindex_set']
    val_ids = val_data['dataindex_set']
    train_rewards = train_data['reward_set']
    test_rewards = test_data['reward_set']
    val_rewards = val_data['reward_set']

    # Calculate the long term rewards for each set and save as ltr files
    train_ltr = get_longterm_rewards(train_ids, train_rewards)
    test_ltr = get_longterm_rewards(test_ids, test_rewards)
    val_ltr = get_longterm_rewards(val_ids, val_rewards)
    f = open("train_ltr.p", "wb")
    pkl.dump(train_ltr, f)
    f.close()
    f = open("test_ltr.p", "wb")
    pkl.dump(test_ltr, f)
    f.close()
    f = open("val_ltr.p", "wb")
    pkl.dump(val_ltr, f)
    f.close()
Better custom error handling for PowerShell

So I have a PowerShell script that integrates with several external third-party EXE utilities. Each one returns its own kind of errors, and some also write non-error output to stderr (yes, badly designed, I know; I didn't write these utilities). So what I'm currently doing is parsing the output of each utility and doing some keyword matching. This approach does work, but I feel that as I use these scripts and utilities I'll have to add more exceptions to what the error actually is. So I need to create something that is expandable, possibly a kind of structure I can add to an external file like a module. I was thinking of leveraging the features of a custom PSObject to get this done, but I am struggling with the details. Currently my parsing routine for each utility is:

foreach ($errtype in {'error','fail','exception'}) {
    if ($JobOut -match $errtype) {
        $Status = 'Failure'
    }
    elseif ($JobOut -match 'Warning') {
        $Status = 'Warning'
    }
    else {
        $Status = 'Success'
    }
}

So this looks pretty straightforward until I run into some utility whose output contains some of the keywords in $errtype without actually being an error. So now I have to add some exceptions to the logic:

foreach ($errtype in {'error','fail','exception'}) {
    if ($JobOut -match 'error' -and -not ($JobOut -match 'Error Log')) {
        $Status = 'Failure'
    }
    elseif ($JobOut -match $errtype) {
        $Status = 'Failure'
    }
    elseif ($JobOut -match 'Warning') {
        $Status = 'Warning'
    }
    else {
        $Status = 'Success'
    }
}

So as you can see, this method has the potential to get out of control quickly, and I would rather not start editing core code to add a new error rule every time I come across a new error. Is there a way to maybe create a structure of errors for each utility that contains the logic for what is an error? Something that would be easy to add new rules to? Any help with this is really appreciated.

On an unrelated note, {'error','fail','exception'} is a scriptblock which, being a new scope, is relatively costly to create. In this particular case there's no need for the curly braces at all.

Ah, OK. I spent a lot of time writing shell scripts, and for some reason that notation keeps following me. Thanks for the tip.

Another tip is following the practices taught by Don Jones. Create a main function with sub-functions that do the work. At the end of each sub-function, since PowerShell's default is that all output is captured, use the [PsCustomObject]@{Item1=$Data} format. Then sub-function 2 can check the output from sub-function 1 and continue to process. And so on.

Sorry, I'm not too familiar with his teachings; I'm a self-taught guy and apparently hadn't run across his page until now. I use lots of functions; basically, if I have to do something more than once, then it should be a function. From what you've written it sounds like you just keep passing each function's output to the next function's input down the line, rinse, repeat.

I would think a switch would do nicely here. It's very basic, but can be modified very easily and is highly expandable, and I like that you can have an action based on the input to the switch, which could be used for logging or remediation. Create a function that allows you to easily provide input to the switch, and then maintain that function with all your error codes, words, etc.; then simply use the function where required. TechNet Tips on Switches. TechNet Tips on Functions.

Nice article on the switch statement. Using that might be more maintainable than a bunch of if/else statements.
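A minimal sketch of the switch-based function suggested in that last exchange; the function name and rule patterns are hypothetical, and each utility would get its own pattern table:

```powershell
# Hypothetical helper: classify a utility's output with an expandable
# rule table kept in one place (could live in its own module file).
function Get-JobStatus {
    param(
        [string]$JobOut
    )
    switch -Regex ($JobOut) {
        'Error Log'            { continue }            # known false positive
        'error|fail|exception' { return 'Failure' }
        'warning'              { return 'Warning' }
        default                { return 'Success' }
    }
    return 'Success'   # reached when a false-positive rule ends the switch
}

Get-JobStatus -JobOut 'Writing to Error Log... done'   # Success
Get-JobStatus -JobOut 'FATAL error: disk full'         # Failure
```

Adding a new rule is then a one-line change to the switch rather than another nested if/elseif.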
The IK tool has a series of different modes available in the Tool Properties view.

Enabled by default, this mode lets you click on any bone in a character and move it without having to select the actual layer. When you disable this mode, you cannot move any bone except the selected one. This allows you to grab and rotate the selected part from many angles and locations. You can click completely outside the character and move the pieces.

The main working mode for the IK tool. Enable this mode when you want to animate and position a puppet. Ctrl + click (Windows/Linux) or ⌘ + click (Mac OS X) on a body part to select it. You do not need to select a part to be able to move it.

Apply IK Constraints: Lets you correct a part's position on a series of frames. For example, if the character's foot is sinking into the floor, you can correct its position and angle over a series of frames.

Edit Min/Max Angle: Lets you set a rotation restriction on some of your parts, such as elbows, knees and ankles.

Bone Editing: Lets you fix the bone orientation on extremities, such as hands and feet.

Used in combination with the IK Constraints mode, this option determines the starting frame of the constraint you will apply.

While animating with the IK tool, and before making a movement, set an easing preset so the motion is not so mechanical. Before moving the part, select a preset from the Ease Shape menu. If you select a new preset in the list and move the part again on the same keyframe, the easing will automatically update.

Enable Translation If Top of Hierarchy: Used only on master pegs. This option is useful in one particular situation: if you want the character to do a perfect split (sitting down with the legs at right angles to the body, or at the sides with the torso facing forwards), the hip will need to translate and not just rotate on the spot. And since IK is all about rotation, you would select the hip layer and enable the translation option. This option is enabled by default. The pivot disappears, but you can still see the bone. When this option is disabled, the selected part cannot be rotated and will remain in the same position. You can use this option to simulate an arm in a plaster cast.

Exclude from IK: This option lets you exclude certain parts of the puppet from the IK influence, such as the eyes and mouth.

IK Nails: These options let you temporarily fix a part of a character to a spot, either in translation or rotation, or enable maximum and minimum angle usage.

Stiffness: When a certain part is selected, you can apply a stiffness value to it. A different stiffness value can be set for each body part individually. The greater the stiffness, the more difficult it is to make that part rotate, thereby rendering it stiff while the other parts continue to move freely on their joints.
<?php

namespace App\Services\Providers;

use App\Models\TransactionModel;

/**
 * Class SupermoneyService
 * @package App\Services\Providers
 */
class SupermoneyService implements ProviderInterface
{
    /**
     * @param TransactionModel $transaction
     * @return string
     */
    public function processTransaction(TransactionModel $transaction): string
    {
        // Cast after the null-coalescing: in the original,
        // (int) max(...) ?? 0 applied the cast first, so the ?? 0
        // fallback could never be reached.
        /** @var int $currentMaxId */
        $currentMaxId = (int) (TransactionModel::max('provider_trn_id') ?? 0);

        // The docblock originally said int, but the value is cast to
        // string and the method's return type is string.
        /** @var string $trnId */
        $trnId = (string) rand($currentMaxId + 1, $currentMaxId + 100);

        return $trnId;
    }
}
If you are in high school and struggle with decision-making, please don't take this too seriously. If you're into EA, maybe apply for 1-1 advice here?

Choosing a university without knowing what metrics to base this decision on is hard. When I did this one year ago, I was mostly comparing German universities, as Germany was my country of residence (I debated going abroad, but covid and other things made this option unattractive for me). Since most universities here are free and rankings are not really popular, there isn't as much of a prestige difference between universities to cut down the possibilities. So in the end, I based my decision mostly on cost of living and location/city. I didn't expect the differences between universities to be large enough to be worth looking into further. I've changed my mind about that, mostly based on the fact that I could share what I've learned with other aspiring undergraduates.

I looked into which metrics are used for ranking universities by the CHE-Ranking, which is the most popular one. (It ranks universities for every metric and every degree separately and refuses to aggregate them to give universities an overall rank. The idea is "make your personal ranking according to your priorities". It's like they actively refuse to play the prestige game.) Most of their metrics are either subjective ratings by a few students (how useful are these ratings if these students have never been to a different university to compare it to?), or things like money per researcher, which seem rather useless for finding the best place to study as an undergraduate. The only thing that stood out to me was "% of graduations in appropriate time". Unfortunately, universities in Germany have only recently (2017) started to collect this data in a central place, so I rely on the summary statistics from the CHE-Ranking site and hope that they did a good job.

As you can see in the plot below, there are lots of universities where you would be above average (median) if you finished your degree in your eighth semester. It turns out the difference in how many students (in Computer Science [most popular on lesswrong]) graduate after their eighth semester is pretty large. Some of this may be due to random variation in student ability or curriculum, but even excluding universities with fewer than 1000 students (so at least (1000/4) * (around 50% passing rate) = 125 students passing every semester), the differences are quite large.

Next I looked at whether the same was happening for business administration, because it is the most popular subject in Germany and I would have to worry less about statistical significance (I'll leave the statistical tests to people who've actually taken a statistics class). It does look quite similar (although overall it does not look quite as bad as for CS).

One thing that is more pronounced here is that the upper outliers tend to be private universities, which makes sense to me (left as an exercise for the reader, lol). I am still surprised by the variation; before I looked into this, I thought that most universities were roughly equivalent.

- Different universities have different specialized undergrad subjects (like "data science", "bioinformatics" etc.) under the label computer science or business administration.
Some of these tend to be more competitive (my impression from going through the CHE data) and thus consist of more able students.
- If it's just about more able students, then your particular subject might be less important to you compared to the whole university.

I was kind of skeptical that I might be misunderstanding something basic here, so I tried to check whether the numbers from the CHE-Ranking add up to the same as when I look at numbers from my university. (Caution: this just looks at who registered for the semester, so it includes students who never actually graduated!) Assuming everyone after the fifth semester eventually graduates, I got ~50% (excluding the last semester due to covid), which is pretty close to the 58% according to the CHE data. So at least in this case the data is probably accurate. Interpret this as my wild guess; if you have a better interpretation, share it in the comments! I've made stupid/obvious mistakes before. If you find any, please point them out!

Average time until graduation is probably a function of (personal ability) and/or (the hurdles the university sets in order to pass). If you are in university mostly for the signalling and the interesting people, then both smaller hurdles and smarter people, implied by more "graduations in appropriate time", would be good news. So maybe include it in your decision matrix.

Things I'd do with more time/motivation:
- Figure out how to get more data on the whole distribution of time until graduation. To identify universities where it is easier (or students are smarter) to pass not only in "appropriate time" but maybe even in the advertised 6 semesters, you'd actually have to know how many pass at that time.
- Get the CHE data from past years and investigate how persistent "time until graduation" is over time, which would be rather important as we are necessarily trying to extrapolate here. I had a hard time finding past data with google.
- Look at data from other countries.

See here for more info: Statistisches Bundesamt. "Studienverlaufsstatistik 2020." ↩︎
CHE's description: "Proportion of undergraduate degree course graduates who completed their studies in an appropriate amount of time, namely the standard period of study plus one or two semesters (dependent on the subject area)." ↩︎
(for a "6-semester"-degree) ↩︎
They don't, but it is good enough. ↩︎
Compiling qBittorrent-nox 4.x on DietPi or Raspbian (Debian 9.0) for ARM devices

Raspbian is the most popular Linux distribution built for Raspberry Pi hardware. DietPi is a refined Linux distribution for ARM SoCs such as the Raspberry Pi 3 B+ and the ASUS Tinker Board. Both are based on Debian. Debian 9.0 ships with a patched version of an older qBittorrent-nox release (3.3.7). qBittorrent 4.x has many improvements to the WebUI component which are of particular use for headless operation. This guide outlines the steps needed to compile qBittorrent-nox 4.1.x and run it as a service.

Table of Contents
- Compiling Libtorrent
- Compiling qBittorrent-nox
- Running qBittorrent-nox on boot
- Updating qBittorrent-nox

You will first need to install various tools and libraries needed for compilation.

sudo apt-get install \
    build-essential \
    pkg-config \
    automake \
    libtool \
    libc6-dev \
    libboost-dev \
    libboost-system-dev \
    libboost-chrono-dev \
    libboost-random-dev \
    libssl-dev \
    qtbase5-dev \
    qttools5-dev-tools \
    libqt5svg5-dev \
    zlib1g-dev

If you choose to retrieve source code using git clone, then also sudo apt-get install git.

Compiling Libtorrent

DietPi's repositories include an older version of Libtorrent. You will need to compile Libtorrent 1.1.x to get qBittorrent-nox 4.x running. To get the Libtorrent 1.1.x source code, either clone the git repository or download a release archive.

A. git clone from the repository:

git clone https://github.com/arvidn/libtorrent.git
cd libtorrent
# select the latest release tag
git checkout $(git tag | grep libtorrent-1_1_ | sort -t _ -n -k 3 | tail -n 1)

B. download a release (using release libtorrent_1_2_0 in this example):

wget https://github.com/arvidn/libtorrent/archive/libtorrent_1_2_0.zip
unzip libtorrent_1_2_0.zip
cd libtorrent-libtorrent_1_2_0

Compile Libtorrent:

./autotool.sh
export CXXFLAGS=-std=c++11
./configure \
    --disable-debug \
    --enable-encryption \
    --with-boost-libdir=/usr/lib/arm-linux-gnueabihf \
    --with-libiconv
make -j$(nproc)
sudo make install

Out of memory (OOM): if OOM errors occur, then add a swap file.

sudo dd if=/dev/zero of=/.swapfile bs=1M count=1024
sudo mkswap /.swapfile
sudo swapon /.swapfile
sudo swapon -s   # check swap is activated
make             # assuming the prior command succeeded
sudo swapoff /.swapfile
sudo rm /.swapfile

(Those commands were copied from here.) One example manifestation of an OOM error on Raspbian OS looks like:

$ make
...
make: Entering directory '/tmp/libtorrent-libtorrent_1_2_0/src'
  CXX      libtorrent_rasterbar_la-session_impl.lo
g++: internal compiler error: Killed (program cc1plus)

You will need to add Libtorrent as a system library or qBittorrent-nox won't run after you compile it. Create /etc/ld.so.conf.d/libtorrent.conf with the installed library directory as its contents (typically /usr/local/lib), then run sudo ldconfig afterward.

Compiling qBittorrent-nox (4.1.x)

To get the qBittorrent-nox source code, either A. compile a cloned git repository or B. download a release source code.

A. git clone from the repository:

git clone -b v4_1_x https://github.com/qbittorrent/qBittorrent
cd qBittorrent

You may select the branch version on the branches page.

B. Using release release-4.1.5 in this example:

wget https://github.com/qbittorrent/qBittorrent/archive/release-4.1.5.zip
unzip release-4.1.5.zip
cd qBittorrent-release-4.1.5

./configure --disable-gui --enable-systemd --with-boost-libdir=/usr/lib/arm-linux-gnueabihf
make -j$(nproc)
sudo make install

NOTE: Review the Ubuntu/Debian compilation guide if you want to run qBittorrent with a GUI.
The compiled binary should be located at /usr/local/bin/qbittorrent-nox. If qbittorrent-nox was installed using apt, then that binary will be at /usr/bin/qbittorrent-nox. Do not confuse them!

WebUI access information:
- Username: admin
- Password: adminadmin

Running qBittorrent-nox on boot

qBittorrent-nox is currently installed as a terminal application, which is not optimal for headless use. We will now add qBittorrent-nox as a service.

Add a user for the qBittorrent-nox service:

sudo useradd -rm qbittorrent -G dietpi -s /usr/sbin/nologin

Create a systemd service file. (UPDATE: this may not be necessary if the qBittorrent compilation was configured with the --enable-systemd flag, as above.) Create the file at /etc/systemd/system/qbittorrent.service. Contents are:

[Unit]
Description=qBittorrent Daemon Service
After=network.target

[Service]
User=qbittorrent
Group=dietpi
ExecStart=/usr/local/bin/qbittorrent-nox
ExecStop=/usr/bin/killall -w qbittorrent-nox

[Install]
WantedBy=multi-user.target

Run and check the systemd service status:

sudo systemctl daemon-reload
sudo systemctl start qbittorrent
sudo systemctl status qbittorrent

The systemctl status command should show qBittorrent-nox as active and running. To start the qbittorrent service during boot:

sudo systemctl enable qbittorrent

Updating qBittorrent-nox

Get a copy of the latest qBittorrent release version. On DietPi, you will need to run the following commands to update an already compiled version of qBittorrent-nox:

sudo systemctl stop qbittorrent
./configure --disable-gui --with-boost-libdir=/usr/lib/arm-linux-gnueabihf --prefix=/usr/local/bin/
make -j$(nproc)
sudo make install

Check the version to verify that the binary got updated:

sudo systemctl stop qbittorrent
/usr/local/bin/qbittorrent-nox --version

If the version has changed, then the new version was successfully compiled and installed!

sudo systemctl start qbittorrent
Table of Contents

In this blog post, we cover the single-page application vs multi-page application comparison, as well as which one your business should opt for. The web world is evolving at a fast rate, where the only way to compete is to have the best resources with you. Irrespective of the business that you run, it is mandatory to stay updated with the latest trends so that your brand does not appear dull at any point in time. In mobile app development, both single-page and multi-page apps are in heavy demand for building robust yet feature-rich web applications. Here arises a dilemma about which app type is preferable. The fact is, both have their own sets of advantages and disadvantages. So, you need to be sure about your requirements and all the minor details before moving ahead with either of these ideas. Let's have a look at a few aspects that can help you differentiate these two app types.

What are single-page applications

A single-page application is an app that works inside a browser without requiring a page reload during its use. Some of the most popular examples of these apps are GitHub, Google Maps, Gmail and even Facebook. This kind of web application fits on a single HTML page that can be dynamically updated, and most of the interactions are easily managed. Google Docs is a fine example of a SPA. When you click on any element of a document, type something, or perform any other activity, the main interface remains untouched; only the chunk of content that you want to change is modified. Single-page apps run on this basic concept.

Benefits of single-page applications:
1) SPAs exhibit quick behavior, as the resources they use (HTML + CSS + scripts) are loaded only once for the entire user session of an app. Data is the only thing that is transferred and modified after user actions.
2) Web app development with a SPA is quicker and more efficient, as there is no need to write custom code for server-side page rendering. It is even easy to start coding from a file:// URI, without using any server at all.
3) These apps are optimized for Chrome debugging, because developers can monitor network operations and scrutinize page elements and associated data.
4) With a SPA, it is easy to create a mobile application, for a simple reason: developers can reuse the same backend code as and when required in the application.
5) SPAs can cache data in local storage efficiently. An app sends a single request, stores the necessary data, and can even function offline in some instances.

What are multi-page applications

Multi-page apps are practically larger than single-page apps and aim at showcasing more content. These applications work conventionally: every major data change or data submission back to the server results in rendering a fresh page in the browser. The complexity and costs of developing MPAs are higher, and they demand multiple levels of UI design. There is a sigh of relief, though, as an AJAX solution makes it easy to refresh certain parts of the application instead of shuttling the entire layout, containing tons of data, between servers and browsers.

Standard features of multi-page applications
1) SEO implementation is much easier in multi-page apps, as one can optimize each separate page of the app for a specific keyword.
2) It's the most suitable option for users who demand better visual cues for navigating the app.
MPAs are usually constructed with familiar multi-level menus and similar navigation tools.
3) It is a cakewalk to develop basic mobile apps with MPAs. But if you require an extremely rich user interface with bulky data, then things become complicated.
4) It takes much time to generate complex pages on a server and transfer them to the client over the internet. Rendering in the browser then takes further effort and time, which degrades the user experience.
5) Initially, MPAs were improved by introducing AJAX, where only a few parts of the page are refreshed instead of the whole page. This improved the user experience, at the cost of some complexity within the page itself.

Summing it up

Are you confused about which kind of app development to follow? It is a bit confusing, but you do not need to panic. The best part is, we are living in a highly advanced arena where technology provides a solution for every problem we face. Here, the only consideration is to scrutinize even the minutest aspects of the mobile app that you want to develop for your brand. Practically, both these app types have inherent traits that make them fit for their specific uses. So, pick the one that suits you best. We would love to hear your comments on this post. Got some other thoughts? Drop us a line through our contact page.
Files without tags

- Frost Knight Battle for Skyrim - Defeat the Immortal Frost Knight Pack, with perks and equipment to destroy anyone who opposes them. These are just NPCs hand-placed by me. No leveled lists, no scripts, vanilla assets only. For those who like challenges like me. (updated 2:24, 25 Oct 2016 | 26 | 1 | 642kb | dlarkz)
- Enderal - Keeper and Holy Order Gear (English) for Skyrim - Adds purchasable Keeper armor and weapons, and forgeable Holy Order armor and weapons. (updated 0:30, 25 Oct 2016 | 52 | 6 | 2kb | Dovahkiin1973)
- Reverb and Ambiance Overhaul for Skyrim - Makes all sounds more realistic for the player and NPCs. Improves and balances ambiance and reverb to be more realistic and lively. Increases the diversity and dynamics of in-game sound. Fixes numerous issues. Lightweight and compatible with all sound replacer mods. (updated 0:27, 23 Oct 2016 | 986 | 34 | 41kb | mm137)
- Get On With It - No more waiting for doors for Skyrim - Do you know how much time you’ve wasted watching doors open when playing Skyrim? I’m sure you don’t, and I don’t either. But this mod will cut all of that out of your play session, saving you time and life. (updated 18:33, 21 Oct 2016 | 431 | 23 | 2,037kb | Isvvc)
- Alternative texture for Dawnguard DLC Vampireremains for Skyrim - Alternative texture for vampire remains. (updated 15:53, 21 Oct 2016 | 50 | 6 | 424kb | doktorvlad)
- Unofficial Skyrim City Patch (USCP) for Skyrim - A mod that fixes a slew of problems regarding the various cities of Skyrim. Currently the mod covers 3 cities: Whiterun (just the exterior), Riverwood (interior and exterior) and Riften (exterior). For more info check the long description. (updated 19:57, 22 Oct 2016 | 1,624 | 90 | 338kb | DDVIL)
- Invisibility and Muffle combined into power for Skyrim - Simple mod which makes the Invisibility spell also use the Muffle effect. No glowing-feet visual effect. Duration extended up to 60 seconds. (updated 17:22, 21 Oct 2016 | 51 | 2 | 1kb | Folky63)
bison/yacc - limits of precedence settings

So I've been trying to parse a haskell-like language grammar with bison. I'll omit the standard problems with grammars and unary minus (like, is (-5) the number -5 or the function \x->x-5? and is a-b really a-(b), or apply a (-b), which itself can still be apply a \x->x-b, haha) and go straight to the thing that surprised me. To simplify the whole thing to the point where it matters, consider the following situation:

expression: '(' expression ')'
    | expression expression            /* lambda application */
    | '\\' IDENTIFIER "->" expression  /* lambda abstraction */
    | expression '+' expression        /* some operators to play with */
    | expression '*' expression
    | IDENTIFIER
    | CONSTANT
    /* | ..... */
    ;

I solved all shift/reduce conflicts with '+' and '*' with %left and %right precedence macros, but I somehow failed to find any good way to set a precedence for the lambda application rule expression expression. I tried %precedence, %left and the %prec marker, as shown for example here: http://www.gnu.org/software/bison/manual/html_node/Non-Operators.html#Non-Operators, but it looks like bison is completely ignoring any precedence setting on this rule. At least all combinations I tried failed. Documentation on exactly this topic is pretty sparse; the whole thing looks suited only for handling the "classic" expr. OPER expr. case.

Question: Am I doing something wrong, or is this impossible in Bison? If not, is it just unsupported, or is there some theoretical justification why not?

Remark: Of course there's an easy workaround to force left-folding and precedence that would look schematically like

expression: expression_without_lambda_application
    | expression expression_without_lambda_application
    ;

expression_without_lambda_application: /* ..operators.. */
    | '(' expression ')'
    ;

...but that's not as neat as it could be, right? :]

Thanks!

It's easiest to understand how bison precedence works if you understand how LR parsing works, since it's based on a simple modification of the LR algorithm. (Here, I'm lumping SLR, LALR and LR grammars together, because the basic algorithm is the same.)

An LR(1) machine has two possible classes of action:

Reduce the right-hand side of the production which ends just before the lookahead token (and consequently is at the top of the stack).
Shift the lookahead token.

In an LR(1) grammar, the decision can always be made on the basis of the machine state and the lookahead token. But certain common constructs -- notably infix expressions -- apparently require grammars which appear more complicated than they need to be, and which require more unit reductions than should be necessary. In an era in which LR parsing was new, most practitioners were used to some sort of operator precedence grammar (see below for a definition), and cycles were a lot more expensive than they are now, so the extra unit reductions seemed annoying; hence the modification of the LR algorithm to use standard precedence techniques was attractive.

The modification -- which is based on a classic algorithm for parsing operator precedence grammars -- involves assigning a precedence value (an integer) to every right-hand side (i.e. every production) and to every terminal. Then, when constructing the LR machine, if a given state and lookahead can trigger either a shift or a reduce action, the conflict is resolved by comparing the precedence of the possible reduction with the precedence of the lookahead token. If the reduction has a higher precedence, it wins; otherwise the machine shifts.
Note that reduction precedences are never compared with each other, and neither are token precedences. They can actually come from different domains. Furthermore, for a simple expression grammar, intuitively the comparison is with the operator "at the top of the stack"; this is actually accomplished by using the right-most terminal in a production to assign the precedence of the production. To handle left vs. right associativity, we don't actually use the same precedence value for a production as for a terminal. Left-associative productions are given a precedence slightly higher than the terminal's precedence, and right-associative productions are given a precedence slightly lower. This could be done by making the terminal precedences multiples of 3 and the reduction precedences one greater or less than the terminal's. (Actually, in practice the comparison is > rather than ≥, so it's possible to use even numbers for terminals, but that's an implementation detail.)

As it turns out, languages are not always quite so simple. So sometimes -- the case of unary operators is a classic example -- it's useful to explicitly provide a reduction precedence which is different from the default. (Another case is where the precedence is more related to the first terminal than the last, when there is more than one.)

Editorial note: Really, this is all a hack. It's a good hack, and it can be useful. But like all hacks, it can be pushed too far. Intricate tricks with precedence which require a full understanding of the algorithm and a detailed analysis of the grammar are not, IMHO, elegant. They are confusing. The whole point of using a context-free-grammar formalism and a parser generator is to simplify the presentation of the grammar and make it easier to verify. /Editorial note.

An operator precedence grammar is an operator grammar which can be bottom-up parsed using only precedence relations (using an algorithm such as the classic "shunting-yard" algorithm). An operator grammar is a grammar in which no right-hand side has two consecutive non-terminals. The production

expression: expression expression

cannot be expressed in an operator grammar. In that production, the shift-reduce conflict comes in the middle, just before where the operator would be if there were an operator. In that case, one would want to compare the precedence of whichever reduction gave rise to the first expression with the invisible operator which separates the expressions.

In some circumstances (and this requires careful grammar analysis, and is consequently very fragile), it's possible to distinguish between terminals which could start an expression and terminals which could be operators. In that case, it would be possible to use the precedences of the terminals in the FIRST set of expression as the comparators in the precedence comparison. Since those terminals will never be used as comparators in an operator production, no additional ambiguity is created.

Of course, that fails as soon as it is possible for a terminal to be either an infix or a prefix operator, such as unary minus. So it's probably only of theoretical interest in most languages.

In summary, I personally think that the solution of explicitly defining non-application expressions is clear, elegant and consistent with the theory of LR parsing, while any attempt to use precedence relations will turn out to be far less easy to understand and verify.
But, if you insist, here is a grammar which will work in this particular case (without unary operators), based on assigning precedence values to the tokens which might start an expression:

%token IDENTIFIER CONSTANT APPLY
%left '(' ')' '\\' IDENTIFIER CONSTANT APPLY
%left '+'
%left '*'

%%

expression: '(' expression ')'
    | expression expression %prec APPLY
    | '\\' IDENTIFIER "->" expression
    | expression '+' expression
    | expression '*' expression
    | IDENTIFIER
    | CONSTANT
    ;
// NOTE: normally this thing doesn't exist, so don't pay attention to it import { Observable } from 'rxjs/Observable' import 'rxjs/add/observable/of' const data = [ { name: 'someone', phone: '101010101' }, { name: 'someone 2', phone: '1010101011' } ] const filteredData = filter => Observable.of( data.filter( ({ name, phone }) => name.includes(filter) || phone.includes(filter) ) ) export const fakeAjaxGet = url => { if (url.startsWith('/api/phonebook/')) { const parameter = url.split('/').slice(-1)[0] return filteredData(parameter) } return Observable.of(data) }
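For context, here is a hypothetical usage sketch (it assumes the module above is saved as fakeAjax.js; the path is invented) showing what consumers of fakeAjaxGet receive:

import { fakeAjaxGet } from './fakeAjax' // hypothetical module path

// A phonebook URL triggers the filter: the last path segment is matched
// against each entry's name or phone
fakeAjaxGet('/api/phonebook/someone 2').subscribe(results => {
  console.log(results) // [{ name: 'someone 2', phone: '1010101011' }]
})

// Any other URL emits the unfiltered data set
fakeAjaxGet('/api/contacts').subscribe(results => {
  console.log(results.length) // 2
})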
One of the many wonderful features of the Archer tool set is deep links. If you have ever received an email notification from Archer, there is a good chance it included a deep link to some content stored within the system. The key to working with deep links rests in your ability to decode them.

Here is a deep link for a specific record. This would typically be used in an email notification to someone who needs to take action based upon that record. The first portion of the link is common to all deep links, and it identifies the server name and the method. We can break this down into two parts:

- https://servername.domain.com/default.aspx? - identifies the server
- requestURL - is the request method

The remaining portion of the link constitutes the actual request we are passing to the application: ...%2fGenericContent%2fRecord.aspx%3fid%3d158189%26moduleId%3d153

We can split this portion into 2 sections (request type and content).

Request type - There are different types of requests:

- Generic Content - for working with records (new or existing)
- Search Content - for working with reports
- Foundation - for navigation purposes (controlling where you land in the application)

Notice the presence of encoded characters in the URL; this is necessary to allow for the use of the reserved characters (/ & = ?). These characters must be encoded or the application will not process the request correctly. Here is an explanation of the values for the encoded character strings:

- %2f decodes to /
- %26 decodes to &
- %3d decodes to =
- %3f decodes to ?

Let's examine this part of the request: ...%2fGenericContent%2fRecord.aspx%3f decodes to /GenericContent/Record.aspx?. Here we are asking for a record by calling the Record.aspx page. (A worked decoding example appears at the end of this article.)

Content specific - Details to identify a specific element you wish to view:

- Record id
- Report id
- Module id
- Workspace id
- Dashboard id

id%3d158189%26moduleId%3d153 decodes to id=158189&moduleId=153. Here we see that we are specifying record number 158189 from application moduleId 153. If you wish to open a new record, you would specify 0 for the id number.

To determine which application moduleId 153 is, you need to go to Application Builder and select manage applications. Hold your mouse pointer over an application name in the Name column and the Archer-assigned Module ID number will be displayed in the lower right corner. So you can see that moduleId 153 is the Control Procedures application. Each application has a different moduleId. To identify a workspaceId, dashboardId, or reportId, you must use the same technique you used to locate the application id in the above example.

Here are examples of other deep links using the various request types:

- Deep link to open a new record.
- Deep link to open a specific report.
- Deep link to open a specific workspace.
- Deep link to open a specific dashboard.

Some deep links are generated by the system, like when you embed a link to a record in a system-generated notification. Deep links are useful when you wish to send someone to a specific place inside Archer. You can use custom deep links on regular websites to enhance the usability of the product.
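As a worked illustration of the decoding rules (plain JavaScript, runnable in any browser console; this is not Archer-specific code):

const encoded = '%2fGenericContent%2fRecord.aspx%3fid%3d158189%26moduleId%3d153';

// decodeURIComponent resolves %2f -> /, %3f -> ?, %3d -> =, %26 -> &
console.log(decodeURIComponent(encoded));
// -> /GenericContent/Record.aspx?id=158189&moduleId=153

// Going the other way when building your own deep links
// (encodeURIComponent emits uppercase hex, which decodes identically):
console.log(encodeURIComponent('/GenericContent/Record.aspx?id=0&moduleId=153'));
// -> %2FGenericContent%2FRecord.aspx%3Fid%3D0%26moduleId%3D153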
"Could not import 'setuptools', falling back to 'distutils'" - installing HTSeq

I tried to install HTSeq and got this error:

[email protected]:~/opt/HTSeq-0.6.1p1$ python setup.py install --user
Could not import 'setuptools', falling back to 'distutils'.
...
Please install numpy and then try again to install HTSeq.

I followed the instructions on the HTSeq install page to install numpy and matplotlib, but the build still failed. (Tags: python-2.7, numpy, matplotlib, ubuntu-14.04)

Answer: install the build prerequisites first. I repeated the command

sudo apt-get install build-essential python2.7-dev python-numpy python-matplotlib

and everything installed perfectly. Note that the package needs Python 2.4 or a later version. When the build works, the output looks like:

running install
running build
running build_py
creating build
creating build/lib.linux-i686-2.7
creating build/lib.linux-i686-2.7/HTSeq
copying HTSeq/__init__.py -> build/lib.linux-i686-2.7/HTSeq
copying HTSeq/_HTSeq_internal.py -> build/lib.linux-i686-2.7/HTSeq
copying HTSeq/StepVector.py -> build/lib.linux-i686-2.7/HTSeq
copying HTSeq/_version.py -> build/lib.linux-i686-2.7/HTSeq
creating build/lib.linux-i686-2.7/HTSeq/scripts

A related report: after unpacking the HTSeq source tarball (HTSeq-0.5.4p3) and navigating to the correct directory, python setup.py build produced the same "Could not import 'setuptools', falling back to 'distutils'" message. In that case the cause was a custom Python build whose sys.path contained no setuptools. Here is the result of print sys.path:

['', '/home/CCP4/ccp4-6.2.0/share/python', '/home', '/home/CCP4/Python-2.6.7/lib/python26.zip', '/home/CCP4/Python-2.6.7/lib/python2.6', '/home/CCP4/Python-2.6.7/lib/python2.6/plat-linux2', '/home/CCP4/Python-2.6.7/lib/python2.6/lib-tk', '/home/CCP4/Python-2.6.7/lib/python2.6/lib-old', '/home/CCP4/Python-2.6.7/lib/python2.6/lib-dynload', '/home/CCP4/Python-2.6.7/lib/python2.6/site-packages']

Related error spellings for the same class of problem: "ImportError: No module named extern" and pip's "Setuptools must be installed to install from a source distribution". A similar failure can also come from matplotlib's own build dependencies, e.g. freetype: no [The C/C++ header for freetype2 (ft2build.h) could not be found.] A typical pinned requirements set from such an environment: numpy==1.9.2 scipy==0.15.1 pillow==2.8.1 pandas==0.16.1 scikit-learn==0.16.1 scikit-image==0.11.3 ...

Assorted distutils notes (from the distutils Examples chapter, which provides a number of basic examples to help get started with distutils):

- If you're distributing modules foo and bar, your setup lists them directly; the simplest case of a compiled extension is a single extension module in a single C source file.
- A minimal setup script (the empty string stands for the root package):

from distutils.core import setup
setup(name='foobar',
      version='1.0',
      packages=[''],
      )

- scripts = [ ] lists scripts to be installed into standard locations like /usr/bin. You do not have to include any .py files in your package-root folder. The version gets appended to the end of the tarball name.
- Checking a package: the check command allows you to verify that your package meta-data meets the minimum requirements to build a distribution. If something is missing, check will display a warning.
- Reading the metadata: the distutils.core.setup() function provides a command-line interface that allows you to query the metadata fields of a project through the setup.py script of a given project.
- Registering with the Cheese Shop (PyPI): make sure you have the right meta-data, then follow the instructions in the confirmation email to complete registration; if you already have an account, choose option 1.

One more import question from the same page: multitool.main() was importable, but not multitool.core. The fix is to import the submodule explicitly:

import multitool.core.classes

class HashTool(multitool.core.classes.CLITool):
    ...
In some rare cases, you may run into the strange situation that some websites won't load after you connect to the VPN, or they time out while requesting the page. Before you start connecting and disconnecting a hundred times, try the following quick-fix solutions.

1) MTU correction of your network adapter (high success ratio) (MAC + WIN)

The MTU is the packet-size value used when making the connection between the remote client (you) and the server you request (website). If a router or ISP routing equipment on the way changes or alters the MTU size, you run into the error that the connection won't be created successfully.

How to get started:

1) Check the MTU value by opening a command prompt (cmd) with administrator privileges.
2) Enter the following command and hit return:

netsh interface ipv4 show subinterfaces

Your output may look a bit different depending on the adapters shown. It will display the LAN and WiFi network adapters, the sent and received bytes, and the MTU size. Note: in many cases the MTU value is set to 1500.

You need to adapt (decrease) the MTU size step by step to find the right value where the connection passes through. (A quick way to probe for a workable value with ping is shown at the end of this section.) In order to do so, please enter the following command, changing the placeholder values as explained beneath the snippet:

netsh interface ipv4 set subinterface "Local Area Connection" mtu=xxxx store=persistent

Local Area Connection = the network connection on your computer, as shown above
xxxx = the value you need to set (by trying), lower than 1500

Let's say your MTU value is 1500 but some websites won't load: change it to a lower value, but don't forget to restart, as changes won't take effect without a restart. Try 1472, 1450, 1380, and further down until the website you had problems with starts to load.

On MAC, the procedure is a bit different, as you locate the network adapter in a list in the network settings.

- Click the Apple menu icon (upper left bar).
- Click System Preferences.
- Click Network.
- From the list on the left-hand side, select the network interface which is “Connected” by clicking it once (usually marked with a green dot).
- Click Advanced.
- Click Hardware (on the right-hand side of the window that opens).
- Select Manually from the Configure menu.
- Select Custom from the MTU menu.
- Type xxxx into the MTU box (try lowering the value from the default, as outlined above).
- Click OK.
- Click Apply.

Changes should take effect immediately, but you may reset the connection and try again.

2) MTU correction of your router

If changing the MTU size on your network adapter didn't help, or the no-traffic issue is happening on all your devices (tablet, phone, etc.), you can try adjusting the MTU size directly on your router. Connect to the router interface by typing the IP of the router in the browser. Log in with the router username and password. Go to Advanced -> Network -> WAN and locate the MTU Size box. Lower that value step by step, 5-10 units at a time, and save, until the websites load again. Try 1480, 1472, 1450, 1380 or other values close to these ones. Depending on the router, the MTU Size box can be located in a different tab. For example, if you are using DD-WRT, chances are that the MTU Size box is under the Setup tab -> Basic Setup, section WAN Setup. You will find it on Auto - 1500; set it to Manual and enter a different value.
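Tip (standard Windows ping usage, not part of the original guide): instead of guessing values blindly, you can probe the largest unfragmented payload with ping's don't-fragment flag, then add 28 bytes of IP/ICMP header overhead to get the MTU to configure:

ping -f -l 1472 www.example.com

If the reply says "Packet needs to be fragmented but DF set", lower the -l value and retry; once the ping succeeds, the MTU to set is the payload size plus 28 (e.g. 1472 + 28 = 1500).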
3) MTU correction of the config file for OpenVPN

If the previous solutions didn't help and you are using OpenVPN, you can try changing the MTU size in the OpenVPN configuration file. Leave the MTU value as is (the default 1500) in the Windows network connection and on the router. Edit the config file of the servers you want to connect to, for example "ES - Madrid @tigervpn.com.ovpn", by changing the following line:

mssfix ---> mssfix abcd (ex: mssfix 1380)

where abcd stands for the value that should work. Please lower the value gradually from 1500 until you find one that works, and test it. In some cases it might work with 1470, in others 1420, or 1380, and so on.

4) Change the DNS

Once connected to the VPN, tigerVPN will handle your DNS queries. We only allow DNS queries if you are connected to the VPN. You must not set the DNS servers from tigerVPN manually into your current configuration, as doing so will cause no website to load if you disconnect from the VPN. However, in some cases the ISP forces you to use their DNS servers, or hard-codes them in the router that comes with the ISP. Once you connect to tigerVPN, your IP changes and won't be recognized by your ISP anymore, so their DNS servers won't respond to your queries anymore. While we recommend that you always use the tigerVPN DNS servers (they are pushed automatically once connected), you may also try alternative ones. Google offers a fast DNS service that comes with easy-to-remember IPs: 8.8.8.8 and 8.8.4.4.

5) Clear Browsing Data & Cache

Last but not least, you may try clearing browsing data and cache. While this may sound like a generic way of troubleshooting, sometimes plugins and data saved from websites may prevent loading because there is still an active session on the server. On popular browsers like Chrome and Firefox you can press CTRL+Shift+DEL to clear the browsing data. On other browsers, it's usually easy to spot and well documented.
Date: 16 Jun 2013 Time: 10 am TO 12 pm Venue: SpringPeople Corporate Learning Center Bangalore – 560 102 Sponsored By: SpringPeople Software Pvt. Ltd.

About the event: We will explore some interesting Spring features not known to everyone. And as a bonus, take a peek at the emerging in-memory distributed data management platform - GemFire.

Spring internals - explore the implementation of some of the features from the ground up. In this live coding session we will look at how to implement some of the features provided by Spring from the ground up:
• We will take a look at how to refactor traditional JDBC code to arrive at an implementation similar to the JDBC template.
• We will take a look at how to refactor tangled code with all the cross-cutting concerns (transactions, logging, etc.) mixed up with business logic, using the decorator pattern first and then generalizing it for many classes using dynamic proxies. (A minimal sketch of the dynamic-proxy idea appears at the end of this post.)

Kamal is a Java/JEE evangelist, an Enterprise Integration Specialist and an industry veteran with over 15 years of design and development experience. He has been developing and architecting enterprise solutions and is a seasoned SpringSource certified trainer with significant experience of 100+ trainings delivered on Spring and Spring-related technologies.
- Enterprise Integration with Spring Integration
- Building RESTful applications using Spring MVC
- Building modular applications

Spring Security 3.1 – How To

Spring Security is a framework that focuses on providing both authentication and authorization to Java applications. Like all Spring projects, the real power of Spring Security is found in how easily it can be extended to meet custom requirements. In this presentation we will incrementally apply Spring Security to an existing application to demonstrate how it can meet our authentication and authorization needs. It will also answer many of the common "How To" questions that are found on the forums. This will ensure that you can not only secure your application quickly, but also understand Spring Security well enough to extend it to meet your custom requirements.

Clarence is a technology enthusiast at heart and educationist by passion who works at SpringSource, a division of VMware. Clarence has trained thousands of students on many topics from the Java ecosystem. He has taught the official Spring courses in many cities in India. Apart from the Spring framework, he also has in-depth experience with topics such as Spring Security and Spring Roo. He has presented at some prestigious conferences including JavaOne and Oracle Develop. He is always remembered among his students for his seminal session titled “The Joy of Java”.

Data Aware Distributed Querying in vFabric GemFire

GemFire is an in-memory distributed data management platform that pools memory (and CPU, network and optionally local disk) across multiple processes to manage application objects and behavior. Using dynamic replication and data partitioning techniques, GemFire offers continuous availability, high performance and linear scalability for data-intensive applications without compromising on data consistency, even under failure conditions. In addition to being a distributed data container, it is an active data management system that uses an optimized low-latency distribution layer for reliable asynchronous event notifications and guaranteed message delivery.

Bijoy is a Technical Content Developer at SpringSource, a division of Pivotal. He has around 8 years of experience working with SOA, middleware, security and J2EE technologies. He is the lead courseware developer for the vFabric suite of products and Spring technologies, and has worked extensively on vFabric GemFire. Before joining VMware, he worked with Oracle Server Technologies, mostly on the implementation of Oracle's middleware platform comprising WebLogic, Oracle SOA Suite 11g, Oracle Web Service Manager, and Oracle Service Bus. I blog at http://ranganaths.wordpress.com
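To make the dynamic-proxy idea from the first session concrete, here is a minimal, self-contained Java sketch (my own illustration, not the session's actual code; the interface and class names are invented). It wraps any interface with a logging cross-cutting concern via java.lang.reflect.Proxy:

import java.lang.reflect.Proxy;

interface OrderService {
    void placeOrder(String item);
}

class OrderServiceImpl implements OrderService {
    public void placeOrder(String item) {
        System.out.println("Placing order: " + item);
    }
}

public class LoggingProxyDemo {
    // Wrap any implementation of `iface` so every call is logged
    @SuppressWarnings("unchecked")
    static <T> T withLogging(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
            iface.getClassLoader(),
            new Class<?>[] { iface },
            (proxy, method, args) -> {
                System.out.println("Entering " + method.getName());
                try {
                    return method.invoke(target, args); // delegate to the real object
                } finally {
                    System.out.println("Leaving " + method.getName());
                }
            });
    }

    public static void main(String[] args) {
        OrderService service = withLogging(new OrderServiceImpl(), OrderService.class);
        service.placeOrder("book"); // logged entry/exit around the business call
    }
}

The same handler could demarcate transactions or apply security checks around the delegate call; JDK dynamic proxies of this kind are one of the mechanisms Spring itself uses to apply advice to interface-based beans.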
What is the smallest, legal zip/jar file?

I hate generating an exception for things that I can simply test with an if statement. I know that a zero-length zip/jar will trigger an exception if you try to access it using the java.util.zip/java.util.jar APIs. So, it seems like there should be a smallest file that these utility APIs are capable of working with.

According to the ZIP file format specs, a zip file should at least have the central directory structure, which is 46 bytes long + 3 variable fields (check the spec by yourself). Maybe we should assume at least 1 entry, which implies the file header for that entry. (PlugIn/WebStart will reject any zip/jar that doesn't start with an entry header magic number, for protection against GIFARs.)

No, the smallest legal/valid zip file is 22 bytes: 80 75 05 06 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00. I don't know about jar files, though.

@Cheeso, isn't zip supposed to start with PK ... whereas 80 75 is not that... :S

@DimitryK - Those values are decimal. P=80 K=75.

@billpg I think that's supposed to be in hexadecimal or vice versa?

@Cheeso Hmm, the smallest readable zip file for me seems to be 50 4B 05 06 00 00 00 00 00 00 00 00 00 00 00 00 00 00 (18 bytes). A zip anyone cares about contains an EOCD though.

You really should put this sort of code into a try/catch, as there are many things that can go wrong when reading/writing files. If you really must know the answer to this, try to add a 1-byte file to a zip and see if that fails. It's easy to write code that goes through a range of sizes from 1 -> 65536 bytes, adds each to a zip, and sees which ones fail.

The code is in a try/catch block. I don't like to write code that uses exceptions to catch a condition that is testable, though.

Your code must be really crap then, if you think checking the size of a zip is enough and nothing else can be wrong. Let the zip classes do their thing and let them complain via an exception.

Jar files need to have at least one entry. If you want to make an empty one, make a manifest-only jar. See JAR Manifest for more info on jar manifests.

So, it seems like the smallest legal zip file will be smaller than the smallest legal jar?

The smallest legal zip contains zero entries and one "empty" central directory. The bytes are: 80 75 05 06 followed by 18 bytes of zero (0). So, 22 bytes long. VBScript to create it:

Sub NewZip(pathToZipFile)
    WScript.Echo "Newing up a zip file (" & pathToZipFile & ") "
    Dim fso
    Set fso = CreateObject("Scripting.FileSystemObject")
    Dim file
    Set file = fso.CreateTextFile(pathToZipFile)
    file.Write Chr(80) & Chr(75) & Chr(5) & Chr(6) & String(18, 0)
    file.Close
    Set fso = Nothing
    Set file = Nothing
    WScript.Sleep 500
End Sub

NewZip "Empty.zip"

That may work for the zip program, but it seems to be unacceptable to the java.util.zip.* APIs. I wrote a quick test, and the smallest zip that I could create and then read back with the java.util.zip APIs was 118 bytes. There may be a way to create a smaller zip file that is spec compliant and readable with the zip utility...

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

final static byte[] EmptyZip = {80, 75, 05, 06, 00, 00, 00, 00, 00, 00, 00,
                                00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00};

public static void createEmptyZip(String path) {
    try {
        FileOutputStream fos = new FileOutputStream(new File(path));
        fos.write(EmptyZip, 0, 22);
        fos.flush();
        fos.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
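Following up on the "test instead of catch" wish in the question, here is a small sketch of my own (not from the thread) that pre-checks a file against the 22-byte minimum and the "PK" signature established above before handing it to java.util.zip. A passing check is necessary but not sufficient, so the try/catch still belongs around the actual archive access:

import java.io.IOException;
import java.io.RandomAccessFile;

public class ZipPrecheck {
    // Cheap sanity check: a readable zip is at least 22 bytes (the empty
    // end-of-central-directory record) and starts with 'P' (0x50), 'K' (0x4B).
    static boolean looksLikeZip(String path) throws IOException {
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            if (f.length() < 22) {
                return false;
            }
            return f.read() == 0x50 && f.read() == 0x4B;
        }
    }
}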
react assign specific key of object as default prop

I've a react component like:

ReactDOM.render(
  <SomeReactComponent someObjectParam={{ key1: 'value1', key2: 'value2' }} />,
  document.getElementById('someDivID')
);

I'm sure the someObjectParam will have key1, and I want to make key2 optional. So, in my react component I tried something like:

var SomeReactComponent = React.createClass({
  propTypes: {
    someObjectParam: React.PropTypes.object.isRequired,
  },
  getDefaultProps: function() {
    return {
      // even tried someObjectParam['key2']: ''
      someObjectParam.key2: ''
    };
  },
  render: function () {.....}
});

But I get a syntax error in getDefaultProps. Is there any way to define it properly?

P.S: I know a workaround to do something like this.props.someObjectParam.key2 || '' in the render function, or to set key1 and key2 as different props, but I'm after a more declarative way of doing it, and I can't define my whole object as the default value because of some other logic I'm doing. Any help is greatly appreciated.

Since the property is an object you have to return an object, but you can certainly return an object with just a specific key populated:

getDefaultProps: function() {
  return {
    someObjectParam: {
      key1: "default"
    }
  }
}

"I'm sure the someObjectParam will have key1 and I want to make key2 optional" - if this is what you want to do, then you really want to make key1 and key2 separate properties, not a single property. There's no way to partially fill a default property. You could apply some default logic in your constructor so you don't have to worry about it in your render function:

constructor(props) {
  props.someObjectParam.key2 = props.someObjectParam.key2 || "default";
  super(props);
}

If the property can be updated you'll need this same logic in componentWillUpdate as well. At this point I would say it isn't really worth it; just deal with it in your render function.

value1 may be anything; it's not a fixed string... I know I can set them as separate properties, but due to some restrictions on my side I can't do it. And as I said in my question, I can't define my whole object as the default value because of some other logic I'm doing.

Sorry, I guess I read too quickly. Then the answer is simply "No, you can't do that." You could apply some default populating in your constructor instead.

This question is quite old, but I found it today researching this issue, so here goes my answer. A more "declarative" pattern is to merge someObjectParam into the defaults, like:

const defaults = {
  key2: "key 2 default value"
}

const someObject = Object.assign({}, defaults, this.props.someObjectParam)

// or a nicer, ES6 syntax:
const someObject = { ...defaults, ...this.props.someObjectParam }

then use someObject instead of this.props.someObjectParam from there on. React's defaultProps does not merge when the prop is defined, so you can't set "deep defaults" there.
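Pulling the merge pattern from the last answer into a complete component, here is a minimal sketch. It uses modern function-component syntax, which postdates the question, so treat it as an illustration rather than the question's createClass API:

const defaults = { key2: '' };

function SomeReactComponent({ someObjectParam }) {
  // Shallow merge: keys supplied by the caller win, missing keys fall back
  const merged = { ...defaults, ...someObjectParam };
  return <div>{merged.key1} / {merged.key2 || '(key2 not set)'}</div>;
}

// <SomeReactComponent someObjectParam={{ key1: 'value1' }} />
// renders "value1 / (key2 not set)"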
#!/usr/bin/env python
# Copyright (C) 2004 CCLRC & NERC( Natural Environment Research Council ).
# This software may be distributed under the terms of the
# Q Public License, version 1.0 or later. http://ndg.nerc.ac.uk/public_docs/QPublic_license.txt
"""
aircraft_utils.py
==================

UNSUPPORTED!

"""

# Imports from python standard library

# Import external packages
import cdms
import Numeric
import MV

cdms.setAutoBounds("off")


def flatten2DTimeData(var, time_var):
    """
    Returns a flattened 2D array variable with a recalculated time axis.
    """
    # Flatten the 2-D (time x samples-per-second) array into a 1-D series
    data = MV.ravel(var)
    missing = var.getMissing()

    time_values = time_var._data
    time_unit = time_var.units

    # The second axis id encodes the sampling rate, e.g. "sps32" -> 32 Hz
    sampling_rate = var.getAxis(1)
    rate = int(sampling_rate.id[3:])

    # Expand each 1 Hz timestamp into `rate` evenly spaced sub-second values
    newtime_values = []
    for t in time_values:
        for i in range(rate):
            tvalue = t + ((1. / rate) * i)
            newtime_values.append(tvalue)

    new_time_ax = cdms.createAxis(newtime_values)
    new_time_ax.units = time_unit
    new_time_ax.designateTime()
    new_time_ax.id = new_time_ax.long_name = new_time_ax.standard_name = "time"

    # Copy attributes across, unwrapping single-element sequences and
    # stripping whitespace from string values
    atts = var.attributes
    newAtts = {}
    for att, value in atts.items():
        if type(value) in (type((1, 1)), type([1, 2]), type(Numeric.array([1.]))) and len(value) == 1:
            value = value[0]
        if type(value) in (type(1), type(1.), type(long(1))):
            newAtts[att] = value
        elif type(value) == type("string"):
            newAtts[att] = value.strip()

    # Now create the variable
    newvar = cdms.createVariable(data, id=var.id, fill_value=missing,
                                 axes=[new_time_ax], attributes=newAtts)
    return newvar
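For orientation, here is a hypothetical usage sketch. The file name and variable ids are invented, and it assumes the variable's second axis has an id of the form "spsNN" (e.g. "sps32" for 32 samples per second), which is what the rate parsing above relies on:

import cdms

f = cdms.open("flight_data.nc")   # hypothetical cdms-readable NetCDF file
var = f("TEMP")                   # 2-D variable: time x samples-per-second
time_var = f("time")              # matching 1 Hz time variable

flat = flatten2DTimeData(var, time_var)
print flat.shape                  # one long 1-D series on the rebuilt time axis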
How real is the threat of MITM when you have your own network

I never understood why I have to worry about MITM when I am at home, connected to my simple WiFi-less modem using an Ethernet cable. I can tell how serious the threat is when you are in an Internet cafe or you have a WiFi network, but WiFi is relatively new compared to the existence of SSL. Where/how would the attacker intercept my connection? Should I be afraid of my ISP?

In the UK and many other countries, yes, you should be afraid of your ISP.

Your direct ISP is only a very small part of the chain; there are many scenarios where your network traffic could potentially be accessible to hostile agents. Don't forget that in many cases hostile agents might also have the resources of a nation state (e.g. in the case of mass surveillance):

You may trust your ISP, but do you trust the ISP of the party you're connecting to and all the networks in between? They might be in another country. TLS may protect the content of your requests against intermediate ISPs and/or government surveillance.

There's the threat of malware on either your client or your router. TLS is end to end, so it may protect you if the system is compromised, provided the integrity of the client implementing TLS is maintained.

What if your traffic doesn't end up going to whom you intended? There could be instances where an IP is assigned to a new customer and your traffic for the old customer goes to the new one. TLS could save you there.

On generally trustworthy networks the probability of your traffic being compromised might be low, but the consequences of your traffic being compromised are very high (session cookies can easily be stolen, which allows an attacker to log into your account, for example). Considering that these days the overhead of implementing TLS isn't huge, the cost/benefit is usually worthwhile.

Any networked device inside your own network (mom's PC, dad's PC, your NAS) can ARP-spoof your default gateway and thus proxy all your internet traffic, reading and modifying it as it wishes, including your bank/office/shopping traffic.

Plus, your ISP (obviously) does not own the internet. Nor does your ISP secure all of the internet's roads for you. They only forward traffic. You will need to take care of travelling securely yourself (as thexacre pointed out). Look at it this way: usually you have not checked which road your traffic is really taking at this very moment.

And to add to the already existing answers: don't trust the router connecting you to your ISP too much. Lots of routers are vulnerable to CSRF attacks, weak passwords, etc., which can lead to mass compromise, as with the millions of DSL routers in Brazil in 2012. With the attacker owning the router, man-in-the-middle attacks are easy.
#!/usr/bin/env python
"""Generates unicodejs.*properties.js from Unicode data"""
import re, urllib2, os

for breaktype in ['Grapheme', 'Word']:
    # a list of property name strings like "Extend", "Format" etc
    properties = []
    # range[property] -> character range list e.g. [0x0040, [0x0060-0x0070], 0x00A3, ...]
    ranges = {}

    # Analyse unicode data file
    url = "http://www.unicode.org/Public/UNIDATA/auxiliary/" + breaktype + "BreakProperty.txt"
    for line in urllib2.urlopen( url ):
        line = line.strip()
        # Ignore comment or blank lines
        if re.search( r"^\s*(#|$)", line ):
            continue
        # Find things like one of the following:
        # XXXX       ; propertyname
        # XXXX..YYYY ; propertyname
        m = re.search( r"^([0-9A-F]{4,5})(?:\.\.([0-9A-F]{4,5}))?\s*;\s*(\w+)\s*#", line )
        if not m:
            raise ValueError( "Bad line: %r" % line )
        start, end, prop = m.groups()
        if breaktype == 'Grapheme' and start == 'D800' and end == 'DFFF':
            continue  # raw surrogates are not treated
        if prop not in ranges:
            properties.append( prop )
        ranges.setdefault( prop, [] ).append( (start, end) )

    # Translate ranges into js fragments
    fragments = []
    for prop in properties:
        rangeStrings = []
        for start, end in ranges[prop]:
            if not end:
                rangeStrings.append( "0x" + start )
            else:
                rangeStrings.append( "[0x" + start + ", 0x" + end + "]" )
        fragments.append( "'" + prop.replace("_", "") + "': [" + ", ".join( rangeStrings ) + "]" )

    # Write js file
    js = "// This file is GENERATED by tools/unicodejs-properties.py\n"
    js += "// DO NOT EDIT\n"
    js += "unicodeJS." + breaktype.lower() + "breakproperties = {\n\t"
    js += ",\n\t".join( fragments )
    js += "\n};\n"
    jsFilename = os.path.dirname( os.path.realpath( __file__ ) ) + "/../unicodejs." + breaktype.lower() + "breakproperties.js"
    with open( jsFilename, "w" ) as jsFile:
        jsFile.write( js )
    print "wrote " + jsFilename
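For illustration, the generated file then looks roughly like this (only a few properties shown, and the exact ranges depend on the Unicode data file at the time the script runs):

// This file is GENERATED by tools/unicodejs-properties.py
// DO NOT EDIT
unicodeJS.graphemebreakproperties = {
	'CR': [0x000D],
	'LF': [0x000A],
	'Extend': [[0x0300, 0x036F], [0x0483, 0x0489]]
};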
When a company migrates to Microsoft Dynamics AX from a legacy ERP solution, one of the important data migration tasks is creating the opening balances in the new Dynamics AX system based on the closing balances of the previous closed period (often the fiscal year) from the legacy system. In order to ensure accuracy in your Dynamics AX opening balances, it is important to take a systematic approach to planning, designing, and executing the migration of data for trial balances and sub-ledgers, as well as the validation and reconciliation of these elements along with the general ledger, sub-ledger, and financial dimensions. In this article I will lay out the process and the elements involved in creating new opening balances in MS Dynamics AX coming from a legacy system. The opening balance elements are:

- Trial balance

The trial balance (general ledger accounts) consists of the balance sheet accounts, which carry their balances from one year to the next, and the profit and loss accounts, which represent the income statement results and do not carry over to another year.

- Sub ledger

The sub ledgers are fixed assets, banks, vendors, customers, and items. The sub ledgers are linked to the chart of accounts through the posting profile setup.

- Validation and reconciliation

The balance of the sub ledgers (fixed assets, banks, vendors, customers, and items) must match the balance of the general ledger accounts (trial balance), with respect to the financial dimensions (business unit, department, and purpose) balance in case they are used through sub-modules. The controllership and financial consultants should finalize and validate the design and deployment of the financial dimensions or the dimension hierarchy set.

Data integrity between the general ledger, sub ledger, and financial dimensions is one of the main objectives of Enterprise Resource Planning (ERP), and it must be considered from day one of the opening balances, since the opening balance transactions may affect the daily transactions after going live.

Opening Balances Best Practices

After various attempts to upload opening balances to Microsoft Dynamics AX, I have followed an approach where I separate the upload of ledger accounts (trial balance) and sub ledgers (vendors, customers, bank, inventory, and fixed assets) by using an error account on both the debit and credit sides of the sub ledger entries (Dr. Error account, Cr. Error account), purely to balance the sub ledger. This approach took much time and effort in order to ensure that the GL accounts and sub ledgers are reconciled, as well as to ensure the balance of the error account is zero. I have worked with my team and the controllership to develop a series of best practices that successfully guided us through the process:

Planning and design

- The master data is prepared and uploaded into Microsoft Dynamics AX (chart of accounts, banks, fixed assets, financial dimensions, vendors, customers, and inventory items)
- Maintain high levels of coordination between the controllership and the application financial consultants in the design phase, including finalization of the mapping between the old chart of accounts (legacy system) and the new chart of accounts
- Application financial consultants should ensure the setup of the needed fields in the data collection template they will use to upload the opening balance
- The accountant who will fill in the opening balance data collection sheet must understand the fields and how to fill them in
- Create a separate journal name under the general ledger journal, with its own voucher number sequence, for easier tracking
- Create separate journal names under inventory management (movement journal), with their own voucher number sequence, for easier tracking
- If adjustments are needed for the opening balance, use the same journal name and voucher
- The opening balances of the general ledger and sub ledger are uploaded together; avoid separating their uploads as much as you can
- The sub ledger (vendors and customers) posting profiles should be assigned to the opening balance

The opening balance will be executed in three waves (fixed assets acquisition and depreciation, items, and trial balance with sub ledger). Here are the methods we utilized to ensure correct execution.

Wave I Fixed assets

- Create fiscal periods in order to acquire the assets on the actual acquisition date
- Fixed assets acquisition - the fixed assets acquisition will be executed through a fixed assets acquisition proposal. The posting profile setup will generate the following entry (Dr. Fixed assets accounts, Cr. Error account)
- If there are fixed assets that were acquired in a foreign currency, modify the acquisition entry with the currency and the exchange rate
- Fixed assets depreciation - the fixed assets depreciation will be executed through a fixed assets depreciation proposal. The posting profile setup will generate the following entry (Dr. Error account, Cr. Accumulated depreciation)

Wave II Inventory items

The inventory opening balance will be uploaded from the movement journal (inventory sub ledger). The posting profile setup will generate the following entry (Dr. Inventory, Cr. Error account)

Wave III Trial balance and sub ledger

- Identify the GL account balances which are not affected by sub ledger posting profiles
- GL accounts that are affected by sub ledgers will be broken down by their relevant sub ledger (banks, vendors, and customers), and the accounts will be affected directly by the sub ledger posting profile
- Make sure that the assigned posting profile is the proper posting profile for each customer/vendor, for two reasons: first, to make sure the customer opening balance hits the right account; second, to ensure the correct entries occur during the settlement process (during the year's operations)
- Replace the fixed assets accounts with an error account in order to close the amount in the error account which resulted from the acquisition transactions
- Replace the depreciation account with an error account in order to close the amount in the error account which resulted from the depreciation transactions
- If there are balances in a foreign currency, upload the opening balance entry with the currency and the exchange rate
- Perform a closing voucher to transfer all profit and loss balances to the Retained Earnings account
- Print the trial balance report with closing balance criteria

User Group: Dynamics Arabia
// project imports import services from 'utils/mockAdapter'; // types import { KeyedObject } from 'types'; // user simple cards const users = [ { id: '#1Card_Kelli', avatar: 'user-1.png', name: 'Kelli', status: 'Active' }, { id: '#2Card_Laurence', avatar: 'user-2.png', name: 'Laurence', status: 'Rejected' }, { id: '#3Card_Melyssa', avatar: 'user-3.png', name: 'Melyssa', status: 'Active' }, { id: '#4Card_Montana', avatar: 'user-4.png', name: 'Montana', status: 'Active' }, { id: '#5Card_Johnathan', avatar: 'user-5.png', name: 'Johnathan', status: 'Active' }, { id: '#6Card_Joanne', avatar: 'user-6.png', name: 'Joanne', status: 'Active' }, { id: '#7Card_Lisandro', avatar: 'user-7.png', name: 'Lisandro', status: 'Rejected' }, { id: '#8Card_Geovany', avatar: 'user-1.png', name: 'Geovany', status: 'Active' }, { id: '#9Card_Lucius', avatar: 'user-2.png', name: 'Lucius', status: 'Active' }, { id: '#10Card_River', avatar: 'user-3.png', name: 'River', status: 'Active' }, { id: '#11Card_Haylee', avatar: 'user-4.png', name: 'Haylee', status: 'Active' }, { id: '#12Card_John', avatar: 'user-5.png', name: 'John', status: 'Active' }, { id: '#13Card_Jeanne', avatar: 'user-6.png', name: 'Jeanne', status: 'Active' }, { id: '#14Card_Maryam', avatar: 'user-7.png', name: 'Maryam', status: 'Rejected' }, { id: '#15Card_Alexandre', avatar: 'user-1.png', name: 'Alexandre', status: 'Active' }, { id: '#16Card_Jordi', avatar: 'user-2.png', name: 'Jordi', status: 'Active' }, { id: '#17Card_Sharon', avatar: 'user-3.png', name: 'Sharon', status: 'Active' }, { id: '#18Card_Trycia', avatar: 'user-4.png', name: 'Trycia', status: 'Active' }, { id: '#19Card_Mazie', avatar: 'user-5.png', name: 'Mazie', status: 'Active' }, { id: '#20Card_Virgie', avatar: 'user-6.png', name: 'Virgie', status: 'Active' } ]; // ==============================|| MOCK SERVICES ||============================== // services.onGet('/api/simple-card/list').reply(200, { users }); services.onPost('/api/simple-card/filter').reply((config) => { try { const { key } = JSON.parse(config.data); const results = users.filter((row: KeyedObject) => { let matches = true; const properties = ['name', 'status']; let containsQuery = false; properties.forEach((property) => { if (row[property].toString().toLowerCase().includes(key.toString().toLowerCase())) { containsQuery = true; } }); if (!containsQuery) { matches = false; } return matches; }); return [200, { results }]; } catch (err) { console.error(err); return [500, { message: 'Internal server error' }]; } });
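For context, here is a hypothetical consumer sketch. It assumes that services in utils/mockAdapter wraps an axios-mock-adapter instance bound to the default axios instance, which is not shown in this file:

import axios from 'axios';

async function demo() {
  // Answered by the onGet handler registered above
  const list = await axios.get('/api/simple-card/list');
  console.log(list.data.users.length); // 20

  // Answered by the onPost handler; matches name or status, case-insensitively
  const filtered = await axios.post('/api/simple-card/filter', { key: 'rejected' });
  console.log(filtered.data.results.map((u: { name: string }) => u.name));
  // -> ['Laurence', 'Lisandro', 'Maryam']
}

demo();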
Identical editorIdentifier for all WYSIWYG editors existing on Page

Affected Version

Silverstripe CMS4, CMS5

Description

editorIdentifier was introduced as part of the solution for this issue: https://github.com/silverstripe/silverstripe-admin/issues/635. It looks like it was created specially for the sslink_*** plugins, but it doesn't work as expected. We have one global editorIdentifier variable with the value 'cms', no matter how many other values we define. If we have more than one TinyMCEConfig configuration, the friendly_name in the configuration is always the same for all elements. So it seems we assign only the last value to all our configurations. This could be a potential reason for the browser console error/warning ReferenceError: editorIdentifier is not defined. We should try to remove or replace the logic built around the global editorIdentifier.

Related issue

silverstripe/silverstripe-cms#2816

Steps to Reproduce

I update the project _config.php file with the following content:

```php
use SilverStripe\Admin\CMSMenu;
use SilverStripe\Admin\CMSProfileController;
use SilverStripe\Forms\HTMLEditor\TinyMCEConfig;
use SilverStripe\Core\Manifest\ModuleLoader;

// Name of configuration
TinyMCEConfig::set_config('simple');

$module = ModuleLoader::inst()->getManifest()->getModule('silverstripe/admin');
$simpleEditor = TinyMCEConfig::get('simple');

// enable sslinkexternal plugin
$simpleEditor->enablePlugins([
    'sslink' => $module->getResource('client/dist/js/TinyMCE_sslink.js'),
    'sslinkexternal' => $module->getResource('client/dist/js/TinyMCE_sslink-external.js'),
]);

// Add special name that will describe your module
$simpleEditor->setOptions([
    'friendly_name' => 'Simple CMS',
]);

// Add buttons to the content editor
$simpleEditor->setButtonsForLine(1, 'sslink');
// Remove default buttons from the content editor
$simpleEditor->setButtonsForLine(2, '');

CMSMenu::remove_menu_class(CMSProfileController::class);
```

I update the Page class with the following content, which will have a WYSIWYG with the default 'cms' configuration:

```php
use SilverStripe\CMS\Model\SiteTree;
use SilverStripe\Forms\GridField\GridField;
use SilverStripe\Forms\GridField\GridFieldConfig_RecordEditor;
use SilverStripe\Forms\HTMLEditor\HTMLEditorConfig;

class Page extends SiteTree
{
    private static array $has_many = [
        'ChildElements' => \ChildElement::class,
    ];

    public function getCMSFields()
    {
        $fields = parent::getCMSFields();

        HTMLEditorConfig::set_active(HTMLEditorConfig::get('cms'));

        $fields->addFieldsToTab(
            'Root.ChildElements',
            [
                GridField::create(
                    'ChildElements',
                    'Child Elements',
                    $this->ChildElements()->sort('Sort'),
                    GridFieldConfig_RecordEditor::create()
                ),
            ],
        );

        return $fields;
    }
}
```

I create a ChildElement class with the following content, which will have a WYSIWYG with the custom 'simple' configuration:

```php
use SilverStripe\Forms\HTMLEditor\HTMLEditorConfig;
use SilverStripe\Forms\HTMLEditor\HTMLEditorField;
use SilverStripe\Forms\FieldList;
use SilverStripe\ORM\DataObject;

class ChildElement extends DataObject
{
    private static string $table_name = 'ChildElement';

    private static array $db = [
        'Sort' => 'Int',
        'MyContent' => 'HTMLText',
    ];

    private static array $has_one = [
        'Page' => Page::class,
    ];

    public function getCMSFields(): FieldList
    {
        HTMLEditorConfig::set_active(HTMLEditorConfig::get('simple'));

        return FieldList::create(
            HTMLEditorField::create(
                'MyContent',
                'MyContent',
            ),
        );
    }
}
```

I create a Page in the Silverstripe CMS.
And I add a new Child Element in the Child elements tab.
And I should see a lone 'link' button in the WYSIWYG view.
When I click on the 'link' icon in the WYSIWYG,
And I should see only one menu item, 'Link to external URL' (Failed: I see the full list of menu items).
Then I reload the page,
And I should see only one menu item, 'Link to external URL' (Passed).
Then I go to the 'Main content' tab in the Page section,
And I should see additional elements in the WYSIWYG.
When I click on the 'link' icon in the WYSIWYG,
And I should see the full list of menu items.

The defect you've explained sounds similar to https://github.com/silverstripe/silverstripe-admin/issues/1457 - can you please try the workaround of explicitly setting the skin and see if that resolves it? If so, they're probably just the same issue and aren't necessarily related to editorIdentifier.

I'll try to reproduce the issue. 👍

This is being resolved as part of https://github.com/silverstripe/silverstripe-elemental/issues/1120
|"Assuming your computer BIOS DOES support large hard drives....."| Generally, if the system or the mboard was made in 2001 or later, it does. Look in your bios Setup to see if it recognizes the full size of the drive. The drive detection on the controller the drive is connected to should be set to Auto by the Auto or LBA method. Drive manufacturers rate the size using a decimal size (e.g. 1,000,000,000 bytes per gb)- your computer's bios and Windows states the size as a binary size, which is always smaller than the manufacturer's decimal size. (1,024 bytes/kb, 1,024kb/mb, 1,024mb/gb; 1,073,741,824 bytes /gb) If it's listed in mb, there are 1,024mb per gb. A 250gb drive is seen by the bios as a about a 232.8gb (or 238,148mb) drive. Windows uses up some more space when the drive has been partitioned and formatted, so it's smaller than that in Windows. The XP CD must have at least SP1 updates included in order for XP to be able to recognize the full size of a hard drive or a hard drive partition larger than 128gb (in Windows and in your bios; = 137gb manufacturer's size). Even if you install SP1 or SP2 or SP3 updates after Setup has run, you can't make the size of the hard drive partition(s) larger in XP itself without deleting the partition(s) and the data on them first. You can, however, use third party programs, generically called partition manipulation programs, such as Partition Magic, or freeware programs, to make the partition(s) larger without losing the data on them, but you are advised to backup your data before doing so. The original version (Gold) XP CD, and all the original XP CDs with SP1 updates included that I've seen, have no printing on them that indicates SPx updates are included. If SP2 or SP3 updates are included that's printed on the CD. XP MCE 2005 has SP2 updates built in; earlier versions do not. If your XP CD does not have at least SP1 updates included, and you have hard drives larger than 128gb (in Windows; = 137gb manufacturer's size), you can burn a slipstreamed CD that has the contents of both the original Windows CD and the SP1 or SP2 updates Slipstreamed Windows XP CD Using SP2 Directions for using Roxio or Nero. (NOTE that if you use Nero, you will need additional info to make sure the resulting burned CD is bootable, due to possible bugs in the Nero version - let me know if you want to use Nero). You can also make a slipstreamed CD that has SP3 updates built in, which will save you some time after Setup has run. If your original Windows CD is one without any SP updates, it is possible to make a slipstreamed CD including just the SP3 updates, but it may be better to upgrade it to it having SP2 updates first, then upgrade it to having SP3 updates, then burn the CD. (Otherwise - once the original XP version with no SP updates included has been installed on the hard drive, you must run the SP2 updates installation, then the SP3 updates installation, but apparently that doesn't apply to making a slipstreamed CD from the contents of the original CD).
I am currently working on the development of spatial models of rabies transmission dynamics at landscape scales. These models are fit using historical time-series data on disease prevalence, and are then used to evaluate the expected efficacy of control strategies. In the case of the Serengeti District in northern Tanzania, the population of domestic dogs, which are used for livestock protection, acts as the reservoir for rabies, which then spills over into other mammals (livestock, wildlife) and results in numerous cases of rabies in humans (Cleaveland et al. 2002, Cleaveland et al. 2006). The models of rabies transmission I am working on (in collaboration with Sarah Cleaveland, Katie Hampson, and Tiziana Lembo) are used to evaluate two things: the likely source of reinfection of the domestic dog population following local extinction of rabies, and the expected efficacy of different vaccine deployment regimes.

Fitting disease models is notoriously difficult because disease data are often noisy and incomplete, and may relate only indirectly to the real process of interest (e.g. sentinels). We have developed a new Bayesian approach to model fitting that shows a great deal of promise. The Bayesian framework allows us to explicitly model error in the data, it accommodates missing data, and, through the use of latent variables, it provides a method of explicitly modeling indirect measures of the true variable of interest.

I am also interested in investigating how fine-scale movement data (e.g. GPS telemetry data) can improve our understanding of inter- and intra-specific interactions between individuals, and how mechanistic movement models inform our understanding of habitat use in ways that traditional modeling approaches (e.g. resource selection functions) cannot. In particular, we are working with Juan M. Morales and Daniel Fortin to look at how Bayesian movement models can be used to evaluate proactive management actions designed to remediate incursions of bison into farmland adjacent to the protected area in which they primarily reside.

I have worked closely with Dan Haydon for two years now and have no hesitation in giving him the highest possible recommendation to any prospective postgraduate students or postdocs. All of Dan's students learn immensely from him, both with respect to modeling and his approaches to problem solving in general. He is very supportive and has pulled together a vigorous and fertile research group. It is a fantastic group to work with. (But if you come to work with Dan, be prepared for some challenging days out on the Scottish hills! Some of the fruits of such trips are shown in the pictures on this page.)

Cleaveland, S., Fevre, E. M., Kaare, M. & Coleman, P. G. (2002) Estimating human rabies mortality in the United Republic of Tanzania from dog bite injuries. Bulletin of the World Health Organization, 80, 304-310.
Cleaveland, S., Kaare, M., Knobel, D. & Laurenson, M. K. (2006) Canine vaccination - providing broader benefits for disease control. Veterinary Microbiology, 117, 43-50.
conflicting results for SNI test with the go stub

Can't quite understand how to interpret this conflicting result:

platform: Linux (Ubuntu 14.04)
runner: trytls 0.3.4 (CPython 2.7.6, OpenSSL 1.0.1f)
stub: go-nethttp/run

... FAIL support for TLS server name indication (SNI) [accept badssl.com:443]
    output: Get https://badssl.com:443: x509: certificate signed by unknown authority (possibly because of "x509: cannot verify signature: algorithm unimplemented" while trying to verify candidate authority certificate "COMODO RSA Certification Authority")
... PASS support for TLS server name indication (SNI) [accept tlsfun.de:443]

If I read this right, this stub on this platform and golang version might actually have SNI support, but we somehow fail the check with badssl for some other reason? It might be that golang uses some cert bundle that dislikes badssl.

tlsfun.de does not actually test missing SNI:

badssl.com -> .. -> certificate -> FIN
tlsfun.de -> ... -> certificate -> certificate status -> ... -> encrypted alert .. -> FIN

go run run.go badssl.com 443
Get https://badssl.com:443: x509: certificate signed by unknown authority (possibly because of "x509: cannot verify signature: algorithm unimplemented" while trying to verify candidate authority certificate "COMODO RSA Certification Authority")
REJECT
go run run.go tlsfun.de 443
ACCEPT

Just a guess: the client sends status_request but does not receive a "CertificateStatus" and fails to verify the CA ('algorithm unimplemented', ..) or something..

See: http://bridge.grumpy-troll.org/2014/05/golang-tls-comodo/

This is an issue with the SHA384 signature algorithm in COMODO's CA certificates. Apparently Go 1.2.1 in Ubuntu 14.04 is too old to have such algorithms implemented (by default?).

# ./run badssl.com 443
Get https://badssl.com:443: x509: certificate signed by unknown authority (possibly because of "x509: cannot verify signature: algorithm unimplemented" while trying to verify candidate authority certificate "COMODO RSA Certification Authority")
REJECT
# ./run www.comodo.com 443
Get https://www.comodo.com:443: x509: certificate signed by unknown authority (possibly because of "x509: cannot verify signature: algorithm unimplemented" while trying to verify candidate authority certificate "COMODO ECC Certification Authority")
REJECT

# openssl x509 -noout -text -in /usr/share/ca-certificates/mozilla/COMODO_RSA_Certification_Authority.crt | egrep "Issuer|Subject|Signature Algorithm:"
    Signature Algorithm: sha384WithRSAEncryption
        Issuer: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO RSA Certification Authority
        Subject: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO RSA Certification Authority
        Subject Public Key Info:
            X509v3 Subject Key Identifier:
    Signature Algorithm: sha384WithRSAEncryption

# openssl x509 -noout -text -in /usr/share/ca-certificates/mozilla/COMODO_ECC_Certification_Authority.crt | egrep "Issuer|Subject|Signature Algorithm:"
    Signature Algorithm: ecdsa-with-SHA384
        Issuer: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO ECC Certification Authority
        Subject: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO ECC Certification Authority
        Subject Public Key Info:
            X509v3 Subject Key Identifier:
    Signature Algorithm: ecdsa-with-SHA384

This issue can be fixed in the Go stub with the following patch:

# diff -u run.go.orig run.go
--- run.go.orig	2016-09-13 08:10:36.956759764 +0000
+++ run.go	2016-09-13 08:09:12.985875122 +0000
@@ -1,6 +1,7 @@
 package main

 import (
+	_ "crypto/sha512"
 	"fmt"
"net/http" "os" # ./run badssl.com 443 ACCEPT # ./run www.comodo.com 443 ACCEPT Ubuntu 14.04 has old Go and seems to already have other problems with stub (offending change made in #268): root@60007121a7c8:~/trytls/stubs/go-nethttp# cat /etc/debian_version jessie/sid root@60007121a7c8:~/trytls/stubs/go-nethttp# go version go version go1.2.1 linux/amd64 root@60007121a7c8:~/trytls/stubs/go-nethttp# go build run.go # command-line-arguments ./run.go:42: unknown http.Client field 'Timeout' in struct literal I'd say Ubuntu 14.04 and Go are no longer compatible. Closing.
Success Story of Satya Nadella

Satya Nadella is the current and third Chief Executive Officer (CEO) of Microsoft; he has held this position since 2014. Before him, Bill Gates and Steve Ballmer occupied this post. The company achieved commendable growth under him, and seven years later, on June 16, 2021, Microsoft appointed him Executive Chairman of the company, replacing John W. Thompson. Before becoming CEO of Microsoft, he worked as Executive Vice President of Microsoft's Cloud and Enterprise group, where he built and ran the company's computing platforms.

Ahead, we are going to take a look at his journey and try to learn more about Satya Nadella. Let us explore the story of the achievements of Satya Nadella!

Satya Nadella's Birth
August 19, 1967, was the day Satya Nadella was born in Hyderabad. His father, Bukkapuram Nadella, worked as an Indian Administrative Service officer, and his mother was a lecturer in Sanskrit.

Satya Nadella's Education
Satya Nadella completed his schooling at Hyderabad Public School, Begumpet. He attended the Manipal Institute of Technology, Karnataka, earning a bachelor's degree in electrical engineering in 1988, and then went abroad for further studies. At the University of Wisconsin-Milwaukee, he earned a Master of Science in Computer Science in 1990.

Professional Life of Satya Nadella
He started his career as a member of the technical staff at Sun Microsystems (an American technology company). He joined Microsoft in 1992 and handled several major projects for the company, including its entry into cloud computing and the development of one of the largest cloud infrastructures.

Later in his career at Microsoft, he was promoted to vice president of Research and Development for the Online Services Division, serving in this post from 2007 to 2011. He was then promoted again, becoming president of the Server and Tools Division, where he served until 2014 and helped bring Windows Server, Microsoft's databases, and developer tools to its Azure cloud. After he took over Microsoft's cloud services in 2011, the company's revenue from cloud services grew from $16.6 billion in 2011 to $20.3 billion in June 2013. More impressively, by September 2018, he had tripled the company's stock, with an annual growth rate of 27%.

Satya Nadella emphasized empathy, collaboration, and a growth mindset at Microsoft and brought about a cultural shift. He changed the work environment at Microsoft so that it promotes and focuses on constant learning and growth.

Satya Nadella as an Author
Satya Nadella also authored a book named Hit Refresh: The Quest to Rediscover Microsoft's Soul and Imagine a Better Future for Everyone, in which he shares his experiences and career at Microsoft. Along with his experiences, in this book he also shares the inside story of his company's (Microsoft's) constant transformation, which he believes will soon start to have an impact on all of our lives. The book is a set of recommendations, meditations, and reflections from Satya Nadella.

Awards Received by Satya Nadella
Satya Nadella has earned numerous awards, including the most prestigious one - the Padma Bhushan. Below are a few of the awards he has won or been honored with.
- In the year 2018, Nadella was recognized as a Time 100 honoree.
- He received two recognitions in 2019: Financial Times Person of the Year, and Fortune magazine's Business Person of the Year.
- In the year 2020, CNBC-TV18's India Business Leader Awards recognized him as the Global Business Icon.

So, this was the success story of Satya Nadella, one that will inspire today's youth to do great things in their lives and achieve a lot. Such stories are proof that hard work and dedication never go in vain; they always pay off. What matters is your consistency, your skills, your willpower, and your patience. And with this note, we hope that this success story of Satya Nadella was an inspirational one, and we also hope that you enjoyed reading it.
How does TextureFilters (MipMap generation) affect performance?

I've been reading that the choice of texture filter (specifically MipMap generation) may have an impact on the performance of the application, but I can't really get my head around how it works. So a few questions:

Are MipMaps generated every frame or only once when the texture is loaded? If every frame, do they still get regenerated if the scene is static (the texture size and position are constant) from one frame to another? For example, if you have a static UI, does that perform worse when using MipMap filtering? If only once, why does it affect performance, and in what way?

I'm wondering since I've discovered that everything looks a lot better when using (in LibGDX):

param.genMipMaps = true;
param.minFilter = TextureFilter.MipMapLinearLinear;
param.magFilter = TextureFilter.Linear;

But why isn't this standard/best practice? What are the drawbacks? If, for my application, it doesn't reduce fps, are there any other drawbacks? More GPU/CPU intensive? Higher battery consumption (for mobile devices)?

I would read this article https://www.gamedevelopment.blog/texture-filter/ which explains how texture filtering works; it explains what each filter does, shows example images produced, and should answer all your questions.

Mipmaps have to be generated whenever the texture data changes. As long as the texture doesn't change, there is no need to recreate them. They influence performance because the read operation for every texel gets slower. Depending on which filter type you use, the GPU has to read multiple texels from multiple mipmap levels to calculate the final color. For example, GL_NEAREST will only read 1 texel and return that. GL_LINEAR will already have to read 4 texels from one mipmap level and perform a bilinear interpolation. If you now enable mipmapping, then information from a second texture level will also influence the outcome. With GL_LINEAR_MIPMAP_LINEAR, the GPU will do a linear lookup (4 texels) in the mipmap level greater than the required size and one linear lookup in the mipmap level smaller than requested. The results from these two lookups will then be interpolated to obtain the final color. So all in all, a GL_LINEAR_MIPMAP_LINEAR lookup might read 8 texels and perform 2 bilinear interpolations plus a linear interpolation between the two levels (trilinear interpolation).

Another consideration is GPU memory consumption. Mipmaps need to be stored somewhere on the GPU and take up approximately 1/3 more space than the texture alone.

For more details about mipmapping one should also read the Wikipedia article, which explains the concept very well. As stated by others in comments, this blog also gives a good overview of texture filtering methods. Note that the explanation here assumes 2-dimensional textures. Also note that graphics cards may very well optimize the process, but the technique described is how it works in theory.

Mip maps can also theoretically improve rendering performance for heavily minified textures, because the fetch operation in the shader doesn't have to fetch from as large an image.

So if I'm understanding it right, there are two performance costs involved:

1. The cost of generating the MipMap, which only happens when initiating the game/creating the texture or changing the actual texture image.
2. The rendering cost for each pixel on the screen, depending on the chosen filter, which determines how many texels are taken into account and how many linear interpolations are performed -> more calculations.

And number 2 happens every frame and is probably the one to consider most?
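A quick way to sanity-check the "approximately 1/3 more space" figure mentioned above (a minimal Python sketch; each mip level has a quarter of the texels of the level above it):

```python
def mip_chain_texels(size: int) -> int:
    """Total texels in a square texture plus its full mipmap chain."""
    total = 0
    while size >= 1:
        total += size * size
        size //= 2  # each level halves the dimensions
    return total

base = 1024
print(mip_chain_texels(base) / (base * base))  # ~1.333, i.e. ~1/3 overhead
```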
Greg Stein <gstein_at_gmail.com> writes:

> On Tue, Jun 21, 2011 at 14:15, Philip Martin <philip.martin_at_wandisco.com> wrote:
>> Greg Stein <gstein_at_gmail.com> writes:
>>> On Tue, Jun 21, 2011 at 11:16, <philip_at_apache.org> wrote:
>>>> Author: philip
>>>> Date: Tue Jun 21 15:16:07 2011
>>>> New Revision: 1138040
>>>> URL: http://svn.apache.org/viewvc?rev=1138040&view=rev
>>>> Support building with APR 2.0. Berkeley DB detection doesn't work.
>>> I'm not entirely sure that we want to allow this.
>>> We provide strong API guarantees to our third-party applications. But
>>> our API is inherently tied to the APR API. If somebody upgrades
>>> Subversion and that brings APR 2.0 along with it, then their
>>> application may break.
>>> Any thoughts?
>> It will take a conscious decision by the user to use
>> as configure will not pick up apr-2 automatically unless apr-2-config
>> masquerades as apu-config (it doesn't in a standard apr-2 install).
>> We could make it --with-apr2 to make it more obvious.
> I'm thinking of the case where a distro packager uses those
> configuration settings. Then a downstream sysadmin thinks, "they
> guarantee this is safe" and upgrades to the latest svn release. BOOM.
> Yes, we can blame the distro packager for screwing around with our
> compatibility rules, but I'd rather not give them the choice.
> Would you be okay with a --with-apr2 setting that is labeled
> "experimental" and issues a warning? ("use of this option is
> incompatible with regular Subversion releases") (or maybe
> --with-apr2-experimental or somesuch?)

I'm happy to make it --with-apr2. I don't mind a warning but I expect it won't get read.

I know Debian bumped the so version when they switched from apr-0.9 to apr-1.x, i.e. from libsvn_xxx-1.so.0 to libsvn_xxx-1.so.1. Perhaps we should do something similar and make apr-2 create libsvn_xxx-1.so.2?

Received on 2011-06-21 20:42:35 CEST
package rosa.core; import java.io.File; import java.io.IOException; import java.util.List; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList; import org.xml.sax.SAXException; import rosa.core.util.XMLUtil; // TODO move public class SceneMapping { /** * Divide into blocks. Each block (except possibly the last) ends with a * known lecoy number. If there is a difference between guess for the end * block and the correct value, spread that difference evenly through the * block. * * Returns array of start (inclusive), end (exlusive) lecoy numbers parallel * to cols. */ private static int[] guessLecoy(List<BookStructure.Column> cols, boolean usesyncpoints) { int[] guess = new int[cols.size() + 1]; int total_poetry_lines = 0; for (BookStructure.Column col : cols) { total_poetry_lines += col.linesOfPoetry(); } int avg_col_poetry_lines = Math.round((float) total_poetry_lines / cols.size()); int start = 0; // start of block inclusive int end = cols.size(); // end of block inclusive guess[0] = 1; for (;;) { for (end = start; end < cols.size(); end++) { BookStructure.Column col = cols.get(end); guess[end + 1] = guess[end] + col.linesOfPoetry(); if (col.firstLineLecoy() != -1) { break; } } if (end == cols.size()) { break; } // TODO spread more uniformly? if (usesyncpoints) { BookStructure.Column startcol = cols.get(start); BookStructure.Column endcol = cols.get(end); int diff = guess[end] - endcol.firstLineLecoy(); // System.err.println(endcol + " " + diff); guess[end] = endcol.firstLineLecoy(); guess[end + 1] = guess[end] + endcol.linesOfPoetry(); if (diff < 0) { diff = -diff; // Missing lines, possibly whole leaves // Assume missing leaves are in middle of block if (diff > avg_col_poetry_lines * 2) { System.err .println("Probably at least one leaf missing between " + startcol + " and " + endcol); } int middle = (start + end) / 2; for (int i = middle; i < end; i++) { guess[i] += diff; } } else if (diff > 0) { // More non-lecoy itmes than expected // Assume additional stuff is in middle of block int middle = (start + end) / 2; for (int i = middle; i < end; i++) { guess[i] -= diff; } } } start = end + 1; } // for (int i = 0; i < cols.size(); i++) { // BookStructure.Column col = cols.get(i); // // System.err.println(col // + " " // + guess[i] // + (col.firstLineLecoy() == -1 ? "" : " [lecoy: " // + col.firstLineLecoy() + "]")); // } return guess; } /** * Guess narrative mapping. If transcription file not null, add lines from * this transcription. The string trans_prefix will be prepended to each transcription line. 
*/ public static NarrativeMapping guessNarrativeScenes(NarrativeSections nar, BookStructure struct, boolean usesyncpoints, File transcription, String trans_prefix) throws IOException, SAXException { // Turn guess of column start/end lecoy into guess of scene start/end // columns List<BookStructure.Column> cols = struct.columns(); int[] lecoyguess = guessLecoy(cols, usesyncpoints); NarrativeSections.Scene[] scenes = nar.asScenes(); BookStructure.Column[] sceneguess = new BookStructure.Column[scenes.length * 2]; // line offset into scene, starts at 1, inclusive int[] scenelineguess = new int[scenes.length * 2]; for (int i = 0; i < lecoyguess.length - 1; i++) { int start = lecoyguess[i]; int end = lecoyguess[i + 1]; BookStructure.Column col = cols.get(i); for (int j = 0; j < scenes.length; j++) { NarrativeSections.Scene scene = scenes[j]; if (!scene.isLecoy()) { continue; } if (start <= scene.lecoy_start && end > scene.lecoy_start) { sceneguess[j * 2] = col; scenelineguess[j * 2] = scene.lecoy_start - start + 1; } if (start <= scene.lecoy_end && end > scene.lecoy_end) { sceneguess[(j * 2) + 1] = col; scenelineguess[(j * 2) + 1] = scene.lecoy_end - start + 1; } } } NodeList milestones = null; if (transcription != null) { Document doc = XMLUtil.createDocument(transcription); milestones = doc.getElementsByTagName("milestone"); } // Turn scene start/end columns into narrative tagging NarrativeMapping guess = new NarrativeMapping(); for (int j = 0; j < scenes.length; j++) { NarrativeSections.Scene scene = scenes[j]; BookStructure.Column start = sceneguess[j * 2]; BookStructure.Column end = sceneguess[(j * 2) + 1]; int startline = scenelineguess[j * 2]; int endline = scenelineguess[(j * 2) + 1]; if (start == null || end == null) { // System.err.println("Bad error, start or end null for scene " // + scene.id); continue; } String start_trans = null; if (milestones != null) { String s = "" + scene.lecoy_start; for (int i = 0; i < milestones.getLength(); i++) { Element m = (Element) milestones.item(i); if (m.getAttribute("n").equals(s)) { // TODO not accurate for all transcriptions start_trans = trans_prefix + " " + XMLUtil.extractText(m.getParentNode()).trim(); break; } } } //System.out.println(scene.id + " " + scene.lecoy_start); guess.scenes().add( new NarrativeMapping.Scene(scene.id, start.parent().folio(), "" + start.columnLetter(), startline, end.parent().folio(), "" + end.columnLetter(), endline, start_trans, false, scene.lecoy_start)); } return guess; } private static int numColumns(String folio, String col) { int n = Integer.parseInt(folio.substring(0, folio.length() - 1)) - 1; int s = folio.endsWith("r") ? 0 : 1; int c = (col.equals("a") || col.equals("c")) ? 
0 : 1; return (n * 4) + (s * 2) + c; } public static void printComparison(NarrativeMapping truthnar, NarrativeMapping guessnar) { int count = 0; int columncorrect = 0; int pagecorrect = 0; int totalcoldiff = 0; int withinoncolumn = 0; System.out .println("Scene,Location,GuessColumn,CorrectColumn,ColumnDiff"); next: for (NarrativeMapping.Scene truth : truthnar.scenes()) { for (NarrativeMapping.Scene guess : guessnar.scenes()) { if (guess.id().equals(truth.id())) { if (guess.startFolio().equals(truth.startFolio())) { pagecorrect++; if (guess.startFolioCol().equals(truth.startFolioCol())) { columncorrect++; } } int coldiff = numColumns(guess.startFolio(), guess .startFolioCol()) - numColumns(truth.startFolio(), truth .startFolioCol()); totalcoldiff += Math.abs(coldiff); if (Math.abs(coldiff) <= 1) { withinoncolumn++; } System.out.println(guess.id() + ", " + guess.startFolio() + "." + guess.startFolioCol() + ", " + truth.startFolio() + "." + truth.startFolioCol() + ", " + numColumns(guess.startFolio(), guess .startFolioCol()) + ", " + coldiff); count++; continue next; } } System.err.println("No guess for " + truth.id()); } System.err.println("Total " + count); System.err.println("Correct column: " + columncorrect + " (" + ((float) columncorrect / count) + "%)"); System.err.println("Correct within one column: " + withinoncolumn + " (" + ((float) withinoncolumn / count) + "%)"); System.err.println("Correct page: " + pagecorrect + " (" + ((float) pagecorrect / count) + "%)"); System.err.println("Column diff avg: " + +((float) totalcoldiff / count)); } }
In the second part of my blog about MuleSoft RPA in action, I'd like to show the complete automation process - from process evaluation to design, build and execution. I hope it will help you see the potential of MuleSoft RPA and how you can apply it to improve the efficiency of your processes. You will learn how to automate a simple process that involves reading records from an Excel document, checking if this information exists in Salesforce, and creating a new entity if it is not found.

We always start with the process evaluation to confirm that our candidate can and should be automated. We create a process in RPA Manager and specify the cost of running, nature of work, frequency, input/output, risks and other parameters. The result is the process matrix, which is used to evaluate its suitability for automation.

I log into RPA Manager, go to the menu, and click Process Evaluation. In the Process Evaluation menu, select Process Evaluation and click Create. Provide the process's name, category and description and click OK. You'll be prompted to describe the business process, including the execution time, cost of running, frequency, and more. Then select the category. Now you can assess the qualifiers and benefits, like nature of work, complexity, estimated risks, input/output and data type. Basically, a process is suitable for automation if it is rule-based in nature, doesn't change frequently and deals with structured data. The result is a process matrix which clearly indicates whether the business process is a good fit for automation or not. Ideally, it should be in the top right corner. Then click Save.

It's time for our Project Manager to review and approve the process. Refresh the Process Evaluation page to view our newly created process. Now the Project Manager can navigate to My RPA and click on My Backlog to see pending processes. Click Start Automate to implement the process.

Configuration and project management

We are ready to implement our process, but first we need to set up the team and users' permissions for each stage of the automation lifecycle - Design, Build, Test and Production. We also specify all applications used as part of the process, which in my case are Chrome as the browser and MS Excel. Assign the process to one of the pre-defined categories and give it a description. Once completed, click Save.

Our project is ready for the Design phase. Navigate to My RPA and click My Processes. Click on our new process to go to the Design pane. I can now either build a flow using Business Process Model and Notation (BPMN) or a Process Recorder. I use the left-side menu to select, drag and drop task elements into my process's flow. For this test, I use only the Create Bot Task element. Below you can see how I create the flow. The BPMN is now completed. Once the design is done, we promote our process to the Build phase by clicking the Publish button. Confirm and click Release to Build.

RPA Builder, a locally installed application, is used to create the logic for our bot's execution. We start by downloading the BPMN, developed during the Design phase, from the RPA Manager Repository. Each element of our workflow should now be transformed into actionable steps. In our case, the first BPMN element is 'Login to Salesforce'. I use the ToolBox to select the required action steps and drag them into my workflow to create a complete sequence. Let's look in detail at how the login is performed by the bot:

- The first action is to open the browser.
- Then, the bot should specify the login credentials, so I create variables to store the username and password.
- After that, I add the image search action step. I specify how the bot should locate the 'username' field of the login form.
- Following that, a mouse click action step takes the mouse pointer to the 'username' text field for the bot to type the username.
- The same sequence of steps is performed to specify the password.

Moving on to the third element of my workflow, Open Excel and read lead data. It contains steps to read information from a spreadsheet, check it in Salesforce, and create a new lead in Salesforce in case it is missing. Below, I have included some of the action steps used:

- Action steps from Excel Operations to set the file path to my spreadsheet and iterate over the data in the file.
- Variables to store data extracted from the spreadsheet, used to populate the web form when a new lead is created in Salesforce.
- Image Search action steps to take screenshots of the Salesforce web form so the bot can locate the fields to fill in.
- Mouse click action steps.
- Mail Session and Send Mail action steps to send email notifications if a record already exists in Salesforce or an error occurred.

After the bot has completed execution, good practice is to do a cleanup. Keep in mind that the bot might run again on a different process, so the machine should have no other screens or web pages open. This ensures our bots' good performance.

We are now ready to test our flow. In RPA Builder, click Run Process and monitor the execution. The bot logs into Salesforce by typing the credentials. The bot accesses our test Excel document that contains a list of people and their details. In Salesforce, the bot clicks the Leads tab and types the first email from the Excel spreadsheet in the search box. The email exists, so the bot clears the search box and types the next email. If the record is not found, the bot proceeds to create a new lead. It uses information from the Excel document to fill in the Lead form. The process continues until there are no more records in the spreadsheet. At that point, the bot logs out and closes all windows and documents. The only thing left is to check the email notifications informing me about people's records missing from Salesforce. And we are done!

In real life, the next step would be the Test Phase to configure the test plan and run thorough testing of our bots' execution. During the Production Phase, the team sets up the activity programme, defines the user for the execution, and specifies the running schedule. Finally, we are ready to deploy and execute our bots.

To make it easier for you, I have recorded the Building Phase so you can watch how to create logic using the RPA Builder tool and see my bot in action. Hope you find it helpful. Let me know if you have any questions or comments.

Creating execution logic in RPA Builder and testing the bot.
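As an aside, for readers who prefer code to screenshots: the same business logic can also be expressed as a plain API script. This is only a rough sketch for comparison with the bot, assuming the third-party openpyxl and simple-salesforce packages, hypothetical credentials and file name, and a four-column spreadsheet:

```python
from openpyxl import load_workbook
from simple_salesforce import Salesforce

# Hypothetical credentials; assumes columns FirstName | LastName | Company | Email
sf = Salesforce(username="user@example.com", password="...",
                security_token="...")
wb = load_workbook("leads.xlsx")

for first, last, company, email in wb.active.iter_rows(min_row=2, values_only=True):
    found = sf.query(f"SELECT Id FROM Lead WHERE Email = '{email}'")
    if found["totalSize"] == 0:
        # Missing in Salesforce: create the lead, like the bot filling in the form
        sf.Lead.create({"FirstName": first, "LastName": last,
                        "Company": company, "Email": email})
    else:
        # Already exists: this is where the bot sends an email notification
        print(f"{email} already exists in Salesforce")
```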
We wanted to know: could AI tackle analysing a bilingual contract? We put three models (Claude, GPT-4, and ChatGPT 3.5) to the test by challenging them to detect differences between the English and Vietnamese versions of an agreement. The results revealed opportunities and limitations in using AI for this legal tech task.

In 2021 we engaged a software company in Vietnam to help deliver a software project. Aside: they were absolutely great! Amazing developers; would highly recommend. As part of our work we signed a contract which was in both English and Vietnamese.

As part of thinking about legal-sector use cases for AI, I decided to see if AI could help parse the contract to highlight changes between the different language versions, and to compare performance between different Large Language Models (LLMs). I had no idea if there were any differences, so this would be a journey of discovery!

The prompt was deliberately simple. I used basic language and let the LLM work it out:

I want you to compare a contract which is written in two languages and highlight any discrepancies. Do you understand?

For ease, I pasted in three sections of the contract as text. I chose parts of the contract which had no identifiable information or specifics like day rates/deliverables. The total length of the contract selected was ~2,750 words, which I would have expected to be ~3,600 tokens. But the Vietnamese characters seemed to affect OpenAI and made the token count much higher. You can use OpenAI's Tokenizer to calculate the number of tokens.

Strong start from Claude, guiding me to try to fill in some of the gaps from such a broad prompt. Text is pasted in, followed by a short instruction. And the results!

ChatGPT 3.5 Turbo

And we're off again. But... uh oh: it stumbled at the first attempt because of the tokenisation of the Vietnamese language. Let's split the doc roughly into two.

Part one response: interesting - Claude didn't highlight any of these. They're all fairly minor in my opinion, but equally good to be aware of. Now for the second part. (Spoiler: ChatGPT 3.5 seemed to get very lost very fast.)

Response to part 2: and try again. It seemed happy the first time but now it's not.

OK, let's go for GPT-4. Instead of using ChatGPT, we'll use the API. You'll notice I set the system prompt as a version of the original prompt to Claude. It could definitely be more verbose or specific, but I wanted to try to keep things consistent.

OK, seems promising. But on closer inspection finding 6 doesn't make sense. And finding 2 seems like it has identified a difference, but it's very minor.

Learnings and improvements

If I had to name a winner it would be... Claude. 🥇

This was a relatively quick and dirty test, and I think the real value-add would be spending time optimising the prompt, ideally being specific about the types of errors. If as a lawyer you are able to build a better set of criteria, you could feed that to the model.

It's also unclear how good the models are at dealing with Vietnamese. This would be something to optimise for or invest time in understanding further.

The most telling thing is the lack of consistency across the different tools. Definitely an opportunity for an international commercial law firm to build their own expertise on top of it.
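On the token-count point above: you can also count tokens programmatically rather than pasting text into the web Tokenizer. A minimal sketch, assuming OpenAI's tiktoken package (the sample sentences are invented):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4 era models

english = "This Agreement is governed by the laws of Vietnam."
vietnamese = "Hợp đồng này được điều chỉnh bởi pháp luật Việt Nam."

print(len(enc.encode(english)))     # Vietnamese text with diacritics typically
print(len(enc.encode(vietnamese)))  # encodes to noticeably more tokens
```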
The Prometheus provider collects metrics from a Prometheus instance and makes them available to Akamas. This provider includes support for several technologies (Prometheus exporters). In any case, custom queries can be defined to gather the desired metrics.

This section provides the minimum requirements that you should match before using the Prometheus provider.

Akamas supports Prometheus starting from version
Using also the prometheus-operator requires Prometheus 0.47 or greater. This version is bundled with the
Connectivity between the Akamas server and the Prometheus server is also required. By default, Prometheus runs on port 9090.

The Prometheus provider includes queries for most of the monitoring use cases these exporters cover. If you need to specify custom queries or make use of exporters not currently supported, you can specify them as described in creating Prometheus telemetry instances.

- Kubernetes (Pod, Container, Workload, Namespace)
- Web Application
- Java (java-ibm-j9vm-6, java-ibm-j9vm-8, java-eclipse-openj9-11, java-openjdk-8, java-openjdk-11, java-openjdk-17)
- Linux (Ubuntu-16.04, Rhel-7.6)

Akamas reasons in terms of a system to be optimized and in terms of parameters and metrics of components of that system. To understand which metrics collected from Prometheus should be mapped to a component, the Prometheus provider looks up some properties in the components of a system, grouped under the prometheus property. These properties depend on the exporter and the component type. Nested under this property you can also include any additional field your use case may require to filter the imported metrics further. These fields will be appended in queries to the list of label matches in the form field_name=~'field_value', and can specify either exact values or patterns.

It is important that you add the instance and, optionally, the job properties to the components of a system so that the Prometheus provider can gather metrics from them:

# Specification for a component, whose metrics should be collected by the Prometheus provider
name: jvm1                              # name of the component
description: jvm1 for payment services  # description of the component
instance: service0001  # instance of the component: where the component is located relative to Prometheus
job: jmx               # job of the component: which prom exporter is gathering metrics from the component

The Prometheus provider does not usually require a specific configuration of the Prometheus instance it uses. When gathering metrics for hosts it's usually convenient to set the value of the instance label so that it matches the value of the instance property in a component; in this way, the Prometheus provider knows which system component each data point refers to. Here's an example configuration for Prometheus that sets the instance label:

# Custom global config
global:
  scrape_interval: 5s      # Set the scrape interval to every 5 seconds. The default is every 1 minute.
  evaluation_interval: 5s  # Evaluate rules every 5 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# A scrape configuration containing exactly one endpoint to scrape:
scrape_configs:
  # Node Exporter
  - job_name: 'node'
    static_configs:
      - targets: ["localhost:9100"]
    relabel_configs:
      - source_labels: ["__address__"]
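For reference, this is roughly what the provider-style label matching looks like as a raw query against Prometheus' standard HTTP API (a Python sketch; jvm_memory_bytes_used is just an example JMX-exporter metric, and the instance/job values are the example component above):

```python
import requests

# Label matchers in the field_name=~'field_value' form described above
query = 'jvm_memory_bytes_used{instance=~"service0001", job=~"jmx"}'

resp = requests.get("http://prometheus:9090/api/v1/query",
                    params={"query": query})
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])  # labels and latest sample
```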
ERROR tcc

Info
Train_Carts: v1.18.1-v1 (build: 995)
BKCommonLib: v1.18.1-v1 (build: 1172)
Server: 3171-Spigot-610a8c0-fb556bf (MC: 1.17.1)

Bug Description
When I do /tcc give everything is fine, but when I create a trail I get an error in my logs: https://paste.traincarts.net/jofudomere.cs

Update BKCommonLib, v2 has been out for a while now.

OK, it works - I had forgotten to update it 😅, but I get this when I create the plot: https://www.zupimages.net/up/22/02/m0gl.png Any idea?

Known issue. Copy-paste from discord: Leash bug, vote for this bug: https://bugs.mojang.com/browse/MC-212629 resourcepack fix: https://www.dropbox.com/s/bmbufltqjxok3l3/leashFix.zip?dl=1

Super, thank you for your help and your responsiveness 😄 I have a new error after a server restart: https://paste.traincarts.net/ovagayojay.bash

Did you use /reload at any time? Because it seems the server glitched out and didn't load certain jar files fully. Make sure to update any other plugins like TC-Coasters as well.

I just restarted my server and deactivated then reactivated another plugin, MotionCaptureRewritten. TC-Coasters is up to date.

It's probably related to https://github.com/bergerhealer/Mountiplex/commit/a9742445b7b655dd37698cf57f943984aca72a72
https://ci.mg-dev.eu/job/BKCommonLib/1207/ probably doesn't have this issue anymore.

I no longer have the BKCommonLib error but a TCCoasters error: https://paste.traincarts.net/zelicehuso.sql

It's worth a try. This closing of class loaders is a huge pain with Bukkit. https://ci.mg-dev.eu/job/BKCommonLib/1208/

I still have the error, but it spams in the logs when I hold the track editor :/ https://paste.traincarts.net/alohezaliz.bash It gives it to me now, whereas before it didn't.

Frustrating. I'm trying to understand why you have this issue. Are there any plugins that show up red in /plugins? What if you remove the plugins that fail to enable?

All my plugins are in green.

Well, just to rule out some bugs caused on Spigot's end (unlucky build etc.), try Paper 1.17.1 to see if the same happens or not. https://papermc.io/downloads#Paper-1.17

My server is running on BungeeCord, can it be from there? I will change it to PaperSpigot.

I really don't know, as you're the first to report this specific error. If it's not the server having a bug, it must be some other plugin you're using that's closing its own class loader. In that case you'll have to do a binary search (disable all and enable half of half until the error occurs and you find the plugin that causes it).

I found the plugin that makes TCC bug out. Should I contact the creator of the plugin in question?

Can you tell me which plugin? If it's open source I can have a look; I'll contact them myself if I find out what causes it. If it's closed-source/paid premium, you can contact the dev yourself and maybe link this issue ticket. A summary for the author: the jar file classloader is being closed. My guess: the plugin is a jar-in-jar solution, and something is going wrong.

Closing issue, assuming resolved.
Welcome to BeeHive's project page. In this section you will find the reasoning behind why we started this project and what solutions we are proposing for the problem we are trying to address. At the bottom of the page you will find a table with links to all sub-parts of the project and its repositories.

If you have ever been to a biology laboratory, you might have noticed that a lot of the equipment there is designed to perform very specific tasks. Each task is normally performed by one machine, but from the electronics point of view, all these machines share a lot of similar modules!

Here are some examples, with each item in the list below organized as: task - machine examples - electronics module (overall view).

- Heating/cooling static samples - dry baths, PCR machines - Peltier elements, H-bridge, temperature sensors
- Heating/cooling flowing solutions - in-line heaters, heated chambers - Peltier elements, H-bridge, temperature sensors
- Keeping air in a chamber at constant conditions - incubators - Peltier elements, H-bridge, temperature sensors
- Controlling fluid injection - syringe pumps - stepper driver
- Controlling fluid flow (reward systems, perfusion) - peristaltic pumps, solenoid valves - H-bridge/solenoid controller
- Controlling gas flow - solenoid valves - H-bridge/solenoid controller
- Measuring environmental variables (temperature, humidity, light levels) in animal husbandry rooms - different types of sensors - microcontroller + sensors

All of the above are just a subset of the types of machines in a lab, and what we can see from these examples is that there is a lot of repetition in the electronics behind different devices! Unfortunately, this repetition did not bring about the benefits we would expect; that is, these machines are not made cheaper or more accessible because they could have interchangeable parts, or because they are easy to repair, etc.

Being expensive and only available for purchase from a few different companies makes these machines accessible only to researchers in academic institutions. And even in this case, researchers have to be in well-funded laboratories in specific locations on the globe (as being away from the "global north" increases the complexities of shipping, customer care, customs, etc.). These issues make research an elitist activity, when it should be the opposite! EVERYONE SHOULD HAVE THE RIGHT TO ASK SCIENTIFIC QUESTIONS AND PERFORM EXPERIMENTS TO GENERATE THE DATA THAT WILL HELP ANSWER THOSE QUESTIONS.

One possible solution for the problem mentioned above is to make scientific equipment easier to access/build/understand/modify. This is where BeeHive comes in! We are building a modular platform that will allow people to pick up different modules and build equipment, making use of re-usable electronic modules as well as code.
The system specification:

- A central breakout board for the ESP32
- Different custom PCBs, each responsible for one task (H-bridge, solenoid driver, 8-switch array, IR phototransistor controller, temperature sensor breakout)
- A standard pin-out for the boards, allowing other PCBs to be created by anyone
- Compatibility with the GROVE system for different sensors and actuators
- A training board with different actuators and sensors, so that users can focus on developing their own firmware for their applications before figuring out the electronics and their connections (to be implemented)
- Compatibility with Bonsai-RX using the Open Sound Control protocol (see the sketch at the bottom of this page)
- Compatibility with LabThings for smart control/observation of the different tools

Table of contents:
- GitHub organization (with more detailed documentation on how to build things)
- Repository containing all board definitions and main implementations/enhancement issues being discussed
- Project log on Hackaday.io
- Behavioural task under the microscope - project log on Hackaday.io
- Behavioural task in home cage - repository ...
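As a taste of what driving a module over Open Sound Control can look like, here is a minimal Python sketch (assuming the third-party python-osc package; the IP address, port, OSC address, and argument are hypothetical examples, not a published BeeHive API):

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical ESP32 breakout board listening on the local network
client = SimpleUDPClient("192.168.1.50", 8000)

# e.g. ask a Peltier/H-bridge module to hold a dry bath at 37 degrees C
client.send_message("/peltier/setpoint", 37.0)
```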
M: Show HN: We built a simple video-call summarizer for macOS - Erazal
https://get-spoke.com/

R: Erazal
During the confinement, as we spent our lives on video-conferences which we all had to attend although most of us were not active participants but required "information attendees", we set out to build a video-call summarizer for macOS. The goal was to summarize video conversations, save them, and share them with our team easily. So we set out to build a tool to capture video-conversation moments, upload them, run Speech-To-Text solutions on them, and then provide a simple "video editing by text editing" interface to clean and share the videos in a few minutes.

We started by building the video recorder itself in Swift, with AVFoundation. Swift has all the wanted features of a modern language, but Xcode just ruins it. On top of that, AVFoundation has almost no documentation when you implement something out of the ordinary (for instance, the function to choose the format of video recordings takes a dictionary where 99% of the values lead to a crash). So to mitigate this issue, we now use ffmpeg to convert the movs created by AVFoundation to a good audio and video format for the web and audio analysis.

On the web side of things, we started out using Python / Django. Frustrated by Django's monolith and poor documentation, we switched to Rust with Rocket, overcoming our fears about its lack of maturity. Rocket has just the right level of abstraction, everything being explicit, with a lot of procedural macros to reduce the boilerplate. One limitation we encountered, however, was that Rocket does not handle hanging connections well, which can be very problematic when uploading videos. This ended up blocking all the threads if the user's connection was reset during an upload, so we had to put Nginx in front of Rocket.

A few months after starting the journey of building Spoke we just launched version 1.0. If you're also bored by attending useless video-conference calls when somebody could've just sent you a summary, feel free to check it out.

R: kprimice
Very interesting concept. I always find it surprising that people run online meetings as they would run offline ones. I think you are onto something: video-call summarizer + transcription will make offline calls so obsolete that they probably won't exist anymore in a few years.
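For the curious, the mov-to-web conversion step described above might look roughly like this (a sketch only; the exact codec flags are an assumption, and ffmpeg must be on PATH):

```python
import subprocess

# Convert an AVFoundation .mov capture into an H.264/AAC .mp4 for the web
subprocess.run([
    "ffmpeg", "-i", "capture.mov",
    "-c:v", "libx264",  # widely supported video codec for web playback
    "-c:a", "aac",      # ditto for audio, also fine for later analysis
    "capture.mp4",
], check=True)
```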
This is a learning exercise for myself, but in the event this is of interest to others...

In short: after a few edits, it seems to mostly work (ipv6, ath10k-ct, 2.4 and 5 GHz wifi, guest networks, strongswan, a few firewall rules for a custom etherwake server wol script).

root@OpenWrt:~# uname -a
Linux OpenWrt 4.19.52 #0 SMP Thu Jun 20 11:23:37 2019 armv7l GNU/Linux
root@OpenWrt:~# uptime
21:09:10 up 5:07, load average: 0.00, 0.03, 0.00
root@OpenWrt:~# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/root                 3.5M      3.5M         0 100% /rom
tmpfs                   233.1M      1.1M    232.0M   0% /tmp
/dev/ubi0_1              78.7M     19.7M     55.0M  26% /overlay
overlayfs:/overlay       78.7M     19.7M     55.0M  26% /
tmpfs                   512.0K         0    512.0K   0% /dev

I started with master and cherry-picked the ipq806x kernel 4.19 commit from here. This commit does not (yet) have the changes to qcom-ipq8064-r7500v2.dts to expand the overlay (see PR 1922). I made the overlay expansion change to the 4.19 dts file since I want to upgrade/downgrade via sysupgrade from luci.

In the absence of other knowledge about usb and kernel 4.19 on the r7500v2, I edited qcom-ipq8064.dtsi and qcom-ipq8064-r7500v2.dts (only the usb entries) following the discussion about usb on the r7800 and 4.19 (see this response) from @Ansuel. After some help from @Ansuel (see posts below), USB is now working.

There are a few "failures" in dmesg that don't show up in 4.14, related to cpu idle and temperature sensors (see below - collectd-mod-thermal does not display temperatures).

After make distclean, make menuconfig (I made no changes to my usual custom .config), make defconfig, and make kernel_menuconfig, I ran into this error applying a 4.19 kernel patch. Following @slh's advice in the next post of this thread and directly editing the patch to match that in 4.14 (I only needed to change a few lines) resolved the issue, and I was able to build and sysupgrade the image without any trouble.

Helpful instructions (r7800) for tty serial tftp booting initramfs images for testing (i.e. to avoid flashing) here. See this post below for a worked example on the r7500v2. For reference, the generic serial flashing guide. Example for the ipq8064-based NEC Aterm WG2600HP.
Greg KH wrote:
On Mon, Aug 23, 2010 at 11:03:47PM +0700, "C. Bergström" wrote:

Greg KH wrote:
On Mon, Aug 23, 2010 at 10:46:38PM +0700, "C. Bergström" wrote:

Apologies for the OT cross post, but I couldn't sort this out and any help is greatly appreciated.. (Others directed me here.) PathScale needs some packaging help:
1) kernel-trace, kernel-default, kernel-sources, kernel-trace-sources (if they differ), kernel-trace-syms (if this needs to be different)
I've started a package derived from Kernel:HEAD which gets close to doing what we need, but not quite. The kernel-trace package builds and boots, but I can't manage to get the corresponding kernel-trace-sources to correctly match.

The kernel-trace-sources package should be the same as the kernel-sources package, there is no difference.

That's what I thought, but the results I got disagreed..

What were your results?

The kernel driver would build, but not load.. (I assumed it was just mismatched sources..) I took the /proc/config.gz and moved it into /usr/src/linux after creating the appropriate symlink.. ran make oldconfig; make prepare.. etc. and it still wouldn't load.. I specifically needed to do two things:
1) Build the closed NVIDIA driver (I couldn't get this to work and after a day of fighting gave up)

Ask nvidia about that. It should have worked.

When I manually built the kernel it did.. So I blame myself and in no way NV.
2) Build pscnv (I didn't get to this..) What I ended up resorting to was installing the source package, using the trace config and manually building the kernel.

Why are you using the trace kernel? What is in it that you want/need?

For us to get the tracing information we need while running the NVIDIA binaries, mmiotrace must be enabled. So technically vanilla/default/anything + that 1 line change is sufficient.. (I couldn't get vanilla to build when I changed the config though.. it kept saying I had to run make oldconfig.) So hopefully that explains how this is all intertwined... I'd really like to get all the tracing packages building so we can even possibly make an ISO with it all prepackaged.. Then users could drop the ISO in.. get a trace and upload it without any kernel hassles or messing with their existing install..

To unsubscribe, e-mail: opensuse-kernel+unsubscribe(a)opensuse.org
For additional commands, e-mail: opensuse-kernel+help(a)opensuse.org
package co.paralleluniverse.javafs; import java.io.IOException; import java.nio.file.FileSystem; import java.nio.file.Path; import java.util.Map; import co.paralleluniverse.fuse.Fuse; /** * Mounts Java {@link FileSystem}s as a FUSE filesystems. * * @author pron */ public final class JavaFS { /** */ public static final String ENV_SINGLE_THREAD = "single_thread"; /** * Mounts a filesystem. * * @param fs the filesystem * @param mountPoint the path of the mount point * @param readonly if {@code true}, mounts the filesystem as read-only * @param log if {@code true}, all filesystem calls will be logged with juc logging. * @param mountOptions the platform specific mount options (e.g. {@code ro}, {@code rw}, etc.). {@code null} for value-less options. */ public static void mount(FileSystem fs, Path mountPoint, boolean readonly, boolean log, Map<String, String> mountOptions) throws IOException { if (readonly) fs = new ReadOnlyFileSystem(fs); boolean singleThread = false; if (mountOptions != null && mountOptions.containsKey(ENV_SINGLE_THREAD)) { singleThread = true; mountOptions.remove(ENV_SINGLE_THREAD); } if (singleThread) { Fuse.mount(new SingleThreadFuseFileSystemProvider(fs, log).log(log), mountPoint, false, log, mountOptions); } else { Fuse.mount(new FuseFileSystemProvider(fs, log).log(log), mountPoint, false, log, mountOptions); } } /** * Mounts a filesystem. * * @param fs the filesystem * @param mountPoint the path of the mount point * @param readonly if {@code true}, mounts the filesystem as read-only * @param log if {@code true}, all filesystem calls will be logged with juc logging. */ public static void mount(FileSystem fs, Path mountPoint, boolean readonly, boolean log) throws IOException { mount(fs,mountPoint, readonly, log, null); } /** * Mounts a filesystem. * * @param fs the filesystem * @param mountPoint the path of the mount point * @param mountOptions the platform specific mount options (e.g. {@code ro}, {@code rw}, etc.). {@code null} for value-less options. */ public static void mount(FileSystem fs, Path mountPoint, Map<String, String> mountOptions) throws IOException { mount(fs, mountPoint, false, false, mountOptions); } /** * Mounts a filesystem. * * @param fs the filesystem * @param mountPoint the path of the mount point */ public static void mount(FileSystem fs, Path mountPoint) throws IOException { mount(fs, mountPoint, false, false, null); } /** * Try to unmount an existing mountpoint. * * @param mountPoint The location where the filesystem is mounted. * @throws IOException thrown if an error occurs while starting the external process. */ public static void unmount(Path mountPoint) throws IOException { Fuse.unmount(mountPoint); } private JavaFS() { } }
SAP BusinessObjects (BO) Business Intelligence is a reporting tool offered by SAP. SAP BO offers Intelligent solutions that can be made use of by people ranging from analysts and other people who work with information to CEO’s. By the end of SAP Business Objects Training you will: ·Acquire the relevant knowledge required to clear the SAP BO certification exam. ·Understand the core concepts of SAP’s BO module. ·Be able to apply the knowledge learned to progress in your career as an associate level SAP BO consultant. SAP Business Objects Training & Course will equip you with necessary skills to grab a highly-paid job in the fiercely competitive job market. SAP BO (BusinessObjects BI) Course Details & Curriculum SAP Business Objects Training will broadly cover these topics (please download detailed curriculum for elaborate details): Understanding BusinessObjects Enterprise What is BusinessObjects Enterprise? Working with SAP BO Launchpad ( Infoview) 2. SAP Business Objects Web Intelligence and BI Launch Pad 4.1 SAP Business Objects Dashboards 4.1 BI launch pad: What’s new in 4.1 Restricting data returned by a query Enhancing the presentation of data in reports Calculating data with formulas and variables Using multiple data sources Managing and sharing Interactive Analysis documents Reporting from Other Data Sources Introducing Web Intelligence Accessing corporate information with Web Intelligence Understanding how universes allow you to query databases using everyday business terms. Managing documents in InfoView Viewing a Web Intelligence document in InfoView Setting Info View Preferences Creating Web Intelligence Documents with Queries Getting new data with Web Intelligence Creating a new Web Intelligence document Modifying a document’s query Working with query properties Restricting Data Returned by a Query Modifying a query with a predefined query filter Applying a single-value query filter Using prompts to restrict data Using complex filters Displaying data in tables and charts Presenting data in free-standing cells Enhancing the Presentation of Reports Using breaks, calculations, sorts and report filters Ranking data to see top or bottom values Using alerters to highlight information Organizing a report into sections Copying data to other applications Alternative Query Techniques Using Combined Queries Using Sub-Queries Creating a Query based on another Query Character and Date String Functions Using the character string functions Concatenating different data types Using date functions Using If Logic Grouping data using If() logic Using If() to modify calculation behavior Advanced Reporting Features Formatting breaks Creating custom sorts Displaying document data in free-standing cells Alternative Query Techniques Defining Combined Query Types Using Combined Queries Creating a Query on a Query Character and Date String Functions Understanding Character Strings Using Date Functions User-Defined Objects Creating User Objects Using a User Object in a Query Editing a User Object Deleting a User Object Storing a User Object Grouping Data 3. 
3. Information Design Tool 4.0
What is the Information Design Tool (IDT)?
Create a project
Create a connection to a relational database (single and multiple databases)
Create the data foundation with single and multiple databases
Define the different types of joins in a data foundation
Create a business layer
Create folders and objects
Resolve loops using aliases
Resolve loops using contexts
Resolve the fan trap and chasm trap problems
Define data restrictions
Work with LOVs
Use parameters to restrict data
Use @functions
Aggregate awareness
Create derived tables and index awareness
Deploy, manage and maintain universes

4. Universe Designer Tool 4.0
Understanding BusinessObjects universes
Understanding how universes allow users to query databases using their everyday business terms
Creating universe connections
The course database and universe
Creating the universe
Building and populating the universe structure
Defining joins in a universe
Creating dimension objects
Understanding classes and objects
Creating measure objects
Understanding measure objects
Using lists of values
Resolving loops using aliases
Resolving loops using contexts
Chasm traps and fan traps
Restricting the data returned by objects
Using functions with objects
Using @functions
Working with hierarchies

5. SAP Business Objects Dashboards 4.0 (Crystal Xcelsius 2011)
Crystal Xcelsius overview
Creating dashboards using Query As A Web Service (QAAWS) and Live Office
Creating drill-down dashboard reports
What's new in SAP Business Objects Dashboards 4.0
Creating a visualization
Producing interactive visualizations
Getting around in Xcelsius
Working with your Excel workbook
Visualizing data with charts
Using Xcelsius components
Formatting a visualization
Applying formatting options
Using themes and templates to apply formatting
Adding interactivity to a visualization
Adding dynamic visibility
Using live data sources
Connecting to BO universes using Query as a Web Service
Using Live Office data
Creating complex dashboards

6. SAP Crystal Reports 2011
Organizing data on reports
Formatting and section formatting
Creating basic and complex formulas
Using report templates
Applying conditional reporting
Building parameterized and specialized reports
Summarizing data with cross-tabs
Using report sections
Representing data visually

Basics of SQL, databases and data warehousing are expected. An understanding of business processes related to manufacturing, production planning, sales and shipping would be an advantage.
feat: rework realtime subscription hooks

Fixes https://github.com/invertase/react-query-firebase/issues/25

This PR reworks hooks that depend on realtime subscriptions. Currently, if a hook declares a realtime subscription (via the `subscribe: true` option or by default, e.g. the auth hooks), a `staleTime` of `Infinity` is set by default. If other hooks with the same QueryKey are created, the existing subscription is used (which is expected). The issue is that with `Infinity`, data is never considered stale for the remainder of the application's lifecycle. So as long as there is always an active subscription this causes no issues (since the subscription always updates the cache). However, if all active hooks are unmounted and then remounted, the actual `queryFn` won't be executed again (it'll just return the data already in the query cache); yet, because all hooks actually unmounted, there is no active subscription, so the data will never be updated again.

This PR reworks the affected hooks by:

- Tracking the query key "mounts".
- If there are existing mounts for a query key, simply returning the cached data.
- If there are no existing mounts, creating a new subscription and storing that reference.
- Each time a hook unmounts, decrementing the query key's active counter. If it reaches zero (no more hooks active), unsubscribing.

### TODOs

- [x] Auth
- [x] Firestore
- [ ] RTDB
- [ ] Add tests to test these conditions
- [ ] Extract / publish the utils package (since it causes weird builds if it's imported directly via rollup)
- [ ] Ensure all files have copyright headers

---

Hello, is there anything we can do to help push this PR over the line? Happy to help as well, this fix is crucial for an implementation in prod :)

> Happy to help as well, this fix is crucial for an implementation in prod :)

I'm also stuck and can't go to production without this fix. I had great hopes for this lib as it was used by Google when showcasing Firebase 9; sadly it seems the repo is abandoned, so I'll start looking for alternatives.

Has anyone else tested this? It looks like the owner says it should work but is concerned about edge cases.

I'm going to try and get some resource from the company assigned to this soon, apologies it's been really busy.

> I'm going to try and get some resource from the company assigned to this soon, apologies it's been really busy.

This can't be stated enough, but the community greatly appreciates your work!

(Just an update) Currently working on this, experiencing some strange behaviour with the hooks in a test: onSuccess is called more times than I'd expect when rerendering.

OK, actually made some progress on this, curious what people think. I'm using a library called react-query-subscription which seems to work fairly well. Interested in any opinions here.
A work in progress example:

```ts
import { QueryKey } from "react-query";
import { Auth, IdTokenResult, AuthError } from "firebase/auth";
import { Observable } from "rxjs";
import {
  useSubscription,
  UseSubscriptionOptions,
} from "react-query-subscription";
import { UseSubscriptionResult } from "react-query-subscription/types/use-subscription";

export function idTokenFromAuth(
  auth: Auth,
  options?: {
    forceRefresh?: boolean;
  }
): Observable<IdTokenResult | null> {
  return new Observable<IdTokenResult | null>(function subscribe(subscriber) {
    const unsubscribe = auth.onIdTokenChanged(async (user) => {
      let token: IdTokenResult | null = null;
      if (user) {
        token = await user.getIdTokenResult(options?.forceRefresh);
      }
      subscriber.next(token);
    });
    subscriber.add(unsubscribe);
  });
}

export function useAuthIdToken<R = IdTokenResult | null>(
  key: QueryKey,
  auth: Auth,
  options?: {
    forceRefresh?: boolean;
  },
  useSubscriptionOptions?: Omit<
    UseSubscriptionOptions<IdTokenResult | null, AuthError, R>,
    "queryFn"
  >
): UseSubscriptionResult<R | AuthError> {
  return useSubscription(key, () => idTokenFromAuth(auth, options), {
    ...useSubscriptionOptions,
  });
}
```

Tests still need fixing etc., and I'll rewrite the rest of the subscriptions like this.

Is adding react-query-subscription really necessary? A dependency like this can cause issues in the future if no one maintains that package.

> Is adding react-query-subscription really necessary? A dependency like this can cause issues in the future if no one maintains that package.

This is a fair point to be honest, it just seemed like they handled subscriptions quite nicely. @Ehesp What do you think?

Yeah, react-query-subscription seems to contain a few extra pieces we don't need. I think the solution to this bug is there, so I'll try and do something similar.

@cabljac great! Let us know when we can jump in and test! :)

Hm, seems a bit trickier to implement than initially hoped.

Even replicating the parts you need from react-query-subscription?

Made some headway! Essentially tweaking @Ehesp's solution a bit, but taking some ideas from react-query-subscription too. It seems to be passing tests. I think I've refactored all the subscription hooks now:

- useAuthUser
- useAuthIdToken
- useFirestoreDocument
- useFirestoreDocumentData
- useFirestoreQuery
- useFirestoreQueryData
- useDatabaseValue

If anyone notices there's one missing in this list, let me know :). I've also added tests for the issue, which seem to pass. I need to ensure all files have copyright headers, give things another look, and add a couple of code comments to the new useSubscription utility hook.

@cabljac amazing work, all seems to be in order. Should we, as part of this PR, bump the react dependencies to 18? I've used it with React 18 for some time now and haven't faced any issues.

OK, so the last bit of this is to add at least a single test for the error case, but that requires setting up firestore.rules and changing the test setup a bit. A couple of tests pass on their own but fail when run in series, and I can't figure out why. I skipped them and merged this. If people get a chance and are able to test it out, that would be greatly appreciated :D
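For anyone skimming the thread, here is a minimal sketch of the reference-counting idea described in the PR body: track how many mounted hooks share a query key, and only tear the Firebase subscription down when the count hits zero. This is not the library's actual implementation; all names (`subscriptionRegistry`, `subscribeOnce`, `unsubscribeOnce`) are hypothetical.

```ts
// Hypothetical sketch of the reference-counting approach, NOT the
// library's actual code.
type Unsubscribe = () => void;

interface Entry {
  count: number; // how many mounted hooks currently share this key
  unsubscribe: Unsubscribe;
}

const subscriptionRegistry = new Map<string, Entry>();

// Called when a hook mounts: reuse the existing subscription for this key,
// or create a new one if this is the first mount.
function subscribeOnce(key: string, start: () => Unsubscribe): void {
  const existing = subscriptionRegistry.get(key);
  if (existing) {
    existing.count += 1;
    return;
  }
  subscriptionRegistry.set(key, { count: 1, unsubscribe: start() });
}

// Called when a hook unmounts: decrement the counter, and only tear down
// the underlying subscription once no hooks remain mounted.
function unsubscribeOnce(key: string): void {
  const entry = subscriptionRegistry.get(key);
  if (!entry) return;
  entry.count -= 1;
  if (entry.count === 0) {
    entry.unsubscribe();
    subscriptionRegistry.delete(key);
  }
}
```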
Verint Systems Inc.
This role will be based in our Belfast office in the Titanic Quarter. Extremely occasional travel to other Engineering sites may be required.
About £43,000 - £57,000 a year

Strong knowledge of automation and environment management. Added advantage: exposure to cluster setup for high availability, scalability, etc.
About £49,000 - £65,000 a year

As a Civil Engineer, you'll be working in a team environment alongside other Engineers, Hydraulic Modellers and Asset Planners in the delivery of water,…
About £30,000 - £40,000 a year

To support this projected growth we want to attract talented Principal C++ developers with 10+ years' commercial development experience who will become key…
About £34,000 - £49,000 a year

We are looking for talented engineers to help us build the industry-leading employee communications and engagement platform and solve the needs of our fast…
About £42,000 - £57,000 a year

As a Refrigeration Engineer you will be responsible for; What you will need to succeed as a Refrigeration Engineer; £25-30k plus van, phone and iPad.
£25,000 - £30,000 a year

Coach and mentor less experienced engineers on performance engineering techniques. Expleo is a trusted partner for end-to-end, integrated engineering, quality…
About £31,000 - £44,000 a year

Our team includes engineers, designers, security researchers and product managers, all focused on making the Internet safer for everyone.
About £48,000 - £65,000 a year

Queen's University Belfast
We are currently recruiting engineers to deliver our current project portfolio with an aim of establishing capability for our future growth plans.
£42,000 - £51,000 a year

Our development engineers lead by championing a culture of personal ownership and customer-focused execution. As a member of ESO's development team, you will be…
About £47,000 - £58,000 a year

The Software Engineer will develop software for our current and next generation cloud and mobile enterprise products. Their main tasks will include:
About £30,000 - £43,000 a year

Join WSP, and you'll be at the heart of a team of international experts all dedicated to growing and sharing their expertise, and working on projects that…
About £35,000 - £51,000 a year

We are an accredited CPD employer with Engineers Ireland. Chartered Engineer (or working towards) with a relevant professional institution.
About £38,000 - £50,000 a year

Suitable candidates will possess a degree in Civil or Structural Engineering and have achieved Chartered Engineer status with ICE, IEI, IStructE or equivalent.
About £62,000 - £86,000 a year

Our team includes accountants, compliance specialists, former regulators, lawyers, civil engineers and IT specialists, and we are part of a global network of…
About £31,000 - £39,000 a year

Your new role as a Process Engineer; What you will need to succeed as a Process Engineer; Vickerstock are delighted to be working with a leading multi…
About £31,000 - £44,000 a year

To support this projected growth we want to attract talented Senior Python Developers with 5+ years' commercial development experience who will become key…
About £34,000 - £48,000 a year

The Applications Development Senior Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation…
About £33,000 - £46,000 a year

Receive and log quote requests that arise from various sources including field engineers, existing client requests, non-contract enquiries and internal requests…
About £19,000 - £27,000 a year

Realtime Associates Limited
My client is on the lookout for a Contract Python Developer. Why should I be interested in this Contract Python Developer position?
£300 a day
<?php
namespace Swagception\Validator;

use Swagception\Exception;

class ObjectValidator implements CanValidate
{
    public function validate($schema, $json, $context)
    {
        //Check that the data is a JSON object.
        if (!is_object($json)) {
            throw new Exception\ValidationException(sprintf('%1$s is not an object.', $context));
        }

        if (isset($schema->maxProperties)) {
            $this->validateMaxProperties($schema, $json, $context);
        }
        if (isset($schema->minProperties)) {
            $this->validateMinProperties($schema, $json, $context);
        }
        if (isset($schema->required)) {
            $this->validateRequired($schema, $json, $context);
        }
        if (isset($schema->properties)) {
            $this->validateProperties($schema, $json, $context);
        }
        $this->validateAdditionalProperties($schema, $json, $context);
    }

    protected function validateMaxProperties($schema, $json, $context)
    {
        //An object instance is valid against "maxProperties" if its number of properties is less than, or equal to, the value of this keyword.
        if (count(array_keys(get_object_vars($json))) > $schema->maxProperties) {
            throw new Exception\ValidationException(sprintf('%1$s has too many properties.', $context));
        }
    }

    protected function validateMinProperties($schema, $json, $context)
    {
        //An object instance is valid against "minProperties" if its number of properties is greater than, or equal to, the value of this keyword.
        if (count(array_keys(get_object_vars($json))) < $schema->minProperties) {
            throw new Exception\ValidationException(sprintf('%1$s has too few properties.', $context));
        }
    }

    protected function validateRequired($schema, $json, $context)
    {
        //Check keys against required properties.
        $missingFields = array();
        foreach ($schema->required as $required) {
            if (!array_key_exists($required, get_object_vars($json))) {
                $missingFields[] = $required;
            }
        }
        if (!empty($missingFields)) {
            throw new Exception\ValidationException(sprintf('%1$s has missing required fields: "%2$s".', $context, implode('", "', $missingFields)));
        }
    }

    protected function validateProperties($schema, $json, $context)
    {
        //Validate each property in json against the schema specified in properties.
        foreach (get_object_vars($json) as $field => $val) {
            //If it's not set then it's an additional property. See validateAdditionalProperties.
            if (isset($schema->properties->$field)) {
                (new Validator())
                    ->validate($schema->properties->$field, $val, $context . '/' . $field);
            }
        }
    }

    protected function validateAdditionalProperties($schema, $json, $context)
    {
        if (!isset($schema->additionalProperties) || $schema->additionalProperties === true) {
            //If true, we allow all additional properties.
            //This is also the default behaviour for JSON schema and is unchanged by Swagger 2.0.
            //See "By default any additional properties are allowed." in https://json-schema.org/understanding-json-schema/reference/object.html
        } elseif ($schema->additionalProperties === false) {
            //We don't allow additional properties. Check whether there are extra fields we weren't expecting.
            if (isset($schema->properties)) {
                $extraFields = array_diff(array_keys(get_object_vars($json)), array_keys(get_object_vars($schema->properties)));
            } else {
                $extraFields = array_keys(get_object_vars($json));
            }
            if (!empty($extraFields)) {
                throw new Exception\ValidationException(sprintf('%1$s has unexpected extra fields: "%2$s".', $context, implode('", "', $extraFields)));
            }
        } elseif (is_object($schema->additionalProperties)) {
            //If it's an empty object, we also allow all additional properties.
            if (!empty(get_object_vars($schema->additionalProperties))) {
                //Fetch additional properties - ones not specified in properties.
                if (isset($schema->properties) && is_object($schema->properties)) {
                    $extraProperties = array_diff_key(get_object_vars($json), get_object_vars($schema->properties));
                } else {
                    $extraProperties = get_object_vars($json);
                }
                //Validate all additional properties against the specified schema.
                foreach ($extraProperties as $field => $val) {
                    (new Validator())
                        ->validate($schema->additionalProperties, $val, $context . '/' . $field);
                }
            }
        }
    }
}
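To illustrate how this validator might be driven, here is a hypothetical usage sketch. The schema and data are invented for the example, and it assumes the Swagception classes shown above (including the Validator class that ObjectValidator delegates to) are autoloadable; this is not code from the library's documentation.

<?php
// Hypothetical usage sketch: validate a decoded JSON object against an
// object schema that forbids extra fields (additionalProperties: false).
use Swagception\Validator\ObjectValidator;

$schema = json_decode('{
    "type": "object",
    "required": ["id", "name"],
    "properties": {
        "id":   {"type": "integer"},
        "name": {"type": "string"}
    },
    "additionalProperties": false
}');

$json = json_decode('{"id": 1, "name": "widget", "colour": "red"}');

try {
    (new ObjectValidator())->validate($schema, $json, '#/definitions/Widget');
} catch (\Swagception\Exception\ValidationException $e) {
    // "colour" is not declared in properties, so validateAdditionalProperties
    // should report it as an unexpected extra field.
    echo $e->getMessage(), PHP_EOL;
}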