What could be a reason for RCCB triggering? I have an RCCB/RCD with four circuit breakers at the entrance of my apartment (I am actually not quite sure what it is called in English, so: an RCCB, and the CBs are ABB S231R with different max amps). Recently I've been setting up an electrical socket in the kitchen, so I turned off the CB handling the kitchen line and installed the socket. During this process I was double-checking the line voltage from time to time (L to N and L to PE) with a multimeter (just in case), and a couple of times my RCCB tripped. What can cause this behaviour? As I understand it, an RCCB only trips when there is differential current in the circuit, so if every properly working circuit has some small amount of differential current by design, disabling one of the lines with its CB should decrease that differential current.

What make and model of meter are you using for this test?

@ThreePhaseEel Mastech MAS838

An RCD (in any form) is always comparing the current going out against the current coming back, which SHOULD be zero difference; if they differ by the rated amount of your RCD (usually 10 to 30 mA in your part of the world), the device trips. It might be that your type of meter measures the potential difference by applying a small load, so the act of trying to read voltages was a condition that the RCD interpreted as a short.

This totally makes sense to me, but the line that was inspected was cut off by its CB in the first place. So maybe the question should be rephrased as: how could cutting off one of the CBs affect the other lines in a way that increases differential current? I actually just inspected the voltage on the socket, both L to N and L to PE, and the RCD didn't trigger.

Most circuit breakers only disconnect hot/phase. They do not disconnect neutral. The problem is that somewhere in your handling of the wires, you managed to touch neutral to safety earth.
Now, the electricity returning on neutral from other circuits has two paths: it can go back the normal way through the RCCB, or it can go via this circuit's neutral, to the safety earth wire, to the panel, and to the neutral-earth equipotential bond on the far side of the RCCB. It flows on both paths at once, in proportion to their conductance (1/resistance). That diverted enough current to imbalance the RCCB and trip it.

Yeah, the earth-to-neutral sensing in most GFCI/RCD devices relies on the presence of a slight load.

Is your RCD 1P or 1P+N? The first only interrupts the live, the second both conductors. If it's 1P you may have a residual current on the neutral leaking to PE, making it trip. An easy fix could be replacing the RCD with a 2P model. Also, are you on TT earthing or TN distribution? If you're on TT, your breakers must be at least 1P+N because neutral is considered an "active" conductor; if you're on TN(-S), a connected neutral isn't that big an issue.
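A rough sketch of that current division (purely illustrative resistance values, not measurements from any real installation; `diverted_current` is a hypothetical helper):

```python
def diverted_current(load_current_a, r_normal_ohm, r_fault_ohm):
    # The neutral return current splits between two parallel paths in
    # proportion to each path's conductance (1/R).
    g_normal = 1.0 / r_normal_ohm   # normal neutral path through the RCCB
    g_fault = 1.0 / r_fault_ohm     # accidental neutral-to-PE path
    return load_current_a * g_fault / (g_normal + g_fault)

# 5 A flowing on another circuit's neutral, 0.2 ohm on the normal return
# path, 10 ohm through the accidental neutral-to-earth contact:
i_fault = diverted_current(5.0, 0.2, 10.0)
print(f"diverted: {i_fault * 1000:.0f} mA")  # ~98 mA, far above a 30 mA trip level
```

Even a fairly high-resistance neutral-to-PE contact diverts enough of another circuit's return current to exceed a 30 mA trip threshold.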
Sometimes bugs are in the eye of the beholder as a recent PHP bug report illustrates. That report also illustrates how quickly discussions in bug reports can spiral out of control, turning to anger and insults. There are some comical aspects to the thread, but the underlying issue, maintaining compatibility with existing bugs, is one that many projects struggle with. A PHP user ("endosquid") reported that the number_format() function had changed behavior in PHP 5.3; that is, when number_format("",0) is called, it no longer returns "0", instead it returns an empty string. Given that the first argument to the function is supposed to be a number, in particular a floating point number that is to be formatted based on the rest of the arguments, an empty string might seem like the right thing to return. On the other hand, all earlier versions of the function returned a string containing "0". It turns out that part of the work that went into version 5.3 was to clean up the parameter parsing code in PHP, and to use one routine, zend_parse_parameters(), internally. As PHP creator Rasmus Lerdorf related in the thread: "Most of PHP was using this already, but there were still some stragglers like number_format()." Lerdorf also suggested casting the first argument to a float (i.e. number_format((float)"",0)) as a solution to the problem. As one would guess, endosquid's application wasn't calling number_format() directly with an empty string, but was instead passing a variable that may or may not have been initialized. In general that is a bad programming practice, but it is quite common in PHP code where the language has often tried to "do the right thing" with uninitialized variables. But if the "right thing" changes, lots of code that relied on it can break. The argument that endosquid makes about what number_format() should return is not entirely without merit. 
The function is supposed to return a formatted number, and the empty string is hardly that, so endosquid believes that it should return "0". But, as Lerdorf points out, what would one expect number_format("a",0) to return? The unfortunate answer is that pre-5.3 versions did return "0" in that case. So, in tightening up the PHP parameter parsing code, a substantial difference in the behavior of number_format() was introduced. The documentation for number_format() is not terribly helpful as it doesn't address error conditions at all. It does specify that the first parameter is a float, but PHP will happily take strings like "9" or "3.14159" for that parameter, converting as needed. Given all that, programmers have to rely on what the language actually does, and since at least PHP 3, number_format() has always returned "0" when handed random strings. It doesn't take long for the bug report thread to descend into flames. Evidently endosquid works in a tightly controlled environment that requires a raft of paperwork to accompany code changes, but that still doesn't justify a claim of "MONTHS [of] fixing code for no real benefit". It seems clear that endosquid didn't quite understand who was responding to the bug report when asking Lerdorf to "escalate this to someone who can answer the question as to why this was changed". Lerdorf responds: "Escalate? Oh how I wish I had someone to escalate to." Lerdorf also explained that the change was first made public as part of the first 5.3 release candidate in March 2009. He said that interested folks had until July to make a case that any particular change shouldn't go into the release. While endosquid complained that 5.3 had only recently become available on the platform he was using, Lerdorf pointed out that users have some responsibility to keep up with their tools: Part of your responsibility in your position is to keep track of your tools and the changes coming down the pipeline.
5.3 was available to you as a release candidate in March of last year, and even earlier directly from our revision control system. Many things have changed and there are many many people out there affected by these changes, we recognize that. That is also why we are not likely to reverse a change like this that others in your situation have now accounted for, tested and deployed in production for many months simply because it is inconvenient for you. There is certainly some truth to Lerdorf's admonishment, but it didn't sit well with endosquid, who plans to change the C code back to the old behavior. Patching the language source—rather than making a fairly simple textual substitution to the number_format() call sites—seems a bit extreme, but is evidently easier in that environment. Unlike some proprietary alternatives, though, free software allows just that kind of change. But free software developers should not have to deal with insulting comments from bug reporters. There are multiple alternatives for endosquid, including staying with the 5.1.x version of PHP, patching the 5.3.x source, or fixing the actual calls, so getting angry and lashing out in the bug report is not likely to help anyone. It is, as Lerdorf points out, "a classic case of how not to treat unpaid volunteers who support critical pieces of your money-making infrastructure". There is always the question, though, of when a "bug" has lived long enough that it becomes something that needs to be carried forward. Once applications start depending on buggy behavior, there will always be annoyed users when the bug gets fixed. The Linux kernel has run into this problem numerous times, generally opting to maintain the "insanity" (in the words of Al Viro) for compatibility's sake. It is a difficult balance to strike. PHP developers cannot possibly know all of the different corner-cases and quirks that PHP applications depend on.
When fixing what they see as a bug, they have to rely on users testing betas and release candidates to find places where the "bug" label may not be appropriate—or at least requires some discussion. But users are often busy with other things, so we are likely to see this kind of situation play out for various projects in the future.
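The compatibility break at the heart of the thread can be sketched in Python; these are rough approximations of the two behaviors for illustration, not the actual PHP implementation:

```python
def number_format_pre53(value, decimals=0):
    # Pre-5.3 behavior (approximation): anything that fails to parse
    # as a number is silently coerced to 0.0 and formatted.
    try:
        number = float(value)
    except (TypeError, ValueError):
        number = 0.0
    return f"{number:,.{decimals}f}"

def number_format_53(value, decimals=0):
    # 5.3 behavior (approximation): the stricter zend_parse_parameters-style
    # parsing rejects non-numeric input, so an empty string comes back out.
    try:
        number = float(value)
    except (TypeError, ValueError):
        return ""
    return f"{number:,.{decimals}f}"

print(number_format_pre53("", 0))  # -> 0
print(number_format_53("", 0))     # -> (empty string)
```

Code that passed possibly-uninitialized variables relied on the first coercion; the cleanup silently switched it to the second, which is exactly the kind of change only user testing of release candidates tends to catch.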
Monthly Computer Vision Roundup #2 2017

Welcome back to our Computer Vision Roundups! Our February Meetup was an almost-February meetup, taking place on March 1st. Good thing all of you got the memo – we were delighted to have started the year with so many more members in our Computer Vision Group on Meetup.com and more new faces at our regular meetups! This time, our CTO and host of the meetup, Daniel Albertini, focused on a very hands-on topic: a tutorial on Deep Learning in iOS + Live Demo.

Deep Learning in iOS

This talk was catered to beginners in development with deep learning and showcased how a first project could be set up in iOS. Different options and frameworks were also introduced.

Short Outline of the Talk:
- About iOS Development
- BNNS Functions
- About TensorFlow
- TensorFlow Deep MNIST Tutorial

After a quick intro about possibilities in iOS development, we dove right into the Accelerate framework – a framework from Apple that helps run very large computational and mathematical workloads directly on your phone, and is optimized for high performance on the CPU. Within the Accelerate framework we find the BNNS functions. To set up a neural network within iOS applications, the Basic Neural Network Subroutines (BNNS) are the way to go, as they provide highly performant inference for neural networks. You need an already-trained model, though, which can then be fed to BNNS. For training neural networks, Daniel turned to TensorFlow, an open source framework developed by Google for deep learning and machine learning research. In the Deep Learning on iOS intro talk, Daniel uses TensorFlow for training a neural network with a training set of thousands of digits – the MNIST database of handwritten digits. In order to re-enact the Live Demo, please do follow the Deep MNIST for Experts Tutorial by TensorFlow!
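To give a flavour of what BNNS-style inference boils down to once training has produced the weights, here is a minimal pure-Python sketch of a single fully connected layer with softmax (random, untrained weights for illustration only; a real app would load weights trained in TensorFlow):

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def infer(pixels, weights, biases):
    # One fully connected layer: logits = W x + b, then softmax.
    # This is the kind of computation BNNS accelerates on-device.
    logits = [sum(w * p for w, p in zip(row, pixels)) + b
              for row, b in zip(weights, biases)]
    return softmax(logits)

random.seed(0)
pixels = [random.random() for _ in range(784)]            # fake 28x28 digit
weights = [[random.gauss(0, 0.01) for _ in range(784)]    # 10 output classes
           for _ in range(10)]
biases = [0.0] * 10
probs = infer(pixels, weights, biases)
digit = max(range(10), key=lambda i: probs[i])
```

The output is a probability distribution over the ten digit classes; the predicted digit is simply the class with the highest probability.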
This tutorial guides you through training a model with TensorFlow to detect digits in the MNIST data set, gets you data about the accuracy of your model, and concludes with training a deeper network to improve performance and accuracy.

A quick piece of info for all of you who are interested in TensorFlow: in February the first TensorFlow Dev Summit was held in Mountain View by Google Developers – the videos are now online on their YouTube channel!

More Deep Learning Talks

For a Deep Learning introduction in the browser with ConvNetJS and CaffeJS, have a look at the presentation from last July's Meetup by Christoph Körner!

Hold a talk yourself! You have a project or topic you would like to talk about, or you know someone who would like to share his/her experiences and knowledge at our Computer Vision Meetup? Please contact us! It is great to see how our community is growing each month, so if you haven't already – don't forget to join our meetup group! ;)

QUESTIONS? LET US KNOW!
Open date: March 23rd, 2020

Next review date: Thursday, Apr 23, 2020 at 11:59pm (Pacific Time). Apply by this date to ensure full consideration by the committee.

Final date: Tuesday, Jun 30, 2020 at 11:59pm (Pacific Time). Applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled.

Summary: The Climate Hazards Center (https://chc.ucsb.edu/) at the University of California, Santa Barbara seeks a highly motivated postdoctoral researcher for an exciting project supported by the US Agency for International Development (USAID) and the US Geological Survey. The project focuses on using remote sensing and machine learning to predict agricultural statistics (crop production, crop yields, prices) in food-insecure countries and will directly support the famine early warning efforts of USAID's Famine Early Warning Systems Network (FEWS NET). The project will have a strong focus on applications and will leverage cutting-edge science to support lives- and livelihood-saving early warning information.

- Applicants must have completed all requirements for a PhD degree in Statistics, Geography/Remote Sensing, Agricultural Economics, Agronomy, Hydrology, Environmental Science, Earth Science or a related discipline (except the dissertation) at the time of application; the PhD must be awarded by the time of appointment.
- Strong expertise in handling, processing and visualizing large geophysical or remote sensing datasets, and/or socioeconomic time series.
- Ability to implement predictive models (including but not limited to machine learning algorithms) based on remotely sensed and/or time series datasets.
- Fluency in programming languages such as R or Python.
- Demonstrated ability to write reproducible code for cleaning, integrating, and modeling spatial/temporal data from multiple sources, spatial scales, and temporal frequencies.
- Interest and experience in applied sciences.
- Proven record of independently leading research projects, conducting reproducible research, publishing journal articles and presenting at international and national conferences.

The initial appointment will be for one (1) year with a possible two (2) year reappointment based on performance during year 1 and available funding. Anticipated start date is July 1, 2020. The Department is especially interested in candidates who can contribute to the diversity and excellence of the academic community through research, teaching, and service as appropriate to the position.

To apply, please visit: https://recruit.ap.ucsb.edu/JPF01761

Please be prepared to include a Cover Letter, your most recent Curriculum Vitae, and three letters of reference. A Statement of Research demonstrating evidence of satisfying preferred qualifications is optional. Primary consideration given to applications completed by 4/23/2020.

The University of California is an Equal Opportunity/Affirmative Action Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Versions of Windows 9x

Windows 9x comes in various editions... As long as you stay away from "bland lame" systems that fob you off with license-only, "instant restore", "companion CD" or proprietary-modified OEM mutations, it doesn't matter which edition you have - but it's often crucial to know which version you have.

Windows 95 (original)

This shattered the Windows 3.yuk mold, and wild horses would not drive me back! Great new features included the ability to run Win32 applications, better driver support for DOS applications, better integration of DOS and Windows environments, better multitasking, Plug-n-Play sanity-checking before use of driver code, Long File Names, built-in Internet connectivity, non-default actions for file associations, a better user interface that uses both mouse buttons, and a better DOS Edit program.

When moving from DOS or Windows 3.yuk to Windows 9x, you should abandon the old ways of doing things. Remember what you learned in the older OSs, but don't expect it to be directly applicable! This applies to managing the startup axis and performance tweaking in particular. There are few things more pathetic than a Windows 9x system set up as if it was a DOS/Win3.yuk system. New compatibility and security issues were Long File Names and the registry Run keys, respectively.

Windows 95 SP1

This was mainly a bug-fix revision of the original Windows 95, and I don't have it running anywhere (hence vagueness about the version reporting). All editions switched to this version after release, and the additions and patches were available as free downloads. No major new features or risks here.

Windows 95 SR2

This milestone revision of Windows 95 was never made available in retail editions, much to their chagrin. All but the core features (FAT32 and NTKern) can be downloaded and retrofitted, but not those two.
Great new features were the ability to control monitor refresh rates, FAT32 to allow hard drive volume sizes over 2G, and the downloadable "USB Supplement" (NTKern) that added support not only for USB but is also needed for AGP. It's the minimum version you need for AGP and USB support. Some new compatibility issues were introduced with FAT32, if you were dual-booting NT or DOS, or using pre-FAT32 low-level file system utilities such as Norton DiskEdit etc. (FAT32 doesn't have to be used; it's an option). The device driver model had changed somewhat, so some driver revisions may be required.

Windows 95 SR2.1

Exactly the same as Windows 95 SR2, except that the USB Supplement was included on the CD (though not in the Windows installation process - you'd have to locate and install it afterwards). They missed the opportunity to upgrade the bundled MSIE 3.00 to the bug-fixed 3.01.

Windows 95 SR2.5

"My Computer" properties reports as: Windows 95 4.00.950c

The same as Windows 95 SR2.1, except that a second CD containing MS Internet Explorer 4 was included. If you didn't install this CD, you'd have an MSIE that looked like MSIE 2 but reported the version as 4 - weird. New escalation risks came with MSIE 4, which brought the risk of embedded HTML scripts to the active desktop, and Outlook Express, which will process HTML scripts embedded within email message text (a risk that can be controlled). The browser's HTML engine is also broken for MIME handling of attachments.

Windows 98 (original)

This is the version that brought AGP, USB and FAT32 to the retail market. The main new malware risk was the "View as Web Page" feature that processes HTML for every folder view - allowing embedded scripts to be run in the local hard drive security zone. The problem is that even if you turn off this folder view everywhere, it still spontaneously returns, especially when showing newly-created menu folders after an install.
Windows 98 also includes the Windows Scripting Host, and thus "support" for stand-alone script malware such as LoveLetter, and Outlook Express introduces uncontrolled (but not uncontrollable) auto-running of scripts embedded within HTML email messages. The browser's HTML engine is also broken for MIME handling of attachments. A new safety risk is that the auto-running ScanDisk default behavior is more aggressive; all problems are "fixed" without prompts and all traces of recovered data are discarded. One wonders if some reports of the "better reliability" of Windows 98 are merely because it is papering over file system damage in this way.

Windows 98 Second Edition

The one non-downloadable core feature added in this version was Internet Connection Sharing, which allows multiple PCs on a TCP/IP LAN to share Internet access through a single modem. Once again, the browser's HTML engine is broken for MIME handling of attachments.

Windows Millennium Edition

New core features are of dubious value and are badly-behaved, namely: System Restore, Windows Media Player 7 and Movie Maker (neither of the latter two can be left out or removed from an installation, and both insist on dumping their files within My Documents). Some useful functionality has been stripped out; there's no hard drive based DOS mode (there are fixes for that), you can no longer control auto-running ScanDisk on a per-risk basis, and there's no Resource Kit (either as a sampler, or as a pay-for add-on). The last is a major problem, as much has changed under the hood in WinME, along with new and hard-to-avoid core features that have never been documented in a Resource Kit. Some genuine (rather than eye-candy) user interface improvements have been made, and some annoyances fixed; CD-ROM drives no longer throw up errors because the disk hasn't spun up yet, and you can get "View as Web Page" to go away and stay away.
New compatibility risks come from yet another driver model tweak, a new lack of access to the real-mode part of the startup axis, a new TCP/IP stack from NT that breaks some firewall software, and the relocation of HKEY_CLASSES_ROOT to a new CLASSES.DAT registry file that may break unaware registry backups and utilities. New safety risks are that SR will write to every hard drive volume it sees (making WinME unusable for pulling data from at-risk hard drives - the behavior persists even if SR is disabled); ScanDisk now ignores ScanDisk.ini (which still exists as a red herring), is controlled through an Advanced button within the ScanDisk user interface itself, and still defaults to "fixing" everything and leaving no bodies behind. New malware risks include that of malware harbored within System Restore data, the lack of a proper DOS mode from which to conveniently manage Windows-level malware, and an unwanted side-effect of a bolder System File Protection: it undoes (in real time) attempts to risk-manage by renaming away system components such as WSH. Some risks are partially closed; Web View can be effectively disabled, and Outlook Express isn't quite as naked to the world as it was (Active Scripting is now disabled in the Restricted Zone and the EyeDog patch is applied). The browser's HTML engine is still broken for MIME handling of attachments.

(C) Chris Quirke, all rights reserved - January 2001, link massage April 2003
How To Guy

Get Started with VMware vCloud Director

The tech world is hot for cloud computing, and VMware says it's leading the charge. Its product that makes the public, private and hybrid cloud possible is vCloud Director (vCD). Despite the VMware surge, however, many VMware admins are still trying to figure out the difference between their existing vSphere infrastructures and a cloud. They're also struggling to determine whether vCD is something they should be considering.

vCD Quick Facts

Just because you have a vSphere infrastructure doesn't mean you have a cloud. Sure, you can call it that if you want, but it doesn't meet the minimum requirements of self-service and multitenancy. What makes that vSphere infrastructure into a true cloud is vCD. vCD is an abstraction layer that offers a self-service portal and support for multiple tenants. For example, those tenants might be customers of a services provider, or development groups at a software company. Tenants are provided their own flexible and secure virtual infrastructures without any underlying knowledge of vSphere. Through vCD, new workloads composed of multiple virtual machines (VMs) and their preconfigured applications -- along with multiple network topologies -- can be deployed with a few clicks of the mouse. Security between the tenants, resource controls and resource metering are all under the vCD umbrella. vCD is still new -- it's at version 1.5, only the second release -- and is still maturing. While any company that wants the features offered by vCD could use it, services providers and large enterprises are still the primary customers.
To make vCD work, you need:

- A vSphere infrastructure with a minimum of two ESXi hosts, plus vCenter, shared storage, and vSphere Enterprise or Enterprise Plus licensing
- vCD, an installable Linux application that accesses an Oracle or SQL database with a Web front-end (typically deployed as a VM)
- vShield Manager
- VMware Chargeback (optional)

Building a vCD Lab

I'm a hands-on guy, so when I want to learn something new, the first thing I do is try it out for myself. In my opinion, that's the best way to learn vCD. New with version 1.5, a virtual appliance version of vCD is available for download (with a free 60-day evaluation). No complex install, Linux commands or database configurations are required (as they are with the production version). Keep in mind that this virtual appliance is only supported as a proof of concept in lab environments. Still, it's fully functional and ideal for learning vCD. Along with the vCD virtual appliance, I also recommend checking out the vCD Evaluator's Guide, which walks you through the various features and functions of vCD. Again, keep in mind that you also need to meet all the previous requirements, such as two ESXi hosts, vCenter, vShield and Enterprise Plus licensing. In theory, you could run a virtual vCD evaluation in a virtual lab, inside Fusion or Workstation, but performance might be even more of a problem than it would be with the typical vSphere virtual lab.

Now Is the Time to Start

Even though your company may not be using vCD today, or might not have any immediate plans, VMware has said that vCD is its fastest-growing new product. You've seen what VMware ESX and vSphere have grown into. Why not get ahead of the curve and start learning vCD? That's my plan.

David Davis is a well-known virtualization and cloud computing expert, author, speaker, and analyst. David's library of popular video training courses can be found at Pluralsight.com.
To contact David about his speaking schedule and his latest project, go to VirtualizationSoftware.com.
I finally got my hands on the micro:bit, the BBC’s new educational computer and spiritual successor to the legendary BBC Micro, and I’m absolutely in love with its potential as a platform for learning how to code. Microcontroller-based devices like the micro:bit offer some advantages over PCs as a tool for learning about programming. They can be connected directly to a variety of interesting peripherals to motivate experimentation. More importantly, they provide what is, in contrast, a radically simple programming environment: there are no operating systems, threads, processes, filesystems, or virtual memory to hinder a true understanding of what it means, in the most basic sense, for your program to run on a computer. When coding a microcontroller it is (to a certain extent) just you and the CPU. On the other hand, it can take some effort to get up and running with most microcontrollers’ development environments. Even beginner-friendly kits like the Arduino will require special software and drivers to compile and load your firmware, which can make them a non-starter for some classrooms or casual beginners. And once all that’s set up, you’re likely going to be programming the thing in a dialect of C—not the most approachable choice for a new programmer. So I’m delighted that the micro:bit delivers all the advantages of a microcontroller while providing an extraordinarily easy to use development system. First, the platform: at the micro:bit’s heart is an ARM Cortex-M0 microcontroller running at 16 MHz, with 256 kB of flash and 16 kB of RAM. Its peripherals include a 5x5 array of LEDs with PWM, two buttons, a compass, an accelerometer, and a BLE radio; additional peripherals can be attached using a number of pins including I2C, SPI, and digital and analog I/O. I initially thought the micro:bit’s LEDs and buttons a laughably limiting provision for I/O, but after having programmed it I now think it’s a great idea. 
It provides just enough pixels to display scrolling text and basic graphics, while remaining simple enough that learners won't be overwhelmed by a need to employ abstractions like sprites in order to create simple visuals.

The development environment

As for the development environment, there's no software to install, which means the micro:bit should be readily usable by schools whose students use Chromebooks (or locked-down Windows machines or Macs). The only prerequisite is a web browser, which allows you to use one of multiple web-based code editors at microbit.org. And once you've written your program, there are no special drivers required to flash it onto the board; the micro:bit presents itself as a USB mass storage device, to which you need only download a hex file compiled by the editor in order to run your program.

Hobbyists will likely be drawn to the micro:bit's MicroPython port (python.microbit.org). This includes a set of well-documented Python libraries to take advantage of the board's peripherals, including external servos or neopixel LED strips. Advanced users can even access a Python REPL over the board's USB serial interface. As an engineer, I'm thoroughly impressed with the achievement of porting Python to the micro:bit. MicroPython generally targets the Cortex-M4, but here they got it running on an M0 with just 16 kB of RAM and no hardware floating-point support. And it allows you to write succinct little programs like this one, which plays a tone on an attached piezo buzzer with a pitch depending on the angle at which the device is held:

from microbit import *
import music

def frequency():
    # Map the y-axis tilt reading (-1000..1000 milli-g) onto 0.0..1.0
    y = accelerometer.get_y()
    pos = (y + 1000) / 2000
    pos = min(pos, 1.0)
    pos = max(pos, 0.0)
    # Sweep exponentially from 80 Hz up to 80 * 55 = 4400 Hz
    return int(80 * 55 ** pos)

while True:
    music.pitch(frequency(), duration=50)

Some minor trouble

Unfortunately, my experience with the micro:bit has not been entirely trouble-free. The first unit I ordered developed a problem where the USB interface chip would overheat while connected to my computer, which has also been reported by other users.
After this happened, I could no longer operate the micro:bit using its battery pack, and it would intermittently reset while powered by USB. I got a replacement unit which has yet to develop similar problems, but this experience doesn't bode well for the robustness of a device that is meant to survive in the hands of 11-to-12-year-old students.

Update: jaustin from the micro:bit Foundation commented on Hacker News: I work for the micro:bit Foundation, who have taken ownership of the project on from the BBC […] as the author hoped, the revision of the hardware now shipping is more resilient to ESD than the previous one :)

I also have some concerns about the placement of the external pins. Given that micro:bit kits like those sold by Tech Will Save Us use alligator clips to connect to the board's pins, it seems way too easy to short one of these pins to its neighbors.

Finally, the micro:bit's 16 kB of RAM is quite limiting, at least when using the MicroPython runtime. While writing a moderate-size program I managed to repeatedly exhaust the SoC's memory, resulting in difficult-to-debug MemoryError exceptions. I can easily imagine intermediate users running up against and being confused by this limitation, and would welcome a version of the micro:bit with double the RAM. But to be fair, I have not yet tried writing a program of similar size with PXT, which may have less overhead or may handle memory exhaustion with clearer error messages.

The above pain points aside, I'm extremely optimistic about the micro:bit's potential for introducing a new generation of students to programming. I imagine that the overheating USB chip issue will be resolved, perhaps by a new board revision, and I'm looking forward to this board being generally available for distribution to schools in the USA and elsewhere.
Does ModelState.IsValid == true guarantee that the passed model parameter is not null?

Assume that we have Create and Edit action methods that are attributed with HttpPost and have a model parameter of some type, for example BlogViewModel, as follows.

[HttpPost]
public IActionResult Create(..., BlogViewModel model) { .... }

[HttpPost]
public IActionResult Edit(..., BlogViewModel model) { .... }

In their body, we usually do validation as follows.

if (ModelState.IsValid)
{
    // do something
}

Here, "do something" can be an operation accessing a property of the model.

Question: I am not sure whether or not there is a possibility in which model becomes null. If model is null, then "do something" (such as accessing a property of model) will throw an exception. In the many examples I have read (from the internet and textbooks), I have not seen anyone doing a double check as follows.

if (model != null)
{
    if (ModelState.IsValid)
    {
        // do something
    }
}

or

if (ModelState.IsValid)
{
    if (model != null)
    {
        // do something
    }
}

Probably the condition ModelState.IsValid == true guarantees that model is not null. Is my assumption here correct? I am afraid I am making a time-bomb assumption.

The model will only be null if there are other errors with your code.

@StephenMuecke: So is it necessary to always do the double check as shown in my code above? Or does ModelState.IsValid == true not guarantee that model != null?

It's generally a good idea to check for null anyway. Hypothetically it shouldn't be null, all going well, but a refactor or other misconfiguration (e.g., routing) could cause it to be null.

Actually, ModelState.IsValid checks whether any errors have been added to the ModelState, so in the case where the model is nullable, ModelState is valid even if the model is null. In your case, however, BlogViewModel is required in your post request, so a request with a null model will raise an exception instead. Therefore, you do not need to check whether the model is null in this case.
ModelState is valid if the model is null (it's only invalid if the ModelBinder attempts to set the value of a property and that value is not valid). The model will be null if no values are passed in the request which match a model property name, or if you name the parameter the same as one of the properties in the model. Personally I don't think it's necessary (and I have never seen any example of it). In the 2nd case above, you will have corrected that before publishing, and if no values matching your model properties were sent in the request, you can assume it's a malicious user — so what if they see the error page that results from the exception? To answer your question: no, ModelState.IsValid does not check if your model is null, and your code will throw an error if that happens. In an API it is quite easy to have null models, if you make a mistake while building your request model and it doesn't match what the endpoint expects. Or someone else could look at your website, see the API calls, and decide to have some fun and flood your API with requests which don't have valid models. There are ways to check for null models in one place, such as here: ModelState is valid with null model. What is ModelState? The ModelState represents a collection of name and value pairs that were submitted to the server during a POST. Each of the properties has an instance of ValueProviderResult that contains the actual values submitted to the server. Validation errors end up in ModelState...
e.g.: [Required] public string FirstName { get; set; } [StringLength(50, ErrorMessage = "The Last Name must be less than {1} characters.")] public string LastName { get; set; } In this case, if your LastName contains more than 50 characters, ModelState.IsValid is false, because the ValueProviderResult holds a validation error. Hope you now have an understanding of ModelState. The answer to your question is that you can handle the model via ModelState: if the model is null, ModelState is automatically not valid. OP did not ask what ModelState is! @StephenMuecke yes, but to answer the whole question he needs to understand it. "if the model is null, ModelState is automatically not valid" is wrong @StephenMuecke is it possible in theory? sry, i never faced that situation If nothing is sent in the request that matches a model property, then the model will not be initialized by the DefaultModelBinder, and therefore nothing is added to ModelState and it will be valid. @StephenMuecke ok thanks for the update, i will look more The answer is simply no. ModelState represents attempts to bind a posted form (marked with BeginForm) to an action method, and it also includes validation information. It carries default model binder settings that specify which validation settings apply to the viewmodel, e.g. RequiredAttribute, so that null-value checking for required fields depends on which validation attributes exist, not on ModelStateDictionary itself. When the form is submitted, the DefaultModelBinder sets up viewmodel binding and checks every property inside the bound viewmodel class for the presence of validation settings (i.e. attributes). If the submitted values don't match the criteria being set (e.g. a null/empty value in a property marked with RequiredAttribute), it will add validation error(s) to the ModelState.Errors property, and subsequently ModelStateDictionary.IsValid returns false because it depends on the Errors property count.
Here is the IsValid checking mechanism taken from the ModelStateDictionary source: public class ModelStateDictionary : IDictionary<string, ModelState> { private readonly IDictionary<string, ModelState> _innerDictionary; public ICollection<ModelState> Values { get { return _innerDictionary.Values; } } public bool IsValid { get { return Values.All(modelState => modelState.Errors.Count == 0); } } } Hence, if all model properties are set to null/empty without specifying RequiredAttribute, ModelStateDictionary.IsValid is still true, because no validation attributes are present and the model binding succeeds. Similar issue: ModelState.IsValid even when it should not be?
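To make the takeaway concrete, here is a minimal Python sketch mimicking the semantics quoted above (the class and names are invented for illustration — this is not the ASP.NET API): IsValid only reflects recorded errors, so a null model with no recorded errors still reads as valid.

```python
class ToyModelState:
    """Toy stand-in for ModelStateDictionary: it only tracks errors
    recorded during binding/validation, nothing about the model itself."""

    def __init__(self):
        self.errors = []

    def add_error(self, key, message):
        self.errors.append((key, message))

    @property
    def is_valid(self):
        # "Valid" just means "no errors were recorded" -- nothing more.
        return len(self.errors) == 0


# If the binder never ran (no matching values were posted), the model
# stays None and no errors are recorded -- so IsValid is still True.
model_state = ToyModelState()
model = None
print(model_state.is_valid)  # True, even though model is None

# Hence the belt-and-braces pattern is to guard both conditions:
if model is not None and model_state.is_valid:
    pass  # safe to use model here
```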
STACK_EXCHANGE
MyDSL with TOR and Privoxy Forum: myDSL Extensions (deprecated) Topic: MyDSL with TOR and Privoxy started by: doobit Posted by doobit on July 22 2005,18:00 I've tried ELE, but it's less than complete. I'd like to see something as good as DSL 1.04 with TOR and Privoxy packaged to load on boot. Posted by GRAWL on July 23 2005,05:07 hell yes -that from "me" no -that from "dsl crowd" Posted by GRAWL on July 23 2005,05:15 arrr here it is right off the shelf < http://it.slashdot.org/it/05/07/22/1955246.shtml?tid=172&tid=95 > read somewhere in the comments Posted by doobit on July 25 2005,12:52 I honestly don't care about any of that. Tor just makes it possible for a journalist to work in countries that would arrest you for looking at or sending the wrong kind of information, or just for being a journalist in their country. I'm not a high school kid trying to find an anonymous way to look at . Posted by doobit on Aug. 01 2005,21:02 I'm sorry I brought it up. I've learned a bit more now and realize I can make my own customized DSL with myDSL. ELE with the .dsl package of OO works perfectly. Posted by PacketLost on Aug. 03 2005,19:32 Could we take the ELE tor and put it in the newest DSL? I think this would be a great addition to a portable OS. Posted by Blurg on Aug. 15 2005,12:14 Just finished making a very basic combined Privoxy/Tor package, and will hopefully be submitting it soon. Tor is compiled with a static libevent. I have tested it on a vanilla DSL install, and everything works fine. For now it's a tar.gz, and everything runs from /opt/. It's missing an installer script for starting at boot, and an automatic setup for firefox/dillo would be nice. In the current version you have to launch privoxy/tor manually, and set up your proxy in the options of your app.
So shell scripters are welcome to help. I am currently fixing some general ugliness and placement of configs. I'll try posting any progress at my Blog: < http://www.damnsmalllinux.org/talk/blog/414 > *Edit: nothing in my blog yet...* If somebody wants to be my "beta tester", PM me and I can send you my current version. And any ideas/suggestions/tips would be appreciated. Posted by WoofyDugfock on Aug. 23 2005,11:15 Blurg, which version of Tor did you use to build your dsl? It's just that versions prior to 0.1.0.10 apparently had a potentially serious security bug. See below (which was reposted on alt.privacy). (Just in case you weren't aware of it. :=) ) Date: Thu, 16 Jun 2005 18:15:33 -0400 From: Roger Dingledine <x...@mit.edu> Subject: Security bug in 0.0.9.x Tor servers The Tor 0.1.0.10 release from a few days ago includes a fix for a bug that might allow an attacker to read arbitrary memory (maybe even keys) from an exit server's process space. We haven't heard any reports of exploits yet, but hey. So, I recommend that you all upgrade to 0.1.0.10. If you absolutely cannot upgrade yet (for example if you're the Debian Tor packager and your distribution is too stubborn to upgrade past libevent 1.0b, which has known crash bugs), I've included a patched tarball for the old 0.0.9 series at: < http://tor.eff.org/dist/tor-0.0.9.10.tar.gz > < http://tor.eff.org/dist/tor-0.0.9.10.tar.gz.asc > Posted by Blurg on Aug. 24 2005,12:47 Thanks for the tip, I had read about that one, but keep me updated if you hear about any more bugs. For now I have already submitted an extension with: Tor: 0.1.0.14 Privoxy: 3.0.3 and Libevent: 1.1a Hopefully it will be approved shortly Posted by Blurg on Aug. 28 2005,16:28 Tor and privoxy can now be found in the my-dsl testing section.
Test it and write stuff here, or send me a PM if you have problems using it. To set up Firefox: go to Tools-->Options-->General-->Connection Settings-->Manual Proxy Configuration and put the address 127.0.0.1 and port number 8118 in HTTP and SSL. Start it up (e.g. run a shell and type: links). Hit F10, use the arrow keys to find Setup, go down to network options, and put 127.0.0.1:8118 in the http proxy options. Edit the file ~/.dillo/dillorc and put in the line: The next version of tor should hopefully do this on load. Also, tor and privoxy now run as the user dsl. If anyone has ideas about the security aspects of this, please let me know. I'm thinking of making it chroot, but that would make it dependent on the gnu-utils package... And I'm unsure if it has any impact, as dsl is pretty hack-proof as-is. Posted by anotherUser on Oct. 02 2005,23:52 If I follow everything in your post and in the mydsl info, I get "null can't resolve dns" in dillo. I am not using the latest dsl
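The same manual proxy setup can also be expressed programmatically. Here is a small Python sketch (illustrative only, not part of the package) that routes urllib traffic through Privoxy's default listener at 127.0.0.1:8118, matching the Firefox/links/Dillo settings above:

```python
import urllib.request

# Privoxy's default listen address; it forwards on to Tor.
PROXY = "http://127.0.0.1:8118"

proxies = {"http": PROXY, "https": PROXY}
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))

# opener.open("http://example.com/") would now travel via Privoxy/Tor;
# it is not called here because it requires both daemons to be running.
```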
OPCFW_CODE
Microsoft Security Bulletin MS02-039 However, before the actual authentication process takes place, SQL Server exchanges some preliminary information. Revisions: V1.0 (August 14, 2002): Bulletin Created. Patches for consumer platforms are available from the WindowsUpdate web site. Other information: Acknowledgments: Microsoft thanks David Litchfield of Next Generation Security Software Ltd. Microsoft Security Bulletin MS02-056 - Critical Cumulative Patch for SQL Server (Q316333) Published: October 02, 2002 | Updated: January 31, 2003 Version: 1.2 Originally posted: October 02, 2002 Updated: January 31, 2003 Specifically, the patch changes the operation of SQL Server to restrict unprivileged users to only performing queries against SQL Server data. Severity Rating: Buffer Overruns in SQL Server Resolution Service — SQL Server 2000: Internet Servers: Critical; Intranet Servers: Critical; Client Systems: None. Denial of Service via SQL Server Resolution Service — Internet Servers; Intranet Servers. What vulnerabilities does this patch eliminate? This is a privilege elevation vulnerability. V1.2 (January 31, 2003): Updated to advise of supersedence by MS02-061 and to clarify installation order when Hotfix 317748 is applied in conjunction with this security patch. DBCCs are utility programs provided as part of SQL Server 2000. Why did you only re-release this patch for SQL Server 2000? The release of the "Slammer" worm made it especially critical for SQL Server 2000 customers to deploy this patch. In addition, it eliminates four newly discovered vulnerabilities. Revisions: V1.0 (October 16, 2002): Bulletin Created.
Thus, although the attacker's code could take any desired action on the database, it would not necessarily have significant privileges at the operating system level if best practices have been followed; it might have few privileges outside of SQL Server. Because the SQL Server Agent service account is often configured with Windows administrative privileges, this allows a job to create a file anywhere on the system, regardless of the user's privileges. The SQL Server 2000 patch can be installed on systems running SQL Server 2000 Service Pack 2. What causes the vulnerabilities? The vulnerabilities result because a pair of functions offered by the SQL Server Resolution Service contain unchecked buffers. In addition, depending on the configuration of the database server, it could be possible for the attacker to take actions on the operating system that the SQL Server service itself was capable of. At this writing, these patches include the ones discussed in: Microsoft Security Bulletin MS00-092, Microsoft Security Bulletin MS01-041, Microsoft Security Bulletin MS02-030. The process for installing the patch varies somewhat depending on the specific configuration. This vulnerability could enable an attacker to gain administrative control over SQL Server. If you have applied this security patch to a SQL Server 2000 or MSDE 2000 installation prior to applying the hotfix from Microsoft Knowledge Base article 317748, you must answer "no" What could this vulnerability enable an attacker to do? An attacker who was able to successfully exploit this vulnerability could do either of two things. V1.2 (February 28, 2003): Updated "Additional information about this patch" section. The readme.txt describing the installation instructions also contains instructions on removing the patch.
Note: The patch released with this bulletin is effective in protecting SQL Server 2000 and MSDE 2000 against the "SQL Slammer" worm. It is a denial of service vulnerability only. Do I need the re-released patch? No - the original patch is fully effective in correcting security vulnerabilities, including the vulnerability exploited by the "Slammer" worm. This issue received a critical rating because an authenticated user could connect to a SQL Server and insert, delete or update web tasks. However, constructing a query like this would require the attacker to possess intimate knowledge about the internals of a web site's search function. What causes the vulnerability? The vulnerability results because one of the Database Console Command (DBCC) utilities provided as part of SQL Server contains unchecked buffers in the section of code that handles input. The situation involved in the vulnerability could not occur under normal conditions. How much of a system's resources could be monopolized through such an attack? It would depend on the specifics of the attack. If you have applied this security patch to a SQL Server 2000 or MSDE 2000 installation prior to applying the hotfix from Knowledge Base article 317748, you must answer "no". It would not be necessary for the user to successfully authenticate to the server or to be able to issue direct commands to it in order to exploit the vulnerability.
- However, this patch has been superseded by the patch released with MS02-061, which contains fixes for additional security vulnerabilities in these products. - If a network doesn't host any Internet-connected SQL Servers, the port associated with the SQL Server Resolution Service (and all other ports associated with SQL Server) should be blocked. - An attacker who created such a packet, spoofed the source address so that it appeared to come from one SQL Server 2000 system, and sent it to a neighboring SQL Server 2000 system - What's the keep-alive function in SQL Server 2000? SQL Server 2000 includes a mechanism by which it can determine whether a server is active or not. - You can also address this issue by installing Service Pack 3a. - It might only require that the administrator restart the service. - A vulnerability associated with scheduled jobs in SQL Server 7.0 and 2000. - Impact of vulnerability: Elevation of privilege. This patch does not include the functionality of the Killpwd tool provided in Microsoft Security Bulletin MS02-035. Security Resources: The Microsoft TechNet Security Web Site provides additional information about security in Microsoft products. The risk posed by the vulnerability could be mitigated by, if feasible, blocking port 1434 at the firewall. Unlike the DBCCs discussed in MS02-038, the one affected by this variant could be executed by any SQL user. Does that mean that the attacker wouldn't need a valid SQL Server userid and password to exploit the vulnerability? What causes the vulnerability? There is a flaw in the stored procedure that runs web tasks, where it is possible for a low-privileged user to run that stored procedure.
Localization: Localized versions of this patch are available at the locations discussed in "Patch Availability". The SQL Server Resolution Service, which operates on UDP port 1434, provides a way for clients to query for the appropriate network endpoints to use for a particular SQL Server instance. The first two are buffer overruns. How might an attacker do this? V1.1 (July 25, 2002): Updated to note that MSDE 2000 is affected by the vulnerabilities. Acknowledgments: Microsoft thanks David Litchfield of Next Generation Security Software Ltd. for reporting these issues to us and working with us to protect customers.
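As a concrete illustration of the "block port 1434 at the firewall" mitigation, a present-day Linux packet filter rule might look like the following. This is illustrative only — the bulletin predates this tooling, and the exact syntax depends on your firewall:

```
# Drop inbound UDP datagrams to port 1434 (SQL Server Resolution Service)
iptables -A INPUT -p udp --dport 1434 -j DROP
```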
OPCFW_CODE
|Shipped Characters:||Fabian Rutter and Jasper Choudhary| |Length of Relationship:||Fabian's birth-present| |Status:||Godfather and Godson| |Other Pairing Names:|| Fasper| Jaspian (Jasp/er and Fab/ian) is the Godfather-Godson relationship of Jasper Choudhary and Fabian Rutter. The two don't seem to see each other often, based on Fabian's reaction to Jasper's arrival at the school, but they both care for each other deeply as family. Things got a bit tense when Jasper started having to lie to Fabian due to his work for The Collector, but by the end of the season they seem to be on good terms again. This pairing has only been shown and mentioned in the second season. - They both seem excited to see each other. - Jasper is introduced as Fabian's godfather. - Fabian teasingly calls Jasper a "total Egypt geek." - Fabian is very eager to show Jasper the mark Nina got. - Fabian is happy that Jasper gets to be the curator for the exhibition. - Jasper promises to help him. - Despite trusting Jasper, Fabian is worried when Amber asks him about the amulets. - Jasper gets upset when Vera pressures him into spying on Fabian and his friends. - Vera manages to get him to obey by threatening Fabian's safety, showing he'll do anything to protect him. - Fabian finds Jasper with the cube. - After a bit of disagreement, Jasper ends up allowing Fabian to take the cube back. - Fabian was glad that it was Jasper who found the cube. - Fabian visits Jasper to ask about the Egyptian film. - They talk while Nina looks at Senkhara's crown. - Jasper tells Fabian how he knows about "the unknown ruler". - Jasper is surprised when he sees Fabian isn't in class when he stops by. - Fabian asks Jasper a bit about the Song of Hathor. - Fabian tells Jasper that he had a brilliant excuse for being late, but forgot about it. - Fabian desperately tries to tell Jasper that he didn't steal the Ox Bell. - Jasper says that he believes Fabian wouldn't do something like that.
- Jasper is impressed with Fabian's guitar playing at the dinner. - Fabian gets very scared when he overhears Jasper and Jerome talking in the library. - Fabian can't stop thinking about Jasper and Jerome's talk. - Fabian asks Jasper about his talk with Jerome. - Jasper lies to him about a "gnome theft." - Fabian thanks Jerome for "helping Jasper with the gnome case." - Fabian seems close to tears when he discovers that Jasper lied to him. - When Jerome revealed the truth about what Jasper was doing, Fabian was very upset. - Jerome insisted that Jasper was doing it to protect Fabian. - At the beginning before everyone confronts Jasper, Fabian is visibly upset. - Fabian sadly mentions that Jasper didn't have to lie to him. - Jasper responds that he didn't have a choice, and was doing what he had to do to protect Fabian. - Fabian and the rest of Sibuna confront Jasper on the rules of Senet.
OPCFW_CODE
Dependabot removing platform specific gems from Gemfile.lock Package manager/ecosystem ruby:bundler 2.2.3 Manifest contents prior to update Gemfile gem 'sorbet' gem 'sorbet-runtime' Gemfile.lock (snippet) sorbet (0.5.6034) sorbet-static (= 0.5.6034) sorbet-runtime (0.5.6034) sorbet-static (0.5.6034-universal-darwin-14) sorbet-static (0.5.6034-universal-darwin-15) sorbet-static (0.5.6034-universal-darwin-16) sorbet-static (0.5.6034-universal-darwin-17) sorbet-static (0.5.6034-universal-darwin-18) sorbet-static (0.5.6034-universal-darwin-19) sorbet-static (0.5.6034-universal-darwin-20) sorbet-static (0.5.6034-x86_64-linux) Updated dependency n/a What you expected to see, versus what you actually saw Dependabot is removing the platform-specific dependencies, which were added as part of Bundler 2.2.3 (see rubygems/rubygems#4180). They should be left untouched, as the PR is for a different dependency, and it works fine when using bundler 2.2.3 via the CLI. Images of the diff or a link to the PR, issue or logs I'm running into the same thing. You can see this with nokogiri 1.11.0.rc4 as well, since it's now shipping as a pre-compiled gem. It would be really great if you could specify platforms in the Ruby dependabot config. In our project we bundle ruby, x64-mingw32, and x86-mingw32 into a single Gemfile.lock we use to build our project on Linux/Mac/Windows. Dependabot just can't handle that kind of non-ruby platform situation. @connorshea it didn't make a difference for us, as nokogiri doesn't ship with gems for each version of macOS.
Hey, yeah I think this happens because we don't fully support bundler v2 yet :( We're currently planning this work though, and hope to add support relatively soon. In the meantime, we downgraded back to bundler 2.2.1 and added the platforms we use via bundle lock --add-platform -- Dependabot is too useful :slightly_smiling_face: @connorshea according to this comment by @feelepxyz, support has been added for bundler v2. Does that mean this issue will not occur? We're still on an older bundler version because we thought this issue was still open :slightly_frowning_face: I think it's fixed now, yeah. Sounds like you can close this issue @jurre @feelepxyz
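For reference, the `bundle lock --add-platform` workaround mentioned above records extra platforms directly in the lockfile, so platform-specific gems survive resolution. A sketch of the commands and the resulting Gemfile.lock section (the platform names here are examples):

```
bundle lock --add-platform x86_64-linux
bundle lock --add-platform x86_64-darwin-19

# Gemfile.lock then ends with something like:
#
#   PLATFORMS
#     ruby
#     x86_64-darwin-19
#     x86_64-linux
```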
GITHUB_ARCHIVE
StatusCodeAndMessage test client sometimes receives "Received RST_STREAM err=8" In the status_code_and_message interop test, after the server-side handler correctly sets StatusCode = Unknown and StatusDetail = "test status message", the client receives status code Unknown, but the message is sometimes set to "Received RST_STREAM err=8". Test Failure : Grpc.IntegrationTesting.InteropClientServerTest.StatusCodeAndMessage Expected string length 19 but was 25. Strings differ at index 0. Expected: "test status message" But was: "Received RST_STREAM err=8" @murgatroid99 , have you seen a similar problem for Node? Not that I can remember. Same thing happens for the UnimplementedMethod test. node seems to have the same problem https://grpc-testing.appspot.com/job/gRPC_interop_master/1614/testReport/junit/(root)/tests/cloud_to_cloud_node_csharp_server_status_code_and_message/ FYI @ctiller, this has been seen in both node & c#, so it looks more like a C core issue. Reassigning to @murgatroid99 as I will be OOO for a while. Seen again at https://grpc-testing.appspot.com/job/gRPC_interop_master/2172/testReport/junit/(root)/tests/cloud_to_cloud_csharp_csharp_server_unimplemented_method/ I am pretty sure this is a core bug. The bug is that this error is going on the wire, not that the libraries are reporting it. https://grpc-testing.appspot.com/job/gRPC_interop_master/3143/testReport/junit/(root)/tests/cloud_to_cloud_node_csharp_server_status_code_and_message/ Similar thing happens for the UnimplementedMethod interop test: https://grpc-testing.appspot.com/job/gRPC_interop_master/3133/testReport/junit/(root)/tests/cloud_to_cloud_csharp_csharp_server_unimplemented_method/ I will probably start looking into this. Yang might be able to help (depending on load)
https://http2.github.io/http2-spec/#ErrorCodes CANCEL (0x8): Used by the endpoint to indicate that the stream is no longer needed. How are C# and Node reporting the status to core? With a batch containing send status, or with a call_cancel_with_status? I do e.g. await responseStream.WriteStatusAsync(new Status(StatusCode.Unimplemented, ""), Metadata.Empty).ConfigureAwait(false); I am actually invoking a "safety" call_cancel in the server-side handler if the "cancelled" flag is set when the "recv_close_on_server" event is delivered. I recall this used to be necessary to correctly close the request stream if the server wanted to just write the status and ignore all the requests. It looks like the issue is caused by the C# server (that's why we were seeing it with both node and C# clients). My apologies for thinking it's a C core bug. On the other hand, there might be a C core bug that is more subtle - it looks like if you concurrently try to perform write-status and cancel operations, on the client side you'll see something in between - the status message "Received RST_STREAM err=8". So that behavior is -kind of- expected: if a stream isn't fully closed, a cancellation must put a RST_STREAM on the wire. There's no synchronization mechanism that's low enough cost to allow any other semantics (we don't want to slow the write path to make allowances for cancellation). So... we let the write status and the cancellation race down to the transport level, and see which one arrives first to decide what we should do.
Hi, I'm getting a similar error message from golang (server and client) as well: `error code = 13 - stream terminated by RST_STREAM with error code: 1` Any advice? Thanks,
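The "race down to the transport" semantics described above can be sketched abstractly: whichever outcome (the status write or the cancel) reaches the transport first wins, and the loser is simply dropped. This toy Python model is purely illustrative — nothing here is from the gRPC codebase — but it shows why a client can observe either the real status message or a RST_STREAM-derived one:

```python
import threading

class ToyTransport:
    """First outcome to reach the 'transport' wins; later arrivals are
    dropped, mirroring how a racing status write and cancel resolve."""

    def __init__(self):
        self._lock = threading.Lock()
        self.outcome = None

    def try_close(self, outcome):
        with self._lock:
            if self.outcome is None:
                self.outcome = outcome  # first writer wins

transport = ToyTransport()
status = 'status UNKNOWN: "test status message"'
cancel = "RST_STREAM err=8 (CANCEL)"

threads = [threading.Thread(target=transport.try_close, args=(o,))
           for o in (status, cancel)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Which message the peer sees depends purely on scheduling,
# just like the flaky interop runs above.
print(transport.outcome)
```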
GITHUB_ARCHIVE
fatal error LNK1136: invalid or corrupt file Hello, I am trying to install ShadowOui using OASYS and have encountered the following error when attempting to install shadow3: "build\temp.win-amd64-3.8\shadow3c.lib : fatal error LNK1136: invalid or corrupt file" From what I can tell, it appears there is an issue with my C++ compiler that I am unsure how to fix. I am currently using Visual Studio build tools with the following installed: MSVC v142 - VS 2019 C++ x64/86 build tools Windows 10 SDK (10.0.18362.0) - but have also tried using v 10.0.14393.0 Windows Universal CRT SDK I am using a Windows 10 machine with MinGW-W64 GCC-7.3.0 for the fortran compiler and the miniconda installer with python 3.8.5. I have also tried using other GCC versions (8.1, 6.4), but the newer version gives a different error, as has been described in other issues. Here is the full output during installation just in case it provides any useful information. shadow3_lnk1136.txt Hello, It is not clear to me whether you tried to install it from sources or using pip; we recommend installing the Oasys version using pip. It is installed when you install Oasys. It installs the binaries that work well (at least up to now) in Windows 10. If you want to build from sources, you need an old gfortran compiler (version 8 does not work: https://github.com/oasys-kit/shadow3/issues/35 but I see you use version 7... ). Please let me know if the pip-installed binary version does not work. Hello, Sorry for the confusion. I installed Oasys using the .bat files as suggested in the wiki page and have been attempting to install shadowoui / shadow3 using pip. I am getting these errors during pip installation. During installation of oasys, I had to downgrade to the previous version of numpy due to "an error in windows runtime" (more info here). During pip installation, I first got an error saying I needed Visual C++ 14.0 or later.
After installing build tools, I got another error saying "object has no attribute compiler_f90", which was fixed by installing a gfortran compiler. The issue I am getting now is after installing both compilers following those errors. I wonder if all of these are somehow connected? Thank you! Hello, I suggest restarting the Oasys installation from scratch. First clean (delete) the previous installation, which created the folder c:<home>\miniconda3. The installer will create a new miniconda installation, so you do not have to update compilers or change installed versions. Usually the binaries we have in the shadow3 installer work well for all PCs, but maybe yours has something new. Let me know where it crashes. If you want to reinstall (or retry) shadow3 after that, be sure to use the Oasys python: c:<home>\miniconda3\python.exe -m pip install --upgrade shadow3 Sorry for this mess! Hello, I uninstalled all compilers and miniconda to attempt a fresh install. Oasys installs just fine (first installing numpy 1.19.3 through pip to avoid current issues with 1.19.4 and windows), but when trying to install shadowoui / shadow3 I get an error saying I need Visual C++ 14.0 or greater. I get this error both when trying to install from the Oasys GUI and from miniconda using c:\miniconda3\python.exe -m pip install --upgrade shadow3. No problem with the errors, this has been a good learning experience for me (I only know how to code in python, and not very well). Thanks! OK, I see that for some reason the binary pip distribution is not good for your platform, and pip tries to build it from sources. And of course it fails. Building shadow3 on Windows is a pain. In fact, we use a version compiled some time ago, and I no longer have the system I used, so it will take time to build it again...
I suggest trying to force the installation of the binary by doing (after activating your Oasys conda environment): pip install https://files.pythonhosted.org/packages/01/18/124205b5e0f41b872ff4ba4f1dc5d48de6b8c6e9c7985a1a6d5ea209e3ce/shadow3-18.5.30-cp37-cp37m-win_amd64.whl Let me know... Otherwise I will try to recompile everything and let you know the procedure... but it will take some time. I noticed you linked a wheel for python 3.7 and I was using python 3.8. I switched my version and everything installed just fine from pip. Thank you so much for the help! Indeed. We are not supporting python 3.8 yet. I'm happy this issue was solved. We will look into new versions in the future.
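The version mismatch above (a cp37 wheel on Python 3.8) can be spotted directly from the wheel filename, whose tags encode the required interpreter. A small hypothetical helper (not part of the thread, and handling only simple single-tag filenames like this one) to check it:

```python
import sys

def wheel_tags(filename):
    """Split a simple wheel filename into its five dash-separated fields:
    (name, version, python_tag, abi_tag, platform_tag)."""
    stem = filename[:-len(".whl")] if filename.endswith(".whl") else filename
    name, version, py_tag, abi_tag, plat_tag = stem.split("-")
    return name, version, py_tag, abi_tag, plat_tag

def matches_interpreter(filename):
    """True if the wheel's python tag matches the running CPython."""
    _, _, py_tag, _, _ = wheel_tags(filename)
    return py_tag == "cp%d%d" % (sys.version_info.major, sys.version_info.minor)

wheel = "shadow3-18.5.30-cp37-cp37m-win_amd64.whl"
print(wheel_tags(wheel))  # ('shadow3', '18.5.30', 'cp37', 'cp37m', 'win_amd64')
# On Python 3.8 this wheel's cp37 tag does not match, which is exactly why
# forcing its installation fails until you switch to Python 3.7.
```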
GITHUB_ARCHIVE
Snakes and Worms Do snakes eat worms? It’s an age-old question that keeps popping up in books, TV shows, and movies. Whether it’s the snake from “The Jungle Book” or any other reptilian villain you can think of, they all seem to love feasting on earthworms. In fact, most people would go out of their way to avoid any kind of snake if they knew those scaly creatures love worms so much. However, the real question is: can snakes eat worms? In this article, we debunk some common misconceptions about worms and snakes. Do Snakes Eat Worms? Yes Yes, we’re diving right into the main question. Snakes do eat worms. It’s not something we wish was untrue, but it’s the reality of the situation. Many snakes have a diet that includes worms, along with birds, mice, and other small rodents. Some species rely heavily on lizards or amphibians instead of worms, but the general point stands: plenty of snakes eat worms. Snakes swallow their prey whole and rely on strong stomach acid and digestive enzymes to break it down. Soft-bodied prey like earthworms is broken down easily and passes through the digestive tract with little trouble. Note: A snake will simply avoid worms that are too big for it to swallow. Are Snakes Worm Eaters? Yes, many snakes are worm eaters, though some species prefer lizards or amphibians. Sounds gross, we know! Many snakes will eat worms at some point in their life, even if they prefer other types of food later on. For instance, young rat snakes will take small prey such as worms, but as they grow, they prefer other types of food. Meanwhile, king snakes are known for being generalist feeders, which means they eat whatever they can find, including worms.
However, if there are other types of food readily available, they will prefer those to worms.

Can Snakes Digest Worms Safely?

Yes, snakes are able to eat and consume worms without any harm. Snakes can digest worms safely because they have a special enzyme that helps break down the worm's outer layer. It's similar to the way humans digest a steak or other red meat. In fact, the way humans digest food is almost identical to the way snakes digest worms. The only difference is that humans use stomach acid to break down food, while snakes use the enzyme to liquefy the inside of the worm. The enzyme is also responsible for making sure the stomach acid doesn't digest the snake's own tissue.

Do Snakes Like Eating Worms?

Snakes enjoy eating worms such as earthworms and mealworms, but prefer rodents. This is just their taste. They don't mind eating a couple of worms here and there, but in every situation they prefer mice or rats.

Can Snakes Eat Mealworms? Yes, but Every Snake Is Unique

Some snakes may be able to eat mealworms as babies, but as they grow into adults, they may not be able to eat them anymore. Many different factors can determine whether a snake can eat mealworms, including the species, the size, and the age of the snake. There are also different types of mealworms, and some may be more nutritious than others. It's generally a good idea to start a baby snake off with mealworms that are high in protein and low in fat, but it's important to do some research on each individual brand before deciding to feed mealworms to your snake.

Can Snakes Eat Earthworms? Yes, but Some Are Poisonous

It's possible, but you have to be careful. Earthworms are usually safe to feed to snakes as long as they have been raised in a safe environment. However, it's important to note that there are several types of earthworms that are poisonous to snakes.
The best way to avoid accidentally feeding your snake a poisonous worm is to purchase a soil sample from a science supply store and feed your snake the worms found within that sample. You'll have to do this a few times before your snake starts to recognize the taste of the soil and worms, but once they do, feeding them will become much easier.

Are Worms Good for Snakes?

Most worms are low in calcium, and this can lead to problems in snakes that are fed this type of worm regularly. However, there are types of worms that are high in calcium and protein, which can help prevent the bone issues that low-calcium worms can cause. It's important to select one type of worm or another and feed it to your snakes consistently to make sure they get enough calcium.

Should I Feed My Pet Snake Worms?

This is most likely one of the main questions you're wondering about. As long as you make sure the worms you feed to your snake are safe to eat and are high in calcium, then yes, you can feed your snake worms. There are a few worms that are good to feed to a snake because they are high in protein and calcium. However, most worms are too low in calcium to be a regular part of a snake's diet. Overall, snakes love eating worms, and worms can be very nutritious. Just make sure the worms you choose to feed to your snake are high in calcium and low in fat. If you do that, then you can be sure your snake will be happy and healthy.

There are few things as iconic to snakes as eating a worm. This image has been around for as long as we can remember, and even today it is as popular as it has ever been. It's so popular that people hardly even question it when they see it in books or movies. We have debunked some common myths about worms and snakes in this article and discussed the truth behind the image of the snake eating a worm. It's not a myth that snakes can eat worms. It's a fact. The real question is: can snakes eat worms?
And the answer to that is yes.
The concept of "open source" comes in mainly two flavours: "open" as in you can download it and fiddle with it yourself but you're not in charge of how the project evolves, and "open" as in everyone can integrate changes back into the main repository. The ideal sits somewhere in the middle, which is where I think CesiumJS currently sits. The history of open source is littered with branches and factions and politics, and it's a fragile balance between all the people invested in the project. I think Cesium.com is doing a great job of having an open community around the platform, and using Git / GitHub for developer integration. But to go back to the OP's concern: how do you measure the health of an open source project? There's more than commits to the main branch to consider here; there's uptake, smaller bug fixes, larger feature adds, version flows, community chatter, support activity (open and semi-open), issue tracker chatter and engagement, code quality and style, the evolution of said code quality and style over time, the profile and visibility of the core team of people involved, etc. Commits to the repo are only one of those factors, and in my opinion not even one of the most important ones. For me it's more about the core people's willingness to interact with the community at large, in forums like this and in git / issue trackers: talking about bugs openly, their engagement in issues and solutions, and how they communicate about the future. If an open source project becomes successful, there are a few more things to consider, mainly for the parties that have commercial interests in it. Here it gets tricky. We'd all love to make this the best and most open and free project in the world, but development is a costly affair. Smart people making smart and complex software for free is not a sustainable model, and for everyone involved in Cesium there are tons of other things we do besides working on the shared core.
There's apps (that use the core), and frameworks, and admin, and testing, and support, and all that other stuff, which also costs money. The key Cesium players must all consider how to balance the cost of the shared platform we all enjoy the benefits of. In short: some weeks are slow on the core, other weeks there's too much. Sometimes we focus on the community, then on testing. Some bugs are fixed; sometimes a pull request is granted, other times not. The branch activity is not a measure of the health of the project. We, the people in this community, are. And with that, come on in! This is a healthy community where I've never seen a single negative thread, which, as far as open source projects of this scale usually go, is quite unique and refreshing. Alex (not a Cesium.com guy)
This document describes the ACM SIGCHI policy on CHI Conference Compensation of Course Instructors, which applied through CHI 2019 and prior years. The SIGCHI EC voted to remove this policy on August 2, 2019, and instead to ask the CHI Steering Committee to adopt their own policy. The following text no longer applies and is retained here, for the coming year, as a historical record only.

SIGCHI POLICY FOR COMPENSATION OF CHI COURSE INSTRUCTORS

Effective Date:
Replaces: Previous SIGCHI Tutorial Policy
Responsible SIGCHI Officer: Vice-President for Conferences

INTRODUCTION

This policy has been adopted by the SIGCHI CMC to provide a clear statement of compensation for Course Instructors at SIGCHI sponsored conferences. It is intended to provide guidance to conference chairs, Courses chairs, and the conference committee. This policy contains a full and complete statement of the compensation package for Course Instructors. No other stipends, reimbursements, or benefits are provided as part of the compensation package. Alternate compensation and reimbursement for special needs will be provided only with explicit approval of the Conference Chairs in consultation with the SIGCHI Vice-President for Conferences. Compensation is based on the number of course units taught. Each course unit represents one session of instruction, without regard to the number of instructors. A course, in contrast, may consist of one or more course units with one or more instructors.

COMPENSATION

An honorarium will be paid for each course unit taught. The honorarium may be shared between multiple instructors. The honorarium can be used directly to compensate for registration fees, as detailed in the courses chair form. There is no separate transportation or subsistence allowance for course instructors.
In addition, based upon the SIGCHI CMC Complimentary Registration, Conference Courtesies, and Conference Support policy, course instructors are not eligible for a complimentary registration on the basis of their being a courses instructor. One copy of the course notes for their courses will be provided to each instructor. If the conference cancels a course due to low enrollment or other reasons not related to instructor performance or availability, the course instructor(s) will not be compensated.

PRIMARY INSTRUCTOR

If multiple instructors are teaching one course, a single instructor must be designated as the Primary Instructor, who will function as the principal contractor for the course with the conference. The Primary Instructor is responsible for all communication with the conference committee and staff, and for coordination with the other instructors, in order to provide the required materials in a timely manner and ensure the smooth operation of the course. The Primary Instructor must provide detailed information to the conference Courses Chair, for which a form will be provided[b1].

NOTES FOR THE FORM:

CLAIMS AND SETTLEMENTS

All claims for honoraria should be submitted to the conference Courses Chair within 60 days of the completion of the conference. Claims submitted later cannot be guaranteed to be settled on a timely or expeditious basis. The conference should endeavor to make all reimbursements within 30 days of receipt of a claim. [b2]

Question: do we provide the course notes to the local chapters, and how do we handle the rights management?

[b1] Form to be developed in negotiation between Ashley and Scooter
[b2] part of the form
mod_alias AliasMatch Regex - matching everything in a folder except two patterns?

I'd like to use AliasMatch to create an alias for everything within a folder, except for two (or more) specific regex patterns. For instance, the following AliasMatch creates an alias for everything in the 'content' folder:

AliasMatch /content(.*) /home/username/public_html/$1

But there are two regex patterns that I don't want the above alias to match, for instance:

^content/([a-zA-Z0-9_-]+)/id-([a-zA-Z0-9_-]+)/([0-9]+)
^content/([a-zA-Z0-9_-]+)/nn-([a-zA-Z0-9_-]+)

I know that the NOT (!) character can be used to negate a pattern, but I don't know how to use it here, or how to negate multiple patterns in AliasMatch. How could this be done?

What you're talking about is called negative lookahead, and you just wrap it around the regex for what you don't want to match, like this: (?!foo). Combining regexes can be as simple as stringing them together with a pipe between them, but you can do a little better than that in this case. This regex reuses the first part of the two regexes, which is identical:

[a-zA-Z0-9_-]+/(?:id-[a-zA-Z0-9_-]+/[0-9]+|nn-[a-zA-Z0-9_-]+)

Because the pipe (or '|', the alternation operator) has lower precedence than anything else, the alternation has to be contained in a group. Notice that I used a non-capturing group -- i.e., (?:...) -- and got rid of the parentheses in your regexes. Otherwise, they would have thrown off the numbering of the one group you do want to capture, and you would have had to use something other than $1 in the second part of the rule. Here's the whole regex (note the slash right after 'content', so the lookahead lines up with the '^content/...' paths you want to exclude):

^/content(?!/[a-zA-Z0-9_-]+/(?:id-[a-zA-Z0-9_-]+/[0-9]+|nn-[a-zA-Z0-9_-]+))(.*)

EDIT: Apparently, the regex flavor used by AliasMatch doesn't support lookaheads, but has its own negation syntax: !(^/foo). Its purpose seems to be to negate the whole regex, which means it wouldn't help you, but maybe you don't need it. Maybe you can just alias those directories to themselves.
Then you wouldn't have to negate anything.

AliasMatch ^(/content/[a-zA-Z0-9_-]+/id-[a-zA-Z0-9_-]+/[0-9]+.*) $1
AliasMatch ^(/content/[a-zA-Z0-9_-]+/nn-[a-zA-Z0-9_-]+.*) $1
AliasMatch ^/content/(.*) /home/username/public_html/$1

Or maybe you can do something with a <DirectoryMatch> directive, or by switching to mod_rewrite. But I'm (obviously) not an Apache expert--my specialty is regexes, and I don't think a regex is going to solve your problem.

Thank you very much :) However, after I copied what you gave into my httpd.conf (as "AliasMatch [regex] /home/username/public_html/$1"), httpd no longer compiles - it gives a "Regular expression could not be compiled." syntax error. What would be the problem here? Am I using it wrong? The problem seems to be with the lookahead - httpd.conf doesn't seem to be able to parse it?

mod_alias DOES in fact support negative lookaheads. I'm using this on my site, which works very well:

AliasMatch ^/(?!w/|BingSiteAuth\.xml$|favicon\.ico$|google.{16}\.html$|robots\.txt$) /path/to/file

Be sure to have your regex start with a "/", as it is always the first character with mod_alias! Never try to start your rule with a lookahead, or else parsing will fail.
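Outside Apache, the lookahead logic can be sanity-checked in any PCRE-style engine. Here is a sketch in Python; it includes a slash after 'content' so the lookahead lines up with the '^content/...' patterns from the question, and the target directory is just the example path from above:

```python
import re

# Sketch of the AliasMatch lookahead (not Apache itself). The slash after
# "content" is needed so the lookahead lines up with the excluded patterns.
pattern = re.compile(
    r"^/content(?!/[a-zA-Z0-9_-]+/(?:id-[a-zA-Z0-9_-]+/[0-9]+|nn-[a-zA-Z0-9_-]+))(.*)"
)

def alias_target(path):
    """Return the rewritten filesystem path, or None if the alias is skipped."""
    m = pattern.match(path)
    return None if m is None else "/home/username/public_html" + m.group(1)

print(alias_target("/content/images/logo.png"))    # aliased
print(alias_target("/content/articles/id-foo/42"))  # excluded -> None
print(alias_target("/content/articles/nn-bar"))     # excluded -> None
```

Ordinary content paths are rewritten, while paths matching either excluded pattern fall through untouched.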
NOTE: We will assume that you are working in a Unix environment, or have access to a Unix-type shell.

NOTE: We will assume that you are using GNU make. On some systems where a vendor-specific make is installed, you may find GNU make available as gmake or gnumake.

The instructions for Music are explicit, so that anyone, whether a user with a good deal of experience or a relatively new one, can compile and run the code. There are several pre-packaged drivers for Music that can perform typical molecular simulation techniques. These drivers are listed in the table below.

NOTE: All the files with code end with the extension *.F90 or *.F (not *.f90 or *.f). Some compilers may behave differently depending on whether the extensions are in capital or small letters.

NOTE: Also look at the quick start guide. For example, you can use the makemd, makegc, makehmc, makepost, and makepmap scripts in the src directory to get the required executables. You don't need to go through the detailed steps below.

NOTE: You should know how a Makefile works. You may need to adjust the Makefile and compiler options for your machine and platform.

drivers/music_md.F90      Molecular dynamics simulation
drivers/music_post.F90    Post-processing code for analyzing Music results
drivers/music_gcmc.F90    Grand Canonical Monte Carlo simulation
drivers/music_hgcmc.F90   Hybrid Monte Carlo, with or without GCMC

Of course, you can also create your own driver file to perform whatever type of simulation technique you would like. However, this will be discussed later in this document. For the time being, we will assume that you wish to perform an MD simulation and will be using the driver music_md.F90. Go through the following steps and make sure you can create an executable before proceeding. If you get stuck here, it means you need to read up on the Linux environment before continuing.
cp drivers/music_md.F90 src/music.F90

Note that the driver always has to be named music.F90 when copied to the src directory. This is necessary for the makemake script to work. The makemake script looks at the music.F90 file and decides which files in the src directory have to be compiled, and in which order. It creates the Makefile for use with the make command in Linux.

makemake *.F90 *.F

makemake goes through the driver (music.F90), decides which files are to be compiled, and creates the appropriate Makefile. The Makefile thus created has some options which are explained within the file itself. By commenting some lines out of the Makefile you can change the optimization levels, etc.

make

This will create the executable music.exe in your top-level Music directory.

NOTE: Remember, you need GNU Make to use the generated Makefile. The GNU Make web site has help on the way ``make'' works and the various options in a Makefile.

NOTE: By editing the Makefile you can change the name of the executable if you like.
With the transition to cloud environments, organizations are at greater risk of data breaches, targeted malware attacks, and more. Consider the recently discovered 'Cloud Snooper' attack, which uses a rootkit to sneak malicious traffic through a victim's AWS and on-premise firewall before dropping a remote access trojan. While this new attack method has popped up recently, many cybercriminals continue to rely on tried-and-tested methods to gain access to critical assets within organizations, such as the following.

The exposure of API credentials or a misconfigured API is one of the most common methods of accessing clouds. When an attacker gets hold of an access key, they use it on a host or platform under their control and execute API calls for malicious action or privilege escalation. Usually, keys are exposed via GitHub, BitBucket, shared images, and snapshots. The recent leak of the personal data of over 6.5 million Israeli citizens is one example of these attacks. The leak occurred because the Likud party's app was linked to an API endpoint which apparently did not have a password. This allowed third-party actors to obtain passwords for admin accounts. The exposure of an API key can also be a developer mistake, as happened with Starbucks. If the exposed API key had fallen into the wrong hands, it would have allowed access to internal systems and manipulation of the list of authorized users. A major API leak incident was recorded in March 2019, when a group of academics discovered that over 100,000 GitHub repositories had leaked API tokens and cryptographic keys over a period of six months.
Some of these API keys were linked to AWS credentials for a major website related to college applications in the United States. There were also 564 Google API keys used by an online site to skirt YouTube rate limits and download videos.

Misconfigured databases and servers are another major reason for risk to data stored in clouds. This misconfiguration often arises from missing passwords or unpatched servers. Attackers, especially state-sponsored hackers, are always on the lookout for well-known vulnerabilities in servers to deploy ransomware and backdoors that mine cryptocurrencies or steal sensitive data. Some of the vulnerable servers that have been widely exploited include Oracle WebLogic Server, Atlassian Confluence and, of late, the Microsoft Exchange email server. Databases that are not secured with passwords have caused some of the biggest data leaks worldwide, including the recent ones at Decathlon, Slickwraps, Virgin Media, and more.

Server-Side Request Forgery (SSRF)

Server-Side Request Forgery is another growing issue in cloud environments. SSRF is a threat due to the use of the metadata API, which lets applications access configurations, logs, credentials, and other information in the underlying cloud infrastructure. The vulnerability, if exploited, can enable an attacker to move laterally and conduct network reconnaissance. The infamous data breach at Capital One shows the potential of SSRF: the attackers leveraged SSRF to retrieve AWS credentials that were later used to steal the personal information of over 100 million Capital One customers.

Attackers have also begun to craft phishing emails that target users through fake login pages for Office 365 and other cloud applications. Organizations must therefore take adequate measures to prevent their clouds, as well as their cloud apps, from being hijacked.
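The credential-exposure incidents described above are typically found by plain pattern-matching over repository contents. As a rough illustration (AKIA plus 16 uppercase alphanumerics is the well-known AWS access key ID shape; the sample text is invented and uses Amazon's documentation example key):

```python
import re

# Minimal sketch of scanning text for AWS access key IDs, the kind of
# pattern the GitHub studies above relied on. Real scanners also look for
# secret keys, Google API keys, private-key headers, and so on.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_key_ids(text):
    """Return all candidate AWS access key IDs found in `text`."""
    return AWS_KEY_ID.findall(text)

sample = """
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
password = hunter2
"""
print(find_key_ids(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

A scanner like this is cheap to run on every commit, which is exactly why exposed keys are found so quickly once pushed to a public repository.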
I have a flow set up that copies files from an SFTP site to a local folder. Most of the time it works, but sometimes I get a failure: "message": "Unable to connect to the remote server ' During the flow process, it reads the list of files in the folder, then iterates through the list to get the file content and create a file in the new location with that content. I had a connection to the server in order to read the name of the file. I had a connection to the server in order to copy over the other dozen files. But one or two (non-sequential) iterations will give me a problem, then the flow will continue and the next file will work. If I rerun the flow a bunch of times, it doesn't always happen on the same file. Sometimes A will work and B won't, but then on the next run, A won't and B will. Any ideas on why I might get this error sometimes, but not others?

Hi @SP8, Have you checked your firewall settings? Which port are you using to connect to the server? Do you have any antivirus? Please try to disable the antivirus to see if it will work. Further, I have seen a blog on Microsoft (MS) Flow SFTP connector tips, tricks, and errors, please check it for a reference:

I only have the corporate antivirus, which I can't do anything about, but it's the corporate SFTP and doesn't give me any kind of errors with anything else. I'm using port 22 (per corporate IT) and it generally works. Again, this is intermittent. It will literally work 56/60 times in one run of the flow, then 48/60 the next, then 59/60... Also, it used to always work.

Hi @SP8, Could you share a screenshot of the flow? How large is the file size? The maximum file size limit is 100 MB; however, not all APIs support the full 100 MB. Further, from the doc on the SFTP connector: "This operation copies a file to an SFTP server. If a file is being deleted/renamed on the server right after it was copied, the connector may return an HTTP 404 error by its design.
Please use a delay for 1 minute before deleting or renaming the newly created file”. Please try adding a Delay action to the flow to see if it will work.
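In the flow itself the remedy is the Delay action (plus a "run after" branch for the failed step), but the underlying idea is plain retry-with-delay. A sketch of that idea in Python; the flaky_copy function here is a made-up stand-in that simulates the intermittent "Unable to connect" failure:

```python
import time

def retry(operation, attempts=3, delay_seconds=60):
    """Run `operation`, retrying after a delay if the connection fails."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts:
                raise          # out of attempts: surface the error
            time.sleep(delay_seconds)

# Stand-in for the SFTP "get file content" step: fails twice, then succeeds,
# mimicking the intermittent behaviour described in the thread above.
calls = {"n": 0}
def flaky_copy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Unable to connect to the remote server")
    return "copied"

print(retry(flaky_copy, attempts=5, delay_seconds=0))  # 'copied' on the 3rd try
```

The same shape works for any transient network error: retry a bounded number of times, wait between attempts, and re-raise only when the attempts are exhausted.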
Nick Gammon's Hex Converter Error

I'm trying to add a bootloader hex to Nick Gammon's Atmega Board Programmer sketch, and I can't seem to get a hex file to convert. I need to convert "Arduino-usbserial-atmega16u2-Uno-Rev3.hex", but that wasn't working, so I thought I'd try converting one that's already used in the sketch, "Arduino-COMBINED-dfu-usbserial-atmega16u2-Uno-Rev3.hex", but that's not working either. I've followed the instructions from http://www.gammon.com.au/bootloader I've tried it using Wine and Parallels. Clicking the copy button when using Wine causes the program to stop responding. Here is a screenshot of the error: Thank you

Why do you need to do this? Normally you don't need to reflash the Atmega16U2 chip.

I work at a university and I'm making a shield that plugs into the top of dead Arduinos to see if they can be fixed with a simple re-flash of the chips, which is usually the fix. I used your sketch to put the Atmega16U2 chips into DFU mode, but it still requires me to connect the Arduinos to a computer to finish off the fix. It'd just be easier to put the working hex file on the chip in the first place. I have another sketch which uploads a hex file from an SD card.

The size of the full file for the Atmega16U2 may not fit into the bootloader programmer's PROGMEM. However, you could easily reflash the entire chip using the hex-uploader sketch. In fact, you probably don't need the bootloader on the Atmega16U2 - because that is only used for entering DFU mode. I can't reproduce that.

My apologies - I couldn't quite read the error message in the screen dump. What you got was:

Don't know end address for 0
Please add to table: end_addresses
Run-time error
World: smaug2
Immediate execution
[string "Immediate"]:96: Cannot continue
stack traceback:
[C]: in function 'error'
[string "Immediate"]:96: in function 'process'
[string "Immediate"]:134: in main chunk

The relevant error is the first line.
That file has code for address 0 - which is not part of the bootloader. It tries to detect bootloader addresses, and zero is not one of them. Looking at the start of the file I see this:

:1000000090C00000A9C00000A7C00000A5C000006B
:10001000A3C00000A1C000009FC000009DC0000060
:100020009BC0000099C0000097C0000048C40000B9
:100030000CC4000091C000008FC000008DC0000003
:100040008BC0000089C0000087C0000085C0000090
:1000500083C0000081C000007FC0000002C100001A
:100060007BC0000079C0000077C0000075C00000B0
:1000700073C0000071C000006FC000006DC00000C0
:100080006BC0000069C0000067C0000065C00000D0
:1000900063C0000061C000001201100102000008EE
:1000A0004123430001000102DC0109023E0002017C
:1000B00000C0320904000001020201000524000111
:1000C0001004240206052406000107058203080027
:1000D000FF09040100020A000000070504024000B5
:1000E00001070583024000010403090432034100B3
:1000F00072006400750069006E006F002000280027
:100100007700770077002E006100720064007500B0
:1001100069006E006F002E0063006300290000007C
:10012000000011241FBECFEFD2E0DEBFCDBF11E033
:10013000A0E0B1E0ECEAFFE002C005900D92A6312C

In particular the format is:

:10 0000 00 90C00000A9C00000A7C00000A5C00000 6B
 ^^ ^^^^ ^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^
len addr TT            data (16 bytes)       sumcheck

So this file includes code for the Atmega16U2 as well as the bootloader. My sketch is not designed to program non-bootloaders (it is a bootloader programmer, after all). Scroll down that page for how to reprogram the Atmega16U2 chip. The bootloader programmer sketch (as published) will put the bootloader onto the Atmega16U2. Then follow the instructions on that page, or on this page, below, for entering DFU mode and uploading the main part of the code (not the bootloader part).

Entering DFU mode

If you have just flashed the bootloader using the bootloader programmer sketch then it should enter DFU mode, visible because the pin 13 LED flashes rapidly. If not, enter DFU mode by shorting together Reset and Ground briefly with a screwdriver or similar.
They are the two left-most pins visible in the image above, nearest to the Reset button and below the hole on the board. (Don't keep them shorted, just a brief touch should do it). Once in DFU mode, you should see it appear as an Atmel USB device if you type lsusb (on Linux):

$ lsusb
...
Bus 003 Device 092: ID 03eb:2fef Atmel Corp.
...

A Uno which is not in DFU mode might look like this:

$ lsusb
...
Bus 003 Device 090: ID 2341:0043 Arduino SA Uno R3 (CDC ACM)
...

(or not appear at all if there is no firmware on it). If you are using Ubuntu or similar you can obtain the DFU programmer as follows:

sudo apt-get install dfu-programmer

You can obtain the code for the firmware from here: http://gammon.com.au/Arduino/Arduino-atmega16u2-Uno-firmware-Rev3.hex (RH-click and "save as" to put a copy on your hard disk). Assuming you place that firmware in the current directory of your terminal window, you can now flash the code as follows:

sudo dfu-programmer atmega16u2 flash Arduino-atmega16u2-Uno-firmware-Rev3.hex

You should see a message like:

Validating...
4034 bytes used (32.83%)

Now you can unplug the USB cable, plug it back in, and the USB interface should be back to factory settings. For other operating systems, such as Windows, you should be able to get a copy of the DFU programmer from:

https://github.com/dfu-programmer/dfu-programmer
http://dfu-programmer.github.io/

Why am I getting the same error message when I try to convert Arduino-COMBINED-dfu-usbserial-atmega16u2-Uno-Rev3.hex, the same file used in your bootloader programmer sketch?

Being a combined file, it would start at address zero as well. I would have stripped out the low-order bytes from the .hex file, leaving only the bootloader part.
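The record layout called out above (len, addr, TT, data, sumcheck) is easy to check programmatically. A small sketch in Python, using the first record of the quoted dump; the Intel HEX checksum byte is chosen so that all bytes in the record sum to zero mod 256:

```python
def parse_ihex(record):
    """Parse one Intel HEX record and verify its checksum.

    Returns (length, address, record_type, data). The final byte is the
    two's complement of the sum of the others, so the whole record must
    sum to 0 mod 256.
    """
    if not record.startswith(":"):
        raise ValueError("record must start with ':'")
    raw = bytes.fromhex(record[1:])
    if sum(raw) % 256 != 0:
        raise ValueError("checksum mismatch")
    length = raw[0]
    address = (raw[1] << 8) | raw[2]
    record_type = raw[3]
    data = raw[4:4 + length]
    return length, address, record_type, data

# First record of the dump above: 0x10 data bytes at address 0x0000, type 00.
length, address, rtype, data = parse_ihex(":1000000090C00000A9C00000A7C00000A5C000006B")
print(length, hex(address), rtype, data.hex())
```

A checker like this makes it obvious why the converter complained: the very first record carries data for address 0x0000, which is application code, not bootloader territory.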
App Engine: Google's deepest secrets as a service

The software scales. But will the Google rulebook?

The price of the platform

The question is a familiar one: will the rest of the world buy into the Google belief system? Many developers already have. "I've been wanting a platform-as-a-service for a while, with an eye towards not doing any more IT work, not installing servers and disk drives and all that. For me, those days are over," says developer Matt Cooper. He now builds web applications in Python atop App Engine – even though he's a former Microsoft system engineer with a history of coding in .NET, which serves as the basis for Microsoft's platform cloud: Windows Azure. Before using App Engine, Cooper had never coded in Python. Robert Kluin also had no Python experience, and had spent years working with Microsoft SQL Server and MySQL. But he too moved to Python on App Engine. "The thing that caught my attention was the distributed nature. I've done consulting work for oil and gas companies, managing clusters of SQL servers and things like that, and it's always a pain in the butt. App Engine is handling those kinds of features for us," he says. "It truly is a platform as a service. There is no low-level messing around setting up databases, configuring your application servers or memcaching or anything of that nature. You can simply use their SDK, their API, and you have all those types of things at your disposal." Six months ago, Kluin says, he had a long list of complaints about the service. But since then, Google has addressed nearly all of them. The company recently introduced a high-replication version of its Datastore, he points out, and it's now the default. This adds latency for data writes, and it doesn't quite offer the same consistency. But it increases the number of Google data centers that keep real-time replicas of your data, providing additional fallbacks in the event of Datastore issues, including latency spikes.
Kluin acknowledges the restrictions that App Engine places on developers, but he doesn't see them as a big issue. "It takes some extra effort, but you can find solutions," he says. "And the restrictions are necessary." But others argue that because of the restrictions, mainstream developers are unlikely to take to the service. "It works well for Python people for certain use cases," says PHP Fog's Lucas Carlson. "These are generally the type of people who don't mind working around things and really enjoy tinkering. People who use Python are by nature tinkerers, and while Python is a popular language, the overall number of Python people out there is smaller than other major languages." And Carlson believes that Google's proprietary APIs can be a significant burden as well. "Lock-in can become a real issue, and I believe most enterprises will not make this gamble," he says. "When you program to Google's APIs, you can't run that anywhere else. It's not like you can change your mind and choose another provider down the road." Of course, that doesn't take into account independent projects like App Scale. Many of those proprietary APIs are merely a means of hooking into other parts of Google's platform. There's an API, for instance, for authenticating Google accounts. Google is now offering App Engine as a means of building applications that dovetail with its own Google Apps suite, and for those exploring such applications, App Engine makes perfect sense. But this too is a limited audience. Google acknowledges that some businesses have been reluctant to invest in the platform, but the company indicates this is mainly because the service is still in "preview" mode. Later this year, App Engine will officially graduate to "shipping" status, providing all paid users a 99.95 per cent uptime service level agreement, operational and developer support, billing via invoice, and terms of service designed specifically for businesses. 
As part of this, Google is also changing the App Engine pricing structure. The company will still offer a free version of the service for those willing to stay under certain quotas, but for larger users it will charge for instance use rather than CPU use, as it does now. In announcing the new pricing, Google said it was "easier to understand" and "in line with the value App Engine provides". The service's new business suit will no doubt attract more companies to App Engine. The new pricing model is likely to raise prices in many cases – Google has said that the previous model would not allow it to sustain a business – but it's reasonable compared to other services out there. The trouble with the new model is that so many people have already built their applications for the old model – and now it's changing beneath them. Many have vehemently complained. Developer Jeff Schnitzer takes a philosophical stance on the matter. "They've changed the whole pricing model, and there's a bunch of uncertainties in there," he says. "Basically, what they announced was: 'Pricing is going to change, but we really have no idea how much it's going to change'. I'm trying to withhold judgment until they get everything nailed down." But whatever the end result of the new model, Google's change highlights the unique nature of a platform cloud. If you code to Google's cloud, your application will automatically scale, and you'll automatically benefit from any improvements Google makes to its own infrastructure. But, at least on some level, you're also beholden to Google. The good news is that Google clearly realizes it needs to better accommodate developers. The company has already made a few changes to its new pricing model, trying to appease unhappy devs, just as – with new enhancements like Backends and concurrent Java requests – it's easing the restrictions developers have complained about. Google won't open source its infrastructure. But it certainly wants you to use it. ®
SQL query for a specific time period

I am trying to write a MySQL query with the details below. I have three tables:

1) Booking table with columns booking_id, journey_start_datetime, journey_end_datetime.
2) Driver table with columns driver_id, driver_name.
3) Driver_Assign table with columns booking_id, driver_id, busy_start_datetime, busy_end_datetime.

When I assign a booking to a driver, a new entry goes into the Driver_Assign table with the booking_id and the journey's start and end datetimes. I want to get the names of all drivers who are not busy for a given time window. For example, suppose one booking is assigned to a driver "John" with journey_start_datetime '2014-06-27 12:00:00' and journey_end_datetime '2014-06-27 14:00:00'. For a new booking (journey_start_datetime '2014-06-27 13:00:00' and journey_end_datetime '2014-06-27 15:00:00'), the driver John should not be returned.

Moving the OP's code from comment to the question -

SELECT U.driver_name FROM driver AS U
WHERE U.id NOT IN (
    SELECT driver_id FROM driver_assign AS DVA
    WHERE ('2014-06-27 13:00:00' >= DVA.busy_start_datetime OR '2014-06-27 13:00:00' <= DVA.busy_end_datetime)
      AND ('2014-06-27 15:00:00' >= DVA.busy_start_datetime OR '2014-06-27 15:00:00' <= DVA.busy_end_datetime)
)

SELECT d.driver_name FROM driver d
LEFT JOIN driver_assign da ON da.driver_id = d.driver_id
WHERE da.busy_start_datetime <= '2014-06-27 13:00:00'
  AND da.busy_end_datetime >= '2014-06-27 13:00:00'
HAVING da.driver_id IS NULL

You need just a LEFT JOIN, and then to exclude the rows which are not null (busy). Will busy_start_datetime and busy_end_datetime be the same, i.e. '2014-06-27 13:00:00'? You can pass the date you need to be free into both conditions. Sorry, this is not working :( Do you have another option? Please suggest.
SELECT d.driver_name FROM driver d
WHERE d.driver_id NOT IN (
    SELECT driver_id FROM driver_assign
    WHERE (busy_start_datetime >= '2014-06-27 13:00:00' AND NOT busy_start_datetime >= '2014-06-27 15:00:00')
       OR (busy_end_datetime > '2014-06-27 13:00:00' AND NOT busy_end_datetime > '2014-06-27 15:00:00')
       OR (busy_start_datetime < '2014-06-27 13:00:00' AND busy_end_datetime > '2014-06-27 15:00:00')
)

Actually there is another OR which will need to be added, which is where a period in the driver_assign table is contained completely within the period you are checking against. I will edit my answer. Just an idea: do you actually need the driver_assign table? You could just put a driver_id field in the booking table.
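The OR branches above can also be collapsed into a single interval-overlap test: two periods overlap exactly when each one starts before the other ends. Below is a hedged sketch of that condition against an in-memory SQLite database (the schema follows the question's column names, but the sample drivers and assignments are invented for illustration):

```python
import sqlite3

# In-memory stand-in for the question's schema; sample rows are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE driver (driver_id INTEGER PRIMARY KEY, driver_name TEXT);
CREATE TABLE driver_assign (
    booking_id INTEGER,
    driver_id INTEGER,
    busy_start_datetime TEXT,
    busy_end_datetime TEXT
);
INSERT INTO driver VALUES (1, 'John'), (2, 'Mary');
-- John is busy 12:00-14:00 on 2014-06-27.
INSERT INTO driver_assign VALUES
    (100, 1, '2014-06-27 12:00:00', '2014-06-27 14:00:00');
""")

# Intervals [busy_start, busy_end) and [new_start, new_end) overlap
# exactly when busy_start < new_end AND busy_end > new_start.
new_start, new_end = '2014-06-27 13:00:00', '2014-06-27 15:00:00'
free = conn.execute("""
    SELECT d.driver_name
    FROM driver d
    WHERE d.driver_id NOT IN (
        SELECT driver_id FROM driver_assign
        WHERE busy_start_datetime < ?  -- assignment starts before the new journey ends
          AND busy_end_datetime > ?    -- and ends after the new journey starts
    )
""", (new_end, new_start)).fetchall()

print(free)  # John overlaps the 13:00-15:00 window, so only Mary is free
```

ISO-formatted datetime strings compare correctly as plain text, which is why string comparison works here; in MySQL the same WHERE clause can be applied to DATETIME columns directly.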
android audio crack on seek (then it plays normally)

Describe the bug
When I seek, the audio cracks. Sometimes the frames also have weird black pixels. If there are any errors related to it, they should be at the end of the logs.

Expected behavior
The audio should not crack.

Full logs: https://jpst.it/43jcm

I think this is the main log part for the audio crack:

I/flutter (31626): FVP mdk.FINE: 14:17:30.106: 0xb4000078b2994570 audio stream#1 is seeking #1... got flush pkt. flush decoder and drop frames until seek target 6.0970s...
I/flutter (31626): FVP mdk.FINE: 14:17:30.106: invalid audio frame @-1.000000
I/flutter (31626): FVP mdk.FINE: 14:17:30.106: 0xb4000078b2994570 #1/1 audio seek_done: 1, seek_wait_frame_: 0/1
I/flutter (31626): FVP mdk.FINE: 14:17:30.107: 0xb4000078b2994570 audio stream#1 sending 1 invalid AOT frame @6.095500s. seeking: 0
I/flutter (31626): FVP mdk.INFO: 14:17:30.107: 0xb4000079e29d1420 seek end audio frame @6.095500 seek_pos_: 6000, sync_ao_ 1
I/flutter (31626): FVP mdk.FINE: 14:17:30.107: OpenSL ERROR@373 (*ibufq_android_)->Enqueue(ibufq_android_, data, size) : 0X7
I/flutter (31626): FVP mdk.FINE: 14:17:30.107:
I/flutter (31626): FVP mdk.FINE: 14:17:30.107: >>>>>>>>1st audio frame (after seek) rendered: 1, ao: 6015, a: 6095, delta: -80 +0.021333
I/flutter (31626): FVP mdk.FINE: 14:17:30.107: 0xb4000078b2994570 audio stream#1 AOT frame is sent
I/flutter (31626): FVP mdk.FINE: 14:17:30.107: OpenSL ERROR@373 (*ibufq_android_)->Enqueue(ibufq_android_, data, size) : 0X7

Is this related to the black parts? 0xb4000078b2994570 video stream#0 sending 1 invalid AOT frame @32.861000s. seeking: 0

Any solution to the crack? Since it's annoying to users.

Probably because of this: I/flutter (31626): FVP mdk.FINE: 14:17:30.107: OpenSL ERROR@373 (*ibufq_android_)->Enqueue(ibufq_android_, data, size) : 0X7. The queue is full, but I'm not sure why.

That does fix the cracking sound for now, thank you <3
I have a table that works perfectly until the user saves the dynamic form and re-opens it. If they start editing the form (adding rows to a table), save it, and go back in, the table goes haywire. The table is a two-tiered table: when you add a row to the top table, it also adds a row to the bottom table. (Is this something LiveCycle cannot handle properly?) The bottom table rows will completely delete themselves, and my remove-row button stops functioning. What is happening to the scripting when the user saves the form?

The form is being saved as a dynamic PDF. Other tables in the form still work fine; it is only the two-tiered table that breaks when saving. Windows 7. Acrobat DC 17. LiveCycle ES4. "Preserve scripting changes to form when saved" is set to automatic.

The user enters data into Table 1 row 1, and then enters corresponding data into Table 2 row 1. This is how it looks and how it should look. If you click an X, it will remove the corresponding row, and also the corresponding row on Table 2. So if you click the X on Table 1 row 2, Tables 1 and 2 will both have their row 2s removed.

Here is what happens after the user simply saves it and re-opens it in Adobe: Table 2, the bottom table, has all rows but one disappear. The first cell grows and makes a giant row 1. Also, the remove-instance buttons on Table 1 stop functioning. Note: there is no data binding or auto-fill between the two tables; it is all user-entered.

I FIXED IT. All of the rows in these tables were named "Row1". This worked perfectly within LiveCycle and Adobe, up until you save the document and re-open it in Adobe. Apparently, during the save and re-opening process, Adobe loses track of the original container "Row1" is in, which throws off the scripting. Giving each table's Row1 a unique name solved the issue. Also, this was a table within a table, if anyone is wondering. I use it to ensure my button code is correct during testing before releasing a document.
Alternatively, you can add console.show(); to your docReady event so that it will show any errors on loading the form.

Thank you for the suggestion! Running the debugger brings up no errors when expanding and collapsing all the rows. But as soon as the document is saved and re-opened, the debugger shows an "Operation Failed. Index value is out of bounds" error. This happens when clicking the X to delete a row. I believe this is happening because the X should delete rows from Table 1 and Table 2, but when the user saves and re-opens the document, Table 2 loses all of its rows but the first. So the X is not able to delete a row that no longer exists and subsequently fails to operate. The first X works as it should, because there is still a row 1 functioning in both tables. Why is simply hitting the save button, closing the document, and re-opening it making Adobe forget the scripting events that happened beforehand?
Why can't I deploy a Process Builder referencing a Custom Metadata in a Formula?

I created a Process Builder in which I reference a Custom Metadata record from a formula inside the "Update Record" action. It works fine in Environment 1, but I have trouble deploying it to another environment using Ant. I receive the following error:

All Component Failures: 1. flows/Assign_Consulting_Request.flow -- Error: formula_12_myRule_11_A1_9453158556 (Formula) - The formula expression is invalid: Field ROW does not exist. Check spelling.

I'm a bit surprised, because I don't have any ROW field: it's a record. The XML code for the PB's formula is here:

<formulas>
    <processMetadataValues>
        <name>originalFormula</name>
        <value>
            <stringValue>$CustomMetadata.ConsultingServiceOwnerSettings__mdt.ROW.DefaultConsultingOpsOwnerId__c</stringValue>
        </value>
    </processMetadataValues>
    <name>formula_12_myRule_11_A1_9453158556</name>
    <dataType>String</dataType>
    <expression>{!$CustomMetadata.ConsultingServiceOwnerSettings__mdt.ROW.DefaultConsultingOpsOwnerId__c}</expression>
</formulas>

The package.xml file is provided below. As you can see, I included the ROW record. I don't receive any errors on retrieve, so I suppose everything should be fine here.

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>ConsultingServiceOwnerSettings__mdt.APAC</members>
        <members>ConsultingServiceOwnerSettings__mdt.NA</members>
        <members>ConsultingServiceOwnerSettings__mdt.ROW</members>
        <name>CustomMetadata</name>
    </types>
    <types>
        <members>ConsultingServiceOwnerSettings__mdt</members>
        <name>CustomObject</name>
    </types>
    <types>
        <members>Assign_Consulting_Request</members>
        <name>Flow</name>
    </types>
    <version>46.0</version>
</Package>

In addition to that, the record is in the retrieved package directory. Any thoughts?

I believe the issue is in your package.xml.
When you reference the Custom Metadata type in Custom object, you utilize __mdt However, when you reference records of a Custom Metadata Type, you do not include __mdt. This is mentioned in the Metadata API documentation for Custom Metadata If you change your package.xml to this, it should appropriately bring the records over. <?xml version="1.0" encoding="UTF-8"?> <Package xmlns="http://soap.sforce.com/2006/04/metadata"> <types> <members>ConsultingServiceOwnerSettings.APAC</members> <members>ConsultingServiceOwnerSettings.NA</members> <members>ConsultingServiceOwnerSettings.ROW</members> <name>CustomMetadata</name> </types> <types> <members>ConsultingServiceOwnerSettings__mdt</members> <name>CustomObject</name> </types> <types> <members>Assign_Consulting_Request</members> <name>Flow</name> </types> <version>46.0</version> </Package> Thanks so much Kris, that was the issue! I just don't get it - if that's the proper way of using CustomMetadata component - why did Ant manage to retrieve the records at all? That's a good question that I don't have an answer for! I'd check by opening the record file (.md) to see if it's got the same info as it does when you use the correct package.xml. It's the same ¯_(ツ)_/¯
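Because the __mdt suffix belongs only on the CustomObject member and not on CustomMetadata record members, this package.xml slip can also be corrected mechanically. The following is a rough sketch of such a cleanup; the function name `fix_custom_metadata_members` is invented for illustration and is not part of any Salesforce tooling:

```python
import xml.etree.ElementTree as ET

NS = "http://soap.sforce.com/2006/04/metadata"

def fix_custom_metadata_members(package_xml: str) -> str:
    """Strip the __mdt suffix from the object part of CustomMetadata
    record members, which must reference the type without __mdt."""
    ET.register_namespace("", NS)
    root = ET.fromstring(package_xml)
    for types in root.findall(f"{{{NS}}}types"):
        name = types.find(f"{{{NS}}}name")
        if name is None or name.text != "CustomMetadata":
            continue  # only record members of CustomMetadata need fixing
        for member in types.findall(f"{{{NS}}}members"):
            obj, dot, record = member.text.partition(".")
            if dot and obj.endswith("__mdt"):
                member.text = f"{obj[:-len('__mdt')]}.{record}"
    return ET.tostring(root, encoding="unicode")
```

Running the question's package.xml through this helper yields the corrected member names shown in the answer, while leaving the CustomObject and Flow entries untouched.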
Backing up data isn't exactly exciting, but like washing laundry, everyone needs to do it. On Linux, you can back up your files using an almost-bewildering array of choices, from self-composed shell scripts, to expensive software packages. But how about a simple, open source, easy-to-use, set-up-and-fuggedaboutit tool? Konserve is a small backup utility that lives in the KDE 3.x system tray, and it makes backups so easy, so automatic, that you’ll probably forget all about it… until you desperately need that file you accidentally deleted. Let’s install Konserve and create a backup job to better understand the program. If you use an APT-enabled distro, try apt-get install konserve. Otherwise, head over to http://konserve.sourceforge.net/download.html and grab the source code or a pre-compiled binary for SUSE, Debian, Mandrake, or Gentoo. Build the code (if necessary), install the application, and start the program from the K menu icon, or enter whereis konserve on the command line and run the binary that whereis finds. On Debian, the path is probably /usr/bin/konserve; on SUSE, it’s likely to be /opt/kde3/bin/konserve. If a little red soup can labelled “K” appears in your system tray, Konserve is running (and will automatically start with any reboot, unless you close it first). A Sample Backup Let’s back up your hidden KDE settings directory. Right-click on the Konserve icon, and select Wizard. In step one, when prompted to name the “Backup Profile,” type kde_settings and press Next. Step 2 asks for the pathname of the file or directory that you want to back up. Find your hidden KDE settings directory (probably /home/user/.kde/share/config), and press Next. In step 3, choose a directory to save your backup. If you enter a path ending in a directory (which can be local or accessed on a local network via Samba or NFS), Konserve creates a new, compressed, time-stamped (year-month-day-hour-minute-second) file every time it creates a backup. 
For example, config-20040729174523.tar.gz and config-20040730125247.tar.gz are two backup files automatically created by Konserve. Even better, use KDE's transparent networking to specify a target on another machine. You can also specify a pathname ending in a filename, which causes each backup to overwrite the previous one.

Finally, in step 4, specify how often you want Konserve to perform the backup: choose an integer greater than zero, and then choose the interval (seconds, minutes, hours, or days). Check the box next to "Backup active" and press Next. (See Figure One.) Review your choices and press Finish.

Figure One: With Konserve, you can back up frequently and easily.

From now on, you'll have a regular backup of your KDE configuration directory. Sweet! If you stick with Konserve, keep in mind that it performs full, not incremental, backups, so everything in the specified directory is backed up every time. Also, if there are no changes in the source file or directory, Konserve skips the backup. If you back up directories, periodically delete old, unneeded backups, or you may find your repository filled with a huge number of files.

Of course, if you back up, you'll also want to restore. To restore a Konserve backup, right-click on the Konserve icon and choose Preferences. Select the backup profile that you want to recover and press Restore. Follow the instructions to retrieve your files, safe and sound. One gotcha: you can't restore from a remote backup created using sftp. In that case, manually download the file, uncompress it, and copy the files that you want.

And now, off to wash some clothes, while Konserve works silently in the background.

R. Scott Granneman teaches at Washington University, consults for Bryan Consulting, and writes for SecurityFocus and Linux Magazine. You can reach him at email@example.com. Have a recommendation for this column? Send suggestions to firstname.lastname@example.org.
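For readers who want the same effect outside KDE, Konserve's compressed, time-stamped archives are easy to approximate with Python's standard library. This is a rough sketch, not Konserve itself: the file naming mimics the config-YYYYMMDDHHMMSS.tar.gz pattern described above, and like Konserve it performs full, not incremental, backups.

```python
import tarfile
import time
from pathlib import Path

def konserve_style_backup(source: str, dest_dir: str) -> Path:
    """Create a compressed, time-stamped archive of `source`,
    named like Konserve's config-YYYYMMDDHHMMSS.tar.gz files."""
    src = Path(source)
    stamp = time.strftime("%Y%m%d%H%M%S")
    archive = Path(dest_dir) / f"{src.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)  # full backup of the whole tree
    return archive
```

Run from cron or a systemd timer, this gives similar set-and-forget behavior; old archives still need periodic pruning, exactly as the column warns.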
You can upload a group of transactions in bulk using the file upload feature. The file must use the CSV format. The file must include the following columns:

- Transaction ID
- Order Date
- Transaction Value
- Currency Code
- Transaction Type

Evidence is prioritized as follows:

- Billing country
- IP address
- Credit Card BIN
- Payment method country
- Other commercially relevant information

The following list describes all the possible columns or fields. Mandatory fields are denoted by an asterisk:

- Transaction ID*: This ID must be unique, except when a transaction is refunded. In this case, you must list the sale before the refund.
- Order Date*: The transaction date. The quarter that the transaction is settled in is based on this date.
- Transaction Value*: The total transaction value. To create a placeholder transaction for a subscription, enter 0.00.
- Currency Code*: The three-letter code that represents the transaction currency.
- Credit Card Bank Identification Number (BIN): The first 6 digits of the credit card number, which identify the issuer of the card.
- Payment Method Country: If this is provided, it is used instead of the BIN. For example, you can derive this value from the PayPal sender country.
- IP Address (IPv4 dotted or IPv6 colon format): The consumer's IP address when they made the purchase.
- Billing Country: The customer's billing country.
- Other Commercially Relevant Information: Any other information about the customer's country that was provided.
- Product Type (default, e-book, or e-newspaper): The product type as defined by Taxamo. This is set to default if not provided.
- Transaction Type* (Sale or Refund): Indicates whether the transaction was a sale or a refund.
- Buyer's Name: The name of the buyer. This value can be wrapped in double quotation marks. For example, "John R. Smith".
- Street Name: The name of the street where the buyer lives. This value can be wrapped in double quotes. For example, "17th Avenue".
- Building Number: The number of the building where the buyer lives. This value can be wrapped in double quotes. For example, "1 / 3".
- Address Detail: Further details for the buyer's address. This value can be wrapped in double quotes. For example, "Apartment no. 90".
- City: The buyer's city. This value can be wrapped in double quotes. For example, "Colorado Springs".
- Postal Code: The buyer's postal code. This value can be wrapped in double quotes. For example, "V3M 3B5".
- Region: A two-letter code representing the region, where applicable.
- Product Description: The description of the product.
- Buyer's Tax Number: The buyer's tax number, used for Business-to-Business validation.
- Buyer's Email: The buyer's email address.

To download an example file, open the dashboard and go to Transactions > Upload. Click Download example file.

To upload the file, complete the following steps:

- Log on to the dashboard and go to Transactions > Upload.
- The "Block file processing if transactions with non-matching evidence are found" checkbox is selected by default. We recommend that you do not change this. If you want to accept all transactions even where conflicting evidence exists, choose "Accept all transactions, even those with non-matching evidence. Evidence with the highest priority will be used." If you want to flag transactions with conflicting evidence without accepting them, choose "Flag transactions with non-matching evidence, but create transactions where the collected evidence matches."
- Click Choose File and select your CSV file.
- Click Submit for processing.

The report is available under the Recent uploads section.
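The five mandatory columns are enough for a minimal upload file. Here is a small illustrative sketch that assembles one; the header spellings and sample rows are assumptions, so check the downloadable example file for the exact template:

```python
import csv
import io

def build_upload_csv(transactions):
    """Build CSV text with the five mandatory columns.

    Header names mirror the field list above; the exact spelling in the
    official template may differ (see the downloadable example file)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Transaction ID", "Order Date", "Transaction Value",
                     "Currency Code", "Transaction Type"])
    writer.writerows(transactions)
    return buf.getvalue()

# A sale and its refund share a Transaction ID, with the sale listed first,
# matching the Transaction ID rule above. Values here are invented.
sample = build_upload_csv([
    ("T-1001", "2023-04-01", "19.99", "EUR", "Sale"),
    ("T-1001", "2023-04-05", "19.99", "EUR", "Refund"),
])
```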
What you need to know about changes to Microsoft Power Apps and Flow licencing The changes announced by Microsoft for its applications have been in place since October. These changes allow users to work on the applications they need when they need them. After discussing the changes for Dynamics 365 licences, we now present those for Power Apps and Microsoft Flow. Here’s what’s new with Power Apps and Microsoft Flow licences: What’s new with Power Apps What’s Power Apps, again? POWER APPS SIMPLIFIES BUSINESS APP CREATION. With Power Apps, anyone can build a personalized app in a few hours. Your apps will be able to connect to your data and systems to better meet your unique process needs. New Power Apps plans The new plans for Power Apps fall into three categories. You can choose between the plans: - per app - per user - included with Dynamics 365 apps First, there is the per-application plan for those who need to use a maximum of two custom applications: If you want to use an unlimited number of custom apps, the per-user plan is for you: Finally, there’s the Power Apps plan included with Dynamics 365 apps: What’s new with Microsoft Flow What’s Microsoft Flow? MICROSOFT FLOW ALLOWS YOU TO CREATE AUTOMATED WORKFLOWS WITHOUT ANY CODING KNOWLEDGE. In minutes, you can automate workflows across your various services and applications. Flow is included with Dynamics 365 apps Dynamics 365 licences include the rights to use Flow to customize and develop Dynamics 365 apps. Flow use within Dynamics 365 is limited to Dynamics 365 app integration. For triggers and actions, Flows included in Dynamics 365 can connect: - to any data source within the scope of the rights of use of Dynamics 365 - directly with the Dynamics 365 app (via built-in triggers and actions) If the integrated Flow isn’t within the Dynamics 365 app, standalone Flow licences will be required. 
Power Apps and Microsoft Flow—Restricted entities Standalone Power Apps and Flow plans will continue to have “read access” to Dynamics 365 restricted entities. Restricted entities have changed: Changes to user rights for Power Apps and Microsoft Flow licences User licences for Dynamics 365 Enterprise have always included full user rights for Power Apps and Microsoft Flow. But since October 2019, these rights are defined in greater detail: - Dedicated Power Apps licence: Users of Dynamics 365 Enterprise apps can still execute Power Apps with Dynamics environments, but the execution of Power Apps applications in other environments require a dedicated Power Apps licence. - Additional Microsoft Flow licence: An additional Microsoft Flow licence is required to execute flows that aren’t mapped to a Dynamics 365 app. - Grandfathering for existing clients: Existing Dynamics 365 clients benefit from a 12-month extension of the existing licence conditions of full user rights for Power Apps and Flow starting October 1st, 2019 or after the expiration of the current Dynamics subscription period, whichever is longer. In conclusion, you now have an overview of the changes in Power Apps and Microsoft Flow licences. They’ll allow you to focus your tools on your real needs. But that’s not all: there are also new policies regarding API calls. We’ll tackle that in the next instalment of this series. Do you have questions about your transition? Contact our team for personalized advice. For more information, you can also check Microsoft’s guide for Power Apps and Flow.
savetrees and \raggedright interact badly

MCVE:

\documentclass{article}
\usepackage[subtle]{savetrees}
\raggedright
\begin{document}
rgeg wrg wrg rgw rwg rwhrwghwrg e qeg rg wrg wr grw rh gr gwrg wrg rg wgr ewt rw wrt rwt rw ewt rtw t rqqqqqqqqqqqqqqwt w we tw et ewt w t wet wet wqet w t r wr thueiyuey w ui eiuwy iywe iuyiew iie i ewiu ewiye iyewiuy iuew iueyiuyewiiuey iq q qiyieqy iqiuyiriuy r
\end{document}

produces

On the other hand, if I comment out \usepackage[subtle]{savetrees} I have

Is it possible to use savetrees and \raggedright together?

Note 1: TeX Live 2016 on Debian Sid. Note 2: the screenshots were taken with little care and with two different programs, hence the differences in scaling and quality. Note 3: prompted by Steven's answer, I tried placing \raggedright before \usepackage{savetrees}, but the result is the same...

Or instead load savetrees with the paragraphs=normal option:

\documentclass{article}
\usepackage[subtle,paragraphs=normal]{savetrees}
\begin{document}\raggedright
rgeg wrg wrg rgw rwg rwhrwghwrg e qeg rg wrg wr grw rh gr gwrg wrg rg wgr ewt rw wrt rwt rw ewt rtw t rqqqqqqqqqqqqqqwt w we tw et ewt w t wet wet wqet w t r wr thueiyuey w ui eiuwy iywe iuyiew iie i ewiu ewiye iyewiuy iuew iueyiuyewiiuey iq q qiyieqy iqiuyiriuy r
\end{document}

[* EDIT *]

\documentclass{article}
\usepackage[subtle,paragraphs=normal]{savetrees}
\usepackage{lipsum}
\begin{document}\raggedright
rgeg wrg wrg rgw rwg rwhrwghwrg e qeg rg wrg wr grw rh gr gwrg wrg rg wgr ewt rw wrt rwt rw ewt rtw t rqqqqqqqqqqqqqqwt w we tw et ewt w t wet wet wqet w t r wr thueiyuey w ui eiuwy iywe iuyiew iie i ewiu ewiye iyewiuy iuew iueyiuyewiiuey iq q qiyieqy iqiuyiriuy r

\lipsum
\end{document}

The demo picture shows a right-aligned text. Look at the first line: "ewt" was squeezed in, and the spacing in the first two lines seems a little bit different. But it isn't right-aligned. I'll add a \lipsum to the example to demonstrate. I was confused, too.
It looked right-aligned, but you are right... it is not. Both answers, incidentally, appear to use the same amount of paper: 16 lines into page 2 for the lipsum version of my example.

savetrees uses microtype to slightly modify interword spacing, so that lines that are almost aligned end up right-aligned; this is apparent from the first line where, by adjusting the interword spacing, an extra word was forced in... If one turns off the fiddling with microtype (tracking=normal), one finds exactly the same line breaks obtained without savetrees (figure 2 of my OP). So it seems that with your answer most lines are ragged, and a few are aligned thanks to microtype's intervention... A good answer, possibly acceptable; let's wait a little. Thank you. One year, 3 months and two weeks...

Perhaps load the ragged2e package (before savetrees) and use \RaggedRight instead of \raggedright.

\documentclass{article}
\usepackage{ragged2e}
\RaggedRight
\usepackage[subtle]{savetrees}
\begin{document}
rgeg wrg wrg rgw rwg rwhrwghwrg e qeg rg wrg wr grw rh gr gwrg wrg rg wgr ewt rw wrt rwt rw ewt rtw t rqqqqqqqqqqqqqqwt w we tw et ewt w t wet wet wqet w t r wr thueiyuey w ui eiuwy iywe iuyiew iie i ewiu ewiye iyewiuy iuew iueyiuyewiiuey iq q qiyieqy iqiuyiriuy r
\end{document}

Not exactly an acceptable answer, but for sure a useful one! Thank you. Looks good to me: why isn't it acceptable? @JPi Useful because it solves my problem; not acceptable because it doesn't answer my question specifically. I guess the answer is no unless you set paragraphs=normal. See my answer below. You get the same with the standard setting: mixing \looseness=-1 with \raggedright is a very bad idea. And savetrees tries doing \looseness=-1 for every paragraph, which is another bad idea.
\documentclass{article}
\begin{document}
\raggedright
rgeg wrg wrg rgw rwg rwhrwghwrg e qeg rg wrg wr grw rh gr gwrg wrg rg wgr ewt rw wrt rwt rw ewt rtw t rqqqqqqqqqqqqqqwt w we tw et ewt w t wet wet wqet w t r wr thueiyuey w ui eiuwy iywe iuyiew iie i ewiu ewiye iyewiuy iuew iueyiuyewiiuey iq q qiyieqy iqiuyiriuy r

\looseness=-1
rgeg wrg wrg rgw rwg rwhrwghwrg e qeg rg wrg wr grw rh gr gwrg wrg rg wgr ewt rw wrt rwt rw ewt rtw t rqqqqqqqqqqqqqqwt w we tw et ewt w t wet wet wqet w t r wr thueiyuey w ui eiuwy iywe iuyiew iie i ewiu ewiye iyewiuy iuew iueyiuyewiiuey iq q qiyieqy iqiuyiriuy r
\end{document}

I propose a different way for trying to minimize the number of lines in paragraphs:

\documentclass{article}
\usepackage[subtle]{savetrees}
\usepackage{lipsum} % just to show \raggedright is respected

% fix the silly thing savetrees does
\let\everypar\markeverypar
% use a high value for \linepenalty instead
\AtBeginDocument{\linepenalty=2000 }

\raggedright
\begin{document}
rgeg wrg wrg rgw rwg rwhrwghwrg e qeg rg wrg wr grw rh gr gwrg wrg rg wgr ewt rw wrt rwt rw ewt rtw t rqqqqqqqqqqqqqqwt w we tw et ewt w t wet wet wqet w t r wr thueiyuey w ui eiuwy iywe iuyiew iie i ewiu ewiye iyewiuy iuew iueyiuyewiiuey iq q qiyieqy iqiuyiriuy r

\lipsum[4]
\end{document}

If I remove the setting of \linepenalty, this is the result. As you can see, one line is saved. Learned something, thanks. Guess fixing the silly thing savetrees does is the same as loading it with paragraphs=normal. @JPi Possibly, but the options are treated in a very contorted way and it's difficult to understand what does what. Thank you for the example without savetrees; things are much clearer now.
The Ideal Test Case

Software testers fixate on the difference between the best and the real. The most obsessed testers focus on the difference between the best and the ideal. Regression test cases are the tests that make sure the application behaves according to specification and hasn't changed since the last time it was checked. I've spent my life chasing the ideal regression test case — the ideal that can be run by anyone, on any app, and that people care whether it passes or fails.

The Ideal Manual Test Case

The ideal manual test case is one that is written in a way that is easily read by anyone, that people care about when it fails, and that mixes specificity and ambiguity in a way that makes it robust across insignificant changes in the app.

The ideal manual test case is so clearly written that it doesn't need the author to interpret how to set up, execute, and validate the test. Almost always, the original author of the test eventually moves on. The best manual tests can be read and executed by anyone on the test team, their test manager, the developer, or even the product manager in a pinch.

Even great tests are written just to add more 'coverage' — but that isn't ideal. If no one cares when a well-written test case fails, it isn't ideal. Sometimes tests are written for features that aren't important to the business, users, or future direction.

Ideal manual test cases withstand the test of time. Ideal doesn't mean fully specified or overly specific; ideal tests focus on the purpose of the test and nothing else. If a setup, execution, or validation step isn't relevant, it shouldn't be in there. This perfect level of ambiguity ensures the test doesn't need to be updated when the product changes in irrelevant or insignificant ways, and allows other testers to add some variety and interpretation to the test case, delivering additional variation with the same spirit as the author.
The ideal manual test case is clear, important, and balances specificity and ambiguity — but it isn't the ideal test case. Ideal manual test cases are expensive and slow to write, execute, and maintain. Manual test cases require human time and labor, so they are expensive and slow. Even ideal manual regression test cases take time away from the most valuable asset testers bring to bear — their ability to explore the app with creativity.

Manual test cases are great for teams getting started with formal testing and for adding some rigor to the regression testing process, but the ideal manual test case isn't the ideal test case. The ideal test case should also be automated.

The Ideal Automated Test Case

The ideal automated test case should have all the attributes of an ideal manual test case, with the added benefit that it is automated by a machine. Done right, machine automation is far faster, less expensive, and more consistent than human testing. If the ideal manual test case were automated, it could be run on every new build and free up human testers' time for doing what they do best: exploratory, opinionated, and abstract quality checking, keeping the business, customer, and engineering needs in mind all at the same time.

The ideal automated test case is stable, maintainable, and efficient. Test automation has to be stable or it is quickly ignored. Unstable, flaky, inconsistent automation often uses more human time investigating false failures and nursing the automation back to health than simply executing the tests manually would. Unstable automated tests are the norm — everywhere. Ideal tests can be run an infinite number of times without failure, which requires a sophisticated dance between the test code and the application under test. The test automation must allow for variances in the application's timing. Ideal automation isn't flaky. Ideal test automation requires zero maintenance.
Ideal automation doesn't break when the new build of the application has a button in a slightly different location, a changed color, or a changed implementation under the hood — it should keep executing the test case, dealing with ambiguity just like a human would. Ideal automation also deals with changes in the flow or protocol of the application. The ideal test automation would notice these changes, but keep marching to make sure the new design and implementation continue to meet specification without breaking.

The goal of automation is to increase efficiency: efficiency in terms of cost, time, and complexity. Ideal automation frameworks, infrastructure, and tests require the minimal amount of work, compute, and configuration to set up and execute. The pass/fail results of ideal automation are delivered in an easy-to-use, relevant, and timely way to the people that need to know — it makes sure the human interpretation of results is relevant and efficient.

Even if test automation succeeded on all these fronts of stability, maintainability, and efficiency, it still wouldn't be the ideal test case. The ideal test case has a few more attributes.

The Ideal Test Case

The ideal test case has all the attributes of the ideal manual and automated test cases, with some superpowers: it can run on any platform and any app, it is framework-independent, anyone can create it quickly, and most such tests already exist somewhere, so testers don't have to recreate them. Yes, ideal tests exist in a giant global test case brain that has already tested tens of thousands of similar apps.

The ideal test case runs on any app or any platform. The ideal test case doesn't need to be re-written for each platform (web, mobile, etc.). The needs of the user, the business, and often even the application code are platform-independent — the ideal test is platform-independent too.
If you think about it, odds are the test case you are writing right now has already been written for another app — if only you knew about it and could re-use it. Every app has login tests, and tests to search for generic products, add products to the shopping cart, etc. The ideal test case would be discoverable and re-usable on every app, so humans could stop wasting time rebuilding test cases from scratch for every new app, platform, and test. The ideal test case is framework-independent. Non-ideal test cases are one-off sentences, lists in a spreadsheet, or cases trapped in the schema of a particular test case management system or XML/JSON doc. Not only does this make tests less re-usable, but it creates an external dependency that costs time and/or money. If the framework or APIs change, or you want to change test case management systems, this can be expensive, inefficient, and painful. The ideal test case would be defined in a portable, independent format. The ideal test case requires no knowledge of programming languages, complex syntax, or complex tooling. A test is ideally created by simply pointing and clicking through a visualization of each important step and validation. Or an ideal test is automatically created from records of real-world user interaction. The actual setup and execution of these test flows isn’t hardcoded — the execution engine figures out dynamically how best to execute the test case. Ideally, no code, complex language, or tool is required to define, execute, and report test results. The ideal test case is the one you don’t have to write or execute — it is just there and knows which apps it applies to. Ideal test cases are written independently of the platform and application so they can be shared and re-used. For an e-commerce application, the ideal test suite would be somewhere in the cloud, just waiting for your application, already knowing how to test your login, search, and cart functionality.
The ideal test case is one that testers and developers don’t have to think about: you just point your application at the reusable test cases, and they explore your application, determine which tests are applicable, and automatically execute and report test results. The ideal test is one that is not actually written by the team at all. An almost magical attribute of the ideal test case is that its execution can be benchmarked against other apps. If the same tests are executed against every other e-commerce application, the team will know whether a 90% pass rate is good or bad. The team will know whether 2.5 seconds for a Facebook login flow is normal and expected. The team will know which tests/features are normally expected to pass for similar apps. The ideal test case isn’t just pass/fail; it is pass/fail with a global context for understanding how bad a failure, or how good a pass, actually is. The Ideal Test Team Much like ideal test cases, the ideal test team builds towards these ideal test cases, draws on experience across many past projects, and aspires to test every app on the planet. These days, I have the privilege to work with just such a team. Jason Arbon, CEO test.ai
AD RMS Policy Templates Applies To: Windows Server 2008, Windows Server 2008 R2 Rights policy templates are used to control the rights that a user or group has on a particular piece of rights-protected content. AD RMS stores rights policy templates in the configuration database. Optionally, it maintains a copy of all rights policy templates in a shared folder that you specify. Some examples of rights policy templates are: Company Confidential. Such a template could be used to allow only employees to view content, but not forward, copy, or save the document. Expires in 30 days. This could be used to ensure that content is not valid after 30 days. A letter of offer, an RFP, or perhaps a draft version of a document would be consumable for only a set period of time. Must be Connected to Consume. This ensures that recipients have connectivity to a licensing server and are not using cached copies of a use license to consume content. This could be used in a case in which a template is subject to change and you want the recipient to consume only the latest version. Also, if a computer is lost or stolen, the RMS-protected content would not be accessible to the person who found or stole it. When AD RMS attempts to verify group membership, the results are cached. This can become an issue if a document was protected by a template that assigned rights to a particular group. For example, Bob is a user and a member of the Support group. Bob receives a document that only allows members of the Support group to consume it. Because Bob is already a member of the group, he is able to consume it. However, if Alice were then added to that group, she would not be granted access until the Active Directory Domain Services cache expired. There are two possible ways to disable these cache settings on Windows Server 2008.
To disable all database caching, access the DRMS_ClusterPolicies table in the DRMS_Config database and change the value of the PolicyData cell to 0 for both UseDirectoryServicesCacheDatabase and EnableNoRightsCaching. EnableNoRightsCaching is new to AD RMS and is used to cache ‘No rights’ failures. For security purposes, this allows you to determine who might be trying to access a piece of content that they do not have the rights to. To disable only Active Directory caching, access the DRMS_ClusterPolicies table in the DRMS_Config database and change the value of the PolicyData cell to 0 for the following: Prior to making any modifications to your AD RMS databases, these databases should be backed up. To ease administration of rights policy templates, AD RMS in Windows Server 2008 introduced a rights policy template creation wizard. To ease distribution of rights policy templates, AD RMS has also introduced a new rights policy template distribution pipeline. This new pipeline allows an AD RMS client to request rights policy templates stored on the AD RMS cluster and store them locally on the client computer. This functionality is available with AD RMS clients in Windows Vista with SP1, Windows Server 2008, Windows 7, and Windows Server 2008 R2. For AD RMS clients that are not running one of these operating systems, you must manually distribute the rights policy templates from a central location to the client. Some distribution methods include using Systems Management Server, Group Policy, or manually copying the templates to the client computer as described in the section above. For more information on setting up rights policy templates see AD RMS Rights Policy Templates Deployment Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=153712).
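The database change described above can be sketched as T-SQL. This is a hedged sketch only: the DRMS_ClusterPolicies table and PolicyData column come from the text, but the PolicyName column used to select the rows is an assumption — verify the actual schema of your DRMS_Config database, and back it up, before running anything like this:

```sql
-- Hedged sketch: disable both caching policies on the AD RMS cluster.
-- Assumes a PolicyName column identifies each policy row; verify this
-- against your own DRMS_Config schema, and back up the database first.
USE DRMS_Config;

UPDATE DRMS_ClusterPolicies
SET PolicyData = 0
WHERE PolicyName IN ('UseDirectoryServicesCacheDatabase',
                     'EnableNoRightsCaching');
```

The AD RMS service may need to be restarted (for example, via an IIS reset) before it picks up the changed values.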
How can I show order quantity relative to cities where customers have at least three orders? I use this SQL statement to join three tables (Order1, Order2, and Customers) and show the order quantity for each customer from each city they belong to. But how can I show the rows of order quantities only for customers in cities where at least three orders have been made? In other words, I'm trying to aggregate on the cities connected to the customers who have made at least three orders. Table structures: Customers has the columns CustomerNr, CityName. Order1 has the columns OrderNr, CustomerNr. Order2 has the columns OrderNr, OrderQuantity. The SQL statement so far:

SELECT Customers.CityName, Order2.OrderQuantity
FROM Order1
INNER JOIN Order2 ON Order1.OrderNr = Order2.OrderNr
INNER JOIN Customers ON Customers.CustomerNr = Order1.CustomerNr

It would be helpful if you could provide us with some sample data, showing the input and the desired result. The result I want is to show the rows for the cities where customers have made at least three orders, and to add a column (to the output view) AS 'Total quantity' with their total ordered quantity, which you get from the column OrderQuantity in table Order2. In other words: the output should be two columns: CityName (listing the cities where customers have made at least three orders) and 'Total quantity' (the summed OrderQuantity from table Order2 for these customers). What you're missing are aggregate functions. To get the sum of a column, you can use SUM(columnName) in your select statement. To get proper results, you will have to group by a field as well.
In this case, you want the sum per customer, so you can do something like this:

SELECT c.customerNumber, c.name, SUM(o2.quantity) AS totalQuantity
FROM customers c
JOIN order1 o1 ON o1.customerNumber = c.customerNumber
JOIN order2 o2 ON o2.orderNumber = o1.orderNumber
GROUP BY c.customerNumber, c.name;

To filter on an aggregate condition, you need to add a HAVING clause. Here, you can require that each group have a minimum of 3 rows:

SELECT c.customerNumber, c.name, SUM(o2.quantity) AS totalQuantity
FROM customers c
JOIN order1 o1 ON o1.customerNumber = c.customerNumber
JOIN order2 o2 ON o2.orderNumber = o1.orderNumber
GROUP BY c.customerNumber, c.name
HAVING COUNT(*) >= 3;

Here is an SQL Fiddle with some dummy data that I tested with.

To aggregate per city instead, group by CityName and filter on the number of distinct orders (note that filtering on SUM(OrderQuantity) would test total quantity, not order count):

select c.CityName, sum(o2.OrderQuantity) as Quantity
from Customers c
inner join Order1 o1 on c.CustomerNr = o1.CustomerNr
inner join Order2 o2 on o1.OrderNr = o2.OrderNr
group by c.CityName
having count(distinct o1.OrderNr) >= 3

@ecinna . . . Your question says: "show the rows of order quantities for customers". This answer shows summaries. If this doesn't do what you really want, you should ask another question. If it does, you should be clearer about your questions in the future. Sample data and desired results always help.
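To check the per-city query concretely, here is a small self-contained sketch using Python's sqlite3 module. The table and column names follow the question; the sample data values are invented for illustration:

```python
import sqlite3

# In-memory database with the three tables from the question.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Customers (CustomerNr INTEGER, CityName TEXT);
CREATE TABLE Order1 (OrderNr INTEGER, CustomerNr INTEGER);
CREATE TABLE Order2 (OrderNr INTEGER, OrderQuantity INTEGER);

-- Invented sample data: three orders from Oslo, one from Bergen.
INSERT INTO Customers VALUES (1, 'Oslo'), (2, 'Bergen');
INSERT INTO Order1 VALUES (10, 1), (11, 1), (12, 1), (13, 2);
INSERT INTO Order2 VALUES (10, 5), (11, 2), (12, 1), (13, 9);
""")

# Cities whose customers placed at least three orders, with total quantity.
rows = con.execute("""
SELECT c.CityName, SUM(o2.OrderQuantity) AS Quantity
FROM Customers c
JOIN Order1 o1 ON c.CustomerNr = o1.CustomerNr
JOIN Order2 o2 ON o1.OrderNr = o2.OrderNr
GROUP BY c.CityName
HAVING COUNT(DISTINCT o1.OrderNr) >= 3
""").fetchall()

print(rows)  # only Oslo qualifies: [('Oslo', 8)]
```

Bergen has only one order, so it is filtered out by the HAVING clause; Oslo's three orders sum to a quantity of 8.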
An awesome team of students from our education program made this wiki. Samsung ATIV Book 7 NP740U3E-K01UB Troubleshooting Use this page to troubleshoot problems with your device. Cursor slow to respond or not responding at all When using the trackpad, the cursor has a noticeable delay in moving around the screen; left and right clicking might also be sluggish or completely non-responsive. The trackpad’s connector has come loose First, remove the bottom of the laptop’s case by unscrewing it and undoing the clips near the hinge. Keep in mind the case is metal and could be sharp. Next, unplug the battery from the motherboard. The plug will be on the right-hand side of the device, above the speaker. Unscrew the screws holding the battery in place and set it aside. The back side of the trackpad is now visible. Inspect the ribbon connecting it to the motherboard. If it has come undone, place the ribbon back in its connector. Reconnect the battery and turn the device on to make sure the problem is resolved. The trackpad is faulty Follow the above steps to gain access to the trackpad. Disconnect the ribbon from the motherboard. Next, unscrew the trackpad and remove the silver tape holding it in place. Rotate the trackpad out and set it aside. Rotate the new trackpad into place and screw it into the device. Connect the ribbon to the motherboard and place the battery back in the device. Turn the device on and make sure the trackpad is functioning properly. Some of the keyboard keys are difficult to press or unresponsive When using the keyboard, you will notice that some keys are hard to press or unresponsive. There is something stuck under some of the keyboard keys First, check the keyboard for unevenness among the keys: if one key sits higher than the others, there is most likely something under that key. Second, remove the bottom of the laptop’s case by unscrewing it and undoing the clips near the hinge.
Keep in mind the case is metal and could be sharp. Next, unplug the battery from the motherboard. The plug will be on the right-hand side of the device, above the speaker. Unscrew the screws holding the battery in place and set it aside. Next, unscrew the motherboard; under it you should find the keyboard. After removing whatever was under the keyboard, reattach the motherboard and the battery and double-check that the problem is resolved. The keyboard is faulty Follow the previous steps to gain access to the keyboard. Now, with the other parts of the computer out of the way, slowly start to remove the faulty keys. Then, after those keys are removed, carefully replace them. After the faulty parts are replaced, reattach everything and put the computer back together. Double-check to make sure everything is working properly. Speaker sounds unclear and distorted Audio playback sounds unclear and distorted, and sometimes breaks up. The connection to the motherboard is broken First, open the laptop by unscrewing the bolts holding it together. Do this carefully to avoid further damage to the laptop and harm to yourself, as the metal edges can be sharp. Second, the speakers are located on both sides of the laptop, so carefully locate the wires that connect them to the motherboard. Inspect whether the connection is secure; if not, secure the speaker wires firmly to the motherboard. You might use an adhesive if required to hold these wires in place. Turn on the device and test to make sure the problem is resolved. The speaker is broken Following the previous steps and precautions in opening the laptop, disconnect the speaker wires from the motherboard. Take out the speakers and replace them with new ones.
Turn the device on to ensure the problem is solved. USB ports not working USB devices cannot connect to the USB ports on the right-hand side of the laptop. Dirty USB ports Use a can of keyboard cleaner to blow dust out of the USB ports. The USB board's ribbon has come loose Unscrew and unclip the bottom of the laptop’s case to remove it. The wide black ribbon above the battery should connect the motherboard to the board containing the USB ports. If it is not connected, plug the ribbon back into the appropriate port. The USB board is faulty Remove the bottom of the laptop’s case by unscrewing and unclipping it. Disconnect the ribbon connecting the motherboard and the board containing the USB ports; also be sure to disconnect the speaker from this board. Unscrew the board and lift it out. Place the new board and screw it in. Reconnect the speaker and ribbon, and reattach the bottom of the case. Laptop battery is dead After pressing the power button multiple times and leaving the laptop on the charger, it fails to turn on. The laptop has overheated Unplug the laptop from the charger and let the computer cool down in a cool area. If that does not fix the issue, the laptop has a different problem. The battery connector is loose Unscrew and unclip the bottom of the laptop to remove the case. Next, unplug the battery from the motherboard. Check that everything is connected under and around where the battery goes. Next, reattach the battery and double-check that all parts are connected properly. Burned out battery Check that everything is connected correctly on the inside of the laptop by repeating the above steps. Unscrew and unclip the bottom of the laptop to remove the case. Next, unplug the battery from the motherboard and replace the dead battery with a new one. Laptop touch screen is unresponsive The laptop touch screen is not working after the screen wakes up. This is due to the laptop’s power management setup in conjunction with device drivers: upon screen wake-up, the drivers in charge of the touch screen are not re-enabled, so the touch screen remains unresponsive.
Please see below for troubleshooting solutions. Run the Hardware Troubleshooter and check the status of the touch screen device Follow these steps: a) Press the ‘Windows + W’ keys on the keyboard. b) Type troubleshooting in the search box and then press Enter. c) Click Hardware and Sound and run the Hardware and Devices troubleshooter. d) Follow the on-screen instructions. Once this is done, restart the computer and check the status. If the above method does not work, please continue reading for more in-depth solutions. Debugging outdated or incompatible drivers Uninstall and reinstall the generic touch screen drivers: - Press the “Windows Logo” + “X” keys on the keyboard. - Click on “Device Manager” in that list. - Expand “Human Interface Devices”, search for the touch screen device in the device list, right-click on it, and then select “Uninstall”. On the un-installation window, if you have the option “Delete the driver software for this device”, you may select it to remove the incompatible/corrupted drivers. - Follow the on-screen instructions to complete the un-installation, then restart the computer. After the restart, open Device Manager again and click on the “Scan for hardware changes” button. Check whether your operating system detects the touch screen device and installs an appropriate driver automatically. Disabling/re-enabling device drivers one by one You may try to disable and enable the drivers one by one to check which driver disables the touch screen, and proceed accordingly. You may visit the computer manufacturer's website to get the latest drivers. Verifying that all Windows updates are up to date Make sure that you have installed all the updates (including the optional updates) available for the operating system via Windows Update.
asc2map [options] asciifile PCRresult spatial; the data type is specified by the data type option, or, if no data type option is set, PCRresult gets the data type of PCRclone. Options can be given related to the layout of asciifile and the way asciifile must be read. These options are described in the operation section. Other options are: --clone PCRclone is taken as clone map. If you have set a global clone map as a global option, you don’t need to set clone on the command line: the clone map you have set as a global option is taken as the clone map. If you have not set a global clone map, or if you want to use a different clone map than the global one, you must specify the clone map on the command line with the clone option. -B, -N, -O, -S, -D and -L This data type option specifies the data type which is assigned to PCRresult (respectively boolean, nominal, ordinal, scalar, directional, ldd). If the option is not set, PCRresult is assigned the data type of PCRclone or the global clone. The data in asciifile must be in the domain of the data type which is assigned to PCRresult. For a description of these domains see the description of the different data types. --small or --large In most cases, the default cell representation will be sufficient. If you want, you can specify the cell representation: for nominal and ordinal data types, cell values are represented by the small integer cell representation (default, --small) or by the large integer cell representation (--large). If option -D is set: --degrees or --radians Values on asciifile are interpreted as degrees (default) or as radians. -m nodatavalue nodatavalue is the value in asciifile which is converted to a missing value on PCRresult. It can be one ascii character (letters, figures, symbols) or a string of ascii characters. For instance: -m -99 -98. or -m j83s0w. By default, if this option is not set, 1e31 is recognized as a missing value. By default, whitespace (one or more tabs, spaces) is recognized as the separator between the values of a row in the asciifile.
If the values are separated by a different separator, you can specify it with the -s option. The separator can be any single ascii character. In that case, asc2map recognizes the specified separator, with or without whitespace, as the separator. For instance, if the values in asciifile are separated by a ; character followed by 5 spaces, specify -s ; on the command line (you do not need to specify the whitespace characters). The asciifile is converted to PCRresult, which is an expression in PCRaster map format. PCRresult is assigned the location attributes of PCRclone (number of rows and columns, cell size, x and y coordinates), or, if the option --clone is not set on the command line, the location attributes of the global clone. The asciifile must contain data values separated by one or more spaces or tabs. Values may contain the characters -eE.0123456789. Valid values are for instance: -3324.4E-12 for -3324.4 x 10^-12, or .22 for 0.22. The most simple conversion is a conversion ignoring the layout of your data in the asciifile (ordering of data by rows, row definitions or headers, for instance). This simple conversion is performed by default. All the characters in your asciifile will be interpreted as data. The operator scans the asciifile starting at the top line from left to right, then the second line from left to right, etc. Each time a value is scanned, it is assigned to a cell on PCRresult, until PCRresult is totally filled with cell values. If the asciifile contains a larger number of values than the number of cells on PCRresult, the remaining values are simply ignored. The values are assigned to PCRresult starting with the top row on the map and ending with the bottom row. The first value which is filled in is the first value in the asciifile, the second value is the second value in the asciifile, etc.
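The default scanning order described above (read values left to right, top to bottom, fill the result row by row, ignore surplus values) can be sketched in Python. This is only an illustration of the documented behavior, not PCRaster code:

```python
def scan_ascii(text, nrows, ncols, nodata="1e31"):
    """Fill an nrows x ncols grid row by row from whitespace-separated
    values, the way asc2map's default conversion is described: the file
    layout is ignored, surplus values are dropped, and the nodata string
    becomes None (a missing value)."""
    tokens = text.split()  # whitespace-separated, layout irrelevant
    grid = []
    for r in range(nrows):
        row = []
        for c in range(ncols):
            tok = tokens[r * ncols + c]
            row.append(None if tok == nodata else float(tok))
        grid.append(row)
    return grid

# The layout in the file does not matter; only the token order does.
print(scan_ascii("1 2 3\n4 1e31 6 99", 2, 3))
# [[1.0, 2.0, 3.0], [4.0, None, 6.0]]  (the trailing 99 is ignored)
```

Note how the value matching the nodata string ("1e31" by default, as in the text) turns into a missing value, and the extra value beyond the 2x3 grid is simply dropped.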
This conversion imposes almost no restrictions on the layout of the asciifile: if your data are ordered in a number of rows and columns which corresponds with the number of rows and columns on PCRresult, it will result in a correct conversion, but if they are not ordered this way (for instance, they are all on one line in the asciifile) a conversion is also possible. Conversion from ARC/INFO ascii files In ARC/INFO, grid maps can be converted to a formatted ascii file using the ARC/INFO gridascii command. These output files from ARC/INFO are converted to the PCRaster map format with asc2map using the option -a, without setting the options -s, -m, -h and -r. These latter options will be totally ignored if you set them in combination with -a. The output asciifile from ARC/INFO will contain a header. The number of rows and columns of the original ARC/INFO map given in the header must correspond with the number of rows and columns of PCRclone. The remaining location attributes in the header are ignored during conversion, since they are taken from PCRclone (cell size and x,y coordinates). If the header contains a no_data_value, each value in the asciifile which corresponds with the no_data_value is assigned a missing value on PCRresult. If the header does not contain a no_data_value, the value -9999 is recognized as a missing value. Conversion from Genamap ascii files In Genamap, grid maps can be converted to a formatted asciifile using the Genamap audit command. These output files from Genamap are converted to the PCRaster map format with asc2map using the option -g. The number of rows and columns of the original Genamap map, given in the header of the output file from Genamap, must correspond with the number of rows and columns of PCRclone. Assignment of missing values can be specified by the option -m. Do not use the options -s, -h and -r in addition to -g. If you do set them, they will be totally ignored.
Conversion from asciifiles with an exotic format Two options can be used to make the command take the layout of your asciifile into account. They cannot be used in combination with the options -a and -g. The option -h is used if the asciifile contains a header with information which must be ignored during scanning. The option -h must be followed by linesheader, which must be a whole number larger than 0. This is the number of lines which will be skipped at the top of the asciifile. The asciifile is scanned starting at line linesheader. The option -r results in skipping of data in asciifile each time before asc2map starts filling a new row on PCRresult. Rows on PCRresult are filled in as follows: first, a number of lines in asciifile is skipped. The number of lines which is skipped is given by asciilinesbeforemaprow; it must be a whole value equal to or larger than 0. Then, the asciifile is scanned until the first row on PCRresult is filled with data. At that point, the remaining data on the line in asciifile are skipped, plus the data on the next asciilinesbeforemaprow number of lines. Then, the next row on PCRresult is filled with the data read from the row in asciifile after the skipped rows. Using asc2map for generating a PCRresult of data type ldd is quite risky: it will probably result in an ldd which is unsound. If you do want to create a PCRresult of data type ldd, use the operator lddrepair afterwards. This operator will modify the ldd in such a way that it will be sound. This operation belongs to the group of Creation of PCRaster maps.

Examples:

asc2map --clone mapclone.map -S -m mv -v 4 AscFile1.txt Result1.map

with AscFile1.txt containing:

210 2.5 3 8
2.7 -0.5 0 4
MV 3.2 0.01 2

asc2map --clone mapclone.map -D -a AscFile2.txt Result2.map

with AscFile2.txt containing:

NCOLS 4
NROWS 3
XLLCENTER 120
YLLCENTER 120
CELLSIZE 15
NODATA_VALUE -9999
-9999 0 2.3 8.9
0.8 351 -9 360
45 -9999 370 10
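The -h/-r row-filling procedure described above can also be sketched in Python. Again, this is an illustration of the documented behavior under the stated semantics, not PCRaster code:

```python
def scan_exotic(lines, nrows, ncols, linesheader=0, skip_before_row=0):
    """Fill an nrows x ncols grid the way asc2map's -h/-r options are
    described: skip `linesheader` lines once at the top, then before each
    map row skip `skip_before_row` lines, read values until the row is
    full, and discard the rest of the last line read."""
    i = linesheader
    grid = []
    for _ in range(nrows):
        i += skip_before_row              # -r: lines skipped before each row
        row = []
        while len(row) < ncols:           # read whole lines until row is full
            row.extend(float(t) for t in lines[i].split())
            i += 1
        grid.append(row[:ncols])          # surplus values on the line dropped
    return grid

ascii_lines = [
    "HEADER LINE",                        # skipped once by -h 1
    "row one follows",                    # skipped by -r 1
    "1 2 3 99",                           # the 99 is surplus and discarded
    "row two follows",                    # skipped by -r 1
    "4 5 6",
]
print(scan_exotic(ascii_lines, 2, 3, linesheader=1, skip_before_row=1))
# [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
```

The skipped lines are never parsed, which is why they may contain arbitrary text, exactly as the description of -h and -r implies.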
import { Injectable } from '@nestjs/common';
import { InjectModel } from '@nestjs/mongoose';
import { Model } from 'mongoose';
import { Category } from './category';
import { Task } from 'src/tasks/task';

@Injectable()
export class CategoryService {
  constructor(
    @InjectModel('Category') private readonly categoryModel: Model<Category>,
    @InjectModel('Task') private readonly taskModel: Model<Task>,
  ) {}

  // Get all categories
  async getAllCategories() {
    return await this.categoryModel.find().exec();
  }

  /*
  // Get a category by id
  async getById(id: string) {
    return await this.categoryModel.findById(id).exec();
  }
  */

  // Get a category by name
  async getCategoryByName(name: string) {
    return await this.categoryModel.findOne({ name: name }).exec();
  }

  // Create a category
  async createCategory(category: Category) {
    const createdCategory = new this.categoryModel(category);
    return await createdCategory.save();
  }

  /*
  // Rename a category, looked up by id
  async updateCategory(id: string, category: Category) {
    await this.categoryModel.updateOne({ _id: id }, category).exec();
    return this.getById(id);
  }
  */

  // Rename a category, looked up by name.
  // First, find the tasks in the given category and replace their category
  // with the new name; finally, rename the category itself.
  async updateCategoryByName(name: string, newCategory: Category) {
    await this.taskModel.updateMany({ category: name }, { category: newCategory.name }).exec();
    await this.categoryModel.updateOne({ name: name }, { name: newCategory.name }).exec();
    return this.getCategoryByName(newCategory.name);
  }

  /*
  // Delete a category, looked up by id
  async deleteCategory(id: string) {
    return await this.categoryModel.deleteOne({ _id: id }).exec();
  }
  */

  // Delete a category, looked up by name.
  // First set the 'category' attribute of every task associated with this
  // category to ""; finally, delete the category.
  async deleteCategoryByName(name: string) {
    await this.taskModel.updateMany({ category: name }, { category: "" }).exec();
    return await this.categoryModel.deleteOne({ name: name }).exec();
  }
}
The goal of this project is to take the human-made annotation data from a 2D chorography and use this data to create a 3D model of the city it depicts, to reveal interesting information encoded in these maps. To contextualize this goal, let’s imagine we have a 2D chorographic view as the reference for a landscape and city, and a blank 3D world to fill. If you want to recreate this map in 3D, one approach might be to individually design and place each object in the map, such as houses, fortifications, and religious buildings, which are abundant in the Civitates maps and the Book of Fortresses perspective drawings. As you might expect, this approach is tedious, and more importantly, it repeats work done for similar objects, because you may have multiple instances of the same object type (like a fortified tower) with different shapes and sizes. This is where Houdini comes in. Among 3D modeling and animation software, Houdini distinguishes itself by specializing in something called proceduralism. Houdini’s node-based procedural workflow makes it easy to design objects that can be reused many times, even if the instances of that object must differ. For example, say a chorographic view contains two disconnected pieces of city walls, with different sizes and numbers of merlons. Rather than creating one wall with 7 merlons and one wall with 11 merlons, we can instead create a generic wall with n merlons using some math and proceduralism. Now, if your wall model is designed well, it can be used for every wall in the maps you are modeling, as long as it is parametric, i.e. individually adjustable. This is done through HDAs (Houdini Digital Assets), which are essentially packaged models of 3D objects with a parameter interface. So a wall HDA might have parameters for height and width, and you only have to worry about controlling these values and not everything that happens underneath.
The next problem is: even if we have good models and our 2D map as a reference, how do we go about putting the right values into the parameter interface? If a map has 100 houses, 20 fortress parts, and a multitude of other objects, this would still require individually setting the parameters for each object in the map. Luckily, Houdini comes in handy here too. Using our annotation data from Supervisely, we can extract certain information about each object in the map and import this data directly into Houdini. This data includes information such as the position of each object in a 2D plane, its size in the x and y directions, and other object-type-specific information. So, if you can figure out a way to set the parameters of your HDAs based on this imported data, then you have automated a large chunk of the process of going from a 2D map to a 3D model of your city, with minimal “direct” modeling by hand. Overall, Houdini’s ability to facilitate procedural and data-driven modeling is the reason why it is so integral to this project. Once the object HDAs are made and the map/view you want to model is annotated, you are well on your way to “automatically” generating a city, which is an exciting prospect for historians and art historians who want to study these objects from new vantage points.
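As a concrete illustration of this data-driven step, here is a minimal Python sketch (outside Houdini, so it does not use the hou module) that turns annotation records like those exported from an annotation tool into parameter dictionaries a wall HDA could consume. The record fields, parameter names, and the merlon-spacing rule are all invented for illustration:

```python
# Hypothetical annotation records: each object has a class, a 2D position,
# and a bounding-box size, similar to what an annotation export provides.
annotations = [
    {"class": "wall", "x": 10.0, "y": 4.0, "width": 21.0, "height": 3.0},
    {"class": "wall", "x": 55.0, "y": 9.0, "width": 33.0, "height": 3.5},
    {"class": "house", "x": 30.0, "y": 12.0, "width": 6.0, "height": 5.0},
]

MERLON_SPACING = 3.0  # invented rule: one merlon per 3 units of wall length

def wall_parameters(record):
    """Map one 'wall' annotation onto the parameter interface of a
    hypothetical wall HDA: position, size, and a derived merlon count."""
    return {
        "tx": record["x"],
        "ty": record["y"],
        "length": record["width"],
        "wall_height": record["height"],
        # Proceduralism in action: n merlons is derived, not hand-set.
        "merlons": max(1, round(record["width"] / MERLON_SPACING)),
    }

walls = [wall_parameters(a) for a in annotations if a["class"] == "wall"]
print([w["merlons"] for w in walls])  # [7, 11]
```

With a mapping like this, the two walls from the earlier example get their 7 and 11 merlons automatically from their annotated widths, which is exactly the kind of per-object parameter setting the project automates.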
mount.cifs: "could not resolve address for server: Unknown error"

I'm able to "login" to my samba share with smbclient //vvlaptop/Documents, but mounting the same share by hostname (for example from fstab) fails with "mount error: could not resolve address for server: Unknown error". The contents of /etc/hosts include (among other lines):

192.168.1.28 NAS-5h1-15
192.168.1.29 NAS-5h2-20

Fixes reported to work:

- Add the machine name, and a fully qualified alias, to the hosts file, then mount by that hostname in fstab: 192.168.1.28 machine-name machine-name.domainname
- On the machines you're trying to connect to, set the following in the smb.conf file and restart samba: netbios name = MachineName
- If a share requires a login, smbnetfs can supply credentials: chmod 600 ~/.smb/* and edit the file ~/.smb/smbnetfs.auth to insert the credentials.
Using dataclasses and dictionaries to solve the "box problem"

The project is to sort items - using a particular algorithm - into boxes. I am having trouble placing items as values into a given box using dictionaries. My primary problem is that I can't figure out how to retrieve one value of a key in a dictionary when it has multiple values. My secondary issue is that I fear I am overcomplicating my program and creating functions that are unnecessary. I am having trouble with this function in particular:

def roomiest(boxList, itemList, boxDict, itemDict):
    """
    For each item, find the box with the greatest remaining allowed weight
    that can support the item, and place the item in that box
    :param boxList: The sorted list of boxes (large to small)
    :param itemList: The sorted list of items (large to small)
    :param boxDict: Dict w/ boxes
    :param itemDict: Dict w/ items
    :return: Whether the boxes were able to fit all items (1); items in each box
             with their individual weights (2); box name with max weight (3);
             items with their weights that were left behind (4)
    """
    sortedItems = sortLargest2Small(itemList)
    sortedBoxes = sortLargest2Small(boxList)
    for item in sortedItems:
        for box in sortedBoxes:
            itemWeight = keywordSearchDict(item, itemDict)
            boxRemWeight = keywordSearchDict(box, boxDict)
            if itemWeight <= boxRemWeight:
                itemDict[box] =  # Need to add item to box with its weight, and
                                 # modify the remaining weight the box can hold

For context, here is my code. This is an example of what the text file would look like: pastebin

I'm confused, I see you're passing in boxDict and boxList (same with item...), but your comment seems to indicate that boxList is contained within boxDict?

@NotAnAmbiTurner That is correct, boxList is the sorted version of boxDict but without the box names

I think you are looking for dictionary .get() and .update().
Hard to tell what the input to your function looks like, but here are some notes:

itemDictBox = {}
for item in sortedItems:
    for box in sortedBoxes:
        itemWeight = itemDict.get(item, 0)
        boxRemWeight = boxDict.get(box, 0)
        if itemWeight <= boxRemWeight:
            itemDictBox.update({box: item})  # .update() takes a mapping

You do have a lot of extra code that you can simplify. If you are storing values in text, you can use ','.join(some_list) to produce a comma-delimited string, and some_string.split(',') to convert it back to a list. The functions would also perform better without the nested for loops; it seems like you could loop through just the items or the boxes, or have a dictionary with weights as keys. If itemList and boxList are already sorted, you don't need to sort them again. Also, since (as I understand it) boxList is just a list of the values of boxDict, and itemList is just a list of the values of itemDict, you don't need one or the other of (boxList and itemList) or (boxDict and itemDict). Dictionaries in Python preserve insertion order (since 3.7) but aren't designed to be sorted in place, and they aren't particularly suited to reverse lookups (retrieving a key from a value). I would also probably track "remaining weight" as the variable for each box, instead of the accumulated weight. Really the best way to do this would probably be to build a class Box(), because the boxes are supposed to be named and need to keep track of the items they contain. The code below will give you (1) and (4) of your objectives. For (2) and (3) you could create a custom Box() class; for that, you might define custom __lt__, etc. methods. You might also be able to use a dictionary if you look into the sorted() function; the problem with that is you would then have to look up the dictionary key associated with the smallest value.
boxWeight = 50
numBoxes = 5
boxList = [boxWeight for _ in range(numBoxes)]
itemList = [1, 10, 5, 25, 8, 74]
print("boxList: {}".format(boxList))

remainingItems = []
itemsRemain = False
for item in sorted(itemList):
    boxList = sorted(boxList, reverse=True)
    if boxList[0] >= item:  # roomiest box can still take this item
        boxList[0] -= item
    else:
        print("item: {}".format(item))
        remainingItems.append(item)
        print("itemList: {}".format(itemList))
        itemList.remove(item)

remainingWeight = (boxWeight * numBoxes) - sum(boxList)
print("remainingWeight: {}".format(remainingWeight))
if remainingItems:
    itemsRemain = True
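To illustrate the Box() suggestion, here is a minimal sketch using a dataclass. The greatest-remaining-capacity placement follows the question's description of "roomiest"; the class layout, method names, and sample data are assumptions, not the asker's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    name: str
    max_weight: int
    items: dict = field(default_factory=dict)  # item name -> weight

    @property
    def remaining(self) -> int:
        """Weight the box can still hold."""
        return self.max_weight - sum(self.items.values())

    def try_add(self, name: str, weight: int) -> bool:
        """Add the item if the box can still take its weight."""
        if weight <= self.remaining:
            self.items[name] = weight
            return True
        return False

def roomiest(boxes, items):
    """Place each item (heaviest first) into the box with the most room left.
    Returns the items that fit nowhere, with their weights."""
    left_behind = {}
    for name, weight in sorted(items.items(), key=lambda kv: kv[1], reverse=True):
        target = max(boxes, key=lambda b: b.remaining)
        if not target.try_add(name, weight):
            left_behind[name] = weight
    return left_behind

boxes = [Box("box1", 10), Box("box2", 5)]
print(roomiest(boxes, {"lamp": 8, "book": 5, "mug": 4}))
```

Tracking remaining weight on the box itself removes the need for the parallel boxList/boxDict structures, and the items dict on each box directly answers objective (2).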
Summary of the solution

- There is no need to split the DKIM record.
- Register a free domain with cloudns.net and add the long 2048-bit DKIM TXT record there.
- On your current DNS provider, add a CNAME record (e.g. default._domainkey) pointing to the record in the free DNS zone on cloudns.net.

How this problem came about

I came across a problem with cPanel:
- cPanel only generates 2048-bit DKIM keys
- I do not want to host my DNS with cPanel
- My current DNS provider only allows a 255-character TXT record

Originally I found a possible solution about DNS record splitting. However, upon further reading I realised this does not work around a 255-character limit on the whole record: splitting only means that a long TXT record is parsed as a single value if it is written as multiple quoted strings, "value-1" "value-2". Then I found a YouTube video with an out-of-the-box solution that worked!

Things to note

It is important to note that cloudns.net may revoke their free DNS offer in the future, which would make that record invalid. This means that your emails may not pass DKIM and may be marked as spam. There are other free DNS options that you could use to do something similar.

How do you set up DKIM with Microsoft 365?

Setting up DKIM with Microsoft 365 is more complex than with other providers, and their guide is not easy to understand; however, I did find this guide. You can log into admin.microsoft.com to complete the steps. I would like to add the following from my experience.

- If the DKIM option is not available, it might be that it is a new Microsoft 365 service and it hasn't been set up yet. Try again a few days later.
- Microsoft 365 won't let you copy and paste the CNAME records, which is very frustrating. Do not try to type them out, because there is a chance you will make a mistake. Instead, if you are using the Chrome browser, try to inspect the element and search for the term, then copy and paste it into Notepad.
- The CNAME record is "selector1._domainkey"; the value is the item displayed on the Microsoft 365 page.

Can you use multiple DKIM records?

Yes. Let's say you use Microsoft 365 (which uses its own DKIM), a website that uses cPanel, and maybe a newsletter provider. Each of their servers may request its own DKIM records.

Can you use SPF and DKIM together?

Yes, as can be seen in Gmail when you view "show original".

Can you use multiple SPF records?

No. You can combine them in a single record.
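Putting the workaround together, the pair of records might look like this in zone-file notation (all names and the key value here are hypothetical placeholders):

```
; On the DNS provider with the 255-character TXT limit:
default._domainkey.example.com.      IN  CNAME  dkim._domainkey.myzone.cloudns.net.

; On the free cloudns.net zone, which accepts the long value:
dkim._domainkey.myzone.cloudns.net.  IN  TXT    "v=DKIM1; k=rsa; p=MIIBIjANBgkq...rest-of-2048-bit-key...IDAQAB"
```

A verifier looking up default._domainkey.example.com follows the CNAME and reads the TXT record from the free zone, so the long key never has to live on the limited provider.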
The Gateway page is the first page that users see when accessing Blackboard Learn, unless the administrator opted to bypass it. The Gateway page includes the following functions:

- Login: Directs the user to the Login page.
- Course Catalog: Directs the user to the Course Catalog. This function can be removed by the administrator.
- Create Account: Directs the user to the Create Account page. This function can be turned off by administrators, and should be turned off unless you want anyone with access to the URL to create accounts.

The Gateway page also includes a default welcome message and image from Blackboard. If administrators opt to bypass the Gateway page, site visitors are taken directly to the portal as guests. Portal Direct Entry is available only if your institution has access to community engagement features. Users can then log in by selecting Login in the header frame. The Gateway Options page also includes an option for changing the URL that handles user requests for lost passwords. Several of the functions on the Gateway page and the Login page can be customized.

The Gateway page can include buttons for creating an account and for allowing anyone to browse the catalog. For security and data integrity reasons, it is not recommended that users be allowed to create accounts. On the Administrator Panel, in the Security section, select Gateway Options. The following options are available. To learn more about user account lock options, see Account Lock.

- Start Page for Users: This option is available only if your institution licenses community engagement. Select Tab Page to skip the Gateway page entirely and send site visitors directly to the portal as guests. Users can log in using the button in the header frame. If this setting is changed, restart the server to avoid experiencing errors. The Login module may only be turned on if Portal Direct Entry is on.
- Link to Course Catalog: Select Enable to display a button that links to the Course Catalog on the Gateway page. Provide a link in the External Catalog URL field.
- Link to Account Creation: Select Enable to display a button on the Gateway page that lets visitors create a user account. The user account is created with an institution role of Student and an admin user role of None.
- Lost Password Functions:
  - Request Forgotten Password: Select Enable to turn on the link that allows users to request the password for their account.
  - URL for Forgotten Password: Provide the URL for the link on the login page that allows users to request that their password be mailed to the email address stored in their user information. The default URL is /webapps/blackboard/password.
- Guest Access Defaults:
  - Allow Guest Access to the System: Select Enable to let users who do not have an account (non-authenticated users) access areas of the system, such as the portal. Select Disable and users without an account will not have any access to the system.
  - Allow Guest Access to Courses: Select Enable to let users who do not have user accounts access courses on the system. If Disable is selected, instructors will not be able to make areas in their courses available to guests.
  - Allow Guest Access to Organizations: Select Enable to let users who do not have user accounts access organizations. If Disable is selected, leaders will not be able to make areas in their organizations available to guests.

The welcome message and image can be customized by replacing an HTML fragment in the file system. Each virtual installation can have its own customized welcome message and image. Follow the steps below to replace the HTML fragment that generates the welcome message and image with one customized for the institution.

- Open the URL for the virtual installation and verify that the Gateway page appears.
- Within the Blackboard file system, change directories to /content/vi/vi_ID/branding, where vi_ID is the name of the virtual installation.
- Save a copy of the gateway.bb file so that it can be restored to the default.
- Edit or replace the gateway.bb file with another HTML fragment.
- Open the URL for the virtual installation and verify that the welcome message and image appear as desired.
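Since gateway.bb is just an HTML fragment, a replacement can be as simple as the following (the text and image path are placeholders for your own branding; the structure is illustrative, not a Blackboard-mandated format):

```html
<!-- Custom welcome fragment for the Gateway page (illustrative only) -->
<div>
  <img src="/images/custom/welcome_banner.gif" alt="Institution banner">
  <h2>Welcome to Blackboard Learn</h2>
  <p>Please log in, or browse the course catalog as a guest.</p>
</div>
```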
Explain creating web part UI controls in CreateChildControls vs declaratively in the .ascx file

I am fairly new to SharePoint development, but I come from an ASP.NET background. I am trying to get a clear explanation of the rationale for creating the UI controls in CreateChildControls vs creating them declaratively in the .ascx file. I have found several blog posts and articles that say that you should always use CreateChildControls, but they do not explain why. I have several web parts that I have created using the declarative style that seem to work fine. I am worried that, because of my inexperience with SharePoint, I am unaware of best practices or limitations that I will run into by developing this way.

Don't worry. The insistence on pure CreateChildControls is largely an old habit of SharePoint developers, rooted in the way web parts had to be developed in the past. If you prefer to build your web parts declaratively for the increased productivity, then just use that method. The only reason to prefer CreateChildControls is if you need the extra flexibility of adding child controls dynamically. Here is some of the history of web parts to explain where a lot of SharePoint developers got their habits from.

SharePoint 2003
This was the first version of SharePoint built on top of ASP.NET, but it used its own page parser, so .ascx files weren't supported. That meant CreateChildControls was the only possibility.

Later versions of SharePoint
These are "pure" ASP.NET applications which fully support .ascx files, but the method of creating web parts was limited by what the various Visual Studio versions supported.

Visual Studio 2008
No built-in support for Visual Web Parts, so most developers continued to use CreateChildControls as this was what they got out of the box.

Visual Studio 2010
Introduced Visual Web Parts, but what it created was a web part that was a thin wrapper around a UserControl, which it loaded from the file system using Page.LoadControl. This had two drawbacks:

- It could not be used in Sandboxed solutions (but do we really care?)
- It was cumbersome to use web part properties and connections, as these had to be implemented in the web part and then somehow delegated forward to the UserControl, which had the main logic.

The last part meant that most SharePoint developers continued to use CreateChildControls.

Visual Studio 2010 SharePoint Power Tools, Visual Studio 2012 and later
Introduced a new paradigm for the Visual Web Part (depending on the version it either has "Sandboxed" or lacks "Farm Solution Only" in the name), where Visual Studio generates a partial class based on the .ascx file which is "merged" with your class. This generates a pure web part which is functionally no different from one created using CreateChildControls.
A Manual Process in Oracle BPM Suite 12c is a process that the user starts through a Human Task. In this post we will create a BPM process to add employees through a Human Task and generate an XML file through the File Adapter. Download the sample application: BpmHelloWorldApp.zip.

Create a new BPM Application, name it BpmHelloWorldApp and click Finish. Right-click the project name and choose New > BPMN 2.0 Process. In the BPMN 2.0 Process Wizard, choose Manual Process, name it AddEmployee and click Finish. Our BPM application is created.

Before we start to model our process, we need to create a new user. Start the WebLogic Server, go to the Console and create the hrofficer user.

Create the canonical model. Go back to JDeveloper, create a new file inside the Schema folder and name it Employee.xsd. Copy the following code into the file:

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="https://waslleysouza.com.br/ns/employee"
            elementFormDefault="qualified">
  <xsd:element name="EmployeeRequest">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="FirstName" type="xsd:string"/>
        <xsd:element name="LastName" type="xsd:string"/>
        <xsd:element name="HireDate" type="xsd:date"/>
        <xsd:element name="Email" type="xsd:string"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>

In the Applications window, select the AddEmployee file. In the Structure window, right-click Process Data Objects and choose New. Name the new Data Object employee and choose browse in the Type field. In the Browse Types window, click the Business Object button. In the Create Business Object window, set the Name to Employee and the Destination Module to Data. Mark the "Based on External Schema" option and click the Schema Browser button. In the Type Chooser window, expand the Employee.xsd file and choose EmployeeRequest. Click the Yes button to create the Data Module. In the Browse Types window, choose Employee and click OK twice.
Open the Organization file. Edit the "Role" role and change its name to HrOfficer. In the Members section, click the browse button. In the Identity Lookup window, choose your Application Server and click the Lookup button. Select the hrofficer user and click the Select button to associate it with the HrOfficer role. Click the OK button to confirm the changes.

Create a Human Task to add employees. Go to the AddEmployee file and double-click the User Task component. In the Basic tab, name it AddEmployee. In the Implementation tab, click the Add button near the Human Task field to create a new Human Task, and name it AddEmployeeHT. Click the Add button near the Parameters section to open the Browse Data Objects window. Expand the Process and Data Objects nodes and drag and drop the employee node into the Parameters section. Mark the Editable option and click OK. Click the Data Associations link, navigate to the Output tab, connect the employeeRequest and employee nodes and click OK twice. Open the AddEmployeeHT.task file. Click the Form button and choose Auto-Generate Task Form. Name the project AddEmployeeUI and click OK. Now we have a Human Task to add employees!

Let's add a File Adapter to save the employee's information as an XML file. Open the BpmProject file and add a File Adapter inside the External Reference section. Name it SaveEmployee and click Next three times. Choose Write File as the Operation and click Next. Specify the Directory for Outgoing Files and the File Naming Convention, and click Next. Click the Browse for schema file button, choose the EmployeeRequest node and click Next. In the AddEmployee file, add a Service Activity between the AddEmployee and End activities. In the Basic tab, name it SaveEmployee. In the Implementation tab, choose Service Call as the Type and SaveEmployee as the Service. Click the Data Associations link, connect the employee and employeeRequest nodes and click OK. Click OK again.
To deploy your BPM project, right-click the project name and choose Deploy > BpmProject. Don't forget to check all Task Flow projects. If your application was deployed successfully, go to the Business Process Workspace (http://<HOST>:<PORT>/bpm/workspace). Log in as the hrofficer user and add a new employee. Go to the Directory for Outgoing Files you specified and open the XML file that was created.
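Based on the Employee.xsd defined earlier, the XML file written by the File Adapter should look something like this (the field values are examples):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<EmployeeRequest xmlns="https://waslleysouza.com.br/ns/employee">
  <FirstName>John</FirstName>
  <LastName>Doe</LastName>
  <HireDate>2015-06-01</HireDate>
  <Email>john.doe@example.com</Email>
</EmployeeRequest>
```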
During the early planning stages for Drupal 8, one of the most anticipated changes was the new Configuration Management Initiative. Today it is safe to say that the hype was real, and that the new configuration system is perhaps the single most important change in the Drupal 8 universe. And yet, more than two years after the release of Drupal 8, the configuration system remains something of a mystery to both experienced Drupal 7 developers and new Drupal 8 developers. Best practices are still being actively developed and discussed, and hard problems are still being solved both in the contributed module space and in Drupal core proper. This session aims to unravel the mysteries of the Drupal 8 configuration system, and to outline an evolving set of best practices explained in human terms. Whether you are a solo consultant or part of a larger team with distributed developers, this session will leave you with a fundamental understanding of why configuration management exists, how it works, why it matters, and how to take advantage of this impressive system no matter what kind of Drupal site you are managing.

How did we get here? Learn about the history of configuration in Drupal, the rise and fall of the Features module, the promise of a better way, and the ultimate decision to develop a new configuration system for Drupal 8.

Why is it important? We provide real-world examples that explain why the configuration system is so important, how it can cut your development time by as much as 30% or more, and how it has provided a virtually fail-safe solution for deploying Drupal sites, whether you are a one-person show or a large distributed team deploying mission-critical websites.

How does it work? We will cover the basic fundamentals of configuration management, the difference between "active" configuration and configuration in "code", and why it matters.
We will cover the basics of the Configuration API, the configuration UI, and the critical Drush commands you will need to effectively manage configuration.

Okay, but how does it REALLY work? The truth is that configuration management is complicated, and even more so once you start to encounter the difficulties surrounding projects that are complicated by clients, content editors, and multiple developers who are all actively trying to work on a site. This session will do a deep dive into these issues and provide solutions such as:

- Managing configuration in a team environment -- learn how to do fail-safe deploys no matter who is working on a site
- Configuration Split module -- learn how to use different configuration schemes depending on your environment
- Configuration exclusions -- learn how clients can make configuration changes on things like Webforms without losing work
- Learn more about the config ecosystem and other contributed solutions to common problems
- Gotchas -- learn what to do when things go crazy and the key_value_expire table takes over your configuration
- How to recover when things go bad -- avoid configuration loss and overridden work

To infinity, and beyond! Finally, we will take a look at the future of configuration management in Drupal, and some of the more powerful ways it can help you:

- What is happening in core? (e.g. configuration split, configuration installs)
- DevOps and continuous integration for the win
- Better QA and fail-safe deploys every time
- Why the configuration system matters, even for a single developer

By the end of this session you should have a fundamental understanding of the Drupal 8 configuration system, a list of real-world best practices and strategies, and a plan for how you can leverage the configuration system to take your Drupal 8 builds to ever greater heights.
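As a concrete illustration of the "active" vs. "code" distinction, a typical Drupal 8 site keeps its exported configuration in a version-controlled sync directory and moves configuration with Drush. The directory path below is a common convention rather than a requirement, and the exact setting name depends on your Drupal version:

```php
<?php
// settings.php: point the sync directory outside the web root
// (Drupal 8.8+; older releases used $config_directories[CONFIG_SYNC_DIRECTORY]).
$settings['config_sync_directory'] = '../config/sync';

// Typical deployment workflow (shell commands, shown here as comments):
//   drush config:export   # write active configuration out to ../config/sync
//   drush config:import   # load ../config/sync into the active configuration
```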
Auto Layout not properly handling iPhone X notch

I have added a page indicator (a UIPageControl) to the top of my application using Swift. It has consistently worked on my personal iPhone, but on the iPhone X simulator I noticed that it disappears behind the notch, presumably because I did not constrain it to the safe area, or the safe area is not properly configured yet. Here is the comparison:

It seems like an easy question, yet I have not found any proper guidance on how to handle this: the main suggestion is to adjust the safeAreaInsets, but I do not understand how to apply this with Auto Layout. I have tried adding a topAnchor constraint to the page control, but would it even be possible to adjust this with basic arithmetic?

pageControl.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor).isActive = true

use this solution and then you check the hasNotch https://stackoverflow.com/a/68289931/7110147

I have found my problem, which was actually something I had simply overlooked: I forgot to add pageControl.translatesAutoresizingMaskIntoConstraints = false to the pageControl before adding it as a subview. With this, I was able to assign the following constraints to properly position my page control where I wanted it to be:

let safeViewMargins = self.view.safeAreaLayoutGuide
pageControl.topAnchor.constraint(equalTo: safeViewMargins.topAnchor).isActive = true
pageControl.leadingAnchor.constraint(equalTo: safeViewMargins.leadingAnchor).isActive = true
pageControl.trailingAnchor.constraint(equalTo: safeViewMargins.trailingAnchor).isActive = true

This then ran perfectly on all devices.

There is no need to "adjust" anything. The whole point of the safe area is that its top is below the status bar on non-X iPhones and below the notch on an iPhone X. That is what the safe area is for. So, just pin the top of the page control to the top of the safe area.
Here's how it looks on an iPhone 5s simulator: And here's how it looks on an iPhone X simulator:

That does make sense, but how is it possible to "pin" it to the safe area? Is there any way, when defining the y-position, to assign it to the highest possible spot within the safe area?

Yes, just set the top of the page control equal to the top of the safe area (constant of zero). It sounds like you may not know anything about Auto Layout. You'd better start learning about it, because it's crucial if you want your interface to work on all sizes of phone.

pageControl = UIPageControl(frame: CGRect(x: 0, y: 0, width: view.bounds.width, height: 0)) is the code I applied to define the pageControl. The problem with this, however, is that it positions the pageControl for the iPhone X right onto the border of the "ears", just in the center, completely ignoring the notch. Assigning 50 to the y-value would place it outside of this non-visible area, but would make it not fit visually on all other devices.

Right, because that's not what I said to do. UIPageControl(frame:) is not Auto Layout. It is not "pinning" anything to anything. Find out about Auto Layout. Try googling; it ain't much but it's a start.
External monitor causes very high CPU usage

I have an external LG 27UD69P-W monitor connected (via HDMI) to my late 2013 MacBook Pro, which causes the CPU usage of the Mac to go extremely high (kernel_task > 400%, sometimes much higher), with the fan running very fast. (The computer is so sluggish that often even the cursor does not move smoothly.) Some things I've noted:

SMC reset doesn't help. Safe mode doesn't help. GPU usage is very low (almost zero) when the monitor is connected. Is this to be expected? Occasionally I can connect the monitor and still have a usable computer (< 50% CPU usage) for 5 or 10 minutes, but that's it. The monitor works fine with another MacBook.

What could be the issue here? A troublesome kext perhaps? Any suggestions?

Update

Here's my system log for when I plug in the screen and a couple of minutes after, while the computer is running sluggishly with high CPU usage. Notable are the following sorts of events:

kernel Currently unsupported feature requested
kernel DisplayPipe Capabilities are not supported on offline Fbs
IOAudioStream IOAudioEngineUserClient streamFormat
coreaudiod* OSCMultiMonitor tccd coreduetd*
com.apple.AmbientDisplayAgent Invalid display 0x1b5671a5

Here's a "normal" system log for a few minutes while the monitor is not plugged in.

is that a new problem? did it work before

@Buscar웃 Same issue since the day I got it...

take a look in the Console log to see what those two are doing; also just try scaling down the resolution for testing purposes

There have been some unconfirmed reports that the NVIDIA drivers that came with High Sierra are not operating correctly. The suggestion is to get updated drivers from NVIDIA directly. Some have suggested changing the resolution back and forth to fix the problem. It also looks like your system is not doing graphics switching. Check in System Preferences that Automatic graphics switching is turned on.
Also it could be very useful to us if you copy/paste some 30-50 lines from your Console at the time when you plug in the external monitor, so we can see what is going on.

Thanks for your reply and suggestions. My NVIDIA card is a GeForce GT 750M, and I don't see an option for macOS on the driver download page. Windows, Linux, FreeBSD, but no macOS! https://www.nvidia.co.uk/Download/index.aspx?lang=en-uk I've tried with graphics switching on and off; same problem. Here's the log for around the time I plug in the monitor and for a few minutes after. (Interestingly, this time it wasn't so bad... I got video to play reasonably, albeit a bit choppy. GPU usage was very low again, but kernel_task wasn't too high.) Here's a few hundred lines of logs after I plug it in: https://gist.github.com/alexreg/b03863cb375b924fe211600aa25fe1fb

check this out https://www.nvidia.com/download/driverResults.aspx/125379/en-us

Thanks, do I need to restart after? Ah... "Mac OS X version 10.13.6 (17G65) is not supported with this package. Please see NVIDIA's website for further driver information."

OK, your CPU (kernel) gets flooded with IOAudio problems

@Noldorin sorry, they only made it up to 10.13.1 :(

Oh I see. Is that from the external monitor, or something else? Maybe the "Boom 3D" software I have installed? Also I found https://www.nvidia.com/download/driverResults.aspx/130460/en-us, which is good.

you also have a website that consumes a lot of CPU, close it for now.

Yeah, probably the website playing the video while I was testing. Okay, I followed https://devtalk.nvidia.com/default/topic/1025945/mac-cuda-driver-fully-compatible-with-macos-high-sierra-10-13-error-quot-update-required-quot-solved-/ and installed the latest "web" & CUDA drivers https://www.nvidia.com/download/driverResults.aspx/136062/en-us and https://www.nvidia.com/object/macosx-cuda-396.148-driver.html. It's not quite as bad as it was before, but still not great.
Do you think the IOAudio thing could still be the problem? Any other ideas?

yes, the IOAudio keeps your CPU busy

How can I find out where it's coming from though?

Let's do that as a separate question; I'm getting the "Please avoid extended discussions in comments" notice.

Fair enough. I edited the question, so maybe you could just add another answer now? :-)

In reply to your updated and extended question: there is a problem with your Boom 3D

error 18:09:59.938200 +0100 Boom 3D AUBase.cpp:804:DispatchSetProperty: ca_require: inDataSize == sizeof(UInt32) InvalidPropertyValue

That probably leads to the continuous and repeating coreaudiod and IOAudio messages. I do not know how to fix it; see Boom 3D support.

Thanks for the new answer. Unfortunately even uninstalling Boom 3D completely doesn't seem to help... could anything else be causing the issue? Any thoughts? :-)

@Noldorin do you still have the Boom showing in the log? and turn off all your sharing for now.

ok, so you have Alfred 3 installed, and also have some USB audio connected?

Yes to Alfred 3, no to USB audio, as I understand. Here's my updated system log, with Boom 3D uninstalled and the NVIDIA web & CUDA drivers installed: https://gist.github.com/alexreg/f8a65a0e6ce84670b4ff5fb3daeda296

so you have Boom 3D, Alfred, Bartender and who knows what else! Just start the system without all that crap (Safe mode). Start or restart your Mac, then immediately press and hold the Shift key... and report whether the monitor is working now.

Yeah, tried that, no luck sadly...

sadly I have to give up at this point... maybe someone else can help you

No worries. Thanks for your help anyway! Can't say you didn't try. I have a suspicion it may be a hardware thing in fact...

it could be, but from what I see you have a lot of crap running, overloading the CPU and RAM. If I might suggest, turn off all web browsers and all 3rd-party stuff, so OS X has a chance.

Given that it happened in safe mode with nothing open, that's not likely.
Also, my MBP normally runs smoothly with everything open! That was for me, to be able to read the Console report.
STACK_EXCHANGE
Optimizing php/mysql translation lookup with huge database and hash indexes I'm currently using a utf8 MySQL database. It checks if a translation is already in the database and if not, it does a translation and stores it in the database. SELECT * FROM `translations` WHERE `input_text`=? AND `input_lang`=? AND `output_lang`=?; (The other field is "output_text".) For a basic database, it would first compare, letter by letter, the input text with the "input_text" "TEXT" field. As long as the characters match, it would keep comparing them. If they stop matching, it would go onto the next row. I don't know how databases work at a low level but I would assume that for a basic database, it would search at least one character from every row in the database before it decides that the input text isn't in the database. Ideally the input text would be converted to a hash code (e.g. using sha1) and each "input_text" would also be a hash. Then if the database is sorted properly it could rapidly find all of the rows that match the hash and then check the actual text. If there are no matching hashes then it would return no results even though each row wasn't manually checked. Is there a type of MySQL storage engine that can do something like this or is there some additional PHP that can optimize things? Should "input_text" be set to some kind of "index"? (PRIMARY/UNIQUE/INDEX/FULLTEXT) Is there an alternative type of database that is compatible with PHP that is far superior to MySQL? edit: This talks about B-Tree vs Hash indexes for MySQL: http://dev.mysql.com/doc/refman/5.5/en/index-btree-hash.html None of the limitations for hash indexes are a problem for me. It also says They are used only for equality comparisons that use the = or <=> operators (but are very fast) ["very" was italicized by them] NEW QUESTION: How do I set up "input_text" TEXT to be a hash index? BTW multiple rows contain the same "input_text"... is that alright for a hash index?
http://dev.mysql.com/doc/refman/5.5/en/column-indexes.html Says "The MEMORY storage engine uses HASH indexes by default" - does that mean I've just got to change the storage engine and set the column index to INDEX? You may be interested in this http://stackoverflow.com/questions/9820801/how-does-mysql-fulltext-search-work BTW I'm using phpMyAdmin. Also I'm not searching for individual words within a sentence. I'm searching whether an input string exactly matches one in the database. (If there is a match it then checks for matching input and output languages.) Have you considered adding a LIMIT 1 at the end? I've found that to help in my own usage, as it will speed up the query when the entire goal is to get a single match. A normal INDEX clause should be enough (be sure to index all your fields; it'll be big on disk, but faster). FULLTEXT indexes are good when you're using LIKE clauses ;-) Anyway, for that kind of lookup, you should use a NoSQL store like Redis; it's blazingly fast, has an in-memory store and also does data persistence through snapshots. There is an extension for PHP here: https://github.com/nicolasff/phpredis And you'll have redis keys in the following form: YOUR_PROJECT:INPUT_LANG:WORD:OUTPUT_LANG for better data management; just replace each value with your values and you're good to go ;) An index will speed up the lookups a lot. By default indexes in InnoDB and MyISAM use search trees (B-trees). There is a limitation on the length of the index key, so you will have to index only the first ~700 bytes of text. CREATE INDEX txt_lookup ON translations (input_lang, output_lang, input_text(255)); This will create an index on input_lang, output_lang and the first 255 characters of input_text.
When you select with your example query, MySQL will use the index to quickly find the rows with the appropriate languages and the same starting 255 characters, and then it will do the slow string compare with the full length of the column on the small set of rows it got from the index. I might be misunderstanding, but do you mean that "input_text" would still be a TEXT with no size limit? It will have a size limit (every datatype has one) but it can be much higher (4GB or something...). The index uses only the first xx chars from the field value. You should also be careful with the collation on that table. If you use a case-insensitive collation (which is sometimes the default) "Foo bar" will get the same translation as "foo bar".
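The hash-column idea from the question can also be built by hand on any engine: store a fixed-width digest of input_text in its own indexed column, then re-check the full text to guard against collisions (duplicate input_text rows are fine, since the hash is not unique). A minimal sketch, using Python and SQLite purely for illustration; the table layout matches the question, but the function names are my own:

```python
import hashlib
import sqlite3

def text_hash(s: str) -> str:
    """SHA-1 of the input text, stored as a fixed-width hex string."""
    return hashlib.sha1(s.encode("utf-8")).hexdigest()

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE translations (
        input_hash  CHAR(40),
        input_text  TEXT,
        input_lang  TEXT,
        output_lang TEXT,
        output_text TEXT
    )
""")
# A plain (non-unique) index on the short hash column stands in for a hash
# index; collisions and duplicates are handled by re-checking input_text below.
conn.execute(
    "CREATE INDEX idx_hash ON translations (input_hash, input_lang, output_lang)")

def store(text, in_lang, out_lang, translation):
    conn.execute("INSERT INTO translations VALUES (?, ?, ?, ?, ?)",
                 (text_hash(text), text, in_lang, out_lang, translation))

def lookup(text, in_lang, out_lang):
    # The index narrows candidates by hash; the full-text comparison
    # guards against the (rare) hash collision.
    row = conn.execute(
        "SELECT output_text FROM translations "
        "WHERE input_hash=? AND input_lang=? AND output_lang=? AND input_text=?",
        (text_hash(text), in_lang, out_lang, text)).fetchone()
    return row[0] if row else None

store("hello", "en", "de", "hallo")
print(lookup("hello", "en", "de"))    # hallo
print(lookup("missing", "en", "de"))  # None
```

On MySQL the equivalent would be a CHAR(40) column with an ordinary INDEX; the prefix-index approach from the answer above avoids the extra column at the cost of a longer index key.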
STACK_EXCHANGE
One of the interesting features of MbUnit 2.4 is the Rollback attribute. I've spoken about this in passing in some of my previous posts, but I thought I would dive a little deeper into what is going on. The Rollback attribute can be added to any test method; when the test has finished (either passed or failed), any changes made to the database are rolled back – even if it has transactions inside. This means that your tests are more isolated. Rollback uses Enterprise Services and COM+ and is based on the .Net 1.1 implementation. Rollback2 uses TransactionScope from System.Transactions, which was included in .Net 2.0. If your project is using .Net 2.0, Rollback2 would be the preferred attribute to use. An example of how this could be used is:

[Test, Rollback]
public void GetCategoryByName_NameOfValidCategory_ReturnCategoryObject()
{
    Category c = ProductController.GetCategory("Microsoft Software");
    Assert.AreEqual("Microsoft Software", c.Title);
    Assert.AreEqual("All the latest Microsoft releases.", c.Description);
}

One question I was asked at DDD was if Rollback supports other databases or if it's just SQL Server. The answer is, I still haven't been able to find a definitive answer. In theory it should work, but I have read some forum comments about errors when using TransactionScope. If I can get everything set up, I'll write another post with my findings. While looking around about this feature I came across a RestoreDatabase attribute. Had to have a play! Turns out, it allows a backup to be restored before the test is executed (if you didn't get it from the name). It gets all the information from the SqlRestoreInfo attribute at the test fixture level and uses that to restore the database before the test is executed.
[SqlRestoreInfo("Data Source=BIGBLUE;Initial Catalog=Master;Integrated Security=True", "Northwind", @"E:\Northwind.bak")]
public class Class1
{
    string connectionString = "Data Source=BIGBLUE;Initial Catalog=Northwind;Integrated Security=True";

    public void GetOrders()
    {
        //string queryString = "DELETE FROM dbo.[Order Details];"; // If we execute this in the first run, the table is empty.
        string queryString = "SELECT * FROM dbo.[Order Details];";  // If we execute this in the second run, the data is restored.
        string queryString2 = "SELECT * FROM dbo.[Order Details];";
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlCommand command = new SqlCommand(queryString, connection);
            SqlCommand command2 = new SqlCommand(queryString2, connection);
            SqlDataReader reader = command.ExecuteReader();
            // Always call Close when done reading.
        }
    }
}

It's a neat idea and good for integration testing, but I wouldn't recommend it for unit testing, as the additional time to restore the database would soon mount up. Expect improvements to testing with databases in MbUnit v3.
OPCFW_CODE
How can I find people to walk/hike with in the USA when travelling alone? In the UK, walkers' rights organisation The Ramblers runs hundreds of organised walks and hikes all over the country every week that anyone can join for free (for their first couple of tries, then they need to become a paid member, which anyone can do). Is there an equivalent organisation to The Ramblers in the USA? Specifically, I'm considering a trip to the Esalen Institute (in Big Sur, California) for a retreat, and would like to do a couple of day hikes in the surrounding area afterwards. It'd be much more fun with other people, and an organised group seems a good backup in case there's no-one at Esalen who fancies it. I don't know if there is a national organization. There is probably one. But if I were you, I'd check out http://meetup.com/ to look for local groups and their calendars. I'm sure you'll find something there. Also, you might as well check out the calendar on http://eventbrite.com (there may be something related on there as well, although it's less likely than meetup). @StephanBranczyk, can you make that into an answer? Looks good enough for upvoting at least. Possibly relevant: https://travel.stackexchange.com/questions/67498/how-to-find-people-to-do-long-hikes The Sierra Club is such an organization. Besides doing environmental advocacy on a nationwide level, they also have local chapters where people organize activities like hikes. As the name suggests, they started in California and are most active there, but there are chapters throughout the country. Around Big Sur, it looks like the Ventana and Santa Lucia chapters would be the ones to look at. Each one has a schedule of upcoming hikes and other "outings"; you'll see they are pretty frequent and range from easy to challenging.
Each outing has contact info for a leader, so when you find a hike that interests you, you'll need to contact the leader to reserve a space (most hikes are limited to a fixed number of hikers so that the group does not become excessively large). The leader will also give you info on where and when the group will meet (typically they meet at some central location and carpool) as well as anything else you need to know. You should mention that you're not a club member, but normally this should not be a problem, especially since you're visiting. If they suggest that you join the club, usually the membership dues are not very expensive. My experience has been that Sierra Club hikes put substantial emphasis on safety and preparedness. You'll want to make sure you have appropriate hiking gear, perhaps following the ten essentials system. As a traveler, you may not be able to carry all of this on a plane with you, so you might have to plan a shopping trip. The trip leader may be able to offer advice. Also, you'll likely be asked to sign a liability waiver at the beginning of the hike, so that you don't sue the Sierra Club if anything bad happens. (Disclosure: I used to be a member of the San Diego chapter until I moved away.) If I were you, I'd check out http://meetup.com to look for local groups and their calendars. I'm sure you'll find something there. Also, you might as well check out the calendar on http://eventbrite.com (there may be something related on there as well, although it's less likely than meetup). I'm reposting my previous comment here as an answer because of your request, but I'd use Nate's answer if I were you. He seems to know precisely about this topic. And hiking in organized groups is not something that I do personally. It is good to have several options, so it is worth posting this as an answer.
STACK_EXCHANGE
This morning I was looking over the old reading list and noticed that A Night in the Lonesome October had some strange characters floating around, which I wanted to delete. I felt that deleting them all using vim’s search/delete seemed like a good plan, but I couldn’t recall a way to copy the line containing the special character from the text buffer over into the command-mode search/replace/delete string. I wanted to do it without using my mouse or shell, and knew there must be a way — ahoy, to the internet! This stackoverflow question had exactly what I needed in one answer, and then a great improvement right after. The initial solution is to yank into a letter-named register using "ayw (" = specify that a register name follows, a = the register to fill, y = yank, w = a word; so all together that command says “yank a word into register a” — you could also use, say, "ayy to yank a line, etc.), and then to use CTRL-R followed by the register name (in this case, a) to paste from the register straight into a command-mode string. The followup that improves this in general use is to avoid named registers and use the implicitly-filled 0 register, so you can just, say, yw as normal to yank a word, and then use CTRL-R 0 to pull from the 0 register which got filled implicitly by the yank command. I also didn’t quite remember what the command-mode string to delete a line matching a pattern was. I tried :%d/search term/ first, but I only got trailing-characters errors as a result, so I turned back to the internet. Google quickly turned up this vim wikia tip that it’s :g to globally search and execute a command on matching lines, with d appended to specify delete as the command to be issued, i.e. :g/search term/d. I haven’t ever really used registers in vim, so this is one more of those thousand steps up the mountain in my ongoing (decade-plus-long) vim user story. These are such arcane tools, but at least when you get them to do the work you want, it does still feel like wizardry.
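Put together, the whole sequence looks like this (with "pattern" standing in for the special character you yanked):

```vim
"ayw            " yank the word under the cursor into register a ("ayy grabs a line)
:g/<C-R>a/d     " in command-line mode, CTRL-R then a pastes register a in place
yw              " a plain yank fills register 0 implicitly...
:g/<C-R>0/d     " ...so CTRL-R 0 works too; :g/pattern/d deletes all matching lines
```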
With luck, I’ll remember these pieces a bit better next time and cut Google out of the loop. 😀 Windows CMD Aliases In a mildly different zone of computer use, I was missing a few of my common command-line aliases in the windows shell, cmd. I tend to use git status, git diff to see the unstaged changes, and git diff --cached to see what’s staged to commit. I have some simple alias commands in most of my unix home dirs, but recalled windows being a bit stubborn here last I looked. Thankfully, the god-emperor of the internet Google quickly served up this fine stackoverflow, which listed a few interesting options: an alias.cmd that gets loaded via the shell shortcut’s launch string; an alias.cmd that gets loaded via a registry key; .bat files in an aliases folder that you add to your PATH. These are all pretty cool ideas! I was only vaguely aware of doskey before reading this post, so that’s a keeper at the least. After some thought, I opted for the registry editing route — here’s why: I use at least 2 shortcuts to open cmd, and I don’t want to have to modify the launch string in both. Moreover, I definitely don’t want the potential to forget to modify one or to end up with different launch strings in different places, so the shortcut-editing answer didn’t fit my needs. The .bat-files-in-a-dir solution is aimed at reducing the computer-load of repeatedly loading all your aliases, and I only have 3 simple ones to add for now. On top of that, a .bat for each of them seems like overkill. Over 1 Billion Yaks Shaved Those are daunting tasks. So instead of working on them, I chose to give the world yet another blogpost with a few links deep into the stackoverflow hivemind. I also noticed that viewing individual blogposts seems to have grown some content-type issues after the migration I did a few weeks back, so I might have to fix that before I do anything else. The procrastination show must go on! update: fixed! Configuring nginx for wordpress just took me a lot of reading and trial and error. 😀
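For reference, my three git shortcuts as doskey macros in an alias.cmd would look something like this (the alias names gs/gd/gdc are my own invention, not from the post; the registry route points cmd's AutoRun value, under HKCU\Software\Microsoft\Command Processor, at this file):

```bat
@echo off
:: doskey macros for the three git shortcuts; $* forwards any extra arguments
doskey gs=git status $*
doskey gd=git diff $*
doskey gdc=git diff --cached $*
```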
OPCFW_CODE
Patronus- Appwrite Hackathon'24 Submission by Vasu1712 Hackathon Submission: Patronus Team Member's GitHub Handle @Vasu1712 Project Title Patronus Project Description Patronus is your personalized web assistant designed to scour the internet for the cheapest flight tickets matching your query, saving you valuable time from hopping between different operator websites. Authentication: Securely log in to access the intuitive calendar-based UI. Setup Your Journey: Select travel dates via our interactive calendar. Choose your airports. Specify the number of passengers. Request Flight Tickets: With your preferences set, simply submit your request. Patronus at Work: Receive the cheapest available flight tickets directly in your mailbox. Inspiration behind Patronus This project was born out of personal frustration - the tedious process of sifting through numerous flight options, wasting hours in search of the best deal. Patronus streamlines this process, saving you time, energy, and money. Tech Stack Frontend Built with: NextJS Key Libraries: react-calendar: For developing an intuitive Calendar UI to book tickets. react-select: To track date changes seamlessly. huggingface-inference: Integrates Mistral LLM for enhanced flight query processing. Amadeus API: Provides comprehensive flight information to the Mistral LLM for accurate responses. tailwindcss: For a sleek and responsive UI design. Backend Powered by: Appwrite Usage of Appwrite Authentication (Auth): Manages user signups and guest accounts with ease. Databases: Securely stores flight ticket queries from the NextJS frontend. Functions: Processes queries through the ML model and integrates with Mailgun for email responses. Messaging: Utilizes Mailgun to send detailed flight ticket information (LLM response) to users. 
Project Repository https://github.com/Vasu1712/Patronus.git Demo Video/Photos/Link A video demo can be viewed here: https://drive.google.com/drive/folders/16b-4YZDSZJGb6B2EklKBjEwLCFsTVARx?usp=sharing or yt: https://youtu.be/L62-S0vXP2g The application can also be reached using this link: https://patronus-vasu1712.vercel.app/ Future Prospects for Patronus One of the most important prospects for Patronus is to build an in-house LLM (or SLM) model and train it on historical flight information using the Amadeus API and other details, so that it can successfully predict the lowest flight ticket date and time and send the booking link to the user's mail inbox. Anything Else You Want To Share With Us? Appwrite has been one of the organisations I have followed most closely since Hacktober '22; I have read several of Dennis Ivy's blogs, followed Appwrite over Twitter, and watched Appwrite's growth, so when I saw they were organising a hackathon this time, it was a no-brainer for me to get involved. This is the first time I have used any BaaS, and I have to admit that Appwrite does make life pretty easy for full-stack devs, that too with features to scale and, as you say, 'build like a team of hundreds'. I really like the product, the idea and the problem Appwrite is solving, and would love to be a part of this organization, be it open-source, full-time or freelance. Hi Aditya! Just wanted to add the youtube link for project demonstration in case the drive link is inaccessible https://youtu.be/L62-S0vXP2g
GITHUB_ARCHIVE
TERMS AND CONDITIONS This is a legally binding agreement, hereinafter referred to as the "License Agreement", between You and Flyer bin inc., the owner/provider (hereinafter referred to as the App Provider) of the Flyerbin application (hereinafter referred to as the Licensed Application). By downloading, accessing, browsing, subscribing or otherwise using this Licensed Application, You agree to be unconditionally bound by and to follow the terms and conditions of this License Agreement, as amended from time to time by the App Provider in its sole discretion without further notice. Updates to this License Agreement will be accessible through the Licensed Application. You are therefore encouraged to review the terms and conditions of this License Agreement regularly. You agree that your continued use of the Licensed Application after any such changes confirms Your acceptance of the revised License Agreement. The Licensed Application has been designed and developed to make your shopping experience easy and fun, with the objective of saving you money and providing you information to make better shopping decisions. The many features include browsing local flyers, creating shopping lists and finding your favourite offers for the most savings every week. We are motivated by our goal to provide you with the most interactive, dynamic shopping experience. License and Scope of Service The Licensed Application provides an aggregated retail promotions platform of products supplied by third party vendors. The Licensed Application provides you with the option to search offers before proceeding to in-store purchases. The Licensed Application is at present purely a search tool, and does not provide online purchase functionality. You are granted a limited, non-exclusive, non-transferable, non-sublicensable license to use the Licensed Application for personal and/or non-commercial use subject to this License Agreement.
You may not rent, lease, lend, sell, publish, broadcast, redistribute or sublicense the Licensed Application. You may not copy, duplicate, decompile, reverse engineer, disassemble, attempt to derive the source code of, modify, or create derivative works of the Licensed Application and/or any updates, or any part thereof. Any unauthorized use may also violate applicable laws, including copyright and trademark laws. Nothing in this License Agreement shall be construed as conferring any license to intellectual property rights, implied or otherwise. The License is revocable at any time without notice, with or without cause. The terms of this License Agreement will govern any upgrades provided by the App Provider that replace and/or supplement the Licensed Application, unless accompanied by a separate license, in which case the terms of that license will govern. Copyrights, Trademark rights and Content You hereby accept and acknowledge that the Licensed Application, the service and all related content are subject to copyright and trademark rights whether explicitly stated or implied. Any reproduction or use of the App Provider's copyright material other than personal use requires the explicit conditional written permission of the App Provider. The Flyerbin logo, code, flyerbin.com and other names of products/services referenced in these terms and conditions are trademarks and registered copyrights of the App Provider. You must not use any of the Copyright or Trademark connected with the App Provider’s product/service in connection with any service or product not offered by us, nor in any manner that may reflect negatively on us, our partners or associates. You represent and warrant that any comments, suggestions or feedback you provide, disclose or submit to the App Provider is entirely without obligation or restriction of any intellectual property, copyright or trademark rights and will not cause any person harm or injury or contain any harmful or unlawful content.
The Licensed Application contains proprietary material, images, links, text, pictures, logos, clips, trade names, domain names, and service names of the App Provider or other third parties. All such content is provided on an "as-is" basis, and you agree the App Provider or App Provider associates are not responsible for the accuracy or completeness of such content. It is our intention to ensure all content is decent, non-violent and appropriate. However, should you encounter content that you deem indecent, violent or inappropriate, please do inform us. You acknowledge and agree that the App Provider or third party content provider shall have no liability to You for any such Application Content. Nothing in this License Agreement shall be interpreted to confer to You any title or ownership rights to the Licensed Application or any copyrights, trademark or intellectual property rights thereof. This clause (Copyrights, Trademark rights and Content) shall survive the termination of this License Agreement. You will be required to register with us in order to make use of certain functionality/features of the Licensed Application. While doing so you must provide us with complete and accurate information, safeguard your user name and password, and permit us to assume that anyone using Your user name is either You or a person authorized by You to act on your behalf. You may cancel your registration at any time by informing us at firstname.lastname@example.org. We reserve the right to discontinue Your registration at our sole and absolute discretion without any notice to You. Use of Data You acknowledge and agree that the App Provider may collect and use technical data and other related information about your device, browser, system and application software, including but not limited to your usage data, shopping behaviour etc., and that the App Provider and/or its partners, associates or third parties may use this data to improve its product, service offerings and offers.
The data used for analytical services or provided to third parties will be anonymous and will not contain any personal information. Termination & Modification Updates to this License Agreement will be made without notice and will be accessible through the Licensed Application. You are therefore encouraged to review the terms and conditions of this License Agreement regularly. The License is effective until terminated by the App Provider or You. On termination, You shall cease to use the Licensed Application and destroy all copies of the Licensed Application or any information downloaded or obtained from the Licensed Application. All notices required to be sent hereunder shall be sent electronically via email to the designated address. All notices received shall be deemed effective a day after receipt.
OPCFW_CODE
What type of scale is mode? In all three contexts, “mode” incorporates the idea of the diatonic scale, but differs from it by also involving an element of melody type. Are modes major or minor? The three major modes are the Ionian mode, Lydian mode, and the Mixolydian mode. The four minor modes are the Dorian, Phrygian, Aeolian, and the Locrian mode. How do you play modes on guitar? The best way to understand modes: modes for guitar are derived from the major scale. (I’ll use the C major scale to make it easier to understand): CDEFGABC = 1st mode: Ionian (actually the major scale) DEFGABCD = 2nd mode: Dorian (start from the 2nd note) What are the 7 modes in order? Key Takeaways The major scale contains seven modes: Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, and Locrian. Modes are a way to reorganize the pitches of a scale so that the focal point of the scale changes. What order do the modes go in? The seven main categories of mode have been part of musical notation since the Middle Ages. So, the list goes: Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian and Locrian. What is a mode on guitar? Modes are inversions of a scale. For example, the 7 modes on this page are inversions of the major scale. Every mode is a scale, but not every scale is a mode (the melodic minor scale or the blues scale for example are not modes). What are the 7 modes on guitar in order? In this lesson, you’ll meet the major scale’s seven modes—Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, and Locrian—and learn how you can use their distinctive sounds to create more interesting melodies and chords. What’s the difference between scales and modes? A scale is an ordered sequence of notes with a start and end. A mode is a permutation upon a scale that is repeatable at the octave, such that the start and end points are shifted. For example, the major scale is repeatable at the octave. How do you remember the order of modes?
I like to say: In-Door Pools Lose Money And Licences to represent the order, Ionian-Dorian-Phrygian-Lydian-Mixolydian-Aeolian-Locrian. Another good way to remember the modes is in terms of their darkness, or how many lowered scale degrees the modes have. What modes to use over what chords? Each mode is able to play over a specific set of chords. If the chord is dominant, like a G7 or G9, you’d want to play the Mixolydian mode. If it is a minor chord, you can play the Dorian, Phrygian, or Aeolian mode. As the chords get more complex, the mode choices go down. What modes should I learn first? The Ionian Mode This is the basic major scale, and is often the first key guitarists learn. Can there be 7 modes? What are music modes? Musical modes are a type of scale with distinct melodic characteristics. The 7 modes, Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian and Locrian, come from the earliest forms of western music. How do the modes work? Modes are a way to reorganize the pitches of a scale so that the focal point of the scale changes. In a single key, every mode contains the exact same pitches. However, by changing the focal point, we can access new and interesting sounds. Like most of Western music, the modes have their roots in Church music.
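The "reorganize the pitches" idea is mechanical enough to compute: rotating the major scale's whole/half-step pattern by one degree at a time yields each mode in order. A small illustrative sketch (variable names are my own):

```python
# Derive the seven modes by rotating the major scale's step pattern
# (W = whole step, H = half step).
MAJOR_STEPS = ["W", "W", "H", "W", "W", "W", "H"]
MODE_NAMES = ["Ionian", "Dorian", "Phrygian", "Lydian",
              "Mixolydian", "Aeolian", "Locrian"]

def mode_steps(degree: int) -> list:
    """Step pattern of the mode starting on the given scale degree (1-7)."""
    i = degree - 1
    return MAJOR_STEPS[i:] + MAJOR_STEPS[:i]

for degree, name in enumerate(MODE_NAMES, start=1):
    print(f"{name}: {'-'.join(mode_steps(degree))}")
# Ionian: W-W-H-W-W-W-H
# Dorian: W-H-W-W-W-H-W
# ...
# Locrian: H-W-W-H-W-W-W
```

Note that Aeolian (degree 6) comes out as W-H-W-W-H-W-W, which is exactly the natural minor scale, matching the major/minor grouping above.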
OPCFW_CODE
OK. I'll admit it. I'm a tools junkie. I often think that tools can help me (and others) get stuff done faster, better, cheaper, etc. As a program manager at Microsoft it's very hard to keep my head above water. So much email. Day after day of meeting after meeting. A task list with 150 things on it... Arg. So I try tools. Outlook is the thing I spend most of my time in because communicating with people is basically my job. So my schedule and incoming and outgoing email are the most important things. I've tried a bunch of time management systems. Especially those with add-ins for Outlook that can automate the methodology. I tried the Covey add-in, but I found the software clunky and the methodology somehow a good idea for people who are less busy than I am. That's not a slam against their software and maybe it's something I'm doing wrong, but I just couldn't figure out how to be effective by sitting down at the beginning of each week to schedule out activities that would help me reach my goals. By the beginning of the week my schedule was already booked out 80% with meetings. :-) About a month ago I got the Getting Things Done add-in for Outlook. It's pretty nice. The software is well-integrated with Outlook. So far I think it's the best I've found, but I seem to have run into a challenge: Important Things Disappear. I love that you can click on an email to take Action later. A task gets created, linked to the email and the email is moved out of your inbox. Awesome. The only problem is that it's almost too easy. Now I've got a very long todo list and really important things can get lost in there with all of the not-so-important things (but ones that I still need to act on at some point). Ideally I'd be able to somehow mark them as important, but that classification is somehow much more dynamic. It doesn't necessarily start as important enough to stand out (for example) in the MUST DO BEFORE I LEAVE TODAY category, but over time it can need to move there.
Maybe it's something I promised to do for somebody. Maybe events have occurred that make it more important. Often this importance changes after the item has left my inbox. Maybe I just need to review my task list more often and reassess priorities (though even that can take a while with such a long list). Then there's the problem of small easy things that don't get done for a long time. For example, I run a hosted server inside Microsoft that project teams can use if they want their own FlexWiki namespace. They send me mail asking me to set one up for them. It takes about five minutes to set one up. But when the request comes in I've always got more important stuff to do at the time. So I send the person a note saying “sure, I'll get to it over the next few days.” And then this item just gets pushed out and pushed out because it's never quite big enough to reach the gotta-do-it-now state relative to everything else. So, I'm still working at it. I continue to try to figure out what works and what doesn't. And to improve. And to automate what I can. I've finally decided that I'm going to bite the bullet and build an Outlook add-in that will help automate some of what I need. The first problem it solves is Important Things Disappear. I've set up an Outlook view that shows me all tasks that haven't been modified in a week. One of the things I'm going to start doing every day is checking this list. The Outlook add-in (which I'm going to name “Faster, Better, Cheaper“ without really thinking about it too much :-)) adds a toolbar button that will allow me to update the modification time of a task quickly just to say “yeah, OK, I remember this one and it hasn't slipped my mind just because it's buried on a big long task list.“ I guess I'll start posting again to this blog about my experiences trying to get my personal productivity improved. This is the inaugural post in the new Productivity category. Suggestions welcome!
Selenium Firefox starts, but does not open any URL

I'm using the Python version of Selenium:

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
import selenium.webdriver.support.ui as ui

fp = webdriver.FirefoxProfile("C:\\Users\\%user%\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\fyet0w0h.default")
browser = webdriver.Firefox(firefox_profile=fp)
browser.get("https://helloworld.com/")

The Firefox driver opens, but I can't load any URL with browser.get("url"). No proxy. Python 3.2, Firefox 31. Any help would be appreciated.

Probably a wrong Selenium version / Firefox version combination.

Does it work without passing in a Firefox profile? Also, if you set up logging, do you see any errors there? https://code.google.com/p/selenium/wiki/DeveloperTips#Getting_output_from_the_error_console_to_a_file

This is really too vague to be answered as is. We need more details from you: the stack trace, the error message, or something similar. This is most likely a configuration problem, and without those details any potential solutions are just speculation. My initial guess is that it's a version of Firefox not compatible with the version of Selenium you're using, because that is perhaps the most common cause I see for this sort of issue. WebDriver supports the following browsers and operating systems:

Google Chrome 12.0.712.0+
Internet Explorer 6, 7, 8, 9 (32-bit and 64-bit versions)
Firefox 3.0, 3.5, 3.6, 4.0, 5.0, 6, 7, 8, 9
Opera 11.5+
HtmlUnit 2.9
Android 2.3+ for phones and tablets (devices and emulators)
iOS 3+ for phones (devices and emulators) and 3.2+ for tablets (devices and emulators)

I downloaded FF 9.0.1 and... it worked.

Downgrading to FF 9 is not a solution. Because FF gets automatically upgraded, the application needs to be tested on the upgraded version. And what if the requirement is only to test on FF 25 to FF 27? We have to check why the URL was not loading. First try downloading the latest jar of the Selenium standalone WebDriver. Please check whether your Windows is a 64-bit or 32-bit machine and download the jar accordingly.

I don't use FF every day; downloading and installing FF 31 was my first use of FF this year :) So for me, downgrading to FF 9 is a solution.

You need to upgrade Selenium. If you are using the latest version of Firefox, you should use the latest version of Selenium. For Python, enter this command: pip install -U selenium. For Java, remove the old jar, download the latest version from http://www.seleniumhq.org/download/ and attach it to the build path. It will work fine. Happy testing with Firefox!
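The version-matching advice can be made concrete with a tiny helper. The cutoff table below is an illustrative simplification, not an authoritative matrix: Firefox 48+ dropped the legacy extension-based driver and needs Selenium 3+ with geckodriver, while earlier Firefox releases ran on the Selenium 2.x line. Always check the release notes of the Selenium version you actually install:

```python
# Illustrative compatibility cutoffs (firefox_major_cutoff, min_selenium).
# This table is a sketch for the sanity check below, NOT an official matrix.
MIN_SELENIUM_FOR_FIREFOX = [
    (48, (3, 0)),  # Firefox 48+: geckodriver era, Selenium 3+
    (0, (2, 0)),   # older Firefox: Selenium 2.x native driver
]

def min_selenium_for(firefox_major):
    """Return the minimum Selenium (major, minor) believed to drive
    the given Firefox major version."""
    for ff_cutoff, sel_version in MIN_SELENIUM_FOR_FIREFOX:
        if firefox_major >= ff_cutoff:
            return sel_version
    return MIN_SELENIUM_FOR_FIREFOX[-1][1]

def is_compatible(firefox_major, selenium_version):
    """True if the installed Selenium version is new enough for this Firefox."""
    return tuple(selenium_version[:2]) >= min_selenium_for(firefox_major)
```

For the asker's Firefox 31, any Selenium 2.x release of that era pairs fine, while a stale Selenium against a freshly auto-updated Firefox gets flagged.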
The Bloodline System – Chapter 169 – Pride Of A Special Class

A giant screen was placed at the front of a hall, and several youths dressed in pure white uniform-like apparel sat on chairs opposite the screen. All of them gave off a confident and prideful vibe as they watched the screen ahead with contemplative looks.

"Mention the information on that candidate," Gradier Xanatus directed at the screen that showed Gustav dashing across the forest.

His name, address, date of birth, and so many other things were displayed.

"Did that candidate just casually deal with a bunch of level-six AIs without activating his bloodline?"

"He is a special class; that's to be expected."

'Another prodigy from the Oslov household. Do they have three children? Because Endric's big brother was said to have a low-grade bloodline. I wonder why this one didn't catch the eyes of the inspectors... A mixedblood of this quality should have passed the special test...' Gradier Xanatus thought, baffled by the new information he had found. He had a conflicted look on his face as he stared at the screen up ahead.

Endric kept gritting his teeth as he stared at a particular part of the screen.

"What's the problem, Endric? You've been acting weird since we arrived at the hall. Are you sick?" A girl with long blonde hair beside him voiced out with a curious look on her face while stretching out her hand to touch his face.

She noticed this and followed his line of sight to see what he was staring at.

"Uh? Endric's big brother?" The girl voiced out with a surprised look and turned to stare at the screen.

"What do you mean by big brother?!" Endric's face suddenly twisted in rage as he stared at the person on his right-hand side.

The space around Endric suddenly warped and twisted.

"Ugh!" The boy beside him suddenly fell to the floor on his knees.

"Wh-a-t a-re yo-u doing?" He stuttered as his body trembled.

He turned to face the boy, who was struggling to stand up from the ground. The lady who had tried stopping Endric earlier helped the other boy up.

"I can't believe Endric did that to me. I was his senior for years," he voiced out after sitting down. The shock in his voice was evident.

Chatter! Chatter! Chatter!

The silent hall became a little noisy after what had occurred.
AutoSys is job-scheduling software; much traditional job scheduling is done with Unix cron, which is purely time-based. Maybe the reason we don't have any issues is that we don't allow any job to depend on another job that isn't within a box; that way no job is accidentally started just because some other job was restarted and succeeded. The only difference between this example and the while-statement example is that the -n test command option (which means that the string has nonzero length) was removed, and the -z test option (which means that the string has zero length) was put in its place. Jobs dependent on an ON_ICE job inside a box will run immediately once the box job is started. The main advantage of AutoSys over crontab is that it also has a Java front end, so a user doesn't need to be a Unix expert to create or change a job in AutoSys. Think of this as the place I will tactically retreat to if everything goes wrong at some point. eTrust Embedded Entitlements Manager replaces the eTrust Access Control component seen in version 4.5. eEEM is a cut-down version of eAC and is aimed at a single application access-control point rather than a system-based application. Since many Java programmers also do support work at investment banks, it is important to understand the fundamentals of AutoSys or other job-scheduling systems. On a Windows machine, the remote agent is a temporary service started by the event processor to process events on AutoSys client machines. Also, if you have a job with 20 predecessors, it makes sense to group the predecessors in a box, or in several box jobs, if possible. By giving a user access in these three policies (Application Access, Portlet Access, and Server Access), we grant minimum access to UWCC and AutoSys.

When you point your browser to :5250/spin/eiam, there is an application drop-down field on the screen. This is where the precise distinction between ON_HOLD and ON_ICE comes into the picture, which we will see in the next section. Unicenter AutoSys JM is an automated job management system for scheduling, monitoring, and reporting. Alternatively, you can enter the definition as a text file and redirect the file to the jil command. In addition, for jobs running on Windows machines, the event processor retrieves from the database the user IDs and passwords required to run the job on the client machine. A symbolic link, also termed a soft link, is a special kind of file that points to another file, much like a shortcut in Windows or a Macintosh alias.
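The -n versus -z swap mentioned above can be shown in a few lines of shell; the string and loop are arbitrary examples:

```shell
#!/bin/sh
# -n succeeds while the string is nonzero-length; -z succeeds once it is empty.
s="abc"
while [ -n "$s" ]; do   # loop until the string is consumed
    s=${s%?}            # drop the last character each pass
done
[ -z "$s" ] && echo "string is now empty"
```

Replacing `-n "$s"` in the loop with `! [ -z "$s" ]` behaves identically, which is exactly the equivalence the article is describing.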
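The box-and-dependency advice can be sketched in JIL. All job names, the machine name, and the script paths below are invented for illustration; only the attribute keywords are standard:

```
/* A box job grouping two dependent CMD jobs */
insert_job: nightly_box
job_type: BOX
owner: batchuser
date_conditions: 1
days_of_week: all
start_times: "02:00"

insert_job: extract_data
job_type: CMD
box_name: nightly_box
command: /opt/app/bin/extract.sh
machine: prodhost1

insert_job: load_data
job_type: CMD
box_name: nightly_box
command: /opt/app/bin/load.sh
machine: prodhost1
condition: s(extract_data)   /* runs only after extract_data succeeds */
```

As the article notes, a definition like this can be saved to a text file and redirected to the jil command. Because both CMD jobs live inside the box, restarting one of them cannot accidentally fire jobs outside the box.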
Due: Thursday, November 13

Impressed by your bugling elk simulations, the Lincoln County Ranchers Association (LCRA) has funneled serious grant money into your department. Your advisor sees the light and decides to change the direction of your research. With a government subsidy, the LCRA has set up infrared monitoring stations around the perimeter of the Gila which detect whenever an elk leaves the safe haven of the Gila and laser-tattoos it with a unique ID number. Periodically these monitoring computers must communicate and sort their data so that ID numbers can be handed out to LCRA members for their weekly "park-and-poach" activity. The LCRA wants you to write a portable MPI code for this purpose, which they will own and can license to other conservation-minded groups around the country. While skeptical that this really qualifies as cutting-edge environmental research, as a maturing graduate student you recognize the need to satisfy paying sponsors.

You will write a parallel program to sort a set of numerical values. To make life easier you may assume that no two values are identical. Initially, each processor owns an equal fraction of the values. When your program is finished, each processor will own a sorted subset of values: that is, processor 0 will own the lowest few values, processor 1 the next few, and so on. Each processor will also sort its owned subset so that in aggregate, the entire list is in sorted order. To perform the sort, you should implement the parallel divide-and-conquer variant of quicksort presented in class: divide the data into two subsets of low and high values on two subgroups of processors by finding a median, then recurse within each subgroup. To simplify your task, you can assume that the number of processors is a power of two. Input to your program will consist of the number of values to sort (N), a random number seed (SEED) used to generate the list of values, and an index (M) of the value to output.

Your code should run correctly for any N on any power-of-two number of processors P, including the case where P > N. The output of your code is the Mth value in the sorted list and a simple "check-sum" calculation on the sorted values, which is more easily described in the code itself. For both of these outputs you will find the MPI_Scan function useful. Your parallel code should also include the following features: computation of the CPU time for the sort (not including initialization of the random values), creation and freeing of MPI communicators, use of asynchronous receives and waits for exchanging large sets of values between processors, and safeguards to ensure that the exchanged data does not overflow allocated memory. You should start with a sequential Fortran or C program which performs this task.

What you hand in: a table of results of the form

| P = 1 | N = 100 | N = 1000 | N = 10000 | N = 100000 |

Be sure to include at least 8 digits of precision in the Mth value and checksum entries so that we can verify that your code is correct. Comment on the performance trends you see in the table for one-processor, fixed-size, and scaled-size problems. In other words, tell us why you think the numbers show what they do. Note: there are several scaling issues here, so you need to give more thought to this than in past assignments.
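The divide-and-conquer structure (median split, two processor subgroups, recursion) can be sketched without MPI at all. In this plain Python sketch, lists stand in for per-processor data; it shows only the partitioning logic, not the communicator management or asynchronous exchanges the assignment requires:

```python
def split_even(vals, n):
    """Deal `vals` out into n nearly equal contiguous slices."""
    k, r = divmod(len(vals), n)
    out, i = [], 0
    for j in range(n):
        step = k + (1 if j < r else 0)
        out.append(vals[i:i + step])
        i += step
    return out

def parallel_quicksort_sim(chunks):
    """chunks[i] holds 'processor' i's values; returns new chunks where
    processor 0 ends up with the lowest values, each chunk sorted."""
    p = len(chunks)
    assert p & (p - 1) == 0, "number of processors must be a power of two"
    if p == 1:
        return [sorted(chunks[0])]
    # Find a median of everything held by this processor subgroup.
    allvals = sorted(v for c in chunks for v in c)
    median = allvals[len(allvals) // 2] if allvals else None
    low = [v for v in allvals if median is not None and v < median]
    high = [v for v in allvals if median is not None and v >= median]
    # Low values go to the first half of the processors, highs to the
    # second half; then recurse within each subgroup.
    half = p // 2
    return (parallel_quicksort_sim(split_even(low, half)) +
            parallel_quicksort_sim(split_even(high, half)))
```

For example, `parallel_quicksort_sim([[5, 1], [9, 2], [7, 3], [8, 6]])` yields `[[1, 2], [3, 5], [6, 7], [8, 9]]`. In the real MPI version, the all-values gather becomes a median selection within a subcommunicator and the `split_even` redistribution becomes the pairwise low/high exchange; the P > N case simply leaves some chunks empty.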
Is there any way to stop creating a new default user on a newly created VM using a cloud-init template? Right now my CentOS cloud-init template has this config: # nano /etc/cloud/cloud.cfg

I am trying to create a CentOS 7 template for my Proxmox setup, which I want to use with HostBill for automated creation of VMs. I tried everything but nothing works. The VM is created and comes online, but I can't log in. Right now I can create VMs and they come online successfully, but I can't log in using...

We are a small company (Geco-iT) from France that strongly relies on Proxmox PVE every day, and as we find Proxmox more and more powerful, we want to give back to the community by providing some of our tools for PVE. We made a tool to use Fedora CoreOS as a VM with Proxmox cloud-init...

One of the first things I did after setting up my PVE was creating a basic installation of Ubuntu Server, getting it set up with all the basic bits, and then converting it to a template. I have cloned (full clone) this template a number of times, and I did notice that each time I cloned it...

Currently we're trying to create template-based clones using Ansible's community.general.proxmox_kvm. To make the clones configurable we stick with cloud-init. We are building the template using a hand-rolled Packer configuration which deploys fine (locally as well as in our production...

Hello everyone, I am new to Proxmox and am failing at a perhaps simple thing: my Proxmox server has a hardware RAID system with two connectors. Connector 1 has three SAS HDs in RAID 5, and connector 2 has two SATA HDs in RAID 1. The hardware presents the...

`qm template`: what does it _do_? What makes a template different from a (K)VM? I see references to how running `qm template [vm_number]` creates a template that makes future (linked or full) cloning better and more efficient. But... what does `qm template` do to the (K)VM? (A bit of...

Is it possible to update templates automatically? I'm guessing I'd need to clone the VM, update the OS, then create the template over the old one. I'd like to do it for both Windows and Linux templates too. My use case is: if all the VM needs to do is update, I can keep it on a MUCH more...

PVE newbie here. 1. How does the following wiki page apply to LXC containers? 2. There's no mention of "snapshots" in the above wiki page, even though PVE 6.2-11 seems to feature the snapshot feature prominently over "template." Is this above...

I recently switched from ESXi to Proxmox and am therefore still very new to this world. I built myself a small cluster from three identical machines and tried to distribute the respective disks among them, which failed with an error about unmounted...

I am just posting these how-to steps to help those like me who are just starting with Proxmox and LXC. Hopefully you can find it useful and contribute to fixing the remaining issue with audio in the container. I posted this on Reddit too, because I know when I am looking for help I look anywhere I...

I have created some VMs and VM templates on a PVE node for testing. The node has an SSD and 4 SAS drives. I installed PVE on the SSD drive and assumed that when I selected local-lvm for the VM disks, the VMs would all use the SSD for storage too. When I look under Node > Disks now, I see that the usage...

I followed the instructions from this page to create a VM:
qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 bionic-server-cloudimg-amd64.img vms
qm set 9000 --scsihw...

Can you please provide instructions on how to clean up an image to be used as a template? This wiki article documents some manual steps to be executed on the VM before converting it to a template. Is it sufficient to execute all the steps documented here? This instruction is referenced in the...

I used an official container template to create a VM. I set the password to 123456, but I can't log into the system. It tells me "Login incorrect". The template I used is centos-7-default_20190926_amd64.tar.xz. And another question: when I use images from...

I would like to create a template from an LXC container (SSH should allow root login). The problem is that I get this error every time: "Warning: Directory storage 'local' does not support container templates! (500)". It would also work if someone could send me templates...

We are having issues creating VM templates for Proxmox VE. Proxmox VE is installed, and we added the ModulesGarden Proxmox Cloud module (modulesgarden dot com >> products >> whmcs >> proxmox-cloud). For Linux distributions it requires cloud-init; for Windows Server OS it requires cloudbase-init...
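For the first question in this digest (suppressing the default cloud-init user), the account comes from the `users:` and `system_info:` sections of /etc/cloud/cloud.cfg inside the template. A sketch of the relevant excerpt; the `ciuser` name is only an example, and the exact shipped file varies by distro:

```yaml
# /etc/cloud/cloud.cfg (excerpt)
# The stock config creates the distro's default user on first boot:
users:
  - default

# To stop that, drop "default" from the list (an empty list creates no user):
# users: []
#
# ...or redefine what "default" means via system_info:
# system_info:
#   default_user:
#     name: ciuser        # example name, not a requirement
#     lock_passwd: false
```

Edit this inside the template VM before converting it with `qm template`, so every clone inherits the change.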
Mastering Python: Closures and Decorators

What are Decorators?

Let's discuss decorators. Note that decorators can be a challenging topic for beginners. We will examine decorators in detail, step by step, explaining how they work, but it may still be challenging to comprehend. Are you prepared for the challenge? That's why you're here!

The prerequisite topic for understanding decorators is the closure, so make sure you understand how closures work before diving into decorators.

Decorators are functions that add extra functionality to another function using a closure. The pieces involved are:

- decorator() is the function that takes another function, func, as an argument. Its purpose is to create and return a new function (a wrapper) that executes some additional code before and after calling func.
- wrapper() is the function created inside decorator(). It takes two arguments, argument1 and argument2.
- add() is the original function that we want to enhance with the decorator.
- add = decorator(add): here we use the decorator() function to enhance the behavior of the add() function.

Walking through the calls and the output:

- The wrapper() function starts by printing "Function starts executing" to indicate the beginning of the function's execution.
- It then calls the original function func(argument1, argument2), where func is the function passed as an argument to the decorator.
- After calling func, it prints "Function ends executing" to indicate the end of the function's execution.
- The result of func(argument1, argument2) is stored in a variable, and finally wrapper() returns that result.
- add() takes two arguments, a and b. It prints a message with the sum of a and b and returns their sum.
- By assigning add = decorator(add), we replace the original add() function with the new version of add() that has the decorator applied to it. Now, whenever we call add(), we are actually calling the wrapper() function from the decorator.
- With the decorator applied to add(), whenever we call add(a, b), it actually calls wrapper(a, b). The wrapper() function executes the additional code before and after calling the original function.
- The output shows the messages from the wrapper() function and the result of the original add().

As you can see, the decorator successfully wraps the add() function, allowing us to add custom behavior to it without modifying its original code. Decorators are powerful tools in Python, used to add functionality, logging, or other cross-cutting concerns to functions or methods in a clean and reusable way.

To summarize how a decorator works: the decorator() function takes a function as an argument, defines the wrapper() function, encloses the taken function within wrapper(), and returns wrapper(). There are three steps to the decorator's work:

- Taking a function as an argument.
- Enclosing the function within the newly defined function (wrapper).
- Returning the wrapper function with the enclosed function.

The wrapper() function contains the main decorator logic and invokes the enclosed function with the given parameters. The add() name is then reassigned to the returned wrapper() function, which now contains the enclosed original.

To write a decorator:

Step 1. Define the decorator. The decorator should take exactly one argument.
Step 2. Define the inner function. We need to define the inner function to close over the function taken by the decorator.
Step 3. Enclose the taken function. The function should be called inside the inner function (wrapper), and the result should be saved and returned.
Step 4. Return the inner function. The decorator should return the inner function wrapper without calling it.

How does the decorator work at run time?

Step 1: The decorator is called. The decorator() function is called and takes the function to be decorated as the argument func. At this step, the interpreter creates the decorator() local scope.
Step 2: The wrapper function is defined. The interpreter defines the wrapper() function (in the decorator() local scope) that takes the same arguments as the decorated function. The wrapper() body contains the main logic of the decorator and calls the function from the non-local scope. The wrapper() function is not executed at this step.
Step 3: Decorator execution ends. The decorator() function ends its execution and returns the wrapper() function. The interpreter removes the decorator() local scope but keeps the enclosed objects.
Step 4: Reassignment. The returned wrapper() function is assigned to the decorated function's name: the decorated function is replaced by the other function (wrapper()), and the name's previous binding is removed. The original add() function is no longer reachable by its old name, but it remains enclosed inside the returned wrapper().
Step 5: Usage. The new function is the returned wrapper() function, which takes arguments and passes them into the enclosed function. add is now a variable that contains a reference to the wrapper function.
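The whole description above can be collected into one runnable example; the print strings follow the tutorial's wording:

```python
def decorator(func):
    # Steps 1-2: take a function and define the inner wrapper
    def wrapper(argument1, argument2):
        print("Function starts executing")
        # Step 3: call the enclosed function and save the result
        result = func(argument1, argument2)
        print("Function ends executing")
        return result
    # Step 4: return the wrapper without calling it
    return wrapper

def add(a, b):
    print(f"The sum is {a + b}")
    return a + b

add = decorator(add)   # Step 5: reassignment; add now refers to wrapper
total = add(2, 3)
# prints:
# Function starts executing
# The sum is 5
# Function ends executing
```

After the reassignment, `add.__name__` is `"wrapper"`, which is exactly why real-world decorators usually also apply functools.wraps to preserve the original name and docstring.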
Change term: preparations

Submitter: Richard Pyle
Proponents (who needs this change): Everyone
Justification (why is this change necessary?): As currently defined, this property applies to preservation methods for a specimen, and specimens are included as examples within the definition of the MaterialSample class.
Current Term definition: https://dwc.tdwg.org/list/#dwc_preparations
Proposed new attributes of the term:
Term name (in lowerCamelCase): preparations
Organized in Class (e.g. Location, Taxon): MaterialSample
Definition of the term: A list (concatenated and separated) of preparations and preservation methods for a specimen.
Usage comments (recommendations regarding content, etc.): Recommended best practice is to separate the values in a list with space vertical bar space (|).
Examples: fossil, cast, photograph, DNA extract, skin | skull | skeleton, whole animal (ETOH) | tissue (EDTA)
Refines (identifier of the broader term this term refines, if applicable): None
Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/preparations-2017-10-06
ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/SpecimenUnit/Preparations/PreparationsText

The only proposed change is to organize this term within the MaterialSample class rather than the Occurrence class. No other changes to the term are proposed. Discussion around changes to MaterialSample on DwC (#314). Related issues are Issue #1, Issue #3, Issue #24 (reopened because of renewed interest), Issue #314, Issue #332, Issue #345, and Issue #347.

The term definition is fuzzy, as well as the examples. It might be better replaced by two terms, "mount" and "preservedPart", which are two different things. preservedPart examples: body, bone, antler, fruit.
mount examples: cast, envelope, jar, microscopic slide

@wouteraddink The review of preparations and MaterialSamples in a Task Group could sort out the concerns you raise here. The current proposal is only to change the class the term is organized in, which is a non-normative change. I think the change in organization is warranted and has a history going back to Issue #24. I think it would be good to implement this change regardless of the further issues you raise, for which I would recommend creating one or more separate issues.

That is ok for me, but it would be only a minor improvement, since the term itself needs work. See here for an example of what content it currently leads to in GBIF: https://github.com/tdwg/mids/files/5842404/bq-results-20210120-122837-ydhq0a99j5dl.xlsx

Yes, it needs a lot of work to be anything more than a convenience term, hence the recommendation in https://github.com/tdwg/dwc/issues/345#issuecomment-828872056 to follow up on the work presented by @acbentley Andy Bentley. See also https://github.com/tdwg/dwc/issues/1#issuecomment-689906141.

Apologies to all for creating this as a new issue. As @tucotuco noted, the only proposal was to move the term from the Occurrence class to the MaterialSample class (everything else above is identical to the existing definition); but as this is a non-normative change, and as this issue was already addressed in a previous issue (from 2014!!), perhaps now that #24 has been re-opened, this issue should be closed and commentary should be concentrated over at #24.

This issue should not be closed; it is necessary as a templated change request, which Issue #24 lacks. Issue #24 is a discussion supporting this change request.

OK, thanks for the clarification, @tucotuco !

We endorse this proposal on behalf of @SiBColombia

This proposal has been labeled as 'Controversial' and in need of a task group for resolution. It is no longer part of an active public review.

This issue has been superseded by https://github.com/tdwg/dwc/issues/452
/* ft_conversion_d.c -- by rgwyn <rgwyn@student.21-school.ru> */
/* Created: 2021/01/23 12:31:03, Updated: 2021/01/23 12:31:49 */

#include "../ft_printf.h"

void	ft_set_sign(long long *d, t_options *opts)
{
	if (*d < 0)
	{
		opts->sign = -1;
		opts->sign_char = '-';
		*d = -(*d);
	}
	else if (opts->flag_plus)
	{
		opts->sign = 1;
		opts->sign_char = '+';
	}
	else if (opts->flag_space)
	{
		opts->sign = 1;
		opts->sign_char = ' ';
	}
	if (opts->flag_minus)
		opts->flag_zero = 0;
}

char	*ft_is_x_with_sharp(long long d, t_options *opts)
{
	if (opts->flag_sharp && d != 0)
	{
		if (opts->arg_type == 'x')
			return ("0x");
		else if (opts->arg_type == 'X')
			return ("0X");
	}
	return (0);
}

void	ft_calc_lens_for_d(t_print_lens *lens, long long d, t_options *opts)
{
	int	xlen;

	lens->len = count_signs_in_number(d, ft_get_base(opts));
	xlen = ft_is_x_with_sharp(d, opts) ? 2 : 0;
	if (opts->precision == 0 && d == 0)
		lens->len = 0;
	lens->nulls = lens->len < opts->precision
		? opts->precision - lens->len : 0;
	if (opts->width > lens->len + xlen + lens->nulls + ft_abs(opts->sign)
		&& opts->flag_zero && opts->precision == -1)
		lens->nulls += opts->width - lens->nulls - lens->len - xlen
			- ft_abs(opts->sign);
	lens->spaces = (opts->width > ft_abs(opts->sign) + lens->nulls
			+ lens->len + xlen)
		? (opts->width - ft_abs(opts->sign) - lens->len - xlen - lens->nulls)
		: 0;
	lens->flen = lens->spaces + ft_abs(opts->sign) + lens->nulls
		+ lens->len + xlen;
}

int	ft_conversion_d(long long d, t_options *opts)
{
	t_print_lens	lens;

	ft_set_sign(&d, opts);
	ft_calc_lens_for_d(&lens, d, opts);
	if (!opts->flag_minus)
		ft_print_n_chars(' ', lens.spaces);
	if (opts->sign)
		ft_putchar_fd(opts->sign_char, 1);
	ft_putstr_fd(ft_is_x_with_sharp(d, opts), 1);
	if (lens.nulls)
		ft_print_n_chars('0', lens.nulls);
	if (lens.len)
		ft_putnbr_with_base_fd(d, ft_get_base(opts), ft_get_charset(opts), 1);
	if (opts->flag_minus)
		ft_print_n_chars(' ', lens.spaces);
	return (lens.flen);
}
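As a cross-check, the flag, width, and precision interactions this file implements (sign handling, zero padding from precision, space padding from width, `#` prefixes for hex) follow the standard printf conventions, which Python's printf-style formatting also follows. These Python examples are a reference for expected output, not part of the C project:

```python
# '+' forces a sign (flag_plus), ' ' reserves a sign column (flag_space),
# precision produces leading zeros (lens->nulls), width produces spaces
# (lens->spaces), '-' left-justifies (flag_minus), and '#' adds the 0x
# prefix for nonzero hex values (ft_is_x_with_sharp).
examples = [
    ("%+d" % 42,    "+42"),
    ("% d" % 42,    " 42"),
    ("%.5d" % 42,   "00042"),
    ("%8.5d" % 42,  "   00042"),
    ("%-6d|" % -42, "-42   |"),
    ("%06d" % -42,  "-00042"),
    ("%#x" % 255,   "0xff"),
]
for got, want in examples:
    assert got == want
```

Note in particular the `%8.5d` case: with an explicit precision, the zero-fill comes from the precision and the remaining width is filled with spaces, matching the `opts->precision == -1` guard in ft_calc_lens_for_d.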
Part 1: IOTA Data Marketplace — Update

The IOTA Foundation launched the Data Marketplace (https://data.iota.org) in the fourth quarter of 2017 as a Proof of Concept (PoC) and open innovation ecosystem. Over a year later, it is now time for the Data Marketplace (DMP) PoC to evolve with the help of the community to fully decentralize all features of the DMP and take its next step.

What are data marketplaces?

The growth of data marketplaces is an inevitable result of the IoT (Internet of Things) revolution, which changes the way we connect, interact, and exchange data in nearly every context imaginable. As physical assets such as ships, factories, vehicles, farms and buildings become digital, their digital "twins" gradually act as secure data exchanges. As data streams surge across silos and carry value across organizations, traditional value chains morph into a web of value. This paradigm is complex to administer, forcing businesses to rethink their competitive play within ecosystems. Data marketplaces emerge as a means to exchange data, monetize data streams and provide the basis of new "smart" business models. We refer to this new wave of value creation, for the Internet of Everything, as the "Economy of Things" or the machine-to-machine (M2M) economy. This opportunity landscape for society and business may sound futuristic to some, but it is the focus of a growing number of organizations across industries. Conscious of the massive opportunity and its transformative — and potentially disruptive — nature, we believe that the best approach for forward-leaning organizations and individuals is to explore these opportunities openly, together.

Objectives of the IOTA Data Marketplace PoC

To realize the potential of data marketplaces, we set out in 2017 to address several key challenges through the IOTA Data Marketplace initiative. IOTA and its Data Marketplace initiative aimed to (and helped) address these challenges by:

1. Producing an initial, open-source Proof of Concept
2. Exploring new IoT/M2M solutions and business models for the "Economy of Things"
3. Growing a co-creation ecosystem to foster permissionless innovation

At the same time, we invited (and continue to invite) participants to help shape the IOTA technology as a common standard that works for all. Like its technology, the IOTA Foundation's approach to enabling innovation is open and permissionless. The Data Marketplace is designed to enable an agile, experiment-driven and collective approach to innovation for its participants, but also for the IOTA Foundation itself. The Data Marketplace initiative challenges the IOTA technology with the requirements of real-life deployments and the demands of the participants.

"IOTA AIMS TO DEVELOP A STANDARD FOR THE IOT/M2M ECONOMY TOGETHER WITH THE INDUSTRY"

Since the start of the initiative, a number of proactive industry participants have gradually stepped up their engagement, providing our technical teams with valuable feedback and enabling us to accelerate the development of the IOTA stack to a production-ready stage.

A starting point to explore new solutions

The IOTA Tangle is a secure data communication protocol and zero-fee microtransaction system for the IoT/M2M. The IOTA Data Marketplace is a simplified platform which simulates how a connected device running an IOTA script can be paid rapidly for sharing secure data to a web browser. The initiative successfully delivered an IOTA "building block" which enables connected devices and APIs to sell and transfer data using testnet IOTA tokens — providing participants with a starting point to explore new solutions and to shape their own data marketplaces.

"Smart" business models and use cases

Data marketplaces are a means to create and capture value in an environment where sensors and connected devices will trade data directly. For example, smart city sensors can now be paid for sharing valuable real-time data.
Physical assets equipped with a chip can now be leased, allowing "pay-per-use" models to shape the smart sharing economy. Our personal data could soon be accessed with our proactive consent via our digital twins or personal data management systems, enabling radically new personalized, real-time services. IOTA develops a digital technology to enable these "smart" business models. Using the PoC as an initial trigger for business model thinking among participants, we organized co-creation workshops to catalyze this process. Non-incremental business models seldom emerge out of internal efforts; external viewpoints and capabilities are often required to produce viable ideas. In the case of cross-silo data exchanges, new models can only take shape through ecosystem cooperation, as prevailing value chains often need to be reshuffled. It takes a cooperative effort to transition to new, mutually beneficial relationships between stakeholders. Our co-creation process focuses on bringing minimal viable ecosystems of partners around simple and tangible use cases. Building trust between partners and alignment around a common purpose is key. IOTA is an enabler of this co-creation process, and we have collected a few best practices for organizations interested in exploring IOTA's Data Marketplace:

- Secure a mandate from management aimed at exploring emerging digital technologies and non-incremental innovation
- Secure the necessary resources in time to ramp up your PoC towards a testbed pilot
- Experiment in small but quick steps; start with a very simple PoC
- Open innovation: be comfortable sharing your problem worth solving to find complementary partners with similar ambitions

Growing a co-creation ecosystem to foster permissionless innovation

Following webinars, workshops, events and many dialogues, a collaborative ecosystem began to take shape.
This consists of corporations, institutions, and not-for-profit organisations interested in exploring together the potential of the IOTA technology. The participants in the PoC, numbering over 80 as of January 2019, came from many different sectors including Mobility, Energy, Agriculture, Real Estate, eHealth, Smart Manufacturing, Supply Chain, Financial Services, Semiconductors, IT integrators, Consulting, Universities and Industry clusters. Several follow-up initiatives have now been triggered, including:

- Workshops & Webinars — webinars were organized on the Data Marketplace and the MAM module. Workshops were conducted in Oslo and Berlin on the topics of Trusted IoT, GDPR and personal data
- Smart Energy — Positive CityxChange, a new EU Horizon 2020 project with 7 smart cities and more than 30 partners to develop positive energy districts
- Hackathons — The IOTA Foundation participated in hackathons in Groningen and Paris where the Data Marketplace was used as a starting point for participants
- Smart City Testbed — Initial work towards smart city development is being triggered in Los Angeles at USC's smart campus IoT Testbed, as a proxy for a smart city

What comes next?

We learned a lot through the initial PoC, and now we want to enable companies to experiment more rapidly. The PoC is now open sourced. In doing so, we are asking the community to help us fully decentralize all features of the Data Marketplace. The centralized cloud backend today consists of the following components:

- User authentication (OAuth with Google account)
- User management
- Access rights management
- Device management (create/read/delete)
- Wallet management (wallet funding, token transfers)
- Device stream purchase tracking
- Error tracking and reporting

We have also drafted a five-part blog series to provide more information about the updated Data Marketplace, including technical specifications and user guidelines.
You can check it out here:

- Part 1: IOTA Data Marketplace — Update (this one)
- Part 2: Sensor Onboarding
- Part 3: Publishing Sensor Data
- Part 4: Cloud Backend Configuration
- Part 5: Checkout and Deploy your application

Shaping co-creation opportunities together

The IOTA Foundation is in active dialogue with its ecosystem of participants to define impactful activities aimed at expanding and catalyzing further corporate and community involvement towards PoCs, new business models and prototype solutions. We welcome your feedback. If your organization shares the same ambition and is ready to invest in shaping the future with an ecosystem of partners in your industry or region, please let us know. Similarly, if your organization is interested in working together to apply for a public grant for data marketplace-related topics (e.g., EU Horizon 2020 calls), we would be excited to get in touch.
Our clients often need a products page or a team page that displays collections of dynamic content that include various types of data (text, images, charts, PDF downloads, etc.) per item. Sometimes we'll even be asked to show various pieces of those items in multiple locations on the site – a menu of the collection, for example. There's often no one right way to do things, but when using WordPress, we've found a method of achieving this that delivers ultimate customization and reusability, as well as an intuitive, fool-proof admin experience.

Let's take the products page as an example. We recently had a client with a particular product suite in which a customer would choose a package that would always include a main component. The customer could then review, compare, and select a combination of multiple sub-components. Each sub-component needed a description, image, and spec chart that would display on the page, ideally dynamically, in a responsive layout with an intuitive user experience that could also be easily managed in the back end. Our method utilizes the indispensable plugin, Advanced Custom Fields, in combination with custom post types to create an easily managed, easily queried set of posts that can contain any type of data needed. Let's look at how to quickly set up something similar.

Install the Plugin

First, and perhaps it goes without saying, we'll need a WordPress install. Second, if you install no other plugin, install Advanced Custom Fields (https://www.advancedcustomfields.com/). Version 4.4 is what is available on the WordPress Plugin Directory at the time of this writing, but we happen to know that if you want to go for a paid pro account, Version 5 has some pretty killer features. For our purposes here, the free version is all we'll need.

Register Your Custom Post Type

The next step deals with WordPress' standard custom post type registration. Here, we'll register the Products post type.
For more information on registering custom post types, parameters, and custom taxonomies, see https://codex.wordpress.org/Function_Reference/register_post_type. Add this to your functions.php file.

Set Up Advanced Custom Fields

Now that we have our newly created custom post type, we can start to create all of the products we want as individual posts. But what if we want each product to include a title, main description, a downloadable PDF of specifications, and two images? We could cram all of that into the one WYSIWYG editor WordPress gives us and hope for the best, but that's not going to be the most reliable and easily managed method of building our back end. This project sounds like we need some custom fields. First, head to the Custom Fields admin page and add a new field group. Call this group 'Product Info' and add a new rule in the Location box that reads 'Show this field group if Post Type is equal to Product'. If you do not see Product as an option to select, go back and make sure you registered your custom post type correctly. Next, start adding fields to the group. This tutorial assumes some level of familiarity with the plugin, but if you aren't familiar with Advanced Custom Fields, have a look at their documentation: https://www.advancedcustomfields.com/resources. Because each product post will already have a title, a content WYSIWYG, and a featured image, we can use those as the product title, product description, and one of the images. For the other product features, add a File field called 'Download' with a return value of File Array, and an Image field called 'Product Image' with a return value of Image URL.

Querying the Products

After adding a few products, we need to output them somewhere. This can be done in whichever way works best for your site. You can make a template PHP page that queries the posts, or use a custom shortcode; regardless, you are now in a position to use any part of the products data in any way you wish.
Here is an example of a basic query: This is a very simple query that will output all of the products entered, but you can see that any of these elements can be queried on any page. Everything is managed on the back end in one place, and the opportunities for reusing the data are endless. One query can be used to create a menu of products as well as each product's output. A product that relates to a particular blog article can be queried on that article's single-view page. If you are into the new WordPress REST API, this data will be exposed for use there as well. If your WordPress site needs more flexible data with an easy-to-manage back end, Advanced Custom Fields is the perfect solution for visually creating fields, selecting multiple input types, assigning fields to multiple edit pages, easily loading data through a clear and familiar interface, and improving upon native WordPress custom post type and metadata features. If you need help building, customizing, or refreshing your web presence, Fresh can help.
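As a hedged illustration of the REST API point above, here is a small JavaScript sketch that queries a `product` custom post type over the WordPress REST API. The route `/wp-json/wp/v2/product` is an assumption: it only exists if the post type was registered with `'show_in_rest' => true`, and the exact path depends on the `rest_base` argument. The `acf` key on each post is likewise an assumption that an ACF-to-REST bridge is exposing the custom fields.

```javascript
// Sketch: querying a "product" custom post type through the WordPress REST API.
// Route name and the `acf` response key are assumptions (see lead-in).

function buildProductQueryUrl(siteUrl, { perPage = 10, page = 1 } = {}) {
  const params = new URLSearchParams({
    per_page: String(perPage),
    page: String(page),
    _embed: "1", // pull in the featured image alongside each post
  });
  return `${siteUrl}/wp-json/wp/v2/product?${params}`;
}

// Fetch products and map them to the fields discussed above.
async function fetchProducts(siteUrl) {
  const res = await fetch(buildProductQueryUrl(siteUrl));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const posts = await res.json();
  return posts.map((p) => ({
    title: p.title.rendered,
    description: p.content.rendered,
    download: p.acf ? p.acf.download : null, // assumes ACF fields are exposed
  }));
}
```

The same mapped data could drive both the products menu and the full product output, mirroring the reuse argument made above.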
Redux does not have a dispatcher; it relies on pure functions called reducers, so it does not need one. Each action is handled by one or more reducers to update the single store. Since data is immutable, reducers return a new, updated state that updates the store. Flux makes it unnatural to reuse functionality across stores: in Flux, stores are flat, but in Redux, reducers can be nested via functional composition, just like React components can be nested. Redux stores your state in only one place, while in Flux you can have many stores.

The internet is abuzz with Facebook's recently launched Flux, their new pattern to structure client-side applications. Let's take a look at how the new Flux pattern relates to the earlier MVC pattern and how it could end up being as useful as the user interface builder, React. The story behind the model-view-controller (MVC) pattern and how numerous companies and projects have used it in the past is pretty interesting. It is recommended that you go through this brief history, because it's a great way to understand the specific domain that the Flux pattern operates in.

The MVC Pattern

Web developers have used many model-view-controller (MVC) variations, each doing things a bit differently from the others. But most MVC patterns typically perform the following core roles:

a. Model: maintains the behavior and data of an application domain.
b. View: represents the display of the model in the user interface.
c. Controller: takes user input, manipulates the model and prompts the view to update itself.

The core concept of model-view-controller can be formulated as:

1. Separating the presentation from the model – This not only enables the implementation of varied UIs, but also ensures smoother testability.
2. Separating the controller from the view – This is most useful for older web interfaces and is less common in modern Graphical User Interface (GUI) frameworks.
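The Redux points summarized at the top of this comparison (pure reducers returning new, immutable state) can be illustrated with a minimal sketch; the names `counterReducer` and `INCREMENT` are illustrative, not from the article:

```javascript
// A reducer is a pure function: (state, action) -> new state.
// It never mutates the old state object; it returns a fresh one.
function counterReducer(state = { count: 0 }, action) {
  switch (action.type) {
    case "INCREMENT":
      return { ...state, count: state.count + action.amount };
    default:
      return state; // unknown actions leave state untouched
  }
}

const s0 = { count: 0 };
const s1 = counterReducer(s0, { type: "INCREMENT", amount: 2 });
console.log(s0.count, s1.count); // 0 2 -- the original state is unchanged
```

Because reducers are just functions, they can be composed and nested, which is the reuse advantage over flat Flux stores mentioned above.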
Problems with MVC

MVC is an application pattern that is a legend in its own right: it originated in the Smalltalk environment at Xerox PARC in the late 1970s and has been employed in countless projects ever since. Even in 2015, it has firmly stood the test of time and is being used in some of the biggest projects around. So the question arises as to why it should even be replaced! The truth is that MVC didn't scale well when it came to Facebook's enormous codebase. Major challenges arose due to Facebook's bidirectional communication platform, where one change would loop back and have ripple effects across the entire codebase. This made the system fairly complicated and debugging almost impossible.

The Flux Pattern

MVC's shortcomings posed some serious challenges, and Facebook solved them by introducing Flux, which forces a unidirectional flow of data between a system and its components. Typically, the flow within an MVC pattern is not well defined. Flux, however, is completely dedicated to controlling the flow within an application, making it as simple to comprehend as possible. The Flux pattern has four core roles: actions, stores, the dispatcher and views. Their functions are described below:

a. Actions – These are pure objects that consist of a type property and some data.
b. Stores – These contain complex data, including the application's state and logic. One can consider a store a manager for a particular domain within the application. While Flux stores can store virtually anything, they are not identical to MVC models, because models typically attempt to model single objects.
c. The Dispatcher – This essentially acts as the application's nerve center. The dispatcher processes actions (such as user interactions) and invokes the callbacks that the stores have registered with it. As with stores, the dispatcher is quite different from controllers in the model-view-controller (MVC) pattern.
The difference is that the dispatcher does not contain much logic within it, allowing you to reuse the same dispatcher across a variety of new and complex projects.

d. Views – These are controller-views, also commonly found in most GUI MVC patterns. They monitor for changes within the stores and re-render themselves accordingly in real time. Views can also add new actions to the dispatcher, user interactions included. These views are normally coded in React, but Flux does not require React.

The typical flow of a Flux application can be seen in the diagram below. It is critical to note that every change you make goes through an action via the dispatcher.

So how does Flux differ from MVC?

1. The Flow – In Flux, the flow of the application is vital, governed by strict rules enforced by the dispatcher. In model-view-controller (MVC) patterns, however, the flow is enforced loosely or not at all, which is why different MVC implementations handle their flows differently.

2. Unidirectional flow – Since all changes go through the dispatcher, stores cannot change other stores directly, and the same basic principle applies to views and other actions: any change has to go through the dispatcher via actions. MVC implementations, by contrast, commonly have bidirectional flow.

3. Stores – Flux stores can store any application-related state or data, whereas MVC models try to model single objects.

So, is Flux better than MVC?

The fact that Flux has only recently been launched means that it's too early to say, as its perceived benefits are yet to be vetted. That being said, Flux is new and innovative, and it's refreshing that there is now a pattern that can challenge MVC and its traditional ways.
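To make the unidirectional flow described above concrete, here is a toy JavaScript sketch of the action -> dispatcher -> store -> view cycle. It is an illustration only, not Facebook's actual Flux implementation; all names are invented for the example.

```javascript
// Minimal dispatcher: stores register callbacks, actions fan out to all of them.
class Dispatcher {
  constructor() { this.callbacks = []; }
  register(cb) { this.callbacks.push(cb); }
  dispatch(action) { this.callbacks.forEach((cb) => cb(action)); }
}

const dispatcher = new Dispatcher();

// A store owns its slice of state and registers with the dispatcher.
const todoStore = { todos: [], listeners: [] };
dispatcher.register((action) => {
  if (action.type === "ADD_TODO") {
    todoStore.todos.push(action.text);
    todoStore.listeners.forEach((l) => l()); // notify views of the change
  }
});

// A view subscribes to the store and emits new actions;
// it never writes store state directly.
todoStore.listeners.push(() => console.log("render", todoStore.todos));
dispatcher.dispatch({ type: "ADD_TODO", text: "learn Flux" });
```

Note how the store can only be changed through a dispatched action, which is exactly the constraint the article credits for making Flux applications easy to reason about.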
The best part remains that Flux is extremely easy to understand and comes with minimalist coding, allowing you to structure your app in a more effective manner. This augurs well for the future, particularly since traditional MVC codebases are notorious for growing nearly endlessly in size and runtime complexity, which turns off a lot of web developers.
Accounting practice management requires a focus on both people issues and technology issues, as well as the continual improvement of traditional accounting processes and the development of new services and specializations. The Rosenberg Survey, covered elsewhere in this issue (“The State of the Profession: Analyzing the Results of the 2019 Practice Management Survey,” page 20), identifies several areas of interest that readers can explore further in the two blogs covered in this month’s column: AccountingDepartment.com and Firm of the Future. AccountingDepartment.com provides outsourced accounting services with a U.S.-based staff of virtual accountants performing bookkeeping, accounting, and controllership activities. Its website includes an extensive blog, downloadable guides, webinars, and best practice tips for outsourcing. Many of the resources are focused on small business financial accounting and reporting and managerial accounting, which could apply to the management of an accounting practice or advising companies. In addition, although only 6% of respondents to the 2019 Annual Tax Software Survey (November 2019, http://bit.ly/2019atss) reported that they outsourced any tax preparation activities, the Rosenberg Survey indicated an increase in staff turnover among firms of all sizes, which may open the door for more organizations to consider outsourcing. AccountingDepartment.com’s blog offers small business advice for accounting services on a variety of topics including accounting strategy, accounting methods and metrics, and accounting technology (https://www.accountingdepartment.com/blog). “Accounting in the Age of Automation and AI” explains that while artificial intelligence shows promise to relieve human operators of menial tasks, allowing them to focus on higher-level critical thinking, the technology is still too expensive and not well adapted for small organizations (http://bit.ly/33iINq7). 
The post provides several questions to ask when investigating new technology, such as how well a tool will evaluate an organization's transactions, how it makes decisions, and whether it automatically records journal entries. The author concludes by encouraging small business decision makers to be sure that software and other technologies will actually support their accounting needs. "Security Concerns in Outsourcing Accounting Information" provides several points to consider to protect a company's data when using outsourced accounting services, such as researching a provider's security practices (http://bit.ly/37BKaDW). No company is too small to be attractive to hackers, and in most data breaches, the thieves are looking for personal information such as Social Security numbers and bank account information. "How Blockchain Changes Everything" is a short article that explains blockchain for nonexperts (http://bit.ly/2rnDFUA). Two posts that may be useful for either CPA firm management or business advisors address key performance indicators (KPI). "The Case for Making KPIs Transparent to the Entire Team" recommends identifying and tracking only those KPIs that provide the most information content for the particular organization, reviewing the KPIs on a regular basis to help managers and employees understand them, and helping employees see the positive aspects of the performance metrics (http://bit.ly/2QT2Mcr). On a related note, "7 KPIs to Use in Your Strategic Planning" lists the KPIs that are most applicable and useful to small businesses, such as the current ratio and inventory turnover (http://bit.ly/35wLNAJ). AccountingDepartment.com also offers four short guides—essentially checklists—that are downloadable as PDFs with free registration. The "7 Deadly Sins of Bookkeeping" include mixing business and personal expenses and lack of multiple-file backup methods (http://bit.ly/35A4X8Z).
"Controller Checklist for Small Businesses" covers the controller services that AccountingDepartment.com offers, but can also serve as a guide for establishing or expanding the controllership function inside an organization (http://bit.ly/2DlFt2R). "How to Prepare Business Financials to Support a Comprehensive Exit Strategy" is a discussion of an activity that is often overlooked or postponed, perhaps because more owners and partners are delaying retirement, as noted by the Rosenberg Survey. It emphasizes several important considerations, including common inaccuracies in financial statements (http://bit.ly/2s9l8eS). A few webinars are available that would probably apply more to a CPA's client companies than to accounting firms themselves, but they are worth a look (http://bit.ly/2pPNlGZ). "Job Costing: The Nitty & The Gritty" is a 50-minute presentation of real-world business scenarios where job costing can provide useful financial information. Other videos are archived from prior webinars and include basic financial accounting and reporting topics. The Tips and Accounting Best Practices page generally relates to outsourcing accounting activities and may be of most interest to CPAs who are considering outsourcing or would like to learn more (http://bit.ly/2KSqFwO). For example, "Company Best Practices" covers which procedures can be outsourced, three ways that outsourced accounting can reduce corporate fraud, and several ways to reduce employee theft (http://bit.ly/2XMAa6a), while "Using Outsourced Accounting Records" presents discussions of several factors to consider, including more information on specific services that can be outsourced and data and record security (http://bit.ly/2OhLaW6).
Firm of the Future

Firm of the Future is an Intuit blog covering practice management, accounting software, and technology issues; it features articles and videos prepared by Intuit staff and other contributors, as well as industry news items (https://www.firmofthefuture.com/). To locate blog content on specific topics, it may be helpful to start with the top menu bar. The client relationships tab includes advising, billing, and client engagement. The efficiency and growth tab covers apps, data and reports, hiring and training staff, and several other topics. In the Apps section, readers who use QuickBooks may enjoy the monthly "What's New in Apps" posts, which profile mobile applications that work with that software (http://bit.ly/2XMkTT1). "Top Tech Tasks to Automate" is an informative discussion of how some technologies can make life easier; the author recommends specific mobile applications to answer the phone, schedule appointments, manage email, log into websites, onboard new clients, and make recurring payments and transactions (http://bit.ly/2qAqj7t). Links to the resources are provided, and readers may wish to save them for future reference. "Understanding Today's Workforce: Generational Differences and the Technologies They Use" is an insightful look into generational characteristics with regard to technology (http://bit.ly/2XNfivK). While most people won't fit the categories exactly as described, it is still beneficial to have some idea of what coworkers' and clients' mind-sets might be. The article suggests considering technology and communication preferences when adapting staff training resources and when creating and maintaining firm networking and relationships. "11 Tips to Prevent File Corruption" is a must-read for anyone who has experienced this problem (http://bit.ly/2DgjZoo).
Although it emphasizes Intuit products, its general recommendations are still extremely useful and include backing up files, logging off from connections when not using an application or resource, making sure that hardware is updated and free from viruses, and using an uninterruptible power supply. The author also suggests not using wireless Internet with QuickBooks Desktop, and the question arises as to whether this might be a consideration with other types of desktop software. Many videos are available on the Video Gallery, ranging from one-minute-or-less snippets to 50- and 90-minute webinars (http://bit.ly/2rlmlj5). Examples of the single-topic offerings include "How and When to Ask for a Client Referral" (http://bit.ly/2qNuXPg), "How to Onboard Clients for Success and Efficiency" (http://bit.ly/35xcYvb), and "Embracing New Technology to Help Clients" (http://bit.ly/34fbTrP). An excellent longer video is "Organizational Design of Modern Practices," a 45-minute webinar originally aired in June 2019 as part of a series on business strategy (http://bit.ly/2OkyvSa). The presenter discusses organization design theory, which is influenced by the professional services offered, the number of employees, and the knowledge base of those employees. Key things to consider include specialization versus simplification, dedicated resources for key functions, creation of redundancies for key roles, continuous evolution of skill sets, and the effect of incentives and motivations on behavior.
A technique to provide constant feedback of "display latency" in web conferencing for each participant to help the presenter adjust the speed of presentation. Publication Date: 2014-Oct-28. The IP.com Prior Art Database.

This article shows statistics of people who are seeing the current slide in web-conferencing-based systems. This helps presenters adjust their speed in situations when many people are having latency issues.

On a web conference there is always a "display latency" for each participant, i.e. there is a time lag between information being presented by the presenter and the same being visible to the participants. Also, the time taken to load the presented information varies from one participant to another, i.e. the display latency for each participant may vary. The display latency, or variation in load time, depends on the connection speed of each participant. A participant with a faster connection will load all information faster than a participant with a slower connection. This poses a problem for the presenter, as he or she needs to wait until all participants have the presented information loaded, else participants with slower speeds will be out of sync with what is being presented. Currently, the only ways to resolve this problem are verbal interventions:

1. The presenter seeks verbal feedback from all participants on whether the presented information has loaded for everyone, OR
2. The participants interrupt the presenter, asking him or her to wait until the information is loaded.

There is no automated way for the presenter to receive feedback, and no way for the participants to control the flow of the presentation. Often, due to time constraints, the presenter is forced to proceed even if some participants have not received all the information being presented.
The participants with the slowest connection speeds are most affected by this issue. Instead of repeatedly interrupting, they eventually drop off, which results in an inconclusive meeting and wasted time. The problem is only compounded when participants are logged into the web conference in listen-only mode; in that case there is no way for them to let the presenter know that they are unable to see what is being presented. The article described herein tries to solve the problem of display latency in a web conference by providing mechanisms:

1. For the presenter to receive continuous, automated feedback about the information visible to the participants. While network or device latency cannot be eliminated while the conference is in progress, constant feedback will help the presenter adjust the speed of presentation seamlessly.
2. For the participant to switch to a lightweight version of the presentation, thereby reducing the load time of slides.

We propose a technique with:

1. A visual indicator for the presenter – The web conference will have a visual indicator for the presenter to know how many participants are seeing the page being presented. This statistic can be shown as a traffic light or a progress bar (percen...
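The proposed visual indicator can be sketched in JavaScript as follows. This is a toy illustration of the idea only, not the article's implementation: each participant acknowledges the slide it has finished loading, and the presenter's indicator shows what fraction of participants are on the current slide. The class and method names are invented for the example.

```javascript
// Track, per participant, the last slide that has fully loaded on their side.
class SlideSyncTracker {
  constructor() { this.ackedSlide = new Map(); } // participantId -> slide number
  ack(participantId, slideNumber) { this.ackedSlide.set(participantId, slideNumber); }

  // Fraction of participants currently seeing `currentSlide` (0..1);
  // this value can drive a traffic light or a progress bar for the presenter.
  syncRatio(currentSlide) {
    const total = this.ackedSlide.size;
    if (total === 0) return 1;
    let synced = 0;
    for (const slide of this.ackedSlide.values()) {
      if (slide >= currentSlide) synced++;
    }
    return synced / total;
  }
}

const tracker = new SlideSyncTracker();
tracker.ack("alice", 5);
tracker.ack("bob", 4); // bob is still loading slide 5
console.log(tracker.syncRatio(5)); // 0.5
```

In a real system the `ack` calls would arrive as messages from each participant's client, which also covers listen-only attendees, since no verbal feedback is required.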
Fix HNSW graph visitation limit bug We have some weird behavior in HNSW searcher when finding the candidate entry point for the zeroth layer. While trying to find the best entry point to gather the full candidate set, we don't filter based on the acceptableOrds bitset. Consequently, if we exit the search early (before hitting the zeroth layer), the results that are returned may contain documents NOT within that bitset. Luckily since the results are marked as incomplete, the *VectorQuery logic switches back to an exact scan and throws away the results. However, if any user called the leaf searcher directly, bypassing the query, they could run into this bug. I ran performance tests and there were no significant latency increases. There do seem to be observable latency decreases though at higher maxConn levels. I am getting slightly different recall. Usually better by 0.001, but worse by 0.001 on glove 100 with nDoc fanout maxConn beamWidth 100000 20 96 500 120 so I am digging into why that may be. Any help there is appreciated. Data (lucene util knnPerf): dim = 100 doc_vectors = constants.GLOVE_VECTOR_DOCS_FILE query_vectors = '%s/util/tasks/vector-task-100d.vec' % constants.BASE_DIR Settings ran: VALUES = { 'ndoc': (100000,), 'maxConn': (32, 96), 'beamWidthIndex': (250, 500,), 'fanout': (20, 100,), } Intuitively, it sounds like a good approach to me to not take live docs into account to find good entry points, as there could be nodes that are good entry points even though they might be marked as deleted or not match the filter? Should we consider never exiting before hitting the zero-th level instead? Should we consider never exiting before hitting the zero-th level instead? 🤔 The idea is that if we cannot even get to the zeroth level before hitting the visitation limit, we shouldn't even bother going through the graph structure anymore as it will likely be slower than an exact match. 
I think exiting with nothing and indicating incomplete makes the most sense to me, as it gives a clear indication that searching the graph structure shouldn't be done. If we end early on the zeroth layer we could return a "kNN" that is nowhere near the actual kNN because we exited early on the zeroth layer. I don't know why that would be any better than exiting before reaching that layer. Sorry, I commented too quickly, before understanding what your change was doing; I thought it was ignoring filtered-out ords on levels > 0 at first. Your change makes sense to me now. OK, I reverted my minor optimizations and moved the method to be more in line with what Lucene did before. Now I am getting exactly the same recall, and the weird bug is fixed where we return partial results that may or may not contain documents not within the acceptableOrds set. I still kept it a unique method to be perfectly clear about what this method is doing. @jpountz @msokolov found my bug 🤦 in the simplified version. I updated, removed the need for tracking candidates & results since we only care about the best found entry point. I don't know what bug I found, but this LGTM Commas are important! Just meant to tell you I found my own bug.
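For readers outside the Lucene codebase, here is a language-agnostic sketch (in JavaScript, not Lucene's Java) of the invariant being discussed: upper layers only pick an entry point and may traverse nodes that fail the filter, while results collected on layer 0 must respect the `acceptableOrds` set. All names and the scoring are invented for illustration.

```javascript
// Toy per-layer candidate collection. On layers > 0 any node can serve as an
// entry point, even one outside the filter (it may still lead to good
// neighbors); on layer 0 only filtered-in nodes may enter the result set.
function searchLayer(candidates, layer, acceptableOrds, k) {
  const results = [];
  for (const node of candidates) {
    if (layer === 0 && !acceptableOrds.has(node.ord)) continue;
    results.push(node);
  }
  return results.sort((a, b) => b.score - a.score).slice(0, k);
}

const acceptable = new Set([1, 3]);
const nodes = [
  { ord: 1, score: 0.9 },
  { ord: 2, score: 0.95 }, // filtered out by acceptableOrds
  { ord: 3, score: 0.8 },
];
// Entry-point search on layer 1 may surface ord 2; layer 0 must not.
```

The bug described above was the layer-0 guarantee leaking: an early exit during entry-point search could return unfiltered candidates as if they were results.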
LOT SIZE SETTINGS WITHIN AUTOTRADER. If you choose the Autotrader option, your autotrader will take lot size details from the settings within the autotrader. First, choose the lot type with the aut_lot_type option (Fixed Lots = Fixed; Flexible Lots = Flexible or % of Deposit). The lot size is set within the aut_lot_size field (regardless of the type of lot you choose). If you use the Flexible Lots option, you must set the flexdepo value as a basis for the lot calculation. More detailed information on how to choose the most efficient lot type, and how lots are calculated, can be found here. All other options are set within the Inputs tab of the autotrader only.

Slippage. Very rarely, by the time an autotrader receives a trading alert, the price may have moved dramatically. Set the maximum slippage level to cancel a trade opening in case the price is more than "Slippage" pips away from the Signalator open price.

noInstant. If you can't set SL/TP levels prior to opening a position, set the noInstant value to true. The autotrader will open the trade first and then update the SL/TP levels.

Cursymbol. For additional security purposes, add the symbol title used by the broker you attach the autotrader to. If your broker has EURUSDecn, EURUSDmini, etc. symbols, simply add this value to this field.

Checkcurrency. In most cases brokers use either common or commonly accepted symbols. When you attach an autotrader to a chart, the autotrader will check whether you are attaching it to the right pair. However, in some cases a symbol may be very uncommon and the autotrader won't be able to validate the currency pair even if you place it on the right chart. Set checkcurrency to false and the autotrader won't validate the currency pair symbol.

Showdetails. By default (true), all operations made by the autotrader are accompanied by text details. If this is set to true, every time a trade is opened, a message with the type of lot and the lot size will be inserted.
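As a hedged illustration of the two lot types described above, here is a small sketch in JavaScript. The exact Signalator formula for Flexible lots is not given in this text; scaling the base lot size by the ratio of the account balance to flexdepo is an assumption made purely for illustration.

```javascript
// Sketch of lot-size selection. The Flexible formula (balance / flexdepo
// scaling) is an assumed interpretation, not Signalator's documented math.
function computeLotSize({ lotType, lotSize, flexdepo, accountBalance }) {
  if (lotType === "Fixed") {
    return lotSize; // aut_lot_size used as-is
  }
  if (lotType === "Flexible") {
    // Scale the base lot size by how the balance compares to flexdepo.
    return lotSize * (accountBalance / flexdepo);
  }
  throw new Error(`Unknown lot type: ${lotType}`);
}

computeLotSize({ lotType: "Fixed", lotSize: 0.1 }); // returns 0.1
computeLotSize({ lotType: "Flexible", lotSize: 0.1, flexdepo: 1000, accountBalance: 2000 });
```

Under this assumed scaling, a flexdepo of 1000 with a balance of 2000 would double the base lot size.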
Troubleshooting. If for some reason you experience any difficulties with an autotrader, keep this option set to true (the default) to generate detailed information which can help you or Signalator support identify the problem faster. More detailed information is available on the Automated Trading Installation Errors page. Stopsmanage. If you change an SL / TP level manually, the autotrader will normally change it back to the value passed from an alert. To keep your manually entered levels, set stopsmanage to true. Please note: in that case, the autotrader will not update these levels until you set the value back to false. UseSound. If you would like to hear the sounds an autotrader generates, leave this set to true. Sounds are very helpful for confirming that everything is working correctly even without checking the trading platform. WriteLog and WriteMsg. These options are used for storing information about errors and operations made by the autotrader.
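To make the two lot-sizing modes concrete, here is a minimal sketch. This is a hypothetical illustration only, not Signalator's actual formula: it assumes Flexible lots scale linearly with the account balance relative to the flexdepo reference deposit, and the function name and parameters are invented for this example.

```python
# Hypothetical sketch of the Fixed vs. Flexible lot modes described above.
# Assumption (not from the docs): Flexible lots scale linearly with balance,
# using flexdepo as the reference deposit.

def lot_size(aut_lot_type: str, aut_lot_size: float,
             balance: float = 0.0, flexdepo: float = 1.0) -> float:
    if aut_lot_type == "Fixed":
        # Fixed mode: always trade exactly aut_lot_size lots.
        return aut_lot_size
    if aut_lot_type == "Flexible":
        # Flexible mode: aut_lot_size lots per flexdepo of balance.
        return round(aut_lot_size * balance / flexdepo, 2)
    raise ValueError("unknown lot type")

print(lot_size("Fixed", 0.5))                                   # 0.5
print(lot_size("Flexible", 0.1, balance=5000, flexdepo=1000))   # 0.5
```

With this convention, doubling your balance doubles the Flexible lot size while the Fixed mode is unaffected.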
OPCFW_CODE
Also, in cases where a piece of data needs to be pushed into the blockchain, a DApp developer needs to know how to achieve that. Access a vast pool of skilled developers in our talent network and hire the top 3% within just 48 hours. As a Toptal qualified front-end developer, I also run my own consulting practice. When clients come to me for help filling key roles on their team, Toptal is the only place I feel comfortable recommending. A white paper is a hybrid document that tries to sell the technical aspects of a project in a way that can be understood by the average reader. We use cryptography to verify the sender/creator of a specific transaction. Without it, every operation could be easily reassigned and the network could be corrupted. The first widespread implementation was bitcoin, created by Nakamoto and launched in January 2009. Since then, many different applications have received publicity. LaborX currently supports two major blockchains: Ethereum and BNB Chain. This role can be hard to fill because a writer needs to live in two contexts at the same time, having expertise in both business and technical aspects. He or she needs to create a document where the hard technical aspects are presented in a way that shows off potential business benefits. Compare the Quotes you receive and hire the best freelance professionals for the job. SafePay provides payment protection for online freelancing on Guru. It is a shared account funded by the Employer before starting work. Employers can feel secure that payment will be made once they are satisfied with the work. Blockchain is an ingenious way of transferring information from point A to point B in a safe and fully automated manner. Ken enjoys system design and is adept at creating technical solutions around business use cases. The growth of the blockchain sector has led to the creation of a wide range of Web3 Jobs.
These may be specific to the Web3 space, with roles including smart contract developers and marketing and communications experts with a strong understanding of the technology. I am looking for a Quick Wealth Management based Demo project in .NET who can integrate Blockchain Security, Quantum key distribution or any Secure Key Management system. This project mainly focuses on demonstrating the Advanced Security features for a Wealth Management Solution. These problems are sometimes less critical in private networks. But still, in some cases, you cannot guarantee that every node will be fair, and the developer should be able to handle such situations arising from the limitations of the network. One main difference is in the target time for resolving the puzzle. Unlike with the original email context for hashcash, on average, a new bitcoin block is signed every ten minutes. Alex is a software developer primarily specializing in blockchain technology, having developed for both commercial applications as well as innovative initiatives. Serkan is a seasoned software engineer with 14 years of experience in consultancy and in-house development. For the last four years, he’s been working as a member of a rapid prototyping team at Daimler AG, focusing on blockchain projects. Previously, he worked as a consultant for different companies and projects for 6+ years. As a software consultant, he led development teams of up to ten developers and worked on employee relations, learning management, and eCommerce applications. We’re a Fortune 1000 business transforming a critical part of our system to be powered by blockchain technology. We need multiple engineers who are familiar with blockchain and related technologies. This cross-functional project will take our business to the next level at global scale. We are a medium-sized business in the logistics space looking to use blockchain to optimize our process and integrate our services with users/customers. 
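The hashcash-style proof-of-work mentioned above can be sketched in a few lines. This is an illustration only, not Bitcoin's actual implementation: real Bitcoin compares the block-header hash against a numeric target that the network retunes every 2016 blocks so that blocks arrive roughly every ten minutes, whereas this toy version just counts leading zero hex digits.

```python
# Toy hashcash-style proof-of-work: find a nonce whose SHA-256 digest
# starts with `difficulty` zero hex digits. Illustrative only.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("example block", difficulty=4)
print(nonce)  # first nonce whose hash has 4 leading zero hex digits
```

Raising the difficulty by one hex digit multiplies the expected work by 16, which is the knob that lets a network steer the average time between blocks.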
Set the rate you want and enjoy a steady stream of income without the overhead. We handle all billing and invoicing directly with clients, so you can focus on your remote work engagements. Choose from multiple payment methods with SafePay payment protection. Create your free job posting and start receiving Quotes within hours. This problem, known as distributed consensus, cannot be solved in all cases. But a digital currency is just one particular case, and Nakamoto was able to solve it. Toptal offers a no-compromise solution to businesses undergoing rapid development and scale. Every developer working on smart contracts should know as much as possible about these problems and should be able to write solutions. Logan is a developer with expertise in the blockchain space. As an entrepreneur, he has conceptualized and delivered many of his own products. At Flexiple, freelance developers work on Blockchain developer jobs with top tech startups & companies. All the jobs are fully remote, with your payments protected by Flexiple. Block data contains all operations not yet included in other mined blocks. They have to have been accepted by a miner, verifying that the transactions do not break any conditions or rules of the network. In most cases, there is an upper limit to the amount of data that can be included in a single block. This model allows corporations to quickly assemble teams that have the right skills for specific projects. They are the first to bring flash loans to Avalanche C-Chain. The blockchain developer they are looking for needs to have prior experience deploying complex smart contract systems to the Ethereum mainnet. Toptal is a network of top blockchain developers, engineers, and consultants. Top companies work with Toptal blockchain engineers to launch ICOs, write smart contracts, create Dapps, and more. Daniel is a software engineer focused on functional programming.
OPCFW_CODE
Hi guys, I had to ask this way because I haven't received any help, or I don't understand what's going on. I have had a site for almost 4 years, always using Cloudflare Flexible SSL, and for some months I was paying for Cloudflare Pro. But 6 months ago I had issues with my website and hosting, and I decided to fix it by temporarily changing the CDN to a different one to solve the issue… Now I moved back to Cloudflare 24 hours ago and everything went fine; in less than 10 minutes I was using the Cloudflare service, except for the SSL. Right now I can't use Flexible, Full or Full Strict SSL… I received an automatic answer from a bot saying that I would be issued new SSL certs within 24 hours… but I'm still waiting. My site uses HSTS, so it can't work without SSL; that's why the site is currently working with the origin SSL while waiting for Cloudflare. When I check the SSL panel > Edge Certificates, there is nothing; it says “No certificates.” More than 24 hours have passed and there is no solution ;S… I even received a reply from Cloudflare with this: Cloudflare has observed issuance of the following certificate for mywebsite.com or one of its subdomains: Log date: 2020-04-03 19:29:25 UTC Issuer: CN=Go Daddy Secure Certificate Authority - G2, OU=http://certs.godaddy.com/repository/, O=GoDaddy.com, Inc., L=Scottsdale, ST=Arizona, C=US Validity: 2020-04-01 22:56:41 UTC - 2021-04-01 22:56:41 UTC DNS Names: mywebsite.com, www.mywebsite.com Most certificates are trustworthy. If the data above is surprising or incorrect, please visit https://support.cloudflare.com/hc/en-us/articles/360031379012. This email was requested by one of your Cloudflare account administrators. If you would no longer like to receive it, please disable it under “Certificate Transparency Monitoring” at https://dash.cloudflare.com/?to=/:account/:zone/ssl-tls/edge-certificates#ct-alerting-card. I really don't understand what's happening.
I have always used the Cloudflare service, and the SSL certs were always issued fast enough, in less than 2 hours. But currently I don't see any edge certificate, so I have to use Cloudflare bypassing their proxy (with the clouds in grey), so the site works only with the origin certificates. Right now, if I turn on the clouds, the website stops working, as browsers block access because no certificate is found. I don't know what to do at this point, can someone help me?
OPCFW_CODE
windows_task resource unable to find tasks in nested folders Description The windows_task resource appears to have trouble finding existing tasks that are themselves in folders. All I can seem to do with it at the moment is create a task in a folder (which also creates the folder), but attempting to do anything to that task afterwards, such as delete it, will fail citing that the task cannot be found: windows_task[TestFolder\TestTask] action delete[2018-08-17T13:34:02+01:00] WARN: windows_task[TestFolder\TestTask] task does not exist - nothing to do Chef Version Chef: 14.1.12 Platform Version Windows Server 2016 Replication Case There's actually an example in the Official Docs that I see this issue with straight up: windows_task '\Microsoft\Windows\Application Experience\ProgramDataUpdater' do action :disable end Client Output Running the above example results in: Recipe: test::default * windows_task[\Microsoft\Windows\Application Experience\ProgramDataUpdater] action disable[2018-08-17T13:59:03+01:00] WARN: windows_task[\Microsoft\Windows\Application Experience\ProgramDataUpdater] task does not exist - nothing to do Just to confirm, the Windows schtasks utility sees the task fine: λ schtasks /query /tn "\Microsoft\Windows\Application Experience\ProgramDataUpdater" Folder: \Microsoft\Windows\Application Experience TaskName Next Run Time Status ======================================== ====================== =============== ProgramDataUpdater N/A Ready Any updates on this @Ryuzavi ? Ah sorry! Slipped my mind... Will give it a go today. Hmm, it does seem this isn't as straightforward as initially thought. I've tried the following resource in various guises today and found it would pass on some and fail on others. I initially tried it in Kitchen, which had an even lower version of chef-client (14.0.202), and it passed fine.
But then trying it on some boxes directly via chef-client (zero local mode) with 14.2.0 it did fail again as before: [2018-08-27T09:59:21+00:00] WARN: No config file found or specified on command line, using command line options. Starting Chef Client, version 14.2.0 [2018-08-27T09:59:30+00:00] WARN: Run List override has been provided. [2018-08-27T09:59:30+00:00] WARN: Original Run List: [] [2018-08-27T09:59:30+00:00] WARN: Overridden Run List: [recipe[test]] resolving cookbooks for run list: ["test"] Synchronizing Cookbooks: - chocolatey (2.0.0) - nssm (4.0.1) - windows_firewall (4.0.2) - windows (5.0.0) - test (1.0.0) Installing Cookbook Gems: Compiling Cookbooks... Converging 1 resources Recipe: test::default * windows_task[\Microsoft\Windows\Application Experience\ProgramDataUpdater] action disable[2018-08-27T09:59:38+00:00] WARN: windows_task[\Microsoft\Windows\Application Experience\ProgramDataUpdater] task does not exist - nothing to do (up to date) * windows_task[\Microsoft\Windows\Application Experience\ProgramDataUpdater] action enable[2018-08-27T09:59:38+00:00] FATAL: windows_task[\Microsoft\Windows\Application Experience\ProgramDataUpdater] task does not exist - nothing to do ================================================================================ Error executing action `enable` on resource 'windows_task[\Microsoft\Windows\Application Experience\ProgramDataUpdater]' ================================================================================ Errno::ENOENT ------------- No such file or directory - windows_task[\Microsoft\Windows\Application Experience\ProgramDataUpdater]: task does not exist, cannot enable Resource Declaration: --------------------- # In C:/Users/Administrator/.chef/local-mode-cache/cache/cookbooks/test/recipes/default.rb 2: windows_task '\Microsoft\Windows\Application Experience\ProgramDataUpdater' do 3: action [:disable, :enable] 4: end 5: return Compiled Resource: ------------------ # Declared in 
C:/Users/Administrator/.chef/local-mode-cache/cache/cookbooks/test/recipes/default.rb:2:in `from_file' windows_task("\Microsoft\Windows\Application Experience\ProgramDataUpdater") do action [:disable, :enable] default_guard_interpreter :default declared_type :windows_task cookbook_name "test" recipe_name "default" execution_time_limit 4320 task_name "\\Microsoft\\Windows\\Application Experience\\ProgramDataUpdater" end System Info: ------------ chef_version=14.2.0 platform=windows platform_version=10.0.14393 ruby=ruby 2.5.1p57 (2018-03-29 revision 63029) [x64-mingw32] program_name=C:/opscode/chefdk/bin/chef-client executable=C:/opscode/chefdk/bin/chef-client Running handlers: [2018-08-27T09:59:38+00:00] ERROR: Running exception handlers Running handlers complete [2018-08-27T09:59:38+00:00] ERROR: Exception handlers complete Chef Client failed. 0 resources updated in 15 seconds [2018-08-27T09:59:38+00:00] FATAL: Stacktrace dumped to C:/Users/Administrator/.chef/local-mode-cache/cache/chef-stacktrace.out [2018-08-27T09:59:38+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report [2018-08-27T09:59:38+00:00] FATAL: Errno::ENOENT: windows_task[\Microsoft\Windows\Application Experience\ProgramDataUpdater] (test::default line 2) had an error: Errno::ENOENT: No such file or directory - windows_task[\Microsoft\Windows\Application Experience\ProgramDataUpdater]: task does not exist, cannot enable Bizarre :/ @Ryuzavi could you please try with a version > 14.2.0. I think I mentioned the wrong version, because it's working with master. Ah interesting. It does appear to work with the latest chefdk version - 14.3.37. So it would appear it was an issue introduced after 14.0.202 but then fixed between 14.2 and 14.3.37. I can switch to this version on my boxes for now then to get around the issue. Do you want me to close this? Yes, I think this has been fixed after 14.2.0, and it's not reproducible with the current code on master either.
So you can close this. Let us know if you come across this in newer versions. Will do. Cheers!
GITHUB_ARCHIVE
My home network with 3 OpenWrt routers has 802.11r enabled and working, as evident from the system log and from my mobile devices roaming and maintaining bandwidth. But my Windows 11 laptop stutters each time it roams, and the log indicates a pairwise handshake, which means 802.11r doesn't kick in. Then there is this website that says "Windows® 10 currently doesn't support 802.11r with Pre-Shared-Key (PSK) and Open Networks." Just curious to see if anyone has managed to get their Win10 or Win11 machines working with OpenWrt's 802.11r. Cheers and thank you! If I understand it correctly, then you need an EAP authentication method and a RADIUS server running somewhere in your network. Never done this before... Plus with my IoT devices connected to my wireless network, I suspect a lot of these devices will not know how to connect with EAP enabled. So, basically I will need to live with my laptop not supporting 802.11r? In the end it always boils down to your wireless driver: does it support the needed bits and bolts? And chances are you might be running a new and shiny Windows 10 or 11, but the hardware drivers might just have been bumped with the absolute minimum required - ie to compile against the newer APIs MS might offer, etc. But no actual added features, because that costs money. I suppose this is older hardware you put a recent Windows on? I wouldn't trust what's listed in that link (both the intel and windows ones). Sure, the driver reports .11w/.11r/etc support but, let's take my case with the 9260: in that list it says Yes to .11w, the driver says it supports .11w, yet when I set ieee80211w='2', the 9260 is unable to connect, while a joke realtek usb dongle happily connects. In linux, on the other hand, the problem I face in windows plain doesn't happen: the 9260 happily connects. In windows, no matter what drivers I tried with the 9260, it plain doesn't want to connect as long as ieee80211w='2'. In some older versions of Win 10 it was working.
And I have no clue what is broken and where, because the realtek usb dongle happily connects no matter what Win 10 version I use. The idea is that the driver might just report support for some feature, yet that feature might not work at all in the supported/current Windows version. Even on Windows 11, my NIC doesn't support 802.11r. Which one is it? Might be good to know/post for future use/searches. Mine is not supported... not sure how that's helpful. Or... what to look for when buying a computer with onboard wireless, or buying a USB wireless NIC. Onboard wireless can usually be swapped out, on both desktops and laptops. But then you need to know what not to buy, hence the question... It is indeed easier to blacklist hardware than to whitelist it. Maybe so, but I find lists like "802.11ac devices supported by OpenWRT"... "802.11x devices supported by OpenWRT" much easier for finding what I'm looking for. Anyway, I'm out. It is an older machine, but I swapped out the wifi card for a pretty new Intel Wi-Fi 6E AX210. So even the AX210 doesn't support 802.11r (with PSK anyway). And if what Intel says here is anything to go by, I have a chance of making it work if I change authentication from PSK to EAP. Not about to try though.
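For reference, the OpenWrt side of an 802.11r + PSK setup is typically a handful of UCI options on each wifi-iface in /etc/config/wireless. This is a minimal sketch: the SSID, key, and section name are placeholders, the mobility_domain value is an arbitrary 16-bit hex value (it just has to match across all your routers), and ft_psk_generate_local avoids having to configure R0/R1 key holder lists.

```
config wifi-iface 'default_radio0'
        option ssid 'MyNet'                    # placeholder
        option encryption 'psk2'
        option key 'changeme'                  # placeholder
        option ieee80211r '1'                  # enable fast transition
        option mobility_domain '4f57'          # must match on every AP
        option ft_over_ds '0'                  # FT over the air
        option ft_psk_generate_local '1'       # derive FT keys locally
```

Whether the client actually uses FT is then entirely up to its driver, which is the problem being discussed here.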
OPCFW_CODE
Share - Start Your Own A number of people have asked us how they can start their own "share" party, what equipment and how much effort is involved. Although we have a fair amount of equipment, and invest a fair amount of effort into running "share", you can start your own with only a time, a place, and some form of audio amplification. Don't be afraid to start small, and let the event grow as more people become involved. Find a place which will be available on a regular basis. A club may offer a good soundsystem, but require that your event generates income. Ask if they have free time on an off-night, when monetary expectations are low. Choosing an off-night also means fewer local musicians will have other commitments. Any space in which you can play audio and video will do: a college club room, practice space, warehouse or garage. If the amp has only one audio input, audio artists will have to take turns playing. Taking turns can be a good way to learn new sounds and techniques, but being able to jam is even better. For people to jam together, you'll need multiple inputs into the amp. A four- or eight-input hardware mixer works well. An alternative is to use a computer with a multi-channel soundcard, with its output going to the amplifier and inputs available for jammers. Many laptops and other hardware have a line-in which can be used to daisy-chain many people together, with the last person plugged into the amplifier. On computers, check if the driver for your soundcard supports hardware-through. If not, you will have to use a piece of software which can read from the input and write to the output. Software-through adds latency. For other audio gear, try to avoid passing the input through any eq or effects. One other way to get lots of people in on the jam is to run software which lets people collaborate on one audio output. If possible, place the mixer or audio inputs in the center of the space where people can easily access it.
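On the software-through latency point: each computer in the chain delays the signal by at least one audio buffer per pass (often one for input and one for output), so daisy-chaining adds up. A rough back-of-the-envelope sketch, with the function name and figures chosen just for illustration:

```python
# Rough software-through latency estimate: each pass through a computer
# adds at least one audio buffer of delay.
def buffer_latency_ms(frames: int, samplerate: int) -> float:
    return 1000.0 * frames / samplerate

# A typical 256-frame buffer at 44.1 kHz:
per_pass = buffer_latency_ms(256, 44100)
print(round(per_pass, 1))  # ~5.8 ms per buffer
```

Chain four laptops with input and output buffering each and you are already near 50 ms, which is audible when jamming, so hardware-through (or a mixer) is worth the effort.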
Long cables or a snake can extend the reach of audio inputs. You should announce in advance what form the audio inputs will take, so that people know what cables to bring. If possible, provide a few loaner cables: from minipin (headphone out on laptops) to your audio inputs, and also minipin-to-minipin for chaining computers together. For a video jam, it helps to have one or more monitors or projectors. Although laptop video artists can play directly on their screens, it is not always convenient for other people to see. If only one projector or monitor is available, find out if any of the video artists can bring a video mixer. Another way to get many people in on the jam is to run software which allows people to collaborate on a single display. To enable collaborative software, a data network is required. This means an ethernet hub and cables. Many newer laptops have wireless cards, so a wireless base station will be appreciated. Ethernet crossover cables allow two computers to form a two-person network. For a successful "share" you'll want people to learn about it, and come back. Return participants sharpen their jamming skill; new participants add new sounds and dynamics. Try to run "share" on a regular basis: monthly, weekly, biweekly, etc. If your "share" has a long time slot, you might consider scheduling featured audio and video sets. Announce to lists, local music groups, computing groups, and friends. Make sure the announcement includes all relevant information, such as what cables people need to bring to participate, time and place, and the regular schedule. One person doesn't have to provide all the gear or work. Participants can volunteer for tasks and bring equipment with them. The announcement can include a list of equipment which would be useful if anyone could bring it. Good luck! Once your "share" is underway, drop us a line with the info so we can list it on share.dj.
OPCFW_CODE
Join subscribers on our YouTube channel and enjoy other Divi video tutorials! Horizontal Scroll Tabs On Mobile If you want to maintain horizontal inline tabs on smaller screens, now you can! Just enable the new horizontal tabs scrolling setting, and when the tabs are wider than the available space, it will add a scrollbar so you can touch and slide the tabs sideways. We also added design settings to set the height and color of the scrollbar. Not to be confused with the 32 existing tab content animations, these new tab animations allow you to animate the actual tab when switching between tabs. So now you can set nice UX animations for clicking and switching tabs, like fades, borders that slide, backgrounds that slide, flips, and zooms. These are really nice, and probably more useful than the existing content animations. Let me know what you think and if there are any good ones that we missed. Check the tab animation demos to see these cool new effects in action! We also included an animation duration setting, as this can really make a difference depending on which animation option you use. If you are using any of the background animations, be sure to set a background color for the active tab. If you are using any of the border animations, we actually provide two new settings for styling them in the Design tab > Active Tab toggle. Heading H1-H6 Styling In Tab Content One thing we missed before was design settings for styling any text headings H1-H6 in the content area of the tabs. So now we have added the familiar tabs for H1, H2, etc., each with full text and font design settings. Tab Content Overflow We noticed that some of the animations for the tab content area, such as the wobble and a few others, were cut off by the bounding box of the content area by default. It looks fine either way, and is more of a preference, so we added this new setting to choose whether you want to make the entire animation visible or have it hidden (cut off) around the content area.
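For the curious, horizontal tab scrolling of this kind generally comes down to CSS overflow plus scrollbar styling. Here is a minimal conceptual sketch; the class names are hypothetical and are not the plugin's actual selectors:

```css
/* Hypothetical selectors, for illustration only */
.tabs-nav {
  display: flex;
  overflow-x: auto;     /* slide tabs sideways when they overflow */
  white-space: nowrap;  /* keep tabs on one line */
}
.tabs-nav::-webkit-scrollbar {
  height: 6px;          /* the scrollbar height setting */
}
.tabs-nav::-webkit-scrollbar-thumb {
  background: #7e3bd0;  /* the scrollbar color setting */
}
```

The plugin settings simply expose values like the scrollbar height and color so you don't have to write this CSS yourself.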
Improvements & Fixes We are always improving the plugin by making minor bug fixes and code improvements. You can always check the plugin changelog to see the details. Divi Library Dynamic Content Label When you add dynamic layouts from the Divi Library to your tabs, the label is now clearly marked as “Divi Library.” Alignment Setting For Tabs Now you can align the tabs to the left, center, or right! Perfect for making toggles like this pricing toggle tab layout on our demos page. Updated Content Design Setting Toggles In previous versions, we had only one “Content” toggle in the Design tab. This included everything about the content area as well as the text within it. But we realized that was too limiting, so we now have three toggles: - Content – settings for the content area - Content Text – settings for customizing and styling paragraph text in the content area - Content Heading Text – settings for customizing and styling H1-H6 headings in the content area This change was made for the main Design settings as well as the individual tab settings.
OPCFW_CODE
Recently, I came across some horrible, horrible code. I immediately pasted it into a messaging application and quickly received an expression of solidarity. With what, though? My indignation? Grief? Amusement? I find it suspicious that code I consider horrible tends to have been written by people I already disliked for some reason. Here are some things I have complained about recently: - Code that was the source of a bug that was in retrospect obvious. - Code that could be deleted with no change to the behavior of the program other than, possibly, a performance improvement. - Extremely inefficient code, perhaps extravagant in memory use or accidentally elevating the order of complexity. - Extremely efficient code, in one tiny section of the program that made no measurable contribution to overall performance. - Flow control making use of (a) if/else, (b) monadic combinators, or (c) pattern matching, in lieu of one of the other two. Occupying their own special corner of my lizard brain are the following nomenclatural iniquities: - Ambiguous or (gasp) misspelled names. I got very upset recently about a block of code that contained three nearly identically named variables, differing only in the placement of underscores. - Comical adherence to coding standards. Well, comical to me anyway. I found `val userUuid = UUID.randomUUID()` hilarious. - Symbols that were literally the opposite of the true meaning, for example a list of excluded things called - Symbols that had clearly been repurposed without being renamed, for example `hostname` containing a user count. Here are some things that I often do before complaining about horrible code: - Googling to make sure that the complaint is legitimate. - Learning from such googling that the complaint is not legitimate and then observing my blood pressure fluctuate in an interplay of rising embarrassment and receding indignation. - Recalling such painful personal experiences to illustrate my sense of fairness and stiffen my resolve.
- Recalling that time when I woke up in a cold sweat the night after noisily pointing out someone's mistake that turned out either not to be a mistake or to be my mistake, and deciding not to send my artfully composed email after all. - But not deleting it from Drafts either. - Realizing that, while I know that the code in question is horrible, I don't understand precisely why, and so spending the rest of the afternoon reading about catamorphisms. - Realizing and then suppressing the thought that I wrote code that was horrible in a very similar way, actually quite recently. With reference to that last point: An ignorant person is one who doesn't know what you have just found out. - Will Rogers - This code makes me nervous and raises my cortisol levels. I have OCD, for chrissake. - But my genuine surprise is no less wounding than feigned surprise. - This mess makes it hard to do my job and may lead to outages! - Seriously, what is the objective importance of either my job or your outages? - It's so funny! Existential crises after pondering a world where no code could be called horrible anymore: - Could code in such a world ever meaningfully be called good - or wonderful? Would code not so designated be, by implication, horrible? - But craftsmanship is an objective good, no? - Why? Isn't craftsmanship basically about making nice things for rich people to display ostentatiously? Doesn't all morality come down to aesthetic preference anyway? - Something that I find important is evidently not important to everyone. This makes me moderately sad. - If two people disagree about what is horrible, can anything they say to each other be called communication? - Glimpses of a chaotic hell. Heat death of the universe.
OPCFW_CODE
At the moment, FE is something like 30,000 lines of C++ with some extra MaxScript, Python and SWIG interface scripts. I think we’re using about a dozen file formats for resources, and if we’re not now, we easily will be soon. (Let me count for a moment: .mesh, .material, .program, .compositor, .particle, .cg, .png, .tga, .fnt, .py(c), .femap, .femap.xml, .physics, .cfg (OGRE’s), .cfg (ours), .con, logfiles… okay, so that’s about 16.) That’s easily gonna be in the low to mid 20′s by the time things are done. (We’re going to have a package manifest, separate map metadata/rendering/collision/entity layers, cinematics, savegames…) The point is, there’s a bunch of shit going on and you can get design vertigo if you try to think about it all at once instead of adding things one-at-a-time until it’s complete. I also noticed that we’re ending up with several separate file hierarchies that I’m gonna have to document… 1) the source tree layout; 2) the installation layout; 3) the user profile layout; 4) the package “pattern” layout (standard subdirectories etc. for resources in packages.) If you take into account that the installation layout might end up with a Linux and an OS X variant, and that the source tree would have to have build scripts for both of those cases, that alone is gonna be a few days of work for each case to set up build & install behaviors. I guess I’m just feeling scattered from not being on a real overall plan at the moment. There’s a goal to get the engine ready to do some kind of a demo by December, but that’s very vague and leaves me with a lot of flexibility as far as what needs to be in place by then.
I think the most prudent plan would probably involve getting the entity system improvements done (adding nice light & camera support, cinematic animation, trigger/raycast-only collision volumes, and rudimentary load/save and possibly network ability) whilst actually using them to implement some basic behaviors in test maps so they can be proved to actually work. I do still need to make several decisions regarding the entity system, such as how to actually set up cinematics (the current plan is to allow assignment of controllers to property slots, and use this to hook entities up to sampled movement curves from 3DS), how to organize the volume system (some volumes, like the hit-volumes for limb parts on entities, will actually change from raycast-only volumes to physics bodies, which might mean the collision volume can be in a “physics” or “raycast-only” state, which is very weird), how to make it possible to set up complicated, animating entity groups that only save the minimum amount of data when serializing to file/network (I have a few ideas on this, but I’ve not gotten to implementing them yet), and how to actually interface this with Python scripting. I also need to add more UI element types and improve UI scripting ability so scripts can actually create proper HUDs. On the plus side, in the past week I’ve done some overhauls that were long overdue, such as adding a proper ingame console, adding proper keybind support (which also means player movement is driven by changes to cvars), adding a less primitive (and not Python-bound) configuration system, and moving all of the user profile files to a user directory instead of splattering them in various places in the installation tree. (This also means that theoretically Tim and I can make personal configuration changes without them ending up in SVN and resulting in a ping-pong config tweaking effect. This happens quite a lot with the default map setting and things of that nature.)
The OGRE configuration dialog, which I intend to remove entirely in favor of our own in-game UI, can now be disabled if you don’t want to change config on startup by passing +ogreconfig 0 on the commandline (the commandline more or less works like Quake 3, and is basically just an interface to the console.) I haven’t done much extra work on the LLVM-backed scripting engine concept. I’ve got more pressing stuff to work on with FE. I also haven’t submitted a patch for LLVM to compile properly on VS7.1 either. Nor have I done anything on EL with Pouya for a good month, I think. I think one of the things that’s bugging me about FE is the lack of visual changes recently. I’ve been making big, important changes to the engine side, but it’s still just the same handful of maps with dumb enemies that don’t have collision, so you can’t shoot them. I spend a lot of my time wishing I was making something more visually impressive (or making tools to make more visually impressive things, which is more classic SHilbert.) I think FE definitely has the capability to be impressive, but of course it’s all C++ and any effects I’d want to do have to be possible within OGRE’s framework, which isn’t always the case. I still need to add a proper lighting system and set up a set of shaders for characters and map geometry that look nice and allow for all the different effects I’m likely to need. October is starting soon, so the whole Halloween atmosphere is starting to creep into grocery stores and newspaper ads around here. If I didn’t have experience showing that it’s incredibly unlikely for such a thing to work *cough*, I’d make a small game for Halloween… I’m still toying with the idea, but I’ll probably be too lazy to actually do it. Plus I’ve got way too many other games I’m supposed to be working on anyway. Maybe I’ll just make some kind of pumpkin-head mod for FE or something.
OPCFW_CODE
Why did Hanuman fight his son? I recently found out that Lord Hanuman has a son through the question asked on this site called "Did lord Hanuman have a son". After that, a comment I saw said that he had to fight his son to enter a place. Why did he have to fight him even after he had found his son? His son, Magardhwaj, was the guard of Ahiravan. Hanuman ji went to save Raam ji and Lakshman ji, and the entrance was guarded by his son. This is why they fought. @ABcDexter--Who is the wife of Hanumanji? Hanuman is a confirmed bachelor according to the hearsay of puranic storytellers. What are your sources for the claim that Lord Hanumāna had a son? Yes, it is true that Lord Hanuman has a son. The name of his son is "Makardhwaja", who belongs to the Samudra (sea) and Patal. The story about the fight between Lord Hanuman and Makardhwaja is mentioned in the Adbhut Ramayan. According to this story: When King Ravana was sure about his defeat in the war with Lord Rama, he approached Ahiravan for help. Ahiravan was the king of Patal Lok, and Makardhwaja was the protector of Patal Lok. Ahiravan tricked and kidnapped Shri Rama and Lakshmana and took them to Patala. Lord Hanuman followed them to their rescue, and at the gate of Patal Lok he met his son "Makardhwaja". When Makardhwaja introduced himself as the son of Hanuman and told the story of his birth, Lord Hanuman accepted him as his son. Then Sri Hanuman told him that he had come to save Sri Rama and Lakshmana, but Makardhwaja said: Ahiravan is the master of Patal Lok, and as you are doing your duty, I am also serving my master by protecting Patal Lok. That is why I cannot allow you to enter Patal Lok. Then, to save the lives of Sri Rama and Lakshmana, Lord Hanuman fought and defeated his son Makardhwaja. You can watch this video in Hindi, which describes the Panchmukhi Hanuman story and the complete story and birth details of "Makardhwaja" and his mother. Where is it mentioned in Valmiki Ramayana that Hanuman had a son?
The story of the birth of Makardhwaj is associated with Hanuman burning down Lanka. When Hanuman was extinguishing the fire on his tail, a drop of sweat fell in the ocean and was swallowed by a crocodile (makara in Sanskrit). As a result, a child was born to the makara (crocodile), and the child was named ‘Makardhwaj’. But this is not present in Valmiki Ramayana; it must be present in a later-day version of the Ramayana. Please modify the sentence "Yes, according Valmiki Ramayana, it is true that Lord Hanuman have a son." No proper sources attached.
STACK_EXCHANGE
How to say that you are a teacher because of your education? Imagine you're being interviewed for a job and you're asked to tell about your profession. You're a professional, let us say a teacher, because you studied in a pedagogical university. What is the right way to say that? I'm a teacher by/according to/based on my education. What preposition should be used while using exactly this word order? Thanks! It's not a logical idea to me. Would you say you're a doctor because you went to medical school? There is a phrase I'm a/an teacher/lawyer/plumber by training. I think people use this to say that they learned or prepared to be (a professional) but never practiced it, or no longer practice it. I just think I'd use a completely different approach to describe this in a job interview. I'm a teacher and graduated from a teachers college. Teaching is important to me because.... I enjoy teaching because.... Anything but I'm a teacher because I went to school to be a teacher! @JimReynolds looks like a good answer. And as for whether it's an important thing to say, I think it depends on the place or culture. In some places what you've been formally trained to do can be an important part of your identity. In a job interview, I don't know. @DanGetz I completely agree, and notice that Guz is Russian. If his interviewer is Russian, then he may well want to say this. If his interviewers are a panel of Russians, Brits, and Yemenis, then he will probably get the job because of his good looks. I'm a teacher because that's my job. That's precisely what makes me a teacher: I do it for a living [or for free, but nevertheless I do it daily/weekly etc]. I became a teacher through education & specific teacher-training. So, I'm not a teacher because of my education, although I couldn't have become one without it. I'm a teacher because of my career-choice, enabled by the relevant education & training.
Had I gone from teacher-training college to a job in gardening & home improvement, I would be fully qualified as a teacher but I would not be a teacher. What Jim says ... but without the hyphen. Yes, I understand you cannot become a professional just by getting a degree, but I wanted to express the idea that @JimReynolds mentioned in his comment to my question - that you were trained to do something, but you're not actually doing (or may not be doing) it. I think the great hyphenation debate is somewhat transpondian. I always feel more comfortable with a nice hyphen joining the ideas, though I'm no grammarian ;) I would suggest one of two choices. The first option, as suggested by Jim Reynolds in the comments above, would be: I'm a teacher by training. This is the standard idiomatic way to say that you've studied (and, typically, by implication, completed your studies) for a particular occupation. You could substitute "by education" for "by training" here, if you really wanted to, but at least to my ear, that doesn't sound quite as common or natural. (I decided to check my intuition with Google Ngram Viewer, and it seems to generally agree, ranking "X by training" above "X by education" over the last 100 years or so. Interestingly, refining the query shows that this idiom seems to be most often applied to a few specific professions, the top two being "a lawyer by training" and "an engineer by training". As these are both well-known examples of professions where a formal degree is an essential and often legally mandated requirement for practice, this is perhaps not so surprising.) As Jim points out, using this expression can sometimes carry the implication that you have not actually yet worked, or do not currently work, in the profession that you studied for.
This does not really have anything specifically to do with the idiom as such — it's simply that, with the explicit qualifier "by training" included in the sentence, the reader may assume that the qualifier is actually necessary, and that therefore, by implication, you are not (currently) a teacher in some other sense. Generally, you don't need to worry about this too much, since the intended meaning should be clear from context, anyway. That said, if you wish to make it absolutely unambiguous that the reason you're stressing the "by training" part is to put emphasis on your formal degree in the subject, I would suggest simply rewriting the sentence to explicitly say so: I have a degree in pedagogy. Of course, you should substitute the specific official (English) name of the degree you have, and perhaps include the name of the institute you received it from. To address the phrases you asked about, any of the following is grammatically correct: "I am a teacher by education" "I am a teacher by my education" "I am a teacher according to my education" "According to my education I am a teacher" (although "according to my qualifications" would be more on the mark) "I am a teacher based on my education" "Based on my education I am a teacher" You can put a comma before "I" in each of the cases with reversed order. Whether these statements are true or not depends on your definition of "teacher" as discussed in other answers. I think all of these cases automatically imply that by "being a teacher" you must mean "being qualified as a teacher", not necessarily that you currently work in the profession or ever did. But a pedant or an actual practising teacher might well disagree with that meaning. Because it's unclear whether they can be true, i.e. that education alone makes you a teacher, I don't think any of them is a particularly natural way of putting it. To my ear "I'm a teacher by education" is the best of these options.
The phrasing directly suggests that a "teacher by education" is a different thing from "a teacher". Next best is something like, "According to my education, I am a teacher, but in fact I never worked as one". In your circumstances it would be more natural in English to say "I trained as a teacher", since this avoids the whole business of the slightly unusual (and arguably inaccurate) meaning of "teacher". Or specifically in a job interview, say, "I qualified as a teacher" to emphasise that you do have all the certificates! It's also a bit more natural to talk about "training as a teacher" (or doctor, or philosopher, or brick-layer) rather than "being educated as a teacher", because "training" implies a bit more active use of skills. You might say "I was educated in medicine", and "I trained as a doctor". For teaching, this results in the confusing phrase, "I was educated in education", probably best avoided :-) I think the simplest way to express this is to say, "I studied teaching, but now I am a/an x" or "I have a degree in 'x', but now I work in 'y'." For example, "I have a degree in teaching, but I work in IT." -or- "I studied literature, but now I work in merchandising." I'm a teacher by virtue of my education. If you say that you are a teacher because of your education, that means that you didn't want to become a teacher, otherwise you'd say you became a teacher because it was your dream, or it seemed like a sensible career choice or whatever. Since you got an education which left you no alternative but to become a teacher even though you didn't want to, that then has unfortunate implications. Were you forced into the study? Didn't you think things through before you started? Were you misinformed about the career opportunities granted by the study? Is this really what you want to say?
STACK_EXCHANGE
Unable to write to Windows file share "not enough space" yet the drive has 10TB free I've got a system running Windows Server 2016 Standard. It's got multiple file shares running from a common drive. Periodically, I get an error when trying to save data to a file share on that drive. "There is not enough space" - which is weird, as the drive has 10TB free on it. When I work on the drive LOCALLY - it works fine, no errors. This only seems to be happening when accessing the drive via the file shares... It should be noted: Quotas are NOT enabled. (I checked... twice!) If I delete a stack of files, it seems to 'fix it'... but the problem re-occurs... Permissions aren't an issue, as when I delete files, people can access and use the shares normally. And if one share on that drive starts throwing the error - they ALL do - for ALL users, including me, and I have full admin R/W on all shares... Thoughts? Access the server and: Check for shadow copies (Properties -> Shadow Copies tab) Search for errors: chkdsk /f /r X: (replace X with the drive's letter) - note that if this is your system disk, it will do the check only on reboot, and the /r option on a 10TB disk will take hours Before you do the next one, you should warn your mates (disconnections). Verify SMB using a terminal: net stop server net start server Then, you should check for FS limitations. This ain't very likely to affect free space, but I would check the number of files and directories on the shared drive and ensure you're not hitting any NTFS limitations. It is a very large disk drive! High I/O can cause these errors. Make sure nothing is wasting resources. Use Task Manager or Resource Monitor to check for high disk usage or memory usage. Identify any processes that may be consuming resources excessively. Make sure your network drivers are up to date; outdated or corrupt drivers can cause connectivity issues.
Open Device Manager -> expand Network adapters -> right-click your network adapter and select "Update driver". Check the logs: Open the Event Viewer -> Windows Logs -> System, and search for warnings and errors, even if they don't seem directly related to the issue, but pay most attention to "Disk" events. Check if there are any hidden system files or directories consuming unexpected space. Windows lacks a built-in tool for this; you might have to find a 3rd-party program to search. Recreate the shares (downtime), but before you do that, note the names and permissions! Remove the existing shares and recreate them with the same names and reviewed permissions. If you can, you should modernize: Microsoft has put very little effort into Windows development for the last 25 years, and weird issues are not uncommon.
STACK_EXCHANGE
SD CARD + PETIT FAT BMP DISPLAY Updated on 6/2/2015 The PIC16F876A programmed with Petit FAT reads a bitmap file on the SD card and displays it on the TFT module. The firmware works for SD cards only, because you can format FAT16 on a card of max 2GB. The SD card interfaces with the PIC in SPI mode. The Petit FAT file system reads one file only. It is configured for FAT16; FAT32 can be added. This PIC doesn't have enough RAM for the Write File option. Read more at http://elm-chan.org/fsw/ff/00index_p.html The LCD TFT module used has an 8-bit drive. It includes an SD card socket and a 3.3V regulator. Inputs to the TFT driver are level-shifted from 5V to 3.3V by buffers; inputs to the SD card are driven through resistors to reduce the drive to 3.3V. The PIC firmware can read only 24-bit BMP files. The file has to be 240 pixels wide and 320 pixels high. The firmware removes the bitmap header (54 bytes) and then streams the rest of the file to the TFT. Every 3 bytes are the 24-bit color for one pixel. The bitmap format reads and displays the pixels of the image starting from the bottom left. Small color-coded squares on the screen indicate errors: yellow for SD error, blue for file system error and green for file errors. TO SET UP THE SD CARD: Format the card with FAT16. Create the image file using MS Paint or another program; file size is 226KB, image size 320 pixels high by 240 pixels wide. Name the file "pic.bmp". Save the file as 24-bit BMP. Add the file to the root folder of the card (don't use a directory). FAT 32 PICTURE FRAME This project uses the same circuit with different firmware. It doesn't include Petit FAT. The program displays BMP pictures from the SD/SDHC card formatted FAT32. The code has functions that find the root directory, read the files' locations from the root directory and stream the data to the TFT. The code displays the files in rotation. The software can read only FAT32 and only from the root directory. The file name and size aren't read.
To set up the SD/SDHC card: format the card with FAT32, and add files to the root folder without directories. Each picture file size is 226KB, image size 320 pixels high by 240 pixels wide, saved as 24-bit bitmap. File names have to be 8.3 type, 8 characters max. Files can be created using MS Paint. More about FAT32 in this document: https://staff.washington.edu/dittrich/misc/fatgen103.pdf Good free specifications for SD can be found in the SanDisk PDF: http://alumni.cs.ucr.edu/~amitra/sdcard/ProdManualSDCardv1.9.pdf You are free to use the circuit diagram and software with no restrictions. See also Technical Tips. The RD input of 3.3V is taken from the TFT module. The module supply is 5V. The LCD driver I have is the ILI9341. I bought this module directly from China because the price was more sensible. It is important to note that most products sold on the net by Chinese sellers are defective; you only have to hope that the one you receive after 4 weeks is merely second-hand, and not one that is partly defective or doesn't work at all. Whichever way it is, forget about a refund. Don't buy from http://www.banggood.com: when you order small quantities they label you as a not-valuable customer and send you rubbish out of their bin.
OPCFW_CODE
// closures
// 16.05.17

/////////////////////////////////////////
// functions
/////////////////////////////////////////

func printString(aString: String) {
    print("Hello \(aString)")
}

printString("rawr")

/////////////////////////////////////////
// functions as a constant
/////////////////////////////////////////

let someFunction = printString
someFunction("rawr again")

let anotherFunction: String -> Void = printString
anotherFunction("rawr again")

/////////////////////////////////////////
// functions as a parameter
/////////////////////////////////////////

func displayString(a: String -> Void) {
    a("I'm a function inside a function")
}

displayString(printString)

/////////////////////////////////////////
// extension Int type
/////////////////////////////////////////

extension Int {
    func apply(operation: Int -> Int) -> Int {
        return operation(self) // pass self (Int) to the operation function
    }
}

func double(value: Int) -> Int {
    return 2 * value
}

func closestMultipleofSix(value: Int) -> Int {
    for x in 1...100 {
        let multiple = x * 6
        if multiple - value < 6 && multiple > value {
            return multiple
        } else if multiple == value {
            return value
        }
    }
    return 0
}

10.apply(double)
12.apply(closestMultipleofSix)

/////////////////////////////////////////
// returning functions
/////////////////////////////////////////

typealias IntegerFunction = Int -> Void

func gameCounter() -> IntegerFunction {
    var counter = 0 // captured variable (maintains state)
    func increment(i: Int) {
        counter += i
        print("counter value: \(counter)")
    }
    return increment
}

let aCounter = gameCounter() // calling gameCounter creates an instance of the inner function
aCounter(1)

let bCounter = gameCounter // assigns gameCounter itself, not a counter (doesn't work as one)
bCounter() // this just creates (and discards) a fresh counter function

/////////////////////////////////////////
// capturing variables
/////////////////////////////////////////

aCounter(1)
aCounter(1)
aCounter(4)

/////////////////////////////////////////
// closure expression
/////////////////////////////////////////

func doubler(i: Int) -> Int {
    return i * 2
}

let doubleFunc = doubler
doubleFunc(2)

let doubleNumbers = [1, 2, 3].map(doubleFunc)

let closureExps = [1, 2, 3].map({ (i: Int) -> Int in
    return i * 3
})

/////////////////////////////////////////
// closure expression shorthand syntax
/////////////////////////////////////////

let inferredType = [1, 2, 3].map({ i in return i * 3 })
let implicitType = [1, 2, 3].map({ i in i * 3 })
let shorthandArgs = [1, 2, 3].map({ $0 * 3 })
let trailingClosure = [1, 2, 3].map() { $0 * 3 }
let ignoreParentheses = [1, 2, 3].map { $0 * 3 }
STACK_EDU
Past Testbed Projects Resources for your Project Testbed Cloud Servers ESIP GitHub Repository How to Propose The ESIP Testbed is an environment where technology, standards, services, protocols and best practices can be explored. Projects enter the Testbed through a Request for Proposals (RFP) released by the Products and Services Committee. The Testbed Configuration Board (TCB), a community-led panel, reviews each proposal to determine if it represents innovative research, fosters collaboration among, and provides value to, the Earth science community. The TCB works with project Principal Investigators to refine proposal objectives and decides which projects enter the Testbed. NOTE: This RFP is aimed at new incubation-style projects only, i.e. this is not a call for current Testbed projects to extend funding. 2 Incubation Projects Incubation projects lie in the realm of good ideas ready to be tried out and provide proof-of-concept on new approaches to problems identified by the Earth science data community or by application of new technologies to those problems. Potential incubation projects could include, but are not limited to, those that: - Implement demonstration technologies or activities furthering goals of ESIP committees, clusters and working groups. - Provide an opportunity for a student to investigate an interesting problem. - Investigate ideas related to the 2017 ESIP theme, “Strengthening ties between observations and user communities.” If your project has a relationship to the 2017 ESIP theme, please explain this in your proposal response. 3 Past Testbed Projects Read about past ESIP Testbed projects on the ESIP wiki or Testbed project site 4 Eligibility Requirements - Proposals may be submitted by any ESIP member organization, an individual within such an organization or a team of such individuals. - Civil servants are restricted from receiving ESIP funds. - The Principal Investigator has not been funded through the ESIP Testbed in the past year.
5 Resources for your Project ESIP can provide up to $7,000 for incubation-project funding. Your RFP response should outline a budget and budget justification. 5.2 Testbed Portal The Testbed Portal is the registry of testbed projects, and is available to be used for your project’s documentation, content registration and document sharing. The URL for the Testbed Portal is http://testbed.esipfed.org 5.3 Testbed Cloud Servers If you need a location to host your project for development, testing, and/or user feedback, the Testbed can provide cloud server instances for you to use. Include in your proposal a plan for how you will use the cloud server and what it contributes to your project. The cloud instance can remain up for the initial project timeframe, but plans should be made to migrate your project to another environment after the duration of your project in keeping with project lifecycle expectations. Your proposed budget should include estimated costs and duration for running the cloud server instance, what type of cloud resource you need and the plan for user feedback and access of the resource. Where there is collaborative development, team members can share an instance password. 5.4 ESIP GitHub Repository Any resultant code for testbed projects should be archived in GitHub and included in the ESIP GitHub organizational account. The ESIP GitHub site is public, but you will need to be a member of the organization to create a repository here. Please contact ESIP ([email protected]) with your GitHub username for access. If you would like to set up regularly scheduled or group calls, contact ESIP ([email protected]) for GoToMeeting access. 5.6 Meeting Space ESIP provides breakout session capabilities at its bi-annual meetings that can be used for planning sessions, feedback and presentation of your work, and other project-related meetings.
If you are considering a face-to-face meeting or a breakout session at the 2017 Winter Meeting, please submit a placeholder session to the ESIP Commons (instructions) by the October 31 session deadline. 6 Reporting Requirements Projects chosen for award will be expected to do the following: - Create a project page on the Testbed Portal (http://testbed.esipfed.org) that explains what your project will achieve, your project plan and timeline. - Define when and how your project will be concluded; if longer term sustainability is a goal, please describe why it would be maintained in the Testbed longer than the project duration. - During or at the conclusion of your project, post any resultant code and/or a snapshot of the cloud (e.g. Azure or Amazon) instance to the ESIP GitHub organization repository, and place the link on your project page in the Testbed Portal. - You will be asked to present your project progress on a telecon at its halfway point, and also upon its completion. - Presentations/Posters at ESIP Summer and/or Winter meetings are highly encouraged. Specifically, we encourage participation at the ESIP Summer Meeting. 7 How to Propose 7.1 Document Guidelines A proposal should be 3 pages in length or fewer and should specify: - Project description, and if applicable, how the project relates to the 2017 ESIP theme; - How your project supports the mission of an ESIP collaboration area(s); - The ultimate benefit that your solution brings to the ESIP community; - Project plan, budget, and timeline; - Names and descriptions of roles for team members; - If you plan to use cloud server resources, indicate what kind of instance you need, how you plan to use it, and the estimated cost to run the instance for the duration of your project. 7.2 Submission Instructions Submit the 3-page proposal, plus a CV for all participants, to the ESIP Testbed Configuration Board at [email protected] Incubation projects are intended to be roughly 6 months in duration.
You can conceptualize the project’s scheduling and reporting out as follows; please consider this as you propose your project plan: - Project award about two weeks after close of RFP (Nov. 2016) - Present project plan at 2017 ESIP Winter Meeting (Jan. 2017) - Work on project, report out at intervals (Jan. - Jul. 2017) - Present project results at ESIP Summer Meeting (Jul. 2017) Proposals may be submitted up to Nov. 1, 2016 and reviewed thereafter. The ESIP Testbed Configuration Board will alert awardees by Nov. 15th.
OPCFW_CODE
#ifndef THREAD_POOL__TASK_HPP
#define THREAD_POOL__TASK_HPP

#include <concepts>
#include <memory>
#include <type_traits>
#include <utility>

namespace thread_pool::detail {

class TaskPimpl {
public:
    virtual void invoke() = 0;
    virtual ~TaskPimpl() = default;
};

template<typename F>
class TaskPimplImpl : public TaskPimpl {
public:
    explicit TaskPimplImpl(F&& fun) : fun_(std::move(fun)) {}

    void invoke() final { fun_(); }

private:
    F fun_;
};

// Move-only type-erased callable wrapper.
class Task {
public:
    Task() = default;
    Task(Task&&) = default;
    Task& operator=(Task&&) = default;

    template<typename F>
        requires (!std::same_as<std::decay_t<F>, Task>)
    Task(F&& fun) : pimpl_(make_pimpl(std::forward<F>(fun))) {}

    void operator()() { pimpl_->invoke(); } // precondition: not default-constructed

private:
    std::unique_ptr<TaskPimpl> pimpl_;

    template<typename FF>
    static auto make_pimpl(FF&& fun) {
        using decay_FF = std::decay_t<FF>;
        using impl_t = TaskPimplImpl<decay_FF>;
        return std::make_unique<impl_t>(std::forward<FF>(fun));
    }
};

} // namespace thread_pool::detail

#endif //THREAD_POOL__TASK_HPP
STACK_EDU
Solve for $x$ with exponents I am trying to solve an equation to find a value of $x$ like this: $(1.08107)^{98/252}=(1.08804+x)^{23/252}(1.08804+2x)^{37/252}(1.08804+3x)^{38/252}$ That is pretty straightforward using Excel Solver, but I am not quite grasping how to do it by hand. The result is $-0.00323$. Thanks in advance. Welcome to MSE. Please type your questions instead of posting images. Images can't be browsed and are not accessible to those using screen readers. If you need help formatting math on this site, here's a tutorial. There may be a solution more simply rendered than MS Excel's -0.00323. What do you mean, Oscar? Thanks for the reply. A whole number maybe? I am sorry. 1.08 is rounded, I wanted to make the post clearer because I am interested in the resolution and not the result itself. I'll edit it to make it more clear. Moo, I edited the question before to put the decimals because I didn't realize that would be 0. We can use a root finding algorithm, like Newton's Method. Our function is given by $$f(x) = 1.03078 -(x+1.08804)^{23/252} (2 x+1.08804)^{37/252} (3 x+1.08804)^{19/126}$$ The Newton iteration is given by $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \dfrac{ 1.03078 -(x+1.08804)^{23/252} (2 x+1.08804)^{37/252} (3 x+1.08804)^{19/126} }{\left(-\dfrac{23 (2 x+1.08804)^{37/252} (3 x+1.08804)^{19/126}}{252 (x+1.08804)^{229/252}}-\dfrac{37 (x+1.08804)^{23/252} (3 x+1.08804)^{19/126}}{126 (2 x+1.08804)^{215/252}}-\dfrac{19 (x+1.08804)^{23/252} (2 x+1.08804)^{37/252}}{42 (3 x+1.08804)^{107/126}}\right)}$ Starting at $x_0 = 1$, we arrive at $$x \approx -0.003235904357553754$$ Thanks Moo. I put that into Python and the answer matches my work in Excel. Excellent to hear that! Great job. The first step is to ignore the $252$, converting the equation to $$(1.08107)^{98}=(1.08804+x)^{23}(1.08804+2x)^{37}(1.08804+3x)^{38}$$ (that is, raise both sides to the $252$nd power).
Now, since $98=23+37+38$, we can move the left hand side to the right, giving $$1=\left(1.08804+x\over1.08107\right)^{23}\left(1.08804+2x\over1.08107\right)^{37}\left(1.08804+3x\over1.08107\right)^{38}$$ Noting $1.08804=1.08107+0.00697$ and taking logs, we have $$\begin{align} 0&=23\ln\left(1+{x+0.00697\over1.08107}\right)+37\ln\left(1+{2x+0.00697\over1.08107}\right)+38\ln\left(1+{3x+0.00697\over1.08107}\right)\\ &\approx23\cdot{x+0.00697\over1.08107}+37\cdot{2x+0.00697\over1.08107}+38\cdot{3x+0.00697\over1.08107}\\ &={(23+74+114)x+98\cdot0.00697\over1.08107}\\ &={211x+0.68306\over1.08107}\\ &\implies x\approx-0.68306/211=-0.00323725\ldots \end{align}$$ This gets us in the ballpark of the asserted result. In fact, the true answer is somewhere between $-0.00323$ and $-0.003237$: The right hand side of the first equation is larger than $(1.08107)^{98}$ for $x=-0.00323$ and smaller for $x=-0.003237$. The key here is the approximation $\ln(1+u)\approx u$ if $|u|$ is small, which turns out to be the case. One could get a better approximation using $\ln(1+u)\approx u-{1\over2}u^2$, but that would lead to a messy quadratic equation to solve for $x$. Nice approach + 1
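For anyone who wants to check the number without Excel, the Newton iteration described above can be run as a few lines of Python. The starting point and the central-difference derivative are my choices (the numerical derivative just saves typing out the long closed-form $f'(x)$); starting near $0$ is natural since $x$ is a small adjustment:

```python
# f(x) built from the original equation, using the exact left-hand side
# 1.08107**(98/252) rather than the rounded constant 1.03078.
def f(x):
    return (1.08107 ** (98 / 252)
            - (1.08804 + x) ** (23 / 252)
              * (1.08804 + 2 * x) ** (37 / 252)
              * (1.08804 + 3 * x) ** (38 / 252))

def newton(f, x0, tol=1e-12, h=1e-7, max_iter=100):
    x = x0
    for _ in range(max_iter):
        # central-difference approximation to f'(x)
        slope = (f(x + h) - f(x - h)) / (2 * h)
        x_next = x - f(x) / slope
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

root = newton(f, 0.0)
print(root)  # roughly -0.0032, consistent with the Solver value
```

The linear-in-$\ln$ approximation above lands close enough to this root that a couple of Newton steps finish the job.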
STACK_EXCHANGE
BUILD THE BASIC SCREWDRIVER Let’s begin with the easiest possible configuration for the optical screwdriver: a plain-jane Arduino with an LED and three potentiometers. While not particularly wand-like, it gives you an unsexy equivalent that works identically—it makes optical LED flashes that interact with a sensor-equipped synthesizer. The three pots will be used to control the speed, duration, and pauses of the flashing LED. Build the Test Rig Unless you have a light-detecting synth handy, you’ll have to build one. This rig emits a beat that is modified by data sent via the light sensor, so it can interact with the screwdriver. The test rig’s code is a variant of the tonePitchFollower example sketch, which changes a tone based on the level detected by the light sensor. I’ve added a bit more pizzazz by tying the beat length and pause length to the same light sensor, allowing the optical screwdriver to simultaneously modify the beat’s tone, its length, and the time between beats via the light sensor. BUILD THE WAND VERSION WITH AN ATTINY85 Next I’ll show you how to swap in an ATtiny85, a smaller microcontroller than the ATmega328p featured in the Arduino. This is a simpler chip, without the extra bells and whistles of the Arduino board, but it’s also more compact, which allows you to move the screwdriver circuit off the prototyping board and onto a more portable wand. Build the ATtiny85 Screwdriver The next challenge involves turning that loose jumble of wires into something a little more solid. You’ll use a credit card–sized prototyping board from Adafruit and design it so that, besides a power supply wire, you can hold the entire screwdriver in one hand. When you’ve made the changes, upload the sketch using the programming rig mentioned earlier in the chapter—that is, by wiring the ATtiny85 to the Arduino and then loading the code to the Arduino. Once the ATtiny85 is programmed, it can be placed in the socket and you’ll be ready to go.
MAKE A PCB WAND Without the need for a full-sized Arduino, you can make the optical screwdriver much smaller and, well, sexier. In fact, you can make it small enough to fit on a wand-shaped printed circuit board (PCB). The previous version of the screwdriver was for folks who didn’t want to buy or mill their own circuit boards. If you’re up to that challenge, however, this variant of the project is for you. Considered by many to be the simplest form of electronics project, a blinking LED nevertheless can offer some cool challenges and opportunities. This project shows how combining digital and analog makes for an intriguing tool that you can make yourself.
OPCFW_CODE
from multiprocessing import Process

from aoc_solver.solver_event import SolverEvent
from aoc_solver.types import PipeConnection


class Context:
    """
    Container for all connections and processes managed in the file.
    Provides an easy way to clean up these resources when exiting the script.
    """

    conns = []
    procs = []

    def add_conn(self, conn: PipeConnection):
        self.conns.append(conn)

    def add_proc(self, proc: Process):
        proc.start()
        self.procs.append(proc)

    def shutdown(self, signal: int = None, error: str = None):
        """
        Send a TERMINATE event to all connections and join all processes
        to properly shutdown all resources.
        """
        while self.conns:
            conn = self.conns.pop()
            message = {"event": SolverEvent.TERMINATE}
            if signal:
                message["signal"] = signal
            if error:
                message["error"] = error
            conn.send(message)
            conn.close()
        while self.procs:
            proc = self.procs.pop()
            proc.join()


class ContextManager:
    """
    The class provides access to a global Context so we can shutdown
    from anywhere in this script.
    """

    _context = Context()

    @classmethod
    def add_conn(cls, conn: PipeConnection):
        cls._context.add_conn(conn)

    @classmethod
    def add_proc(cls, proc: Process):
        cls._context.add_proc(proc)

    @classmethod
    def shutdown(cls, signal: int = None, error: str = None):
        cls._context.shutdown(signal, error)
STACK_EDU
I need to make a helical cut down a Ø10mm steel bar (think twist drill). I have a Fanuc 10T Model A controller and have been thinking of using G7.1 to turn on cylindrical interpolation. I have found a program example in my manual, but I must be missing some vital information because my machine raises an alarm saying improper G-code whenever I try to run it. My program looks like this:

G96 G99
G50 S500
M9
G0 Z0.
M56
G0 X0. Z-1.
T303
G1 Z0. M3 S75 F.25
G3 X13. Z27.5 R2.5
G0 X14. Z-5.
G1 Z0. F50
G3 Z0. C0 R2.5
G1 Z25. C180
G0 X14. Z27.
G1 X12. Z33.5
G3 X10. Z36. I2.5

Can anyone see an error, or does anyone have any suggestions for a better way of programming? I need 2 helixes with a distance of 180 degrees. Thanks for your response Bill. I have checked the operator's manual, the controller manual and my appendices for both. There is nothing that indicates the function should not exist. Even the papers the machine builder delivered with the machine have tables with cylindrical interpolation listed as an option. Any other suggestions? Only a couple of weeks ago I had to organize the installation of cylindrical interpolation on a client's 18i control. There is no reference in the operators manual to G07.1 being an option on this control either, but it is an option and wasn't listed in the spec sheet as being part of the build. The same alarm as you're experiencing was encountered when programming G07.1. One thing you have to be careful of when using a program that includes cylindrical interpolation is that the function has not been previously activated by programming G07.1 with a cylinder radius specified and then not canceled with a zero cylinder radius; this will also give you an alarm. For this reason, it's helpful to program G07.1 C0 as part of the call-up blocks for the tool involved with the cylindrical interpolation. That was the next problem the client had after the option was turned on.
If the build sheet for your machine indicates that the option is included, contact Fanuc regarding having it turned on; it may have been lost at some point in the life of the machine. If you contact Fanuc or the machine builder, make sure you have the machine's serial number available. Post the actual alarm number you're getting. Last edited by angelw; 04-05-2011 at 09:37 AM. Thank you for your help Bill. It turned out that the option was not available with our machine after all, and we will have to try and be creative in order to get our parts done how we want them. I now have a man investigating what it would take to get this implemented; meanwhile I try to figure out how to program a spiral without having either cylindrical or polar interpolation available. Cylindrical interpolation makes programming circular interpolation on a cylindrical surface quite simple. Basically, you can unwrap the cylindrical surface and program the tool path as if on a flat surface. However, if you only have to machine a helical path you won't need to have cylindrical interpolation; this process is not very difficult. You can use a threading cycle for a helical path with uniform pitch. From your initial description of what you wanted to achieve, "I need to make a helical cut down a Ø10mm steel bar (think twist drill)", I didn't think a threading cycle would be any help to you. However, if it's only a helical groove that's required, you will be able to do that with C and Z axis moves without having to have cylindrical interpolation. Last edited by angelw; 04-14-2011 at 11:54 AM. I need Z to push forward, while slowly turning C, as well as using a rotating tool. With a thread cycle this will not function. I was considering simply coding the coordinates and using G1 to move Z a few hundredths of a millimeter, then C, then Z again, but the program would be thousands of blocks and then I would have a memory problem. Correct, a threading cycle will not function in C axis mode.
Is your machine capable of simultaneous C and Z interpolation? This is quite different to having and using the cylindrical interpolation option. If so, once the milling tool is positioned at the X, Z, C start position, try programming C and Z together, ie the angular (C) and Z move to the end of the helix. Another problem you will have doing it the way you're considering, using many small moves, is that you will not achieve anything near the feed rate that would be reasonable. With a 10T control, I'd expect that you may only achieve a feed rate of maybe 30mm or 40mm per min. maximum, notwithstanding that you program something much higher. The reason for this is that the motion will actually decelerate to zero at the end of each move and must try to accelerate up to the programmed speed at the commencement of the next move. However, with very small moves, there will be insufficient length in the move for the motion to reach the programmed speed before having to start the deceleration ramp to the end of the motion block. Accordingly, the motion will reach what velocity it can and that's it. You can get around the length of the program issue by running the program as a DNC exercise from a PC, but if cycle time is at all important, the feed rate will be an issue.
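For the thousands-of-blocks approach discussed above, the blocks at least don't have to be typed by hand. Here is a sketch of a generator (plain Python; the helix length, C sweep, step size, and feed rate are illustrative assumptions, not values for any particular part):

```python
# Sketch: emit many tiny G1 Z/C moves approximating a helical groove,
# suitable for DNC drip-feed. Assumed values: 25 mm of Z travel,
# 180 degrees of C rotation over that length, 0.05 mm Z steps.
def helix_blocks(z_len=25.0, c_total=180.0, z_step=0.05, feed=40.0):
    blocks = [f"G1 Z0. C0. F{feed}"]
    steps = int(round(z_len / z_step))
    for i in range(1, steps + 1):
        z = z_step * i
        c = c_total * z / z_len  # C tracks Z linearly: constant pitch
        blocks.append(f"G1 Z{z:.3f} C{c:.3f}")
    return blocks

program = helix_blocks()
# 501 blocks for one helix; the second helix is the same path with
# every C value offset by 180 degrees.
```

As noted in the thread, a single simultaneous C-and-Z move is far better if the control supports it; this generator is only the fallback, and the feed-rate ceiling from per-block deceleration still applies.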
OPCFW_CODE
- 7+ years of industry experience as a Certified Scrum Master, Project Lead, Sr. Developer working with Banking domain, E-Commerce domain, and Retail domain. - Highly acquainted with various phases of the project life cycle and various SDLC methodologies such as Waterfall, Agile (Scrum), Waterfall-Scrum Hybrid. - Expertise in training and implementing change for organizations seeking to become Agile. - Proficient in working with various stakeholders in gathering requirements, analyzing and implementing business requirements into the software application development process. - Managed a cross-functional team of 15-20 software developers and testers with a budget of $1.5M-$2M. - Exposure and experience in various Software Development Life Cycle methodologies such as Waterfall, Agile-Scrum, and Waterfall-Scrum Hybrid environments. - Extensive experience in conducting different sessions with various stakeholders to gather requirements, analyze the requirements and document the requirements for fostering the other phases in the project life cycle. - Valuable experience with various components such as Actors, Triggers, Preconditions, Postconditions, Normal flow, Alternate flows and Exceptions in documenting Use Cases. - Expertise in tracing requirements throughout the development process and verifying their adherence to the Requirement Traceability Matrix (RTM), with proven success in Eliciting Requirements, Impact Analysis, Cause and Effect, Cost Benefit Analysis, Risk Assessment, ROI Analysis and SWOT Analysis. - Proven experience transitioning teams to Agile/Scrum from Waterfall methodology. - Expert in using JIRA and RALLY as tools for requirement documentation, user story tracking and generating various scrum artifacts and reporting to the product owner. - Proficient in creating user stories, estimating the size of user stories and prioritizing them.
- Hands-on experience in conducting various Scrum ceremonies such as Sprint Planning Meeting, Daily Scrum, Sprint Review Meeting, Retrospective Meeting and Product Backlog Grooming. - Knowledge of other practices such as Kanban and SAFe. - Good knowledge of different technologies (JAVA, WebMethods, .NET, AWS). - Knowledge and experience in Service Oriented Architecture (SOA) framework, XML, Ajax, HTML, WSDL, Web Services, Client Side and Server Side Validations. - Ability to work effectively as a team member and servant-leader. - Expertise in developing Test Cases, Test Plans and procedures in different test environments. Collaborated on and monitored various types of testing such as Black-Box Testing, Smoke Testing, Regression Testing, and User Acceptance Testing (UAT) for verifying and validating the product. - Proficient in Microsoft Office tools like MS Access, MS Visio, MS Excel, MS Word, MS PowerPoint and MS Project. Operating System: Windows Vista/XP/7/8/8.1/10 Modelling Tools: Microsoft Visio, Wireframes SDLC Methodologies: Waterfall, Agile (Scrum), Waterfall-Scrum Hybrid. Enterprise App Integration: WebMethods, WebLogic, TIBCO Requirement Management: MS Office, Confluence, JIRA, Rally, HP ALM 11.5, Rational ClearQuest Project Management Tools: MS Project, MS SharePoint 2010, 2013 Testing Tools: Quality Center, WinRunner, LoadRunner, and QuickTest Pro (QTP) Reporting Tools: Tableau Desktop, IBM Cognos, MS Office Suite, TIBCO Spotfire. Languages: HTML, XML, JAVA, JavaScript, .NET, C, C++ Databases: MySQL, SQL Server, MS Access, Oracle 9i, 10g, 11g. - Facilitate all Scrum events for the team including preparing, moderating and post-processing. - Support the Product Owner in their role in writing effective user stories, planning releases, and prioritizing value. - Remove team impediments. - Help the team to keep focus (e.g. by acting as a buffer between external distractions and the team).
- Help the team to maintain their Scrum tools (Story board, Action board, charts, backlogs, etc.). - Help the team and product owner to find suitable norms, definition of ready, and definition of done. - Mediate team members through conflict; help the team to make decisions and foster self-organization. - Reflect Agile and Scrum values to the team - Coaching Scrum Masters in conducting effective Scrum events, Sprint planning meetings / Sprint Review Meeting / Sprint Retrospective / Backlog grooming meetings. - Leading and managing onshore and offshore resource pool of 20 plus resources. - Working with Directors & AVPs to remove impediments that are preventing teams from accomplishing their goals. - Assisting teams in improving with facilitated conversations, metrics, feedback loops, and information radiators - Created Use-cases and Sequence diagrams using MS Visio for Client Authentication and other high- level requirements. - Responsible for Identifying and/ or evaluating new and emerging technologies. - Instrumental in creating a knowledge repository, that considerably reduced the production support turnout time and maintenance. - Responsible for customer engagement and satisfaction. - Responsible for project quality and timelines. - Responsible for providing client senior management with status updates. - Conducted Requirement Workshops with Stake holders and SMEs (Subject Matter Experts) to gather the Business Requirements. - Identifying success between Agile & Waterfall: conducted analysis to identify gaps between current business process and the methodologies. - Analyzed business requirements to identify and document different business needs by level of importance to create Use Cases, and activity diagrams. - Analyzed business requirements and their feasibility in areas such as timelines, implementation aspects, and was responsible for communicating the same back to business team. 
- Participated in the research, development of business opportunities and brainstorming sessions for ideas within the scope of the project. - Conducted peer review meetings periodically to keep track of the project’s milestones. - Prepared and presented weekly project reports to senior management. - Interacted with the stakeholders and the subject matter experts (SMEs) to get a better understanding of client business processes and elicit requirements in sync with the scope of the project. - Created UML diagrams including Use Case Diagrams, Sequence Diagrams, Data Flow Diagrams (DFDs), ER Diagrams and defining the Business Process Model and Data Process Model. Software Quality Analyst - Program Development: Big Data, Java, SaaS and Web GUI. Sr. WebMethods Developer - Involved in creating the functional requirement and design specification documents. - Installed, configured and managed the EDI adapter on Integration Server to exchange EDI documents with suppliers. - Worked with the JDBC Adapter to communicate with Oracle and SQL Server. - Used Trading Networks, enabling links with other companies (buyers, suppliers) and marketplaces to form a business-to-business trading network. - Created Partner Profiles, defining document exchange, and establishing business process rules between buyers and suppliers using the Trading Networks console. - Implemented EDI ANSI X12 4010 version. - Implementation experience with EDI transactions like 810 Invoice, 820 Payment Order, 850 Purchase Order, 856 ASN and 997 Functional Acknowledgement of the ANSI X12 EDI standard. - Worked with EDI transaction sets of types PO, Invoice and PO Acknowledgement that were received for processing and converting into client-specific XML format. - Used Broker and the Pub-Sub Model for document exchange between all the internal applications. - Worked on XML validations using the validate built-in service, against the schemas. Created Schemas based on DTDs and XSDs.
- Conducted meetings with Business and SMEs (Subject Matter Experts) to gather the Business Requirements. - Responsible for coming up with the As-Is and To-Be scenarios for the business process model. - Worked on understanding and translating client business requirements into technical specifications. - Defined the Solution architecture and systems design for the proposed integration. - Designed MQ objects requirements and created MQ Queue Managers, MQ Queues and MQ channels to connect to the end systems. - Designed various technical modules of the solution and communicate to client and team members. - Review the Java code and ensure the requirements are implemented. - Handled the performance tuning of applications. - Created SQL queries and stored procedures for transaction history and cleanup of DB. - Communication to offshore development and track the status of work. - Accountability of overall solution delivery. - Develop Interfaces to send the shipment info, Order info, and ASN info, retrieve the data, map to Canonical and publish to Broker. - Develop Interfaces to receive the shipment info, Order info, and ASN info, retrieve the data, process the data and push to SAP/Internal systems. - Split the existing interfaces based on business functionality between the two companies. - Remediate code and property files based on the new server info. - Unit test all the interfaces after remediation. - Integration testing with the source & target systems. - Migrate packages between the different environments. - Performed administrative tasks such as migrating code between various environments, setting up multiple database aliases (Oracle), Access Control Lists, Groups and Users. - Extensively used Coors Logging & Error-Handling Framework for Automatic-Retry mechanism when any servers are unavailable. - Created utility Java services necessary for the Interfaces. 
- Prepared the high-level Technical-Design documents, mapping document based on the Functional Specifications and the UTP (Unit Test Plan) documents for each Interface I worked on.
OPCFW_CODE
On Mon, 2018-05-21 at 11:52 +0200, Pavel Březina wrote: On 05/18/2018 09:50 PM, Simo Sorce wrote: > On Fri, 2018-05-18 at 16:11 +0200, Sumit Bose wrote: > > On Fri, May 18, 2018 at 02:33:32PM +0200, Pavel Březina wrote: > > > Hi folks, > > > I sent a mail about new sbus implementation (I'll refer to it as > Sorry Pavel, > but I need to ask, why a new bus instead of something like varlink ? This is an old work; we did not know about varlink until this work was already finished. But since we still provide a public D-Bus API, we need a way to work with it anyway. Ack, thanks, wasn't sure how old the approach was, so I just asked :-) > > > Now, I'm integrating it into SSSD. The work is quite difficult since it > > > touches all parts of SSSD and the changes are usually interconnected, but > > > slowly moving towards the goal. > > > > > > At this moment, I'm trying to take the "minimum changes" path so the code can > > > be built and function with sbus2; however, to take full advantage of it > > > will take further improvements (that will not be very difficult). > > > > > > There is one big change that I would like to take though, that needs to be > > > discussed. It is about how we currently handle sbus connections. > > > > > > In the current state, the monitor and each backend creates a private sbus server. > > > The current implementation of a private sbus server is not a message bus; it > > > only serves as an address to create point-to-point nameless connections. > > > Each client must maintain several connections: > > > - each responder is connected to the monitor and to all backends > > > - each backend is connected to the monitor > > > - we have monitor + number of backends private servers > > > - each private server maintains about 10 active connections > > > > > > This has several disadvantages - there are many connections, we cannot > > > broadcast signals, and if a process wants to talk to another process it needs to > > > connect to its server and maintain the connection.
Since responders do > > > currently provide a server, they cannot talk to each other. > This design has a key advantage: a single process going down does not > affect all other processes' communication. How do you recover if the > "switch-board" goes down during message processing with sbus ? The "switch-board" will be restarted and other processes will reconnect, the same way as it is today when one backend dies. Yes, but what about in-flight operations ? Both client and server will abort and retry ? Will the server just keep around data forever ? It'd be nice to understand the mechanics of recovery to make sure the actual clients do not end up being impacted by lack of service. > > > sbus2 implements a proper private message bus, so it can work in the same way > > > as a session or system bus. It is a server that maintains the connections, > > > keeps track of their names and then routes messages from one connection to > > > another. > > > > > > My idea is to have only one sbus server managed by the monitor. > This conflicts with the idea of getting rid of the monitor process; I do > not know if this is currently still pursued, but it was brought up over > and over many times that we might want to use systemd as the "monitor" > and let socket activation deal with the rest. I chose the monitor process for the message bus, since 1) it is stable, 2) it is idle most of the time. However, it can be a process on its own. Not sure that moving it to another process makes a difference; the concern would be the same I think. That being said, it does not conflict with removing the monitoring functionality. We only leave a single message bus. Right, but at that point we might as well retain monitoring ... > > > Other processes > > > will connect to this server with a named connection (e.g. sssd.nss, > > > sssd.backend.dom1, sssd.backend.dom2). We can then send a message to this > > > message bus (only one connection) and set the destination to a name (e.g. > > > to invalidate memcache).
We can also send signals to this bus and it will > > > broadcast them to all connections that listen to these signals. So, it is > > > the proper way to do it. It will simplify things and allow us to send > > > signals and have better IPC in general. > > > > > > I know we want to eventually get rid of the monitor; the process would > > > act as an sbus server. It would become a single point of failure, but the > > > process can be restarted automatically by systemd in case of a crash. > > > > > > Also here is a bonus question - do any of you remember why we use a private > > > server at all? > > In the very original design there was a "switch-board" process which > > received a request from one component and forwarded it to the right > > target. I guess at this time we didn't know a lot about DBus to > > implement this properly. In the end we thought it was a useless overhead > > and removed it. I think we didn't think about signals to all components > > or the backend sending requests to the frontends. > > > Why don't we connect to the system message bus? > > Mainly because we do not trust it to handle plain text passwords and > > other credentials with the needed care. > That, and because at some point there was a potential chicken-egg issue > at startup, and also because we didn't want to handle additional error > recovery if the system message bus was restarted. > Fundamentally the system message bus is useful only for services > offering a "public" service, otherwise it is just an overhead, and has > security implications. Thank you for the explanation. > > > I do not see any benefit in having a private server. > There is no way to break into sssd via a bug in the system message bus. > This is one good reason, aside from the others above. > Fundamentally we needed a private structured messaging system we could > easily integrate with tevent.
The only usable option back then was > dbus, and given we already had ideas about offering some plugin > interface over the message bus we went that way so we could later reuse > the integration. > Today we'd probably go with something a lot more lightweight like > > If I understood you correctly we not only have 'a' private server but four > > for a typically minimal setup (monitor, pam, nss, backend). > > Given your arguments above I think using a private message bus would > > have benefits. Currently two questions came to my mind. First, what > > happens to ongoing requests if the monitor dies and is restarted. E.g. > > if the backend is processing a user lookup request and the monitor is > > restarted, can the backend just send the reply to the freshly started > > instance and the nss responder will finally get it? Or is there some > > state lost which would force the nss responder to resend the request? It works the same way as now. If a backend dies, responders will reconnect once it is up again. So no messages are lost. If the message bus dies, clients will reconnect and then send awaiting replies. Also the sbus code will be pretty much stable, so it is far less likely to crash (of course I expect some issues during review). So you expect requests to still be serviceable if the message bus dies. How does a client find out if a service dies and it needs to send a new request ? Will it have to time out and try again ? Or is there any messaging that lets a client know it has to restart asap ? And if the message bus dies and a service dies before it comes back up, how does a client find out ? > How would the responder even know the other side died; is there a way > for clients to know that services died and all requests in flight need > to be resent ? If a client sends a request to a destination that is not available, it will return a specific error code. The client can decide how to deal with it (return cached data or an error, resend the message once it is available).
I am not concerned with messages sent while a service is down, more about what happens while the client is waiting. There are D-Bus signals (NameOwnerChanged/NameOwnerLost/...) that can be used as well. Given our use case, we can queue it in the message bus until the destination is available (this is currently not implemented, but it is doable). It is important to recover speedily; if at any point a crash leads to a cascade of timeouts this will be very disruptive, and will have a much bigger impact than the current behavior. > > The second is about the overhead. Do you have any numbers on how much > > longer e.g. the nss responder has to wait e.g. for a backend-is-offline > > reply? I would expect that we lose more time at other places, > > nevertheless it would be good to have some basic understanding about the > > overhead. This needs to be measured. But we currently implement sbus servers in busy processes, so logically it takes more time to process a message than routing from a single-purpose process. I do not think this follows. Processing messages is relatively fast; the problem with a 3rd process is 2 more context switches. Context switches add latency, thrash more caches, and may cause severe performance drops depending on the workload. > Latency is what we should be worried about; one other reason to prefer > direct connections is that you did not have to wait for 3 processes to > be awake and be scheduled (client/monitor/server) but only 2 > (client/server). On busy machines the latency can be (relatively) quite > high if an additional process needs to be scheduled just to pass along > a message. This needs to be measured in such an environment. Yes, it would be nice to have some numbers with clients never hitting the fast cache and looping through requests that have to go all the way to the backend each time.
For example creating an ldap server with 10k users, each with only a private group, and then issuing 10k getpwnam requests, and seeing the difference between current code and new code. Running multiple tests in the same conditions will be important, i.e. a first dummy run to prime LDAP server caches, then a number of runs to average over. Sr. Principal Software Engineer Red Hat, Inc
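The client side of the measurement suggested above can be sketched in a few lines of Python (stdlib only; `time_lookups` is a hypothetical helper, and a real comparison would disable or bypass the sssd fast cache so that each lookup actually reaches the backend):

```python
import pwd
import time

def time_lookups(username, n=1000):
    # Time n getpwnam() calls; each call goes through NSS (and thus
    # through sssd, if configured). Returns mean latency per lookup
    # in microseconds.
    start = time.perf_counter()
    for _ in range(n):
        pwd.getpwnam(username)
    elapsed = time.perf_counter() - start
    return elapsed / n * 1e6
```

Comparing this number between the current direct-connection code and the message-bus code would expose the cost of the extra context switches discussed in the thread.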
OPCFW_CODE
I have made an app with 12 quizzes. I have app variables for each question. I have the correct answers set equal to the proportion of 100% they represent - 8-question quiz = each correct answer is worth 12.5. I am using an equation that looks like this… screenshot of variables created screenshot of get results button logic screenshot of working equation screenshot of non-working equation for subsequent quizzes I have a feeling it's something stupid that I just don't know about as I'm new here. Thank you much for the help if you can - greatly appreciated - this is holding up the app launch sadly. First I would change the type of variable to number instead of text. Second, if you want to do a simple addition, maybe try simply like that with a plus sign: AppVar1 + AppVar2 + etc… Hey @Anthony_Sargysian, your project seems to be interesting. I'm working on a similar theme and would love to understand more about your architecture. I'm at my initial phase of design and feasibility and think any insights from you would greatly help. Greatly appreciate your input. how would you write the code string? can you give an example please? sure but… what are you creating? tell me about your project bud… my architecture is fairly simple to be honest - I had to teach myself how to use basic API functions for this thing, much the same way a CMS works in a website like Webflow sites, so my back end is not very complex… how do you mean architecture? My project is simple too. In simple words, it's building a Trivia with a Ranking/Grading system to it. Very high-level features simply put are that the Trivia has multiple Subjects (called Epics) and each Subject can have multiple Packs of quizzes called Episodes. Each Episode may have one or more Units of bite-sized chunks with approx 10 MCQ (Multiple Choice Questions) to answer in each Unit that earns a score.
Other related features include (a) pre-quiz text to read, (b) quiz practice mode (no points), (c) re-take any number of times to earn more scores, etc. Below is the aerial view of the structure we are imagining. We are novices too for this kind of project, self-learning and looking for expert advice to execute this project. So, sharing your design/development could help us if that's ok. Thanks in advance. mine only has simple quizzes - true/false questions - a value for true added up gives a score - it's a diet site to see what kind of diet is best for the person to lose weight = different diets for different types of metabolism No knowledge is too small bud. I'm very new to the mobile development world and an infant I should say. I'm more of a functional guy and trying to explore this no-code app builder. So you can share your app's business logic if that's ok with you (not sure how one could do that). But your project is definitely an interesting one mate. appVars.variable01 + appVars.variable02 + … Click on the left menu to find the variables and double click them to place them in the input field. I just got in there and got my hands dirty and kept going till this is all done except this equation, which is driving me insane to be honest. This is what I get when I do that with parentheses. When I don't use parentheses I get a non-number code. Changed variable type to numbers as well. Ok so it's not the plus sign you should use, alas. Your variable values are treated like text. It's a simple addition so I guess the 'add' formula should do it. Just follow the instructions for how to use it. I saw there were error messages when you used it. would you be so kind as to give the basic equation for this formula? Make sure your AppVars are set as number type then put in: thank you much… I taught myself to use AppGyver from tutorials on YouTube, so I'm back-end ignorant - much appreciated sir
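For anyone hitting the same wall later: the root cause in this thread is that text-typed variables make `+` concatenate rather than add. The effect is easy to reproduce outside AppGyver; here is a plain Python illustration:

```python
# Text variables: "+" joins the strings instead of summing them.
text_scores = ["12.5", "12.5", "12.5"]
joined = text_scores[0] + text_scores[1] + text_scores[2]
# joined is "12.512.512.5" - the "non number" result from the thread

# Number variables (or an explicit conversion): "+" actually adds.
total = sum(float(s) for s in text_scores)
# total is 37.5
```

This is why both suggested fixes work: switching the variable type to number, or using a formula that coerces the values to numbers before adding.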
OPCFW_CODE
Push 1.7.2 and 1.8 to CocoaPods Trunk Hey Drew! I noticed recently that you had a couple of releases (1.7.2 in April 2018 and 1.8 just a couple weeks ago) which haven't been pushed to CocoaPods trunk, meaning when we try to install Ensembles by CocoaPods, the system only sees 1.7.1 as the latest. Would you mind doing those quick pod trunk push commands to update the CocoaPods Specs repo? I wasn't able to get it to push to CocoaPods, I'm afraid. It was failing for some odd reason to do with SSZipArchive. If you can get that working, I'm happy to push it. Drew On 6 Apr 2019, at 16:56, Jason Ji<EMAIL_ADDRESS>wrote: Hey Drew! I noticed recently that you had a couple of releases (1.7.2 in April 2018 and 1.8 just a couple weeks ago) which haven't been pushed to CocoaPods trunk https://github.com/CocoaPods/Specs/tree/master/Specs/b/d/c/Ensembles, meaning when we try to install Ensembles by CocoaPods, the system only sees 1.7.1 as the latest. Would you mind doing those quick pod trunk push commands to update the CocoaPods Specs repo? — You are receiving this because you are subscribed to this thread. Reply to this email directly, view it on GitHub https://github.com/drewmccormack/ensembles/issues/282, or mute the thread https://github.com/notifications/unsubscribe-auth/AAEuALfGlQ-wUbmfyU7X5Ihd-Y1o0S20ks5veLWsgaJpZM4cgPcP. Funny you mentioned that - I tried pulling in the latest Ensembles directly from your GitHub URL, and I get an error in CDEMultipeerCloudFileSystem.m: I wonder if it's related? Yes, certainly related. SSZipArchive is a mess. It was a separate framework; now it is included in ZipArchive. But I can't find a combination of imports/frameworks etc. that seems to work. Will take another look.
Drew On 6 Apr 2019, at 18:07, Jason Ji<EMAIL_ADDRESS>wrote: Funny you mentioned that - I tried pulling in the latest Ensembles directly from your GitHub URL, and I get an error in CDEMultipeerCloudFileSystem.m: https://user-images.githubusercontent.com/5779307/55672052-7ab27380-5864-11e9-8513-0e305103ec18.png I wonder if it's related? — You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/drewmccormack/ensembles/issues/282#issuecomment-480515886, or mute the thread https://github.com/notifications/unsubscribe-auth/AAEuAJUZkmmVrpWAiI2JEoaH5CrInzBfks5veMY5gaJpZM4cgPcP. Think this is now fixed. Should be updated in CocoaPods. Drew On 6 Apr 2019, at 18:07, Jason Ji<EMAIL_ADDRESS>wrote: Funny you mentioned that - I tried pulling in the latest Ensembles directly from your GitHub URL, and I get an error in CDEMultipeerCloudFileSystem.m: https://user-images.githubusercontent.com/5779307/55672052-7ab27380-5864-11e9-8513-0e305103ec18.png I wonder if it's related? — You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/drewmccormack/ensembles/issues/282#issuecomment-480515886, or mute the thread https://github.com/notifications/unsubscribe-auth/AAEuAJUZkmmVrpWAiI2JEoaH5CrInzBfks5veMY5gaJpZM4cgPcP. Looking great, thank you! I'm still having an issue with the Ensembles dyld not loading in my main project, but I think that's something up with my project setup in particular - when I create a new sample project and include Ensembles, everything is fine. Ensembles in CocoaPods is not using a dynamic library. It is static. The repo has a dynamic build option. Drew On 6 Apr 2019, at 23:09, Jason Ji<EMAIL_ADDRESS>wrote: Looking great, thank you! 
I'm still having an issue with the Ensembles dyld not loading in my main project, but I think that's something up with my project setup in particular - when I create a new sample project and include Ensembles, everything is fine. — You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/drewmccormack/ensembles/issues/282#issuecomment-480537801, or mute the thread https://github.com/notifications/unsubscribe-auth/AAEuAD_QY9hq-1FrT7JwAw9U-WFxVUjZks5veQzxgaJpZM4cgPcP. Ensembles is not using a dynamic library, but I think Cocoapods makes it available as a dynamic library? I'm not sure - this stuff is over my head - but the error is dyld: Library not loaded: ... reason: image not found. It was working for me using Cocoapods 1.5.3, but not with 1.6.1, so I'm currently trying to isolate the issue and submit it to Cocoapods to see if I'm doing something bad that used to be tolerated and isn't anymore, or if it's a regression in Cocoapods.
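For anyone hitting the same `dyld: Library not loaded ... image not found` failure: whether CocoaPods builds a pod as a static library or a dynamic framework is controlled in the Podfile. A hedged sketch (the target name is illustrative, and the `:linkage` option requires a newer CocoaPods than the 1.6.1 discussed in this thread):

```ruby
# Hypothetical Podfile fragment — 'MyApp' is a placeholder target name.
platform :ios, '10.0'

target 'MyApp' do
  # use_frameworks! builds pods as dynamic frameworks, which must be
  # embedded in the app bundle; forcing static linkage (or omitting
  # use_frameworks! entirely) sidesteps the dyld "image not found"
  # failure at launch.
  use_frameworks! :linkage => :static
  pod 'Ensembles'
end
```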
Gmail IMAP access with OAuth2 and a service account

I have the following scenario: a server runs a scheduled service that reads a mailbox to handle files attached to new mails. It's a Gmail account. Until now this worked with App Passwords (which will be switched off). I'm able to create the token and log in to IMAP(S) for a single user (with web login), but for a service account there seems to be no way to "impersonate" the user without super-admin privileges. All the descriptions I found demand that I grant impersonation rights for the whole domain to the service (which will never be granted by our IT). So the question is: how do I get an access token for a single Gmail inbox without user interaction (a refresh token is not sufficient)?

Is the Gmail account a Google Workspace domain user or a standard Google Gmail user? What language are you using?

I'm using Java. My test is with a "normal" account, but the real one will be in Workspace. That is why the global impersonation will not work. In Azure/Exchange I can restrict the service account to impersonate only one user; this seems to be not possible with Google :-(

You can't use service accounts with Gmail and non-Google-Workspace domain accounts; it won't work. You need to have the Workspace admin configure domain-wide delegation. In Workspace, domain-wide delegation will grant the service account access to all the users in the domain; you can't limit it.

Is there any other way to create an access token for the user without opening the user login screen (e.g. JWT, mTLS, ...)?

No, a user will always need to authorize it at least once. If you are just running this for a single user, why not just authorize it once locally and then upload the refresh token? Your code can then use the refresh token to request a new access token when it needs one. https://stackoverflow.com/q/74280268/1841839

Thanks for your answers ;-) I have the code for mail (IMAP) working. I only need the valid token.
The problem with the refresh token is that it only works for 7 days while the application is in test state, and I don't want to publish. I probably need to investigate how that works in a Workspace; perhaps there I don't need to publish to get indefinite refresh tokens.

And why don't you want to publish it? It's just a button.

You need to publish if you want refresh tokens that do not expire after 7 days for non-Workspace accounts.

UPDATE Just to avoid confusion, this is for my use case, which is very special. I need to integrate into a somewhat complex existing product that is already able to communicate with multiple IMAP(S) and POP3(S) services, like Outlook365, GMX, Telekom, ... and other services (workflow system, material management, etc.). This product used Google's App Passwords until now to access Gmail mailboxes. I need to minimize the "new" APIs, so including a Google-specific API (in the sense of new jar files) was to be avoided, if possible. I am testing out different scenarios, but the main aim is to monitor a specific mailbox for mails with attachments from an unattended scheduler process (so no GUI process). I started to read through the available documentation and tutorials. The original aim was to migrate the Basic Authentication to OAuth2 authentication with a service account (as was done in the past for Outlook365). Unfortunately this has shown to be not feasible, because I was unable to restrict the service account to one mailbox (I was only able to find documentation for enabling it for all mailboxes in the company, which is a no-go for most IT departments). If you have no problem with opening up all mailboxes in your Workspace to the app, you can also use service accounts. This has the advantage that you can avoid the first step of getting the refresh token. There is a somewhat complete explanation here: https://www.limilabs.com/blog/oauth2-gmail-imap-service-account

The following solution was for my test environment (a private, non-Workspace Gmail account).
Any mention of Workspace features is not tested yet; it is only my collection from different tutorials and Google developer pages. The described solution consists of two parts: one tool to create a refresh token, and a second part that uses the refresh token in an unattended scheduler process. Both parts use a generic library (https://www.nimbusds.com/products/nimbus-oauth-openid-connect-sdk), which is open source and already available in the product. This is no limitation; you can use any lib that provides the OAuth2 flows (Authorization Code Flow and Refresh Token Flow). To summarize my solution so far: Go to the Google Cloud console (https://cloud.google.com/console) and create a new application/project. In "APIs and Services" you might need to activate the Gmail API first. You need to configure the "OAuth consent screen". Here you have to make a few decisions, depending on whether you are in a Workspace or not. In a Workspace environment you can mark it as internal. This eases the deployment, since you do not need to "publish" to the public. Publishing is required if you want to deploy your application for public use. For public use you can only use it as an External/Test environment. You get the message: "Because you're not a Google Workspace user, you can only make your app available to external (general audience) users." In Test mode you have certain restrictions: mainly, you can only use invited test accounts, and all tokens are restricted to 7 days (not feasible for my use case, where the scheduler needs to run 24/7 unattended).
To publish your application you need to provide (copied from the Google configuration screen when you press publish): an official link to your app's Privacy Policy; a YouTube video showing how you plan to use the Google user data you get from scopes; a written explanation telling Google why you need access to sensitive and/or restricted user data; and all your domains verified in Google Search Console. So, as the name suggests, Test mode is good for testing, but for deployment you need either a Workspace with your own domain (internal use only) or a published app. You need to request consent (in the "OAuth consent screen", page 2 - add or remove scopes) for using the Gmail API (I used the full scope, since I need to move/delete mails). After you have filled out the rest of the OAuth consent screen questions, you can create OAuth credentials. In the "Credentials" tab create a new OAuth2 client ID (I chose Desktop Application). With the collected IDs and secrets you can use any lib you are familiar with to run an auth code flow. You need to set: baseurl=https://accounts.google.com, clientid=[your id].apps.googleusercontent.com, clientsecret=[the client secret], scope=openid https://mail.google.com/. All this data is on the credentials screen after you created it. The scope might change if you selected, e.g., only read-only access (https://www.googleapis.com/auth/gmail.readonly). With this you can execute the auth code flow and will get an access and a refresh token. Save the latest refresh token you got. This finishes the "interactive" part, which a user needs to do. After that you can pass the refresh token to your (scheduler) service. The service needs to execute a refresh token flow (again, use your favorite OAuth2 lib with the settings above) to get a new access token. As mentioned above, the refresh token expires after 7 days in Test mode; otherwise it should (not tested by me) not expire. There is a limit of refresh tokens per account (from what I read, 25).
If you are above this limit, the oldest will be deactivated. This should not be an issue, since you only need one token (and that should be the latest you have retrieved). The refresh token flow will not produce new refresh tokens, so make sure you keep the given refresh token available (please note: other OIDC services like Red Hat SSO do issue new refresh tokens with a refresh token flow, so in those cases the old one would become invalid). The access token from the refresh token flow is (by default) valid for 1 hour and can be used as the password for an IMAP(S) login. The refresh token flow also returns an expiry timestamp for the access token; after that time the access token becomes invalid. For the standard Java mail library it would look like:

Properties props = new Properties();
props.put("mail.imaps.auth.mechanisms", "XOAUTH2");
Session session = Session.getInstance(props);
session.setDebug(true);
IMAPSSLStore store = new IMAPSSLStore(session, null);
store.connect("imap.gmail.com", "<EMAIL_ADDRESS>", accessTokenString);
Folder in = store.getFolder("Inbox");
in.open(Folder.READ_WRITE); // mode 2 == READ_WRITE
// do something with the messages in the folder

Please note: the given mail address must be the one you used to create the refresh token. It will not work for others. Hope this will help someone else ;-)

1. This only works in a workspace environment as internal. <-- not true. 2. However, there is a limit of refresh tokens per account (from what I read 25). <-- that is why you should always store the newest refresh token. 3. You need to make a YouTube video and a lot of legal/GDPR stuff. <-- no, it's called application verification and has nothing to do with GDPR. 4. For deployment you need a workspace with your own domain. <-- also not true. Question: are you really sure this is an answer? It seems more like a lot of incorrect assumptions.

Thanks for the comments. 1. See corrections - if this is not true, please provide a link with docs on how to do it! 2. Corrected in text. 3.
Corrected with the message from Google Cloud in the text; sorry, I mixed up GDPR and privacy. 4. See corrections - if this is still not true, please provide a link to the correct docs. To your question: I wrote this answer as help for others looking for a solution and marked in many places that it is not tested and only what I read. Please feel free to write a correct and better answer.

IMO this isn't going to help anyone; it's going to confuse people more. Why do you want it internal if you're not using Workspace? Post your code so that we can see what you are doing. Why are you even using IMAP? Are you connecting just one account, or more than one user? What language are you using?
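As a concrete illustration of the refresh token flow described in the answer above (independent of any particular OAuth2 library): the flow is just an HTTPS POST of a form-encoded body to Google's token endpoint. The sketch below only builds that request body; the client ID, secret, and refresh token are placeholders, and actually sending the request (e.g. with java.net.http.HttpClient) is left commented out since it needs real credentials.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class RefreshTokenFlow {
    // Builds the application/x-www-form-urlencoded body for
    // POST https://oauth2.googleapis.com/token
    static String refreshRequestBody(String clientId, String clientSecret, String refreshToken) {
        return "grant_type=refresh_token"
                + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8)
                + "&refresh_token=" + URLEncoder.encode(refreshToken, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String body = refreshRequestBody(
                "1234.apps.googleusercontent.com", // placeholder client id
                "my-client-secret",                // placeholder secret
                "my-saved-refresh-token");         // placeholder refresh token
        System.out.println(body);
        // Sending it (requires real credentials and network access):
        // HttpRequest req = HttpRequest.newBuilder(URI.create("https://oauth2.googleapis.com/token"))
        //     .header("Content-Type", "application/x-www-form-urlencoded")
        //     .POST(HttpRequest.BodyPublishers.ofString(body))
        //     .build();
        // The JSON response carries "access_token" and "expires_in"
        // (seconds — typically 3600, matching the 1-hour validity above).
    }
}
```

The returned access token is what gets passed as the XOAUTH2 password in the JavaMail snippet shown earlier.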
How can I use SQL_CALC_FOUND_ROWS when having multiple selects?

I'm working on pagination code that uses SQL_CALC_FOUND_ROWS, but when I limit the results per page to a number less than the total results, SELECT FOUND_ROWS() AS total returns the number of elements per page and not the total of found rows. Since I'm doing a select from the second select, this result makes total sense, but I don't know how to solve it. How can I pass the total results from the inner to the outer select? My code (please ignore the quotation marks for PHP escaping):

SELECT SQL_CALC_FOUND_ROWS
    userid, contaTipo, userNome, nomeFantasia, sexo, cidade, estado, bairro,
    imovelN, logradouro, avaliacao, imagem, formasPagamento, estabelecimento, profissao
FROM (
    SELECT
        vw_Busca.userid AS userid,
        vw_Busca.contaTipo AS contaTipo,
        vw_Busca.userNome AS userNome,
        vw_Busca.nomeFantasia AS nomeFantasia,
        vw_Busca.sexo AS sexo,
        vw_Busca.cidade AS cidade,
        vw_Busca.estado AS estado,
        vw_Busca.bairro AS bairro,
        vw_Busca.imovelN AS imovelN,
        vw_Busca.logradouro AS logradouro,
        tipoProfissionalPF.tipo AS profissao,
        tipoProfissionalPJ.tipo AS estabelecimento,
        vw_userRating.total AS avaliacao,
        GROUP_CONCAT(especialidades.especialidade SEPARATOR ', ') AS especs,
        vw_Busca.imagem AS imagem,
        GROUP_CONCAT(DISTINCT userPagamento.formaPagamento SEPARATOR ', ') AS formasPagamento
    FROM vw_Busca
    LEFT JOIN usersEspec ON usersEspec.userid = vw_Busca.userid
    LEFT JOIN especialidades ON especialidades.id = usersEspec.especialidade
    LEFT JOIN userPagamento ON userPagamento.userid = vw_Busca.userid
    LEFT JOIN profissionais ON profissionais.userid = vw_Busca.userid
    LEFT JOIN tipoProfissionalPF ON tipoProfissionalPF.id = profissionais.profissao
    LEFT JOIN empresaDados ON empresaDados.userid = vw_Busca.userid
    LEFT JOIN tipoProfissionalPJ ON tipoProfissionalPJ.id = empresaDados.tipoProfissionalPJ
    LEFT JOIN vw_userRating ON vw_userRating.userid = vw_Busca.userid
    WHERE vw_Busca.cidadeId = '$cidade'
      AND (vw_Busca.userNome LIKE '%".$termo."%'
           OR vw_Busca.nomeFantasia LIKE '%".$termo."%'
           OR vw_Busca.tags LIKE '%".$termo."%')
    GROUP BY userid
    LIMIT $inicio, $qtd
) AS mainTable
ORDER BY mainTable.avaliacao DESC

You should find a way to NOT use SQL_CALC_FOUND_ROWS. It's a well-known performance suck. It's usually better to count rows in one query and then return the data in a separate query. Later, you will also encounter performance problems with the JOINs, the leading wildcards, the ORs, and the use of OFFSET.

It appears that your outer select is just ordering the results returned from the inner query. Since your inner query has already applied a LIMIT, your outer query is actually sorting partial results, which seems incorrect to me. So I guess you can remove the outer select altogether, which would solve your problem. By the way, I just noticed that the SQL_CALC_FOUND_ROWS query modifier and the accompanying FOUND_ROWS() function are deprecated as of MySQL 8.0.17, as per the documentation.

I added this outer query because the sort was not working - it was returning errors - and now it works, but yes, as you said, it is only sorting partial results. Why remove it? But then doing two different queries will double ...
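Following the advice in the comments to drop SQL_CALC_FOUND_ROWS: one common pattern is a separate count query followed by the page query. This is only a hedged sketch - the table and column names come from the question, the join and WHERE lists are abbreviated, and `?` marks bound parameters (use them instead of string interpolation):

```sql
-- 1) Total number of matching users. Because the page query groups by
--    userid, the total is the number of distinct matching userids; the
--    WHERE clause only touches vw_Busca, so the joins are not needed here.
SELECT COUNT(DISTINCT vw_Busca.userid) AS total
FROM vw_Busca
WHERE vw_Busca.cidadeId = ?
  AND (vw_Busca.userNome LIKE ? OR vw_Busca.nomeFantasia LIKE ? OR vw_Busca.tags LIKE ?);

-- 2) The page itself: same columns, joins, and WHERE as the original
--    inner query, but with ORDER BY applied BEFORE the LIMIT, so the
--    page really contains the top-rated rows and the outer wrapper
--    select becomes unnecessary.
SELECT /* ...same select list and joins as in the question... */
FROM vw_Busca /* ...same LEFT JOINs... */
WHERE vw_Busca.cidadeId = ?
  AND (vw_Busca.userNome LIKE ? OR vw_Busca.nomeFantasia LIKE ? OR vw_Busca.tags LIKE ?)
GROUP BY userid
ORDER BY avaliacao DESC
LIMIT ?, ?;
```

Yes, this is two round trips instead of one, but the count query is much cheaper than the full join, and it avoids both the deprecated modifier and the sort-after-LIMIT bug.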
// ============ Init ================
import * as functions from 'firebase-functions'
import * as admin from 'firebase-admin'
import { WebClient } from '@slack/web-api'

// Initialize Firebase
admin.initializeApp()

// Initialize Slack bot
const SLACK_BOT_TOKEN = functions.config().slack.bot_token
const bot = new WebClient(SLACK_BOT_TOKEN)

// ============ Helper Functions ================

// Post a message to a channel your app is in using ID and message text
function sendLinkMessage(channelID: string, channelName: string, userID: string, userName: string, link: string) {
  console.log(userID)
  // Call the chat.postEphemeral method using the WebClient API
  bot.chat.postEphemeral({
    // The token you used to initialize your app
    token: SLACK_BOT_TOKEN,
    channel: channelID,
    user: userID,
    link_names: true,
    text: `${channelName} queue link: ${link}`,
    attachments: [],
    // You could also use a blocks[] array to send richer content
    blocks: [
      {
        "type": "section",
        "text": {
          "type": "mrkdwn",
          "text": `Hey, @${userName}, you're up! Click the link below to join the ${channelName} meeting! 🐻`
        }
      },
      {
        "type": "divider"
      },
      {
        "type": "section",
        "text": {
          "type": "mrkdwn",
          "text": `${link}`
        }
      }
    ]
  })
    .then(() => {
      console.log(`Link sent: Ephemeral message to ${userID} successfully posted to ${channelID}!`)
    })
    .catch((err) => {
      console.log(err)
    })
}

// ============ Google Cloud Functions ================

// Cloud Function: grizzbotMessageScheduler
// --------------------------
export const grizzbotMessageScheduler = functions.https.onRequest(async (request, response) => {
  functions.logger.info("Received request to grizzbotMessageScheduler", {structuredData: true})
  const data = request.body
  console.log(data)

  // GrizzBot message scheduling starts here:
  try {
    // // Test message
    // const scheduledTime = new Date()
    // scheduledTime.setDate(scheduledTime.getDate())
    // scheduledTime.setHours(5, 45, 0)
    // await bot.chat.scheduleMessage({
    //   token: SLACK_BOT_TOKEN,
    //   channel: 'general',
    //   link_names: true,
    //   text: 'Keep going hackers! You\'re doing great! :grizzhacks:',
    //   post_at: (scheduledTime.getTime() / 1000).toString()
    // })

    // send response
    response.status(200).send("GrizzBot has successfully scheduled your messages! 🐻\nDon't open this webpage again or you'll get duplicates!")
  } catch (error) {
    console.error(error)
    response.status(200).send("GrizzBot encountered an error scheduling your messages!")
  }
})

// Cloud Function: buttonClickHandler (currently does nothing)
// --------------------------
// Any interactions with Slack shortcuts, modals, or interactive components
// (such as buttons, select menus, and datepickers) will be sent to this route
export const buttonClickHandler = functions.https.onRequest((request, response) => {
  functions.logger.info("Received request to buttonClickHandler", {structuredData: true})
  const data = request.body
  console.log(data)

  // send response (end the request so Slack doesn't time out waiting)
  response.status(200).send()
})

// Cloud Function: joinQueue
// --------------------------
export const slashJoinQueue = functions.https.onRequest(async (request, response) => {
  try {
    functions.logger.info("Received slash command: joinqueue", {structuredData: true})

    // Parse and save the incoming Slack data
    const slackData = request.body
    console.log(slackData)
    // const channelID = slackData.channel_id
    const channelName = slackData.channel_name // use to retrieve sponsor Firebase doc
    const userID = slackData.user_id
    const userName = slackData.user_name
    const userMap = {userID, userName} // store each user as a map with their ID and username

    // Retrieve Firebase data for this channel
    const snapshot = await admin.firestore().doc(`queue/${channelName}`).get()
    const fbData = snapshot.data()
    console.log(fbData)

    // Get copy of Firebase users array
    const userArray = fbData?.users

    // If the user is already in the queue, don't add them again
    let foundDuplicate: boolean = false
    userArray.forEach((currUser: any) => {
      if (currUser.userID === userMap.userID) {
        foundDuplicate = true
      }
    })

    if (foundDuplicate === false) {
      // If the user is not already present, continue and add user to Firebase
      try {
        // add new user to the end of user array and update Firebase
        userArray.push(userMap)
        await admin.firestore().doc(`queue/${channelName}`).update({ users: userArray })
        response.send(`${userName}, you have been successfully added to the ${channelName} queue! 🐻`)
      } catch (error) {
        // Handle error
        console.log(error)
        response.status(200).send("GrizzBot is having trouble adding you to the queue - please try again 🐻")
      }
    } else {
      // send response to Slack as JSON
      const responseMessage: string = `Hang in there, ${userName}, you are already in the queue! 🐻`
      response.send(responseMessage)
    }
  } catch (error) {
    // Handle error
    console.log(error)
    response.status(200).send("GrizzBot is not trained to work in this channel 🐻")
  }
})

// Cloud Function: advanceQueue
// --------------------------
export const slashAdvanceQueue = functions.https.onRequest(async (request, response) => {
  try {
    functions.logger.info("Received slash command: advancequeue", {structuredData: true})
    const slackData = request.body
    console.log(slackData)

    // Parse Slack data
    const channelName: string = slackData.channel_name // use to retrieve sponsor Firebase doc
    const enteredSponsorKey: string = slackData.text.trim()
    const channelID = slackData.channel_id

    // Retrieve Firebase data for this channel
    const snapshot = await admin.firestore().doc(`queue/${channelName}`).get()
    const fbData = snapshot.data()
    console.log(fbData)
    const sponsorKey: string = fbData?.sponsorKey // get the actual sponsor key from Firebase

    if (enteredSponsorKey === sponsorKey) {
      // Get users array
      const userArray = fbData?.users

      // if the queue is empty or null, do not attempt to process
      if (!userArray || !userArray.length) {
        response.send(`advancequeue: The ${channelName} queue is empty! 🐻`)
        return // bail out so we don't index into an empty queue below
      }

      // Get the meeting link and send to user at front of queue
      const link = fbData?.link
      const nextUser = userArray[0]
      sendLinkMessage(channelID, channelName, nextUser.userID, nextUser.userName, link)

      // Advance the queue and update the Firebase users array
      userArray.shift()
      await admin.firestore().doc(`queue/${channelName}`).update({ users: userArray })

      // Respond to the sponsor in Slack that the queue has been advanced!
      response.status(200).send(`advancequeue: Successfully advanced the ${channelName} queue!\nA new hacker should join the call soon! 🐻`)
    } else {
      response.send("advancequeue: Invalid sponsor key or wrong channel, try again! 🐻")
    }
  } catch (error) {
    // Handle error
    console.log(error)
    response.status(200).send("GrizzBot is not trained to work in this channel 🐻")
  }
})

// Cloud Function: showQueue
// --------------------------
export const slashShowQueue = functions.https.onRequest(async (request, response) => {
  try {
    functions.logger.info("Received slash command: showqueue", {structuredData: true})

    // Parse and save the incoming Slack data
    const slackData = request.body
    console.log(slackData)
    const channelName = slackData.channel_name // use to retrieve sponsor Firebase doc

    // Retrieve Firebase data for this channel
    const snapshot = await admin.firestore().doc(`queue/${channelName}`).get()
    const fbData = snapshot.data()
    console.log(fbData)

    // Construct users array
    const userArray = fbData?.users

    // if the queue is empty or null, do not attempt to process and return
    if (!userArray || !userArray.length) {
      response.send(`showqueue: The ${channelName} queue is empty! 🐻`)
      return
    }

    // Construct queue status string and send response to Slack
    let responseMessage: string = `The current queue to meet with ${channelName}:\n`
    let i = 1
    userArray.forEach((currUser: any) => {
      responseMessage += `${i}. ${currUser.userName}\n`
      i++
    })
    response.send(responseMessage)
  } catch (error) {
    // Handle error
    console.log(error)
    response.status(200).send("GrizzBot is not trained to work in this channel 🐻")
  }
})

// Cloud Function: leaveQueue
// --------------------------
export const slashLeaveQueue = functions.https.onRequest(async (request, response) => {
  try {
    functions.logger.info("Received slash command: leavequeue", {structuredData: true})

    // Parse and save the incoming Slack data
    const slackData = request.body
    console.log(slackData)
    const channelName: string = slackData.channel_name // use to retrieve sponsor Firebase doc
    const userID: string = slackData.user_id
    const userName: string = slackData.user_name

    // Retrieve Firebase data for this channel
    const snapshot = await admin.firestore().doc(`queue/${channelName}`).get()
    const fbData = snapshot.data()
    console.log(fbData)

    // Construct users array
    const userArray = fbData?.users

    // if the queue is empty or null, do not attempt to process and return
    if (!userArray || !userArray.length) {
      response.send(`leavequeue: The ${channelName} queue is empty! 🐻`)
      return
    }

    // Filter out and remove user from the queue and update Firebase
    const filteredUsers = userArray.filter((user: { userID: string }) => user.userID !== userID)
    await admin.firestore().doc(`queue/${channelName}`).update({ users: filteredUsers })

    // Construct queue status string and send response to Slack
    let responseMessage: string = `leavequeue: ${userName}, you have left the ${channelName} queue 🐻`
    response.send(responseMessage)
  } catch (error) {
    // Handle error
    console.log(error)
    response.status(200).send("GrizzBot is not trained to work in this channel 🐻")
  }
})
[00:27] <thatcoderkid> hello! I have an issue with my snap code: https://github.com/Thekiddiejsandpython/hello-python-snap.git
[00:27] <thatcoderkid> I get an error: ValueError: local source (../src) is not a directory
[00:37] <thatcoderkid> Can I please have some help?
[09:47] <m4sk1n_> popey: can you review my task? :p
[10:57] <shailesh> hi elopio
[10:58] <shailesh> hi kyrofa
[10:58] <shailesh> hi sergiusens
[10:59] <popey> m4sk1n_: heya. I am afk today. Will have a look later if nobody else gets to it before me
[11:05] <m4sk1n_> ok
[11:06] <m4sk1n_> flexiondotorg: ping :)
[14:48] <daniellimws> hi, curious how is it possible for one to unregister a name from the snap store
[14:49] <daniellimws> in the case where one decides to change the name or decided that it is no longer useful
[15:19] <elopio> daniellimws: you can't unregister. The names can be transferred, and you can close all the channels.
[15:21] <daniellimws> how can that be done?
[15:23] <daniellimws> more specifically I tried to create a snap for this repo https://github.com/dj3500/hightail/pull/105
[15:23] <daniellimws> how can I transfer the name to that user instead
[15:51] <kyrofa> daniellimws, you just need to talk to the store folks and request they transfer the name to that user's account
[15:52] <kyrofa> (so make sure they have one)
[16:51] <deniskamazur> Hi, have some questions about this task - https://community.ubuntu.com/t/adding-terminal-notifications-for-completed-commands-to-the-default-desktop/212?
[16:52] <deniskamazur> Mazur Denis December 6, 2017 at 19:47 (MSK) Hi, have some question about this task Can I use any language for this task? Where should I PR my changes or can I create a separate project? Should It work for all kinds of terminals?
[17:00] <elopio> deniskamazur: you can leave your questions there in the forum. I think didrocks is the mentor for that one, and he's not here but he will get notified when you reply there.
[17:02] <deniskamazur> alright, thanks
[22:20] <m4sk1n_> flexiondotorg: popey: ping
[22:25] <flexiondotorg> m4sk1n_: Just got home from a conference in London. We'll take a look in the morning ☺️
Gregory Stark wrote:
> Ok, ignore my previous message. I've read the patch now and that's not an issue. The old code path is not commented out, it's #ifdef'd conditionally on HAVE_LONG_INT_64, which is right (well, it seems right, it's a bit hard to tell in
>
> A few comments:
>
> 1) Please don't include configure in your patch. I don't know why it's checked into CVS, but it is, so that means manually removing it from any patch. It's usually a huge portion of the diff, so it's worth removing.
>
> 2) The genbki.sh change could be a bit tricky for multi-platform builds (i.e. OSX). I don't really see an alternative, so it's just something to note for the folks setting that up (Hi Dave). Actually, there is an alternative, but I prefer the approach you've taken. The alternative would be to have a special value in the catalogs for 8-bit maybe-pass-by-value data types and handle the check at run-time. Another alternative would be to have initdb fix up these values in C code instead of fixing them directly in the bki scripts. That seems like more hassle than it's worth, though, and a bigger break with the rest.
>
> 3) You could get rid of a bunch of #ifndef HAVE_LONG_INT_64 snippets by having a #define like INT64PASSBYVALUE which is defined to be either "true" or "false". It might start getting confusing having three different defines for the same thing, though. But personally I hate having more #ifdefs than necessary; it makes it hard to read the code.

OK, this would also make the patch smaller. Is pg_config_manual.h good for this setting? Or which header would you suggest?

> 4) Your problems with tsearch and timestamp etc. raise an interesting problem. We don't need to mark this in pg_control because it's purely a run-time issue and doesn't affect on-disk storage. However, it does affect ABI compatibility with modules. Perhaps it should be added to

I am looking into it. Actually, why isn't sizeof(Datum) in there already?

> Do we have any protection against loading 64-bit compiled modules in a 32-bit server or

You can't mix 32-bit executables with 64-bit shared libraries, so I don't see any problem here.

> But generally this is something I've been wanting to do for a while and basically the same approach I would have taken. It seems sound to me.

Thanks for commenting and encouragement.

Cybertec Schönig & Schönig GmbH

Sent via pgsql-patches mailing list (email@example.com)
To make changes to your subscription:
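Comment 3 in the review above can be made concrete. A small hedged sketch of the suggested define (the name INT64PASSBYVALUE comes from the review; the rest is illustrative, not the actual patch, and HAVE_LONG_INT_64 is force-defined here just so the sketch is self-contained):

```c
#include <assert.h>
#include <stdbool.h>

/* For this sketch, pretend configure detected a native 64-bit integer type. */
#define HAVE_LONG_INT_64

/* One define that expands to "true" or "false", replacing the
 * #ifdef HAVE_LONG_INT_64 / #else blocks scattered through the code. */
#ifdef HAVE_LONG_INT_64
#define INT64PASSBYVALUE true
#else
#define INT64PASSBYVALUE false
#endif

/* Call sites can now branch on an ordinary boolean (which the compiler
 * will constant-fold) instead of duplicating code under #ifndef blocks. */
static bool
int64_is_pass_by_value(void)
{
    return INT64PASSBYVALUE;
}
```

The trade-off the review mentions still stands: the define reads more cleanly than #ifdef forests, at the cost of a third name for the same underlying configure result.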
List of Sections ↓ Filters are an important part of nearly every machine vision application. They can, for instance, be used to smooth images (e.g., extract sub-pixel precise edges (e.g., process the frequency domain of images (e.g., Usually, filter operations are applied using filter masks, which are moved across the input image pixelwise. For each pixel a new value is calculated based on its neighborhood. The neighborhood is determined by shape and size of the filter mask (see the scheme below). You can tell from the working principle of filter masks, the treatment of pixels along the image or domain border requires special attention, as part of the filter mask exceeds the domain. In the following sections some consequential issues will be discussed. If a filter that is using a mask is applied on an image with a reduced domain, the result along the domain boundary might be surprising because gray values lying outside the boundary are used as input for the filter process (see scheme below). To understand this, the definition of domains in this context must be considered: For a filter, a domain defines for which input pixels output pixels must be calculated. But pixels outside the domain (which lie within the image matrix) might be used for processing nevertheless. Another point to notice is the handling of pixels outside the input how those pixels are treated. If the pixels are initialized with 0, consecutive runs will yield identical results. Otherwise, those pixels While this enhances the program runtime, undefined pixels can differ from system to system, for example if parallelization is activated or not. It is merely guaranteed that the values are consistent if the program is executed repeatedly on systems with the same configuration. In certain cases, these 'undefined' pixels might lead to problems. Expanding the resulting image to the full domain with will lead to artifacts appearing outside of the former image domain. 
When two or more filters are applied consecutively on the same domain, the undefined or unexpected values (as described in the paragraphs above) have a higher impact on the result image. This is because with every following filter the error increases, starting from the border to the middle. In the following, four strategies for solving those problems are presented. Errors caused by undefined pixels can easily be prevented by, e.g., choosing a dilated domain (see Morphology / Gray Values) according to the filter mask. If multiple filters are applied consecutively, the image domain can be dilated in advance, considering the filter sizes to be used. For instance, when using a cascade of rectangular filters of arbitrary dimensions, the width and length dimension of the dilation mask can be calculated considering the individual filter mask dimensions and the number of filter operations : After the filters were applied, the image domain can be reduced to its original size (e.g., with Another option is to set the domain exactly to the size of the interesting part within the image and then calling the operator before applying a filter. This operator copies the pixels inside of the border to the outside of the border and therefore avoids errors caused by pixels that are undefined outside of the domain. Subsequently, the domain can again be reduced to its original size. This process should be repeated for every following filter operation. Note, however, that this option increases the runtime significantly. If runtime is not an issue, the operator called before applying the first filter to the image. That way, the whole image is defined as domain and undefined pixels are avoided completely. Another possibility of getting an image without undefined pixels is by calling the operator before applying a filter. crops the image to the size of the domain, which means that the domain then covers a complete smaller image. 
Note, however, that for the cropped image the coordinate system has changed relative to the original image, which will influence all subsequent operations that depend on the image coordinate system (e.g., calculating the center of gravity).
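The coordinate shift caused by cropping can be illustrated with a small NumPy sketch (the image, blob position, and crop offset are illustrative values):

```python
import numpy as np

# Cropping the image to its domain moves the coordinate origin: any result
# expressed in pixel coordinates (here: the center of gravity of a blob)
# must be translated back by the crop offset to match the original image.
img = np.zeros((10, 10))
img[6:9, 6:9] = 1.0                  # blob in the original image

ys, xs = np.nonzero(img)
cog_full = (ys.mean(), xs.mean())    # center of gravity, original coordinates

crop = img[5:, 5:]                   # crop to the domain, offset (5, 5)
ys2, xs2 = np.nonzero(crop)
cog_crop = (ys2.mean(), xs2.mean())  # same blob, cropped coordinates

offset = (5, 5)
restored = (cog_crop[0] + offset[0], cog_crop[1] + offset[1])
print(cog_full, cog_crop, restored)  # restored matches cog_full
```

The center of gravity computed in the cropped image differs from the one in the original image by exactly the crop offset, so measurements must be translated back if they are to be compared with results from the full image.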
1. Spend a week learning InDesign and create an awesome resume.
2. Upload and send the PDF resume around with applications.
3. Have ppl ask for a Word file...

"Don't learn multiple languages at the same time." Ignored that. Suddenly I understood why he said that: I mixed both languages up. On holiday I rechecked it and it was OK. Sometimes mistakes can lead to good things; after relearning I understood it much better.

"Don't learn things by heart" was another one, because that's useless. If you want to learn a language, try to understand it. I fully agree with that. I started that way too, learning what x did, what y did, ... but after a while I found out this was pointless. Since then, I only have problems with Git.

Another one: at the release of Swift, my code was written in Obj-C, but I wanted to adopt Swift. This was in my first year of iOS development, if I can even call it development. I used these things called "converters", but 3/4 of the output was wrong and caused bugs. The issues Swift raised could handle that for me. After some time someone told me: "Stop doing that. Try to write it yourself."

One of the last ones: "Try to contribute to open source software instead of creating your own version of it. You won't reinvent the wheel, right? This could also be useful for other users."

Next: "If something doesn't work the first time, don't give up. Create backups." I ignored that multiple times and simply deleted the source files. Once I had a problem where no iOS project worked and I couldn't find out why (it was because of Apple's WWDR certificate); I was about to wipe my Mac. Since then I started using Git. Git is a new way of living.

Reaching the end: "We are developers, not designers. We can't do both.
If a client asks for another design because they don't like the current one, tell them to hire one." Reminds me of one of my previous rants about the PDF "design".

Last one: "Clients suck. They will always complain. They need a new function. They don't need that... And after that they won't pay you for it, because they think it's no work."

Sorry, forgot this one: "Always add backdoors. Many times clients won't pay and then resell or reuse it. With backdoors you can prohibit that."

I think these are all the things I loved that they said to me. Probably forgot some.

Dank Learning, generating memes with deep learning!! Now even a machine can crack jokes better than me 😣

TL;DR: I do Node.js now. There's a lot I've been working on in the past weeks. First of all, some of you may know I don't work in IT and therefore am always learning how to make things easier in my workplace with tech. My boss once told me how annoyed he is converting stuff to PDF for easier sending via mail. So I started to build a PDF converter with PHP and the Laravel framework. My first steps succeeded and I could even deploy my Pdf-wizard website, but everything felt like a hassle, and making this application bigger didn't really seem like an enjoyable task to me. Then I tried the same stuff with Node.js. It was damn good. It was simple, because there are plenty of packages which do these tasks on NPM. Afterwards I spent some time on research and ended up learning the Express framework. This brought new inspiration to me and I wanted to share it with you guys.

PUBLIC SERVICE ANNOUNCEMENT: for AI, and in particular deep learning, developers, practitioners, hobbyists and otherwise people interested in the field: if you go to the PyTorch website, click on Resources and scroll down, you will see a link to "Deep Learning with PyTorch" by Manning Publications.
This will give you access to the book, a book that, if memory serves me well, costs about 40+ in print, while the online format is about 29 (again, if memory serves well). The book is currently FREE and they don't even ask for an email address; you can just tell them what you want it for and they will give you the free PDF download. I don't know how good the book is, but I have found Manning to publish really good resources. Do with this information what you want. And yes, I am leaving the rant tag, so that more people can see this and take advantage of the opportunity in case they are interested and don't have the money to purchase the book after the promotion is done and over with. Fuck you about tags and shit.

Okay! Got my NumPy PDF, Theano PDF and my Theano deep learning PDF! It's time to get reading for 1111111111111111111111111111111111111 hours. Wow! I'm really getting deep into "deep learning" learning! Ok, I'll quit now...

I need a little bit of help. I am a noob in React Native. I am creating an app which shows PDFs; all the PDFs are stored on a web server. I want to start downloading all those PDFs in the background when the app starts, so the user doesn't have to wait for the download when he/she/it clicks on one. Also, I am not good with Redux yet; I am still learning it, and this application doesn't have Redux implemented. So please, can you explain how I can achieve this?

I'm looking to buy my first printed book for C#! Until now I've been learning C# from internet resources like video tutorials and PDF books. But now I'm going to buy a printed book so that I can carry it with me and read on the go. So can you guys please suggest a C# book to buy? I've decided on the C# Complete Reference, but I'd still like some suggestions from you all.

This semester in college we're supposed to learn some machine learning using mostly Matlab. In the first lab sessions (technically the second, but the first where we actually do something) we're learning the basics of Matlab.
We were given an instruction PDF that talks about assigning variables, creating functions and classes, and some basic operators. At the end of the instructions are exercises, but the thing is, they require knowledge of a lot of Matlab functions, like linspace, reshape, random numbers, vectors and matrices, and it doesn't say anywhere what they are or how to use them. An example: exercise 4 tells us to read the docs for 'ezplot' and plot sin^2(x). Then exercise 5 tells us to generate a 100-element linear space from -2pi to 2pi, calculate the sigmoid value of each point and plot it. The professor looked personally offended that we had no idea what a sigmoid is and that we were all struggling to calculate it. He almost shouted at us for trying to use ezplot (which we assumed was what we were supposed to use, based on exercise 4) instead of regular plot to visualize it. I fucking hate this kind of professor.

Also, the real fuckfest is in the last exercise. I'll try to translate it to English as closely as I can: create a 100-element vector of random positive integers, then save it as a matrix with the number of rows equal to the number of unique values in the vector and 100 columns, and then for each element of the original vector encode its value in the form of a 1 in the field whose index equals the value of that element increased by one. Yes, it's all one sentence, and no, nowhere in the instructions does it say how to do any of that. Also, we have a test about all of this tomorrow and I don't think anyone will pass it.

So today my teacher told me to do that project for some competition or something (frankly, I don't remember clearly what it's for). He gave us the machines we need and the CDs with the systems we have to work with. We are supposed to make a properly working Beowulf cluster from the things I've been given. I am really okay with making this the way my teacher wants us to do it.
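Back to that Matlab exercise for a moment: one plausible reading of it is a plain one-hot encoding of the vector. A NumPy sketch under that assumption (the Matlab version would be analogous; the exact indexing convention the professor wanted is my guess):

```python
import numpy as np

# One reading of the exercise: one-hot encode a 100-element random vector.
rng = np.random.default_rng(0)
v = rng.integers(1, 6, size=100)       # 100 random positive integers (1..5)

uniq = np.unique(v)                    # rows = number of unique values
onehot = np.zeros((uniq.size, v.size), dtype=int)
rows = np.searchsorted(uniq, v)        # row index of each element's value
onehot[rows, np.arange(v.size)] = 1    # exactly one 1 per column

print(onehot.shape)                    # (number of unique values, 100)
```

Each column then contains a single 1 in the row corresponding to that element's value, which is at least one consistent interpretation of the one-sentence exercise.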
I am okay with installing an Ubuntu 16.04 server that is completely irrelevant to the project, because it's not part of the cluster. I am really okay with using some weird Linux distribution on the master that nobody has ever heard of. But I'm not okay when the software we've been given (including the operating system) has seven pages of documentation, especially when fucking screenshots of how PXE booting should look make up roughly 70% of it.

No, I couldn't find a thing on the internet about it. I couldn't read the fucking manual. There was no fucking manual. There was no fucking --help. There was no motherfucking English language. Everything was motherfucking Spanish, including that 7-page document that was supposed to guide us through our work. The work is planned to be done by March. The only reason I can think of why doing the stuff the document tells us to do would take four motherfucking months is that we'd have to learn Spanish first. And I'm not going to do that. Not because I don't like Spanish or learning; simply because I didn't sign up for this to learn languages.

And no, I can't switch to other, human-oriented software. I am only allowed to use the things the teacher has given us, because somebody worked on it already a couple of years ago and left a PDF file about how to install that Ubuntu server I wrote about a while ago. Which, by the way, was an "installation guide for animals": showing how to install a system, screenshot after screenshot. It took about an hour to figure out that the thing supposed to handle PXE-booting computers had been telling us the whole time that it couldn't work because we had to configure the ethernet interface manually. Because why the fuck not.