SDK: what is it? An explanation of the distinction between SDK and API

As more teams enter the market and our collective experience grows, code samples have become the foundation of modern website and app development. This is why most modern code at most companies is a patchwork of different sources rather than a single cohesive piece of work, even with a wealth of instructions and, now, AI tools. When it works, it works; when it doesn't, well, let's hope that's not the case. This approach, however, can lead to inconsistent work and an unpleasant user experience. For this reason, many user-facing services and platforms provide Software Development Kits, or SDKs. They streamline the process for developers while helping to guarantee a consistent user experience. In this guide, we will cover the advantages of SDKs in detail, the distinctions between an SDK and an API, and why your app might need the Adapty SDK. Let's get moving!

What is an SDK, or software development kit?

A Software Development Kit (SDK) is a set of software tools and programs you can use to create applications for particular hardware, operating systems, or platforms. It functions as a complete package designed to make the development process easier and more efficient, so you can build complex, effective applications faster. Code samples, an essential part of any SDK, offer real-world examples of how to apply particular features or address typical issues. These samples are an invaluable educational tool, illustrating best practices and guiding users through the process, particularly for novice developers or those unfamiliar with a given platform. Additionally, SDKs include extensive documentation that provides thorough instructions and guidelines on how to use the tools and components inside the SDK.
It helps developers understand the capabilities and constraints of the SDK, covering everything from setup and installation to particular use cases. An SDK contains the components, tools, libraries, and instructions needed to expedite the development process. Let's examine the contents of a typical SDK. SDKs ship with a variety of components, each designed to support a different aspect of application development. Here's a closer look at some of the common ones:

Compilers transform code into an executable file, also known as a program or app. They convert high-level programming languages into lower-level languages that the processor can understand, such as machine code, turning source code into executable applications. Compilers also optimize software to improve its efficiency and performance.

Debuggers help developers find and fix errors or bugs in their code. They let programmers run applications in a controlled environment where they can monitor variable values, examine the program's current state, and step through the code line by line to see how the program behaves at runtime. Some SDKs also come with sandboxes or testing tools.

Many SDKs include or support an integrated development environment (IDE). An IDE is a feature-rich program that gives developers an intuitive interface for coding, compiling, debugging, and sometimes deploying their applications. Code editors, build and debugging tools, and version control system integration are common IDE features; Apple's Xcode is a prime example.

Libraries are collections of prewritten code that let developers add specific functionality to their applications without starting from scratch.
These can include, among other things, connectivity features, data manipulation capabilities, and graphical elements. Application Programming Interfaces (APIs) establish a set of rules and procedures for building and using software, which lets different software components communicate with one another. Because libraries and APIs provide reusable parts and interfaces, they speed up development considerably.

Thorough documentation is essential for any SDK. It usually contains usage guidelines, feature descriptions, installation instructions, and API reference guides, and it frequently comes with tutorials and sample code that provide practical examples of how to implement features. These resources help developers become familiar with common scenarios and the capabilities of the SDK. Each of these elements contributes to accelerating development and lowering the barrier to entry for new services and platforms. Now let's look at a few less obvious advantages of using an SDK.

SDKs expedite development. They give developers a collection of ready-made tools, libraries, and code samples, saving them from building fundamental components from scratch. For example, the Unity SDK offers mobile game developers sophisticated graphics rendering, physics engines, and networking libraries, freeing them up to concentrate on user experience and game design instead of low-level technical details.

SDKs offer a standardized process. They codify standard procedures and best practices for building apps, ensuring that developers work in a uniform way. When integrating third-party services or working with large teams, this standardization is very helpful.
For instance, the Google Maps SDK ensures that developers implement maps and location features according to Google's recommended practices, which yields a consistent and dependable user experience across apps.

SDKs ensure compatibility with the target platform. Some SDKs are created specifically for the platforms they serve, guaranteeing compatibility and peak performance for applications built on them. If you wanted to create an app for the Apple Watch, for example, you would use the corresponding SDK, WatchKit, which contains tools and interfaces designed for the watch's unique hardware and software environment.

SDKs cut expenses. They drastically lower development costs by shortening development time, reducing the need for thorough testing across various platforms, and removing the need to buy individual tools or libraries. For instance, a startup building a cross-platform mobile application can save development and maintenance costs by using the Flutter SDK, writing the code once and deploying it on both iOS and Android.

Here is a condensed overview of the steps a developer may take when adopting an SDK:

Choose the appropriate SDK. The first step is to evaluate the project's needs. The target platform (iOS, Android, Windows, etc.), particular capabilities (such as graphics, networking, or database management), and industry-specific features (such as payment processing for e-commerce or GPS for location-based services) should all factor into your decision.

Download and set up the chosen SDK. The SDK may be downloadable from within the IDE, or it may come pre-installed with integrated development environments (IDEs) such as Android Studio or Xcode. The next step is configuring your development environment to use the SDK.
This could mean importing the required libraries and frameworks into your project, setting path variables, or configuring the IDE to detect the kit. The setup procedure is usually described in the SDK documentation.

Examine the sample code and documentation. Study the available sample code and become familiar with the documentation before starting development. The documentation should give an overview of the SDK's capabilities, API references, and usage instructions for its various components, while the sample code highlights particular features and best practices.

Start developing. Proceed as you normally would, but with the added features, libraries, and capabilities. Integrate SDK components into your codebase and use the functionality they offer to enhance your application.

Test and debug. SDKs frequently include tools for testing and debugging applications. Use them to test your app under different scenarios, find and fix bugs, and make sure the program works properly on the target platform. The SDK's debugging tools can help track problems to their origin, which makes it easier to maintain high-quality code.

Deploy. Deployment is the simplest step. SDKs frequently include tools for packaging the application, meeting platform-specific deployment requirements, and automating the deployment procedure. Deploying the application according to the SDK guidelines ensures that it complies with the applicable distribution standards and is compatible with the target platform.
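To make the SDK-versus-API distinction concrete, here is a minimal sketch in Python. The endpoint, client class, and parameter names are invented for illustration; a real vendor SDK would ship its own documented client.

```python
import urllib.request

# Using a raw HTTP API: the caller assembles every request by hand,
# managing URLs, headers, and authentication itself.
def build_user_request(base_url, user_id, token):
    return urllib.request.Request(
        f"{base_url}/users/{user_id}",
        headers={"Authorization": f"Bearer {token}"},
    )

# An SDK wraps the same API behind a documented, reusable interface,
# so every application talks to the service in a uniform way.
class ExampleSDKClient:
    def __init__(self, base_url, token):
        self.base_url = base_url
        self.token = token

    def get_user_request(self, user_id):
        # One tested place that knows how the API expects requests.
        return build_user_request(self.base_url, user_id, self.token)

client = ExampleSDKClient("https://api.example.com", "secret-token")
request = client.get_user_request(42)
print(request.full_url)  # https://api.example.com/users/42
```

The point of the wrapper is exactly the standardization benefit described above: every app that uses the client builds requests the same way, instead of each team hand-rolling its own HTTP calls.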
Have you ever experienced a node panic or a down node? Do you have Hot Standby Nodes (HSNs) in your configuration? If you answered 'YES' to any of these questions, read on about how Teradata has introduced a new feature that may improve performance in these situations.

Fast connect times are of paramount importance to most applications that interface with the Teradata Database: the sooner the connection takes place, the sooner transactions can be processed. Normally this connection is very quick, measurable only in milliseconds. However, when one or more nodes are down in a clique, this normally fast connection can be extremely slow. Why? The problem lies in the connectivity APIs (CLIv2, ODBC, JDBC, .NET Data Provider) and their normal behavior of waiting for a period of time for a down node to respond. The APIs do have some intelligence, as they will mark a node as down and NOT retry it for a given interval. However, this down-node information is specific to a particular application instance and must be re-discovered each time a new instance is created. Imagine the performance implications on a system with 100,000 or even 1 million concurrent instances. To mitigate the performance bottleneck associated with a down node, Teradata has introduced a new, smarter approach called Laddered Concurrent Connect, or LCC.

Situation: Running normal workloads during peak business hours with pre-LCC functionality

Client:
A. Operating System: Windows
B. Application: Teradata SQL Assistant
C. Connection Method: Teradata ODBC Driver
D. ODBC: Wait time interval = Default (20 seconds)
E. DNS listing configured (each COP is assigned an individual IP address)

Server:
A. Teradata DBS Name: Production
B. System configuration: 4-node MPP (Enterprise Data Warehouse - EDW)
C. 1 node (COP1) panics (goes down) and drops out of the configuration
D.
3 nodes ONLINE, 1 node DOWN (not responding)

There are two phases that readers should be aware of when viewing the diagram below:
Phase 1: ODBC attempts to obtain an IP address from the DNS server
Phase 2: ODBC attempts to establish a connection to the DBS (Teradata)

In Phase 1, DNS success or failure is completely INDEPENDENT of the state of the DBS. In other words, DNS doesn't care whether the DBS is up or down. Then, if and only if Phase 1 is successful, ODBC attempts to establish a connection to the DBS (Phase 2). Now, let's look at the pre-LCC functionality:

1. The initial IP address request FAILS because DNS listing is configured, which means the nodes are defined on a per-COP basis (e.g. ProductionCop1 = xxx.xx.xx.xxx, ProductionCop2 = xxx.xx.xx.xxx, etc.)
2. ODBC will always attempt to connect to COP1 FIRST.
3. ODBC only attempts to connect to a SINGLE COP (node) at a time.
4. ODBC must wait the ENTIRE time interval before it receives a response back.

LCC can significantly reduce connect times because it allows the interface to target multiple nodes for connection concurrently. This has the effect of bypassing COPs on down or extremely slow nodes and improving overall elapsed times for session establishment. Once the first successful connect is recognized, any remaining occupied sockets are released (closed). A down node will no longer force the client interface to pause until the wait time interval has expired before it attempts to connect to another node. To guard against excessive resource consumption (e.g., extraneous sockets), LCC does not fire off all connect requests at once. Rather, it applies a proprietary dynamic delay interval between connect requests that adjusts according to the previous connect response time. In other words, LCC tries to optimize the delay interval based on current network and server performance characteristics.
This means the delay interval will gradually increase if connect responses are relatively slow and decrease if they are relatively fast, which allows LCC to efficiently handle a variety of network topologies and server workloads. Let's take a look at the LCC feature functionality:

1. ALL COP entries are returned to the API/driver (e.g. ODBC).
2. ODBC will RANDOMLY determine which node/COP to connect to first (i.e. it will NOT always be Node 1/COP1).
3. ODBC connects to MULTIPLE nodes based on a specified "Delay" interval.
4. Response time is drastically reduced from full timeout intervals (e.g. 60 seconds, 30 seconds) to milliseconds.

This is the best facet of the feature: no changes are required to existing apps, DNS, or DBS settings. LCC is e-fixed from TTU 13.10 back to TTU 12.0 (with no co-requisite DBS e-fixes). There is no tuning needed and no parameter changes required because LCC is completely self-adjusting. In fact, most customers will never even be aware of LCC - unless a node goes down or a dormant DNS-registered HSN is present in the configuration. In such cases, connect times should be considerably faster.

*NOTE*: Prior to LCC, customers were routinely advised to omit HSNs from DNS because of the unacceptably long connect delays that would result. With LCC, customers are encouraged to include HSNs in DNS. The main reason for doing so is to preserve network bandwidth that would otherwise be lost when a node goes down. However, unless such a loss would represent a significant performance hit for a particular configuration, the net benefit would be negligible.

1. My organization has deployed IP load balancing devices (e.g. F5's BIG-IP). What changes do I need to make?
1a. Configure the IP load balancing devices to direct traffic to TPA READY nodes and HSNs. (IP load balancing devices can be programmed via their native scripting language to reroute "connection refused" failures returned by a dormant HSN to other COPs in the clique.
This activity occurs wholly unbeknownst to the client interface.)
2. My organization is using round-robin DNS (or a "smart" DNS device) for load balancing and failover. What changes do I need to make?
2a. Include the IP address of each TPA READY node and each HSN.
3. My organization is using a classic COP naming scheme. What changes do I need to make?
3a. Define one COP name for each TPA READY node and for each HSN.
4. Are there any restrictions or limitations I should be aware of for LCC?
4a. LCC was NOT developed for the OLEDB Provider; therefore, care should be taken NOT to add an HSN to DNS or hosts files for OLEDB clients.
5. How much of a performance gain can I expect from LCC?
5a. As always, it depends. However, based on internal testing, the differences (between pre- and post-LCC) were most appreciable and consistent for multi-threaded applications. Further, in down-node situations, users will avoid potentially long delays.
6. In what versions is LCC available?
6a. CLIv2 implemented in 184.108.40.206, 220.127.116.11 & 18.104.22.168
JDBC implemented in 22.214.171.124, 126.96.36.199 & 188.8.131.52
.NET Data Provider implemented in 184.108.40.206, 220.127.116.11 & 18.104.22.168
ODBC implemented in 22.214.171.124, 126.96.36.199 & 188.8.131.52
QRYDIR implemented in 184.108.40.206, 13.02.0.1 & 220.127.116.11

If you have any questions or concerns regarding LCC, please leave a comment. For community support, please post a topic in the Connectivity Forum.
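The laddered connect behavior described above can be sketched as follows. This is a simplified model in Python, not Teradata's implementation: the real drivers use a proprietary, self-adjusting delay interval, whereas this sketch uses a fixed stagger, and the connect attempts are stand-in callables rather than actual COP sockets.

```python
import queue
import threading
import time

def laddered_connect(connectors, stagger=0.05):
    """Fire connect attempts concurrently with a staggered ("laddered") start.

    `connectors` is a list of zero-argument callables that either return a
    session object or raise OSError. The first success wins and is returned
    immediately; remaining attempts are simply abandoned.
    """
    results = queue.Queue()

    def attempt(connect):
        try:
            results.put(("ok", connect()))
        except OSError as exc:
            results.put(("err", exc))

    started = 0
    failures = 0
    for connect in connectors:
        threading.Thread(target=attempt, args=(connect,), daemon=True).start()
        started += 1
        deadline = time.monotonic() + stagger  # delay before the next rung
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                status, value = results.get(timeout=remaining)
            except queue.Empty:
                break
            if status == "ok":
                return value  # first responder wins; others are abandoned
            failures += 1
    while failures < started:  # every rung launched; wait for stragglers
        status, value = results.get()
        if status == "ok":
            return value
        failures += 1
    raise OSError("all connect attempts failed")
```

A down COP no longer blocks the ladder: while its attempt sits in its timeout, the next rung is already trying another node.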
Installation
- Upload google-tag-manager.php to the plugins directory
- Activate the plugin through the 'Plugins' menu in WordPress
- Go to General and set the ID from your Google Tag Manager account

FAQ
Why isn't the output displaying? Two possibilities: first, you haven't yet specified the ID in the admin panel, or second, your theme is missing a <?php wp_footer(); ?> call.

Reviews
Other plugins wouldn't send my information to Google Analytics 4, but this one fixed it in no time; it's great and simple. No complications, it just works.

Does what it says: inserts just the GTM code at the right time and the right location.

Fell in love with the simplicity of this plugin; I wonder why it does not show up at the top when searching for a GTM plugin for WP. I had to mess around with so many other plugins before I found it. All other related plugins are too complex and slow down the site as well. This plugin does what it's supposed to do and nothing else 🙂 "LESS is MORE". Kudos to the author for such a beautiful implementation. One suggestion, though: a single "do_action" with a custom hook before printing the GTM code would make it easier to add custom JS variables to pass along to GTM, rather than printing them at wp_head with priority 9.

I'm trying to follow some YouTube tutorials on GTM. They say to install the code manually; however, I use a child theme and I don't know how to insert the code via the functions.php file. There are other GTM plugins, but they have so many options and settings that I'm afraid they will interfere with the tutorials. From reading the description, it sounds like the only thing your plugin does is insert the GTM code in the correct places. It doesn't do anything else, am I correct?

I know there's been a lot of frustration with this plugin in the past, but after making the plugin author aware of Google's change to how the Tag Manager scripts should be loaded, he's updated the plugin and it works great.
I've tested version 1.0.2 of the plugin by dynamically injecting a Google Analytics script through Google Tag Manager (which didn't work properly in version 1.0.1), and it now works perfectly!

"Google Tag Manager" is open source software. The following people have contributed to the development of this plugin.

Changelog
- Add support for the new wp_body_open hook in core.
- Add support for Genesis and Theme Hook Alliance themes to echo the iframe sooner in the DOM.
- Change to static methods to avoid errors in some versions of PHP.
Server 2003 Cannot Boot To Safe Mode

Find out how to start a system in Safe Mode in this tip. "Help, I got up extremely early to do this, and I can't get into safe mode to complete my task." When Windows will not boot, or will not boot into Safe Mode, the most common causes are:
- The Windows partition needs to be checked for errors
- The boot loader has become corrupt

Checking the partition for errors. About half of the time when Windows will not boot, it has to do with errors on the Windows partition. One poster typed chkdsk from the Recovery Console and let it run a report, as suggested on another site, to see if it pulled up any bad sectors. When you enter the Recovery Console, it will ask which Windows installation you want to log into, and you may be required to enter the Administrator password.

Recovery Console tools. Microsoft does provide some tools within the Recovery Console environment to aid in fixing a boot loader problem. Restart the server and boot from the DVD, click the operating system that you want to repair (it should be blank), and click Next.

Forcing Safe Mode through MSCONFIG. Note: the following method is not a safe way and is not recommended.
1. Type msconfig in RUN or the Start menu search box and press Enter.
2. For Windows XP and Server 2003 users, click the "Edit" button to edit the boot entries directly.
3. Select the last line showing "Microsoft Windows XP Professional" (or your Server 2003 entry), copy it, and paste it at the end, giving the copy a name such as "Safe Mode". This creates a boot menu.
4. It'll start booting Windows in Safe Mode: just select the Safe Mode option from the Windows boot loader and you'll be able to boot Windows into Safe Mode without any problem. (Otherwise, F8 needs to be hit just as the BIOS loading screen ends.)

If a newly added device or updated driver is causing problems, you can use Safe Mode to remove the device or roll back the update. One poster also used MSCONFIG to disable all non-Microsoft services except the Broadcom controller; nothing else remained in the list besides things like Java, Adobe, and Peachtree. If the boot menu does not come up, you may have a problem with the boot loader; if it does come up, the problem is with Windows itself.

GAG boot manager. When you first boot off of GAG's CD, it guides you through a mini-wizard. Add an entry for the installation, type in a name such as "Windows", and the entry you just added will be listed back on GAG's Main Screen.
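For reference, this is roughly what the duplicated BOOT.INI entry looks like after the edit. The ARC path and descriptions are illustrative and will differ per machine; /safeboot:minimal is the switch that forces Safe Mode (msconfig's Safe boot option adds /sos /bootlog /noguiboot alongside it):

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Standard" /fastdetect
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Safe Mode" /fastdetect /safeboot:minimal /sos /bootlog /noguiboot
```

With this entry in place, the boot loader offers "Safe Mode" on every restart, so no F8 timing is needed.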
Control the current of a power supply using another variable power supply connected in series

While disassembling a little electric oven (rated for 1300 W at 230 V), I scavenged one of the heating elements inside, whose resistance I measured to be about 40 ohms (let's call it Rload). Since it should be able to handle some power, I would like to use it as a "mild" load for testing power supplies. For instance, let's say I have a 5 V DC, 550 mA power supply (let's call it Ptest): once connected, Rload should draw around 125 mA, loading Ptest a bit. I have a Rockseed RS310P programmable power supply, on which I can set voltage and current up to 30 V and 10 A (let's call it Pvar). I thought about connecting Pvar and Ptest in series with Rload, like this:

My idea is that, by setting the current limit of Pvar, the PSU will act as a constant current source and adjust its voltage to whatever is needed to maintain that current. The current will cause a certain voltage drop (Vload) at the load, and given that Vload must be equal to Vtest + Vvar, Pvar will adjust its voltage to be Vvar = Vload - Vtest. For instance, given the current limit Iset = 250 mA, Vload = Iset * Rload = 10 V and Vvar = Vload - Vtest = 5 V: in order for 250 mA to flow through Rload, Vvar must be 5 V. But, since everything is connected in series, this means that 250 mA flow through Ptest as well, which is the effect I'm interested in: from Ptest's point of view, it is like a load drawing 250 mA. Thus, setting the current limit could drive this makeshift "variable load" within a certain range: in this case, I should be able to go from a little above 125 mA up to 875 mA (by setting Vvar to 30 V, which means Vload = Vtest + Vvar = 35 V).
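The arithmetic above can be checked with a few lines (values from the question; plain Ohm's law, assuming the 40-ohm element stays roughly constant with temperature):

```python
R_LOAD = 40.0   # ohms, scavenged heating element
V_TEST = 5.0    # volts, output of the supply under test (Ptest)

def loop_current(v_var):
    """Series-loop current for a given variable-supply voltage (Pvar)."""
    return (V_TEST + v_var) / R_LOAD

print(loop_current(0.0))   # Ptest alone: 0.125 A
print(loop_current(5.0))   # Vload = 10 V: 0.25 A
print(loop_current(30.0))  # Pvar at its 30 V maximum: 0.875 A
```

These match the 125 mA floor and 875 mA ceiling quoted in the question.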
I know proper variable loads exist, but I would like to know if this could really work: theoretically it should, although I don't know much about power supply topologies (except really simple unregulated ones), so I'm not sure about the practical pitfalls or even safety hazards involved. How about connecting a battery instead of a power supply as the test source? It should discharge the battery at a specific current.

Some amateurs create schematics that go against an engineer's logic, and they work. Experiments are needed to find out whether you are right. It would be easier to assemble some form of current stabilization; a lot of solutions exist.

This will work just fine as long as your variable power supply supports series connections (its output must be floating, not grounded). Keep in mind that you might damage your PSU under test if it, too, goes into current limiting, as the variable supply might drive a reverse voltage into it. It might be safer to use the variable supply in constant voltage mode, which will result in the overall current dropping if one of the two PSUs accidentally goes into current-limiting mode.

I checked my PSU: the output is floating, so I should be good to go.
Search term: "features-no-software-required"

- …Document Imaging Library Features The ImageGear document and imaging library includes a wide variety of features to help you build your application quickly. You can find the entire…
- …PrizmDoc Features PrizmDoc helps you streamline document management processes by allowing you to add viewing, redaction, annotation, conversion, and other services to your applications using our powerful RESTful… Control the viewer from your application, through easy APIs and exposed events. We also offer a ready-built version for SharePoint 2013. Prizm Content Connect empowers you to call…
- …PrizmDoc is designed to be supported on virtually any platform and language. If you have specific questions about your system's compatibility with our software, please contact us…
- …code. Zero installation is required on customer devices because everything they need is already in the browser. Functionality That Speeds Up Your Development Time Rather than taking time to custom…
- …software vendors, to perform image processing and image quality assessment," reports Sean Murphy, Lockheed Martin Census Project Manager. "We also required best-of-breed software technology for workflow, scanner control, and data…
- …advanced features while speeding up development, reducing cost, and getting your product to market sooner. One Less EMR Worry Let's face it… there's nothing simple about "solving" the EMR problem… Related to: FormFix ImageGear Medical ImagXpress PICTools Document PICTools Medical PICTools Photo SmartZone ISIS Xpress Prizm ActiveX Viewer ImageGear for .NET
- …and sophisticated layout options. With this release, the ImageGear .NET and Silverlight SDKs now allow developers to deploy smaller assemblies that include only those major features required by their applications…
- …SDK when more PDF features and support are required. PDF Xpress v3 delivers numerous new features. Software engineers may use PDF Xpress to create image-only PDF/A compliant documents. They may…
- …distribution while preserving the source document intact for compliance and other purposes. Prizm Content Connect requires only a simple server-side installation – no additional software is required on the client…
- …typically required to validate low confidence data. You should plan for development of a process to display suspect characters or fields to a human for manual data entry. Human interaction… Related to: ScanFix Xpress FormFix FormSuite for Invoices SmartZone
- Related to: ImageGear for .NET …v8.3 enables users on almost any system or device to view hundreds of different document file formats with nothing but a browser, no additional software required. Chief among the new…
- …a new version of the industry-leading HTML5 document viewer that integrates seamlessly into Microsoft's SharePoint 2013 collaboration environment. With no software required on end-user devices other than an HTML5 browser…
ACF Nested Repeaters is a fascinating feature. However, embedding numerous repeaters within a parent repeater block can be hard to implement on the frontend, as you normally need to write tons of PHP code. But don't worry; this tutorial presents a completely code-free solution with the help of our plugin, AnyWhere Elementor Pro. To give you a better understanding of how nested repeaters work, we will recreate the demo below of a hosting price table, inspired by the pricing page structure of Cloudways hosting. The first level of repeater field adds hosting platforms, like Digital Ocean, Linode, or Vultr. Within each platform, there are multiple monthly plans, managed by the second-level repeater field. A third level of repeater field then manages the features of each plan. So, let's see the setup.

ACF Nested Repeaters Setup
1. Start by giving the field group a name and a location, like a post or a page. Please check our detailed article on the ACF Repeater Field Complete Guide.
2. Then add a custom field (Platform) and make it a repeater field type.
3. Now we need to add sub-fields to this repeater block; for this demo, I have created two sub-fields, Title and Plans, where Plans is also a repeater field. This is the first level of nesting: Platform -> Plans.
4. Now add sub-fields inside the Plans repeater field. I have added three: Price, Configuration, and Features. Both Configuration and Features are of repeater type. So here comes the second level of nesting: Plans -> Configuration/Features.
5. Add some sub-fields to the Configuration repeater, then add some sub-fields to the Features repeater.
6. Once done, save the changes.

That completes our ACF nested repeater field setup; next, you need to populate these fields on a page or post just like we do for the rest of our custom fields.
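Before moving on to templates, it may help to picture the data this field group produces. The sketch below models it in Python with invented sample values (field names follow the tutorial; ACF returns a similar nested array to PHP) and shows why hand-rolled rendering needs one loop per nesting level:

```python
# Platform -> Plans -> Configuration / Features, as nested data.
platforms = [
    {
        "title": "Digital Ocean",            # first-level repeater row
        "plans": [
            {
                "price": "$11/mo",           # second-level repeater row
                "configuration": [           # third-level repeaters
                    {"label": "1 GB RAM"},
                    {"label": "25 GB storage"},
                ],
                "features": [
                    {"label": "Free SSL"},
                    {"label": "24/7 support"},
                ],
            },
        ],
    },
]

def render(platforms):
    """Flatten the nested repeaters into display lines, one loop per level."""
    lines = []
    for platform in platforms:
        lines.append(platform["title"])
        for plan in platform["plans"]:
            lines.append(f"  {plan['price']}")
            for item in plan["configuration"] + plan["features"]:
                lines.append(f"    - {item['label']}")
    return lines

print(render(platforms))
```

Each extra nesting level means another loop, template, and styling pass when done by hand; the block layouts below replace all of that.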
Designing Block Layouts for each Repeater Block

After setting up the repeaters, our next step is to design the templates for each repeater block that we have created – Features, Configuration, Plans, and Platform. For creating the templates, I will be using AnyWhere Elementor Pro. We will start with the innermost blocks – Features and Configuration – and then move on to Plans and Platform. Now, let’s start by creating a template for the Features and Configuration blocks.

Block Layout for Features & Configuration Repeaters

1. Create a new AE Template, and then under the AnyWhere Elementor settings, do the following configuration:
- Render Mode: Select the ACF Blocks option when designing the layout for a Repeater field or Flexible field.
- Field Location: Select Post as the location.
- Field Type: Select the Repeater option.
- Preview Post: Here, you need to select a post to which you have applied the field group.
- Field: Select Features.
Note: Follow the same settings when creating the block layout for the Configuration repeater block; just select Configuration in place of Features in the Repeater field.
2. Next, edit the template in the Elementor editor.
3. Under the Elementor editor, use the AE – ACF Fields widget and configure it.

Block Layout for the Plans Repeater

The Plans repeater block acts as a container for the sub-repeaters (Configuration and Features). So, while designing its template, we will have to display data from the sub-repeaters as well.
1. First, add the AE – ACF Fields widget to display the data from the Price field.
2. Then, to display data from a sub-repeater field, we’ll use the AE – ACF Repeater widget.
3. Configure the widget:
- In Skin, choose a display style like Accordion, Tab, or Default.
- Next, select the block layout created for the Configuration block.
- Select Configuration in the Repeater field option.
- You will see that data from the nested repeater field Configuration is displayed.
4.
Similarly, add another AE – ACF Repeater widget to display the data from the Features repeater. Configure the widget the same way we did in the step above.

Block Layout for the Platform Repeater Field

1. Create a new AE Template.
2. In Elementor, add the AE – ACF Repeater widget and configure it. In Block Layout, select the one we just created for the Plans repeater field, and in the Repeater field name, choose Plans.

Here is the final look of the layout, containing all the nested repeaters.

Display Nested Repeaters on the Frontend

Now we will use the templates created above to display the repeater field content on the page.
- Edit the page in the Elementor editor, where you have populated the custom field meta details, and drag and drop the AE – ACF Repeater widget onto a section.
- In Skin, choose Tab or Accordion.
- Next, select the Repeater block layout created for the Platform field.
- Choose the Repeater field – Platform.
- In the Tab title, enter the custom field slug you want to display inside the tabs. This title appears inside the tabs (in blue).
- Then, set the layout and alignment styles.

And that is it; you are done. Have a look at the final output. As demonstrated in this tutorial, displaying nested repeater data on the Elementor frontend is a breeze. When we say Nested Repeater, people start thinking of the amount of code required for the front-end implementation, but we have achieved this without writing a single line of code. So, make as many nested repeaters as you want and have total control over their templates, designed according to your needs and creativity.
OPCFW_CODE
API to make IoT connectivity simpler

Two Google engineers have proposed a way for IoT devices to connect easily to web pages. The move could pave the way for simpler installation of Internet of Things (IoT) sensors. The engineers, Reilly Grant and Ken Rockot, said their WebUSB API would enable hardware manufacturers to set up and control devices from web sites. The proposal would also make connecting USB devices and complex IoT sensors easier. Today, when connecting devices, users either need the right drivers to set them up or have to log into a small web server on the device itself. WebUSB allows the device to contact a web page and be configured from there. “For lots of devices it does because there are standardized drivers for things like keyboards, mice, hard drives and webcams built into the operating system. What about the long tail of unusual devices or the next generation of gadgets that haven’t been standardized yet? WebUSB takes “plug and play” to the next level by connecting devices to the software that drives them across any platform by harnessing the power of web technologies,” said the engineers on the WebUSB website. The engineers were quick to point out that the API will not provide a general mechanism for any web page to connect to any USB device. They said that historically, hosts and devices have trusted each other too much to let arbitrary pages connect to them. They added that there are published attacks against USB devices “that will accept unsigned firmware updates that cause them to become malicious and attack the host they are connected to; exploiting the trust relationship in both directions.” According to the engineers, WebUSB could replace native code and native SDKs with cross-platform hardware support and web-ready libraries.
API connects IoT to the net

The proposed mechanism has also been designed to be backward compatible with USB devices without needing special firmware. “For devices manufactured before this specification is adopted information about allowed origins and landing pages can also be provided out of band by being published in a public registry,” the two said. The code is still a work in progress, unofficial, and hosted at the W3C’s Web Platform Incubator Community Group (WICG). The engineers welcome members of the WICG to contribute to the project. Christian Smith, President and Co-Founder of TrackR, told Internet of Business that he sees WebUSB providing the standard to allow a seamless connection between hardware with USB and software. “It would allow me to take a mechanical design file from Google drive, automatically download the calibration settings for a 3D printer, plug in the 3D printer, and be able to print directly from the web. WebUSB short circuits the complications to hardware and allows your USB devices to have instant access to updatable drivers, files, and printers,” he said.
OPCFW_CODE
With ESP-AT releases > v<IP_ADDRESS>_esp32c3 some AT+BLESCAN results are missing

With ESP-AT releases newer than v<IP_ADDRESS>_esp32c3 (for example, the current master) some AT+BLESCAN results are missing. With ESP-AT release v<IP_ADDRESS>_esp32c3, AT+BLESCAN works correctly. The advertising interval for "ef:9e:fa:6e:65:cc" is roughly 1 second.

Hardware: ESP32-C3-DevKitM-1

ESP-AT setup after AT+RESTORE:
AT+BLEINIT=1
AT+BLESCANPARAM=0,0,0,0x30,0x30

AT+BLESCAN results with release v<IP_ADDRESS>_esp32c3 (works as expected):
2022-04-27 09:20:33 --> AT+BLESCAN=1,5,1,"ef:9e:fa:6e:65:cc"
2022-04-27 09:20:33 -->
2022-04-27 09:20:33 --> OK
2022-04-27 09:20:34 --> +BLESCAN:"ef:9e:fa:6e:65:cc",-33,0201061bff99040512ac43f0ca6dfffcfff003f8b3563f98ffef9efa6e65cc,,1
2022-04-27 09:20:35 --> +BLESCAN:"ef:9e:fa:6e:65:cc",-37,0201061bff99040512ab43ebca690000fff003fcb3563f9900ef9efa6e65cc,,1
2022-04-27 09:20:36 --> +BLESCAN:"ef:9e:fa:6e:65:cc",-44,0201061bff99040512ac43ebca680004fff40400b3563f9901ef9efa6e65cc,,1
2022-04-27 09:20:37 --> +BLESCAN:"ef:9e:fa:6e:65:cc",-44,0201061bff99040512ac43ebca680004fff40400b3563f9902ef9efa6e65cc,,1

AT+BLESCAN results with the current master branch (5ccdf824ae58c490f174958bfba00fa0ecf7e80a) (doesn't work as expected):
2022-04-27 09:19:41 --> AT+BLESCAN=1,5,1,"ef:9e:fa:6e:65:cc"
2022-04-27 09:19:41 -->
2022-04-27 09:19:41 --> OK
2022-04-27 09:19:45 --> +BLESCAN:"ef:9e:fa:6e:65:cc",-42,0201061bff99040512ae43fdca6a0004fffc03fcb4363f98ecef9efa6e65cc,,1
2022-04-27 09:19:51 --> AT+BLESCAN=1,5,1,"ef:9e:fa:6e:65:cc"
2022-04-27 09:19:51 -->
2022-04-27 09:19:51 --> OK
2022-04-27 09:20:05 --> AT+BLESCAN=1,5,1,"ef:9e:fa:6e:65:cc"
2022-04-27 09:20:05 -->
2022-04-27 09:20:05 --> OK

Closing due to inactivity. Please feel free to re-open or file a new issue if you have any more questions.
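For comparing the two releases, the captured logs can be reduced to a per-device result count with a small parsing helper (a hypothetical sketch; the +BLESCAN line format is taken from the captures above):

```python
import re
from collections import Counter

# Matches lines like:
#   +BLESCAN:"ef:9e:fa:6e:65:cc",-33,0201061bff9904...,,1
BLESCAN_RE = re.compile(r'\+BLESCAN:"(?P<addr>[0-9a-f:]+)",(?P<rssi>-?\d+),')

def count_scan_results(log_text):
    """Count +BLESCAN result lines per BLE address in a captured AT log."""
    counts = Counter()
    for match in BLESCAN_RE.finditer(log_text):
        counts[match.group("addr")] += 1
    return counts

# Four results in a 5-second scan, as in the good v2.4 capture above
log_good = '''
+BLESCAN:"ef:9e:fa:6e:65:cc",-33,0201061bff9904,,1
+BLESCAN:"ef:9e:fa:6e:65:cc",-37,0201061bff9904,,1
+BLESCAN:"ef:9e:fa:6e:65:cc",-44,0201061bff9904,,1
+BLESCAN:"ef:9e:fa:6e:65:cc",-44,0201061bff9904,,1
'''
print(count_scan_results(log_good)["ef:9e:fa:6e:65:cc"])  # 4
```

With a ~1 s advertising interval, a 5-second scan should report roughly four or five results per device, which makes the drop to one (or zero) on master easy to spot programmatically.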
GITHUB_ARCHIVE
Yeah, Miles still reminds me of the DOS days. It's interesting that the thing still exists today, but it has evolved beyond just providing a set of sound card drivers and some extras. From the website: Today, Miles features a no-compromise toolset that integrates high-level sound authoring with 2D and 3D digital audio, featuring streaming, environmental reverb, multistage DSP filtering, multichannel mixing, and highly-optimized audio decoders (MP3, Ogg and Bink Audio). I can see those features bringing value to present-day game developers.

You just need to allow the portal2 binary to use execheap. Now obviously it's not good that Portal 2 uses execheap, but SELinux is fine-grained enough to allow for it. So it's either one of these that is the solution... a) Go to System Settings -> SELinux -> Exceptions tab -> Tick a checkbox next to "Portal 2". b) Read complex technical documentation with no good examples and spend a full day crafting the proper configuration by manually editing various text files. I wonder which one... Interestingly enough, the whole NX feature of Windows is still today enabled by default only for system services and processes. I guess Microsoft has made this decision to provide maximum compatibility. But when you install Windows, it's a good measure to go flip it on for all processes in Advanced System Settings. I personally have noticed that GoldSrc-based games and Rayman 2 crash under Windows if NX is enabled for them. I have not seen problems with any other software.

Never mind what the distro or the desktop environment is (well, within reason). So long as you can help her, even on the end of a crackly phone line, it's fine. When installing for any non-techie, Desktop Environment aside, show them how to find their browser and applications, show them how to find the file manager, and install Synapse so that they can search for pretty much anything (for bonus points, set the Synapse shortcut to something simple like Super+Space).
Basically, give them their starting points, and show them how to search. Whether you choose to move your mother/relative/neighbour to KDE, Xfce, GNOME 3 or even Unity if you like (or even Windows or Mac, at that) has no bearing. Once you have set them up, installed the applications and configured all shortcuts, it's you who needs to know the system. I support my dad on his Mac (he's die-hard Mac, which is why I haven't moved him to Linux), piloting him blind because I know the system inside out; I know that if he clicks in one place, I can predict the set of dialogs he'll see. I use Manjaro Xfce for Linux because it's install-once and sufficiently light. When setting up for a non-techie, I customize shortcuts my way, show them the ropes in person and hand them a cheat sheet based on my setup choices. If they mail me or call me, I know how to pilot them back to safety. Link to Original Source

The airline said in a statement that flight MH370 disappeared at 2:40 am Saturday. It was expected to land in Beijing at 6:30 am. "Malaysia Airlines is currently working with the authorities who have activated their search and rescue team to locate their aircraft. According to China's state news agency, the plane lost communication over Vietnam with control department in Ho Chi Minh City at 1:20 a.m. The radar signal also was lost." Link to Original Source

...in case you missed the first three words of the summary: "Theoretical physicists propose..." I prefer to get my physics from physicists that actually exist, thanks. Thanks for coming, now put on your helmet and get back on the short bus.

Just upgrade to Windows 8.1 and be done with it.

I would generally agree, as these force-Linux-on-relatives plans are always a bit cringeworthy. But in this case I have heard reports of the Windows 8.x GUI causing problems for ordinary folks too, so I would look into other platforms as well. Just go with KDE. Of the big desktops, it runs the fastest and has the best quality assurance.
Also, the UI resembles XP, which was one of your requirements. So: Debian with KDE, or the Fedora KDE spin. If she needs Flash, Google Chrome is pretty much the only option. If not, then Firefox is fine too. If you have extra money, I would just go with a new Chromebook or a tablet.

In theory (which is false in this case, I'm sure), we would do the best possible, cleanest refining we could, so as to cause the least amount of damage to the planet on the whole. That is, if we were looking beyond ourselves. We aren't, and we won't.
OPCFW_CODE
It is no secret there is a pronounced gender gap in the games industry. While nearly half of all game players are women, only 30 percent of developers identify as women. Why does this gap exist, and what can the game industry of today do to correct it for the next generation of developers?

Heroes to follow

The current generation of game developers grew up playing games that almost exclusively featured male protagonists. White skin, short hair, and masculinity reigned supreme. As a girl in the 90s, I saw few popular games featuring women: “Metroid,” “Tomb Raider,” and some Barbie titles. When most game boxes feature characters who look nothing like you, it can be hard to feel like you belong. It is unsurprising that fewer women attempt to enter the game industry after such exclusion in their youth. Though many girls today play video games, there are still few women heroes in games for them to follow. The International Game Developers Association’s recent whitepaper, “Inclusive Game Design and Development,” dives into methods for combating this barrier with diverse inspirations research, varied game mechanics and goals, and relatable character development. A diverse team will innately generate diverse content, but any studio can pursue an inspiring mosaic of content. Game developers need to prioritize representation and depth in their character choices to provide the heroes who can inspire the next generation of women gamers. Just as girls need to be supported by relatable heroes, women in the games industry need leaders who uphold their interests and empower their growth. The women who are within the games industry see harassment and discrimination at a much higher rate than their male counterparts. An overwhelming 74 percent of developers also felt there was not equal opportunity for all within the industry.
Three key approaches to combat these issues are outlined in the IGDA’s “Guide for Game Companies: How to Create and Sustain a Positive Work Culture:” proactive cultural development, team building, and team development. Upholding inclusive company values provides a North Star that guides workplace decisions to support all employees, especially those who are often forgotten. Employees of a company with a good culture will feel inspired, and will grow and carry their projects along with them. Yet, where are the women leaders in games? These values and cultural decisions are driven by leadership. Recruiting women talent is only the first step. The average game developer changes studios 2.2 times every five years, and diverse hires leave our industry at a higher rate. Without mentorship and growth opportunities, women developers will not be empowered to evolve into the leaders we need. Together, we can condemn the failures in the game industry’s past and take the steps forward to ensure our games and the game industry will uplift our current and next generation of women developers. It’s time to power up women in games.
OPCFW_CODE
I suffered internet disconnections caused by cable modem reboots or malfunctions. It happened every few hours or few days. I tried contacting technical service and an on-site check. They all failed to find the root cause. I will post the cable modem log and hope there is a solution without switching modems. Thanks.
1. Spectrum internet.
2. Motorola MB7420
3. ASUS AC 1300
Cable modem log / Event log
Usually, rebooting the cable modem and router will fix the connectivity, but it only lasts a couple of hours or days. Here is the router log.
Jul 22 21:26:53 rc_service: ntp 837:notify_rc restart_diskmon
Jul 22 21:26:53 disk_monitor: Finish
Jul 22 21:26:55 disk monitor: be idle
Jul 22 21:27:01 crond: time disparity of 1565126 minutes detected

When you try to "reboot", you have to start the modem first and wait until all the LED indicators stop flashing, THEN apply power to the router and let it boot up. You might also need to reset your router to factory defaults if it does not receive the proper network setup values from the modem.

Thanks for the reply. Yes, that's indeed the procedure I used to reboot everything. Here are the detailed steps:
1. Turn off the cable modem and router.
2. Turn on the cable modem and wait until the globe icon stops flashing and turns to a solid green light.
3. Turn on the router and wait for connections to be established.
But such a procedure does not guarantee a reliable connection. Another thing I tried is to reset and restore the factory default settings for both the cable modem and router. But this failed to solve the problem as well. The latest thing I tried is to set traffic manager QoS (upload: 3 Mb/s, download: 80 Mb/s). This seems to give a better connection, but it still failed after a few hours today. Also, this method seems stupid to me. Any comments or suggestions?

Have you checked all the coax connections from the point the cable comes in from the street to the modem for any loose connections or damaged coax?
I personally checked the connection from the wall plug to the cable modem and the connection in the electrical room. Both seem to have tight coax cable connections. And a Spectrum technician checked all the cable signals and found no issues. Is there any other info I can provide to help debug? Thanks.

First, turn off those traffic manager functions in your router for now. When your data flow approaches the limits you set for upload and download speeds they add extra delays, which really screws up the Speedtest results. Next, as @RAIST5150 said, disconnect any in-house cable that doesn't have a device or terminator attached. If your existing splitter has any open, unused ports, those too MUST have terminations installed to keep the noise out! What we can see of your signal levels looks OK, but the SNR (Signal to Noise ratio) is below 40 on every channel. We REALLY need to see the corrected and uncorrected data packet errors. Those get reset to zero when you unplug power to the modem. Let the modem run for at least six hours (preferably more than 12) and then copy and post the signal page here. That will give us some valuable information about your home connection.

1. I disabled the traffic manager function in the router.
2. Installed terminators on all the unconnected plugs.
Right after the reboot. After two weeks and a few calls with Spectrum service, my internet is more stable now, enough to show a log of more than two days. And here is the log.

Signal levels of both the downstream [DS] and upstream [US] channels are very good. It looks like Spectrum has masked the packet error counters, since both the corrected and uncorrectable error counts are still displaying all zeros after almost 72 hours. That simply NEVER happens; there are always errors detected while the modem auto-ranging circuits adjust the signal levels between all of the video channels.
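The rule-of-thumb signal checks discussed in this thread can be sketched as a tiny helper. The threshold values below are commonly cited DOCSIS guidelines (assumptions, not Spectrum's official limits), and the example readings are made up:

```python
def check_downstream(power_dbmv, snr_db):
    """Flag a downstream channel reading against common DOCSIS rules of thumb.

    Guidelines assumed here: downstream power within -7..+7 dBmV,
    SNR at least 33 dB (40 dB or more is comfortable).
    """
    issues = []
    if not -7.0 <= power_dbmv <= 7.0:
        issues.append("downstream power out of range")
    if snr_db < 33.0:
        issues.append("SNR too low")
    return issues

# A healthy-looking channel: 2.1 dBmV at 38.9 dB SNR (made-up reading)
print(check_downstream(2.1, 38.9))    # []
# A channel with weak signal and noise
print(check_downstream(-12.0, 30.0))  # ['downstream power out of range', 'SNR too low']
```

As the thread notes, levels alone are not enough: the corrected/uncorrectable codeword error counters are what reveal intermittent plant problems, and those have to be read after the modem has been powered for many hours.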
OPCFW_CODE
# -*- coding: utf-8 -*-

import pytest

import aerospike
from aerospike import exception as e

from .as_status_codes import AerospikeStatus
from .udf_helpers import wait_for_udf_removal, wait_for_udf_to_exist
from .test_base_class import TestBaseClass


@pytest.mark.usefixtures("as_connection")
class TestUdfPut(TestBaseClass):

    def setup_class(cls):
        cls.udf_name = "example.lua"

    def teardown_method(self, method):
        """
        Teardown method: remove the UDF if a test registered it
        """
        udf_list = self.as_connection.udf_list()
        for udf in udf_list:
            if udf["name"] == self.udf_name:
                self.as_connection.udf_remove(self.udf_name)
                wait_for_udf_removal(self.as_connection, self.udf_name)

    def test_udf_put_with_proper_parameters_no_policy(self):
        """
        Test to verify execution of udf_put with proper parameters,
        and no policy dict
        """
        filename = self.udf_name
        udf_type = 0

        status = self.as_connection.udf_put(filename, udf_type)
        assert status == 0

        udf_list = self.as_connection.udf_list({})
        present = False
        for udf in udf_list:
            if self.udf_name == udf["name"]:
                present = True
                break
        assert present

    def test_udf_put_with_proper_timeout_policy_value(self):
        """
        Test that calling udf_put with the proper arguments
        will add the udf to the server
        """
        policy = {"timeout": 180000}
        filename = self.udf_name
        udf_type = 0

        status = self.as_connection.udf_put(filename, udf_type, policy)
        assert status == 0

        udf_list = self.as_connection.udf_list({})
        present = False
        for udf in udf_list:
            if udf["name"] == filename:
                present = True
                break
        assert present

    def test_udf_put_with_filename_unicode(self):
        policy = {}
        filename = "example.lua"
        udf_type = 0

        status = self.as_connection.udf_put(filename, udf_type, policy)
        assert status == 0

        # wait for the udf to propagate to the server
        wait_for_udf_to_exist(self.as_connection, filename)

        udf_list = self.as_connection.udf_list({})
        present = False
        for udf in udf_list:
            if "example.lua" == udf["name"]:
                present = True
        assert present

    def test_udf_put_empty_script_file(self):
        policy = {}
        filename = "empty.lua"
        udf_type = 0

        with pytest.raises(e.LuaFileNotFound):
            self.as_connection.udf_put(filename, udf_type, policy)

    def test_udf_put_with_filename_too_long(self):
        policy = {}
        filename = "a" * 510 + ".lua"
        udf_type = 0

        with pytest.raises(e.ParamError):
            self.as_connection.udf_put(filename, udf_type, policy)

    def test_udf_put_with_empty_filename(self):
        policy = {}
        filename = ""
        udf_type = 0

        with pytest.raises(e.ParamError):
            self.as_connection.udf_put(filename, udf_type, policy)

    def test_udf_put_with_empty_filename_beginning_with_slash(self):
        policy = {}
        filename = "/"
        udf_type = 0

        with pytest.raises(e.ParamError):
            self.as_connection.udf_put(filename, udf_type, policy)

    def test_udf_put_with_proper_parameters_without_connection(self):
        policy = {}
        filename = self.udf_name
        udf_type = 0

        config = TestBaseClass.get_connection_config()
        client1 = aerospike.client(config)
        client1.close()

        with pytest.raises(e.ClusterError) as err_info:
            client1.udf_put(filename, udf_type, policy)

        assert err_info.value.code == AerospikeStatus.AEROSPIKE_CLUSTER_ERROR

    def test_udf_put_with_invalid_timeout_policy_value(self):
        """
        Test that an invalid timeout policy causes an error on udf_put
        """
        policy = {"timeout": 0.1}
        filename = self.udf_name
        udf_type = 0

        with pytest.raises(e.ParamError) as err_info:
            self.as_connection.udf_put(filename, udf_type, policy)

        assert err_info.value.code == AerospikeStatus.AEROSPIKE_ERR_PARAM

    def test_udf_put_without_parameters(self):
        """
        Test that calling udf_put without parameters raises an error
        """
        with pytest.raises(TypeError) as typeError:
            self.as_connection.udf_put()

        assert "argument 'filename' (pos 1)" in str(typeError.value)

    def test_udf_put_with_non_existent_filename(self):
        """
        Test that an error is raised when an invalid filename
        is given to udf_put
        """
        policy = {}
        filename = "somefile_that_does_not_exist"
        udf_type = 0

        with pytest.raises(e.LuaFileNotFound) as err_info:
            self.as_connection.udf_put(filename, udf_type, policy)

        assert err_info.value.code == AerospikeStatus.LUA_FILE_NOT_FOUND

    def test_udf_put_with_non_lua_udf_type_and_lua_script_file(self):
        """
        Test to verify that an invalid udf_type causes an error
        """
        policy = {"timeout": 180000}
        filename = self.udf_name
        udf_type = 1

        with pytest.raises(e.ClientError) as err_info:
            self.as_connection.udf_put(filename, udf_type, policy)

        assert err_info.value.code == AerospikeStatus.AEROSPIKE_ERR_CLIENT

    def test_udf_put_with_all_none_parameters(self):
        """
        Test to verify that calling udf_put with all 3 parameters
        as None will raise an error
        """
        with pytest.raises(TypeError) as exception:
            self.as_connection.udf_put(None, None, None)

        # The original assertion ("..." or "..." in str(...)) was always
        # true; the intended check is that either message appears.
        assert ("an integer is required" in str(exception.value)
                or "cannot be interpreted as an integer" in str(exception.value))

    @pytest.mark.parametrize(
        "filename, ftype, policy",
        (
            (1, 0, {}),
            (None, 0, {}),
            ((), 0, {}),
            ("example.lua", "0", {}),
            ("example.lua", (), {}),
            ("example.lua", None, {}),
            ("example.lua", 0, []),
            ("example.lua", 0, "policy"),
            ("example.lua", 0, 5),
        ),
    )
    def test_udf_put_invalid_arg_types(self, filename, ftype, policy):
        """
        An incorrect type for the second argument raises a TypeError;
        the others cause a ParamError
        """
        with pytest.raises((e.ParamError, TypeError)):
            self.as_connection.udf_put(filename, ftype, policy)

    def test_udf_put_with_extra_arg(self):
        policy = {}
        with pytest.raises(TypeError):
            self.as_connection.udf_put(self.udf_name, 1, policy, "extra_arg")
STACK_EDU
[BUG] The docs.count does not match between leader and follower

What is the bug?
The docs.count does not match between the leader and the follower.

How can one reproduce the bug?
I am using opensearch 2.9.0. Steps to reproduce the behavior:
1. Start a stress test program, which will create a test index in the leader cluster and write 10,000,000 docs to this index, such as:
nohup ./opensearch-stress write --opensearch-address "http://{leader_ip:port}" --index-name "es-bulk-0" --bulk-batch-size 1000 --bulk-times 10000 &
2. Before the stress test program ends, start the index replication task in the follower cluster, such as:
curl -XPUT -k -H 'Content-Type: application/json' 'http://{follower_ip:port}/_plugins/_replication/es-bulk-0/_start?pretty' -d '
{
"leader_alias": "leader-cluster-opensearch",
"leader_index": "es-bulk-0"
}'
3. After the stress test program ends, wait for the index replication task to finish. The docs.count of the leader index is 10,000,000, as expected, but the docs.count of the follower index is always less than 10,000,000.

What is the expected behavior?
When the leader index is writing docs and the index replication task is then started, then once the leader index stops writing and the index replication task ends, the docs.count of the follower index should equal the docs.count of the leader index.

What is your host/environment?
OS: Ubuntu 20.04
Version: opensearch 2.9.0
Plugins: cross-cluster replication

Do you have any screenshots?
As far as I know, cross-cluster replication has two stages. In the first stage, the existing data is synchronized: it takes a snapshot of the segment files of the leader index, then reads these files and transfers them to the follower cluster.
In the second stage, it reads changes from the translog starting at the localCheckpoint to synchronize the incremental data after the first stage finishes. I found that the docs.deleted of the follower index was 6561, but the stress test program only wrote docs without specifying _id and did not perform any delete/update/upsert operations. After the stress test program ends, the docs.count of the leader index is 10,000,000, as expected. After the two stages finished, I found that the docs.count of the follower index was 9993439, less than the leader index. I executed the refresh and flush APIs, but the docs.count was still less than the leader index, and the difference was exactly the value of docs.deleted I had found in the first stage (10000000 - 9993439 = 6561).

Do you have any additional context?
This bug can be easily reproduced; you only need to start the index replication task while data is being written in batches to the leader index. I have reproduced this bug many times. I have tried other stress testing programs and this bug always appears. So I hope I can find help here, thx. Btw, if there is no writing to the leader index and you then start the replication task, after the task ends the docs.count of the leader index is equal to the follower index.

Can you trigger refresh on the follower API and then verify the doc count?
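The arithmetic in the report can be checked directly: the documents missing from the follower correspond exactly to the docs.deleted observed after the first (segment-copy) stage:

```python
leader_docs = 10_000_000     # docs.count on the leader index
follower_docs = 9_993_439    # docs.count on the follower after both stages
docs_deleted_stage1 = 6_561  # docs.deleted observed on the follower

missing = leader_docs - follower_docs
print(missing)                         # 6561
print(missing == docs_deleted_stage1)  # True
```

This is what makes the report compelling: the shortfall is not random but exactly the number of documents the follower marked as deleted, suggesting those deletes were never compensated by the corresponding writes.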
GITHUB_ARCHIVE
Please, don't laugh, I'm pretty amateurish about this... I use Bosch software, LapSim and WinDarab, for cars with Motronic injection (BMW only, in my case). LapSim is "universal" (that is, you can use it to simulate anything) but WinDarab is pegged to Motronic. I also have a very old (bought around 2000) Brembo acquisition card with "ride height" sensors made with cheap measuring lasers, adapted locally, but I imagine it's discontinued, as I could not find the webpage again. For Bosch software, check here: http://www.bosch-motorsport.com/content ... l/3589.htm The documentation is here: http://www.bosch-motorsport.com/content ... Manual.pdf And here: http://www.bosch-motorsport.com/content ... mV2007.pdf I posted on my site a scan of the article about simulations that appeared in RaceTech magazine, just in case. Warning: the data logger by Bosch costs around 2,300 euros; I bought the card from Brembo for US$300... I use it in karts, that explains it all. Second warning: you can stop reading now, this is NOT what you're looking for, but it could help you tangentially. I've also rigged "locally" a system by Vectra, a really old (founded in the XVIIIth century, I swear) French firm, using a GPS (with 3-meter precision) and a computer to get the track center coordinates, like this: If you have a GPS you can get more or less accurate coordinates if you use WAAS. Otherwise, you have to pay around US$500 to get good precision for post-processing. The Vectra system was originally developed for road inventory (I helped to install it for a Colombian road agency). It's also rigged by me to use a taxicab distance measuring device (I swear I'm not making this up). You can laugh now... Anyway, I'm sure that the guys around here who have specific experience with Formula Student can help you more. I just use all this paraphernalia to play around with the karts at our local track. I've learnt a lot (that's something easy to do, as I don't know a lot).
Something I've found really useful is to deduce the track coordinates from Google Earth pictures. You'll need ArcGIS (or some GIS software capable of projecting coordinate systems) and AutoCAD or similar to get them. The results are pretty impressive, as I've explained at my site: AutoCAD drawing of Catalunya circuit - Story of a restitution extraordinaire (follow the link at the bottom of the page). I was able to deduce the length of the straights, the radii of the curves and the approximate equations of the transition curves, like this: Of course, I cannot find slopes that way, neither lateral nor longitudinal; that's something you have to measure directly (or estimate, if you wish). I'd be glad to help with that last, unnecessary part about track coordinates, if time allows me (it never does!).
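Deducing a curve radius from digitized track coordinates, as described above, boils down to fitting a circle; for any three non-collinear points it is the classic circumradius formula (a generic sketch, not tied to any particular GIS export):

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three (x, y) track points, in the
    same units as the coordinates (e.g. meters)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Twice the triangle area, via the 2D cross product
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    if area2 == 0:
        raise ValueError("points are collinear: this is a straight, not a curve")
    return a * b * c / (2 * area2)

# Three points sampled from a 50 m constant-radius corner
print(round(circumradius((50, 0), (0, 50), (-50, 0)), 1))  # 50.0
```

For real digitized centerlines you would fit the circle to many points (least squares) rather than just three, since individual Google Earth coordinates carry a few meters of error.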
OPCFW_CODE
Accurate LiDAR classification and segmentation is required for developing critical ADAS and autonomous-vehicle components; mainly, it's required for high-definition mapping and for developing perception and path/motion planning algorithms. This article will cover best practices for how to accurately annotate and benchmark your AV/ADAS models against LiDAR ground truth training data. Autonomous Vehicle (AV) developers need vast amounts of accurately labeled images and point cloud scenes to train their artificial intelligence systems. Point clouds are generated by LiDAR, which stands for Light Detection and Ranging. LiDAR uses pulsed laser light to construct 3D representations of objects and terrain. Recently, interest in LiDAR has grown significantly, especially for generating high-definition maps that can semantically specify static objects in a map to 5-cm accuracy. To build advanced AI models, AV scientists are logging millions of miles in experimental cars. But once those vehicles collect camera feeds and point clouds to build high-resolution maps, trained humans must painstakingly label those images and point clouds by tagging every object. “For every hour driven, it’s approximately 800 human hours to label,” Carol Reiley, board member of the AV technology startup Drive.ai, said last year. Human grunt work is no substitute for an elegant solution from a hyper-focused modular supplier. This challenge is fundamentally different from any that exists in the traditional automotive industry, so there is no existing supplier base to solve it. That spells an opportunity for startups such as Deepen AI, which is using artificial intelligence to quickly and accurately label point cloud scenes and images that are currently tagged by hand. Deepen AI has no ambitions of being one of the full-stack AV developers. Instead, it is working on various tools to accelerate their work, just as Microsoft’s Visual Studio helps developers to build and debug software.
In other words, Deepen AI is positioning itself for the AV industry's modular future. In his talk, Deepen AI founder and CEO Mohammad Musa will discuss best practices for accurate LiDAR data classification and segmentation. Humans face many problems in understanding and working with LiDAR data. Most of the time, data processors don't have access to a camera feed to help them interpret the LiDAR point cloud data, which leaves a lot of room for guessing and mistakes. Drawing on Deepen AI's years of experience working with LiDAR data sets from leading LiDAR vendors, Tier 1 suppliers, and OEMs, Mohammad will share how they have helped these companies increase the safety of their autonomous systems by better utilizing LiDAR training data. "To realize the great benefits of autonomy, we need to resolve all the bottlenecks preventing us from increasing the safety and reliability of autonomous systems much faster. At the current rate of development, we are wasting too many human cycles and unnecessary costs while still risking lives during the testing process. This is where Deepen AI is focusing on accelerating the autonomous system development process while helping customers ensure a high safety and reliability bar. That can only be done by using very smart tools that are specialized and dedicated for this problem domain" - Mohammad Musa

Mohammad Musa will be presenting LiDAR Training Data Best Practices on Tuesday, June 26 in Meeting Room 211AD.
Bamboo 2.6 upgrade guide

Please read the Supported platforms page for the full list of supported platforms for Bamboo.

Upgrading from Bamboo 2.5 to 2.6

We strongly recommend that you back up your xml-data directory before proceeding. For full instructions, please follow the Bamboo upgrade guide. We also strongly recommend that you export your Bamboo data for backup before proceeding. Please note that this may take a long time to complete, depending on the number of builds and tests in your system. For full instructions, please see Exporting data for backup. If you are using plugins, please make sure that your plugins are compiled against 2.6 before upgrading. Before you upgrade, please read the following important points that relate to Bamboo 2.6.

Please set aside some time when upgrading to Bamboo 2.6 or later

As part of the performance improvements in version 2.6, test result data is stored differently. In versions of Bamboo prior to (and excluding) 2.6, all test result data was stored in XML files on the filesystem. From Bamboo 2.6, some* of this test result data is stored in the database, permitting quicker retrieval of this information (and consequently faster Bamboo responsiveness) than can be achieved by accessing XML files.

* Only test result data from failed and fixed builds is stored in the database, since this data is the most likely to be examined by Bamboo users. (Fixed builds are those which built successfully but had failed the previous time they were built.) Be aware that the test result data for successful builds is still stored in XML files on the filesystem.

During the Bamboo 2.6 upgrade process, relevant test result data generated by previous versions of Bamboo will automatically be migrated to the database when Bamboo 2.6 first starts up. No user intervention is required during this process, which only runs once. Subsequent Bamboo starts will not involve this data migration process. 
Bamboo administrators should be aware that this data migration process might take some time, depending on the amount of data that needs to be moved to the database. In many cases, this process should complete within a matter of minutes. However, if your stored test result data is extensive, this data migration process could take over an hour.

The table below is a guideline to help you estimate how long this data migration process will take during the Bamboo upgrade procedure. The first column is the number of builds in history multiplied by the average number of test results per build. You can estimate the number of builds in history by multiplying the number of plans configured in Bamboo by the number of times each of these plans has run. For example, if you have 20 plans configured and each plan has run 300 times, there will be 6,000 builds (i.e. 20 x 300) in the build history. Note that expired builds are removed from the build history.

Number of Builds in History x Number of Tests per Plan    Estimated Migration Time
 2,500,000  (5,000 x 500)    < 3 min
 5,000,000 (10,000 x 500)    < 6 min
10,000,000 (20,000 x 500)    < 10 min
15,000,000 (30,000 x 500)    < 15 min
20,000,000 (40,000 x 500)    < 25 min
25,000,000 (50,000 x 500)    < 45 min
30,000,000 (60,000 x 500)    < 75 min
35,000,000 (70,000 x 500)    up to 3 hours

The migration time above is only an estimate. The actual time this step of your Bamboo 2.6 upgrade takes will also strongly depend on the performance of the hardware running Bamboo and of the database that Bamboo uses.

Automatic Clover Integration Issue

A bug in Bamboo 2.6 forces automatic Clover integration and adds Clover targets or goals for Ant, Maven and Grails builds, despite having opted for manual Clover integration. If you are affected by this issue, please apply the patch provided in JIRA issue BAM-5920. 
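The build-history sizing arithmetic described earlier (plans multiplied by runs per plan, then by the average tests per plan) can be sketched as a small helper; the function name is illustrative and not part of Bamboo:

```javascript
// Estimate the first-column value of the migration-time table:
// (plans * runsPerPlan) gives the number of builds in history;
// multiplying by the average tests per plan gives the value to
// look up against the "Estimated Migration Time" column.
function migrationTableKey(plans, runsPerPlan, testsPerPlan) {
  const buildsInHistory = plans * runsPerPlan; // e.g. 20 * 300 = 6,000
  return buildsInHistory * testsPerPlan;
}
```

For the worked example of 20 plans run 300 times each, with 500 tests per plan, this yields 3,000,000, which falls just above the first row of the table.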
Bamboo Home Directory — Disk Usage changes

This issue only affects Bamboo 2.6 and is fixed in Bamboo 2.6.1 and above. Due to backend changes in Bamboo 2.6 (implemented for a feature that will be fully supported in a future version of Bamboo), the structure for storing temporary build files in the Working Directory has changed. Versions of Bamboo prior to (and excluding) 2.6 had the following structure: From Bamboo 2.6, the location for storing this data is now: Hence, each agent now has its own directory for storing temporary build files, which means that the disk usage requirements for the Bamboo Home directory have increased in Bamboo 2.6. If you are concerned about disk usage, please upgrade to Bamboo 2.6.1 or above.

Changes in seraph-config.xml that affect new Bamboo security features

If you use a customized version of the seraph-config.xml file with Bamboo, you will need to ensure that these lines of code are added to your customized seraph-config.xml, to ensure the availability of these new Bamboo security features.

Other Known Issues

Sometimes we find out about a problem with the latest version of Bamboo after we have released the software. In such cases, we publish information about these other known issues in the Bamboo Knowledge Base. Before you begin the upgrade, please check for any of these other known issues in the Bamboo Knowledge Base first and, if provided, follow the instructions to apply any necessary patches. If you encounter a problem during the upgrade and cannot solve it, please create a support ticket and one of our support engineers will help you.

Developing for Bamboo 2.6

If you are a Bamboo plugin developer, please refer to our Changes for Bamboo 2.6 guide, which outlines changes in Bamboo 2.6 that may affect Bamboo plugins compiled for Bamboo version 2.5.x or earlier.

Upgrading from Bamboo prior to 2.5

In addition to the above, please read the upgrade guide for every version you are skipping during the upgrade. 
In particular, if you are upgrading from a version of Bamboo prior to 2.0, please ensure that you upgrade to Bamboo 2.0.6 first before upgrading to Bamboo 2.5. Please ensure that you read the Bamboo 2.0 upgrade guide which contains important upgrade instructions for upgrading from earlier versions of Bamboo.
Consuming RESTful response via Angular service

I'm following scotch.io's tutorial on building a RESTful API while trying to get familiar with the MEAN stack. I've followed pretty much everything so far and got my RESTful API sending out JSON as intended. Whether I access it via the browser address bar or try it out with Postman, it works. I'm having problems with the consumption of said JSON response. According to the tutorial, the Angular app is divided into controllers and services. The service uses $http to call the RESTful endpoint. My doubt is where and how I should use that service to call for the data. Is it in the controller? Is the service exposed in a way that I can add its response to $scope? I'm new to Angular/client-side routing, so please be gentle :) My code is below.

(Blog) Controller:

    angular.module('BlogCtrl', []).controller('BlogController', function($scope, $http) {
        $scope.tagline = 'Blog page!';
        // can and should I call the service here?
    });

Service:

    angular.module('BlogService', []).factory('Post', ['$http', function($http) {
        return {
            // call to get all posts
            get: function() {
                return $http.get('/api/blog');
            }
        };
    }]);

Routes:

    angular.module('appRoutes', []).config(['$routeProvider', '$locationProvider', function($routeProvider, $locationProvider) {
        $routeProvider
            // blog page that will use the BlogController
            .when('/blog', {
                templateUrl: 'views/blog.html',
                controller: 'BlogController'
            });
        $locationProvider.html5Mode(true);
    }]);

Angular App:

    angular.module('myApp', ['ngRoute', 'appRoutes', 'MainCtrl', 'BlogCtrl', 'BlogService']);

Yes, you can make the $http call in your BlogController. 
However, if you want to use your 'Post' factory, you should inject it into the controller:

    angular.module('BlogCtrl', []).controller('BlogController', function($scope, Post) {...}

and make the request:

    Post.get().then(
        function(response) { console.log(response.data); },
        function(errorResponse) { /*...*/ }
    );

(I think you should also read about $resource (https://docs.angularjs.org/api/ngResource/service/$resource). Maybe it is something you could use to replace your Post factory ;))

no need to create a new instance, factory doesn't return this like a service does

Thank you for replying! Unfortunately, it isn't working. Nothing shows up in the console, either response or errorResponse. Any thoughts?

@akn my bad... had linting active. It's working, thank you!

You want to inject the service into the controller (or anywhere else you would use it) and then make the function call using the injected service object:

    angular.module('BlogCtrl', [])
        .controller('BlogController', function($scope, Post) {
            $scope.tagline = 'Blog page!';
            // Use service to get data
            Post.get().then(responsePromise){
                $scope.someVariable = responsePromise.data;
            }).catch(function(err){
                console.warn('Ooops error!')
            });
        });

Note that BlogService is a module name and Post is its factory name, so it probably won't work ;)

Thank you for replying! Sadly it's not working. If I assign responsePromise.data to $scope.posts, wouldn't ng-repeat be able to use x in posts? Also, I think something could be wrong with your syntax. Could you please check?
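For reference, a corrected version of the pattern under discussion might look like the sketch below: `.then()` takes callback functions, so the response must arrive as a parameter of a function rather than be appended after `.then()`. A plain `Promise` stands in for the Angular factory so the shape is easy to see; the name `loadPosts` is illustrative only.

```javascript
// Stand-in for the Post factory: get() resolves with an
// $http-style response object ({ data: ... }).
const Post = {
  get: function () {
    return Promise.resolve({ data: [{ title: 'First post' }] });
  }
};

// Corrected pattern: the response is a parameter of the success
// callback, and errors are handled in .catch().
function loadPosts($scope) {
  return Post.get()
    .then(function (response) {
      $scope.posts = response.data; // ng-repeat can iterate over posts
    })
    .catch(function (err) {
      console.warn('Ooops error!', err);
    });
}
```

With `$scope.posts` assigned this way, `ng-repeat="x in posts"` in the template would indeed iterate over the returned array.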
import { ProtectedEventEmitter } from 'eventemitter-ts';
import { KQStream } from './KQStream';
import { Character, PlayerKill } from './models/KQStream';

type StatisticType = 'kills' | 'queen_kills' | 'warrior_kills' | 'deaths';

export type GameStatsType = {
    [character in Character]: CharacterStatsType
};
type CharacterStatsType = {
    [statisticType in StatisticType]: number
};
type GameStateType = {
    [character in Character]: CharacterStateType
};
type CharacterStateType = {
    isWarrior: boolean
};
export type ChangeFilter = {
    [character in Character]?: StatisticType[];
};

export interface KQStat {
    character: Character;
    statistic: StatisticType;
    value: number;
}

interface Events {
    'change': KQStat;
}

export class GameStats extends ProtectedEventEmitter<Events> {
    private stream: KQStream;
    private hasGameStartBeenEncountered: boolean;
    private gameStats: GameStatsType;
    private gameState: GameStateType;

    /**
     * Complete list of valid statistic types.
     */
    private static get statisticTypes(): StatisticType[] {
        return [
            'kills',
            'queen_kills',
            'warrior_kills',
            'deaths'
        ];
    }

    /**
     * Default game statistics. This is what the
     * statistics of a game are when it begins.
     */
    static get defaultGameStats(): GameStatsType {
        return {
            [Character.GoldQueen]: GameStats.defaultCharacterStats,
            [Character.BlueQueen]: GameStats.defaultCharacterStats,
            [Character.GoldStripes]: GameStats.defaultCharacterStats,
            [Character.BlueStripes]: GameStats.defaultCharacterStats,
            [Character.GoldAbs]: GameStats.defaultCharacterStats,
            [Character.BlueAbs]: GameStats.defaultCharacterStats,
            [Character.GoldSkulls]: GameStats.defaultCharacterStats,
            [Character.BlueSkulls]: GameStats.defaultCharacterStats,
            [Character.GoldChecks]: GameStats.defaultCharacterStats,
            [Character.BlueChecks]: GameStats.defaultCharacterStats
        };
    }

    private static get defaultCharacterStats(): CharacterStatsType {
        return {
            kills: 0,
            queen_kills: 0,
            warrior_kills: 0,
            deaths: 0
        };
    }

    private static get defaultGameState(): GameStateType {
        return {
            [Character.GoldQueen]: GameStats.defaultCharacterState,
            [Character.BlueQueen]: GameStats.defaultCharacterState,
            [Character.GoldStripes]: GameStats.defaultCharacterState,
            [Character.BlueStripes]: GameStats.defaultCharacterState,
            [Character.GoldAbs]: GameStats.defaultCharacterState,
            [Character.BlueAbs]: GameStats.defaultCharacterState,
            [Character.GoldSkulls]: GameStats.defaultCharacterState,
            [Character.BlueSkulls]: GameStats.defaultCharacterState,
            [Character.GoldChecks]: GameStats.defaultCharacterState,
            [Character.BlueChecks]: GameStats.defaultCharacterState
        };
    }

    private static get defaultCharacterState(): CharacterStateType {
        return {
            isWarrior: false
        };
    }

    private static get defaultChangeFilter(): ChangeFilter {
        return {
            [Character.GoldQueen]: GameStats.statisticTypes,
            [Character.BlueQueen]: GameStats.statisticTypes,
            [Character.GoldStripes]: GameStats.statisticTypes,
            [Character.BlueStripes]: GameStats.statisticTypes,
            [Character.GoldAbs]: GameStats.statisticTypes,
            [Character.BlueAbs]: GameStats.statisticTypes,
            [Character.GoldSkulls]: GameStats.statisticTypes,
            [Character.BlueSkulls]: GameStats.statisticTypes,
            [Character.GoldChecks]: GameStats.statisticTypes,
            [Character.BlueChecks]: GameStats.statisticTypes
        };
    }

    /**
     * Returns true if character is a queen.
     *
     * @param character The character to evaluate
     */
    private static isQueen(character: Character): boolean {
        return character === Character.GoldQueen || character === Character.BlueQueen;
    }

    /**
     * Returns true if the kill was maybe a snail eating a drone (i.e. snail kill).
     *
     * - On day and dusk maps, snail kills happen at `y: 20`.
     * - On night map, snail kills happen at `y: 500`.
     *
     * Drones killed while standing on a platform at the same height as the snail
     * will also have the same y-pos as a snail kill.
     *
     * @param kill The kill to evaluate
     */
    private static isMaybeSnailKill(kill: PlayerKill): boolean {
        // Snail kills can happen within roughly 40 position units from
        // the default y-pos of 20 on day and dusk, and 500 on night.
        return (
            (kill.pos.y > -20 && kill.pos.y < 60) ||
            (kill.pos.y > 460 && kill.pos.y < 540)
        );
    }

    constructor(stream: KQStream) {
        super();
        this.stream = stream;
    }

    start() {
        this.resetStats();
        this.hasGameStartBeenEncountered = false;
        this.stream.removeAllListeners('playernames');
        this.stream.removeAllListeners('playerKill');
        this.stream.on('playernames', () => {
            this.resetStats();
            if (!this.hasGameStartBeenEncountered) {
                this.stream.on('playerKill', (kill: PlayerKill) => {
                    this.processKill(kill);
                });
            }
            this.hasGameStartBeenEncountered = true;
        });
    }

    /**
     * Triggers a change event on the specified statistics.
     * If no filter is specified, a change event is triggered
     * for all statistics.
     *
     * @param eventType The 'change' event
     * @param filter The statistics to filter
     */
    trigger(eventType: 'change', filter?: ChangeFilter) {
        if (filter === undefined) {
            filter = GameStats.defaultChangeFilter;
        }
        for (let character of Object.keys(filter)) {
            const characterNumber = Number(character);
            if (!isNaN(characterNumber)) {
                for (let statistic of filter[character]) {
                    this.protectedEmit('change', {
                        character: characterNumber,
                        statistic: statistic,
                        value: this.gameStats[characterNumber][statistic]
                    });
                }
            }
        }
    }

    private resetStats() {
        this.gameStats = GameStats.defaultGameStats;
        this.gameState = GameStats.defaultGameState;
        this.trigger('change');
    }

    private processKill(kill: PlayerKill) {
        const filter: ChangeFilter = {
            [kill.by]: ['kills'],
            [kill.killed]: ['deaths']
        };
        // Increment kills and deaths
        this.gameStats[kill.by].kills++;
        if (kill.killed === Character.GoldQueen || kill.killed === Character.BlueQueen) {
            this.gameStats[kill.by].queen_kills++;
            filter[kill.by]!.push('queen_kills');
        } else if (this.gameState[kill.killed].isWarrior) {
            this.gameStats[kill.by].warrior_kills++;
            filter[kill.by]!.push('warrior_kills');
        }
        this.gameStats[kill.killed].deaths++;
        // Set state of characters
        if (!GameStats.isQueen(kill.by) && !GameStats.isMaybeSnailKill(kill)) {
            this.gameState[kill.by].isWarrior = true;
        }
        if (!GameStats.isQueen(kill.killed)) {
            this.gameState[kill.killed].isWarrior = false;
        }
        this.trigger('change', filter);
    }
}
I'm a Native American, and this is pretty well-known to us. I also have experience with Chinese language and dialects. I'll explain why I'm mentioning the Chinese part later. This is a very good example of security through obscurity. It works until it's figured out. However, there were so few Navajo speakers compared to English speakers that it would've been incredibly difficult to find native speakers to learn from at the time. How would they know it's the Navajo language to begin with? It sounded alien to them. While the code may have been simple, it's hard to find people to teach you that code. During WW2, Japanese in America were rounded up and placed in internment camps. Anyone caught sympathizing with them may have been likewise punished. The chances of finding a Navajo turncoat were extremely low.

Why is Encryption stronger than Languages and Dialects?

Regarding encryption, other people such as Thomas Pornin can explain this better than me, but I'll try to show you similarities. Encryption is like a language: it's sort of like a conversion of one word (or character) into another set of highly different characters. With proper encryption, you can't convert A -> Z then Z -> A unless you have the key. With languages, you can compare words with similar meanings in other languages and translate accordingly. With languages, your key is knowing the languages. With proper encryption, you cannot do this. You cannot put properly-encrypted text into a decryption routine and get the results back. Not without the key. You need the key to decrypt it. With a hash, you can never get the message back, only verify whether or not it's genuine, but hashes can be vulnerable to collision attacks. Here are some examples. Pretend this is Encryption. The "key" to decrypting the hidden message is to understand both languages. There is a second hidden message which requires deeper understanding. 
Hello -> 你好
Alpaca -> 草泥马

In the case of a hash (one-way), it could act like this:

Bring potter to Naria. The Death Star is ready for our war against the Time Lords.

With a hash, you would have to verify the contents against the message to see if they match. This is not really a method of encryption because you can't return the original message. You may eventually cause collisions as well. In the case of AES 256 bit (two-way):

[English with key] Find Potter and bring him to Narnia. The Death Star is ready for our war against the Time Lords.
[AES 256 encryption result]

How are you going to crack these? The same way as languages, actually. The difference is that it's much, much harder to crack real encryption. Firstly, mapping a language and creating a translator is much, much easier than mapping an encryption database. With one-way hashing, you would have to match every single combination of letters for every single encrypted character to "crack" it, but that isn't going to return the message to you: it's only going to verify that the hash matches the message. With many hashing algorithms, collisions become a problem after a while. With two-way encryption, you can get the original message back provided you have the key. Kind of like languages. Larger keys will very likely prevent brute force attacks, as with current technology it would take longer than the estimated life of the universe to crack them. With languages, the "key" is understanding both languages. This is why the Navajo "code" was cryptographically "very simple" in comparison. The security through obscurity of the times made it very difficult to crack, but it would be easily crackable today with current machines. Having obscure dialects or languages can add an additional layer of "encryption" (note that the quotes indicate it's not really encryption), but, again, this is security through obscurity and shouldn't be relied on by itself. 
In the case of encryption, these algorithms have been tested to be secure by well-educated researchers for a long time. Languages, on the other hand, are much easier to learn. Almost any idiot can learn a language. I'm living proof. Well-educated cryptographers and mathematicians can't crack many strong forms of encryption. What hope does the average layperson have?

Language and Dialect Usefulness

Regarding obscure dialects mentioned in #2, you can still see such things today. In fact, this isn't the only example of the Japanese getting owned by security through obscurity. For example, the Chinese military relies on another layer of "encryption" (mind the quotes) -- which is really just security through obscurity -- with their dialects. During the last war with China, Japan wasn't able to "crack" the Wenzhou dialect because they didn't understand the implementation, and it was used against them. Side note: considerable effort has been made by organizations such as Phonemica to demask obscure Chinese dialects. Part of the problem with relying on dialects and languages today is that hacks are everywhere, so chances are high that someone will find your teaching manuals. Computers are at a good point where demasking a language or dialect could be automated in a very short time. On the other hand, real encryption is nigh unbreakable. Once someone learns your language or dialect, your security through obscurity disappears very rapidly. This approach was used for thousands of years, but in modern times it's not going to be that effective.

Is Security Through Obscurity Completely Invalid?

Despite what I've said, this does add another layer of obfuscation that is technically difficult to break. Even if your encrypted messages are broken, your attacker would still have to spend considerable time mapping it out. While it's child's play compared to proper encryption, security through obscurity does have its uses, but you shouldn't rely on it entirely anymore.
Changing only part of date (input) does not change form's pristine state (Chrome)

Have a look at this plnkr. When (in Chrome) I click the year and type something - say 2015 - and blur the field, the model value does not change. However, when I select the date from the calendar and then delete the year value (select it and hit backspace), then the field & form error are set as expected. Not sure if it's Angular's or Chrome's issue, as in Firefox it is working fine (although there is no date picker).

I did a little looking into this; it looks like if you type all of the date values (mm/dd/yyyy, say 03/14/2015), validation does occur. This may have to do with how partial date entry is validated - I suspect it has to do with the fact that it is an input of type date.

I've noted this issue also. Looking into angular.js I noticed that validators are not triggered when a part of the date is changed or cleared. I've also noticed that clearing the complete date (by using backspace in each part) does not trigger any validator.

As explained in https://github.com/angular/angular.js/issues/11622#issuecomment-109571015, I think this is due to the fact that an invalid or partial date sets the input's value to an empty string. (Further edits on the already invalid date will not have an effect on the input's value (it will remain empty) until a valid date is picked.) It is related to how the browser does/doesn't emit input events upon partial input. Responding to keydown on date inputs (as suggested by @Narretz) sounds like a viable solution. A quick POC showed that the problem is not only the lack of firing input events. Off the top of my head, another part of the problem must be the fact that once the input is an empty string, further (invalid) edits do not impose a change in the value ($viewValue), thus no validation is triggered (this is just speculation though).

@gkalpak There's a condition inside $commitViewValue that forces validation etc. 
when the input has a native validator, even if the viewValue is empty: https://github.com/angular/angular.js/blob/master/src/ng/directive/ngModel.js#L652 The problem is that there's no change event in the following cases, because the browser does not register a change in the input's value (as gkalpak has pointed out): when adding data into the input, the change event is only fired after all fields (day, month, year) have been filled; when removing data from the input, the change event is fired as soon as the date becomes invalid, but not when all data is cleared, I think.

@Narretz, I thought there was another issue, because my original POC (see https://github.com/angular/angular.js/issues/12207#issuecomment-137675456) didn't work. It turned out to be a timing issue. So, triggering input on keyup seems to solve the issue. The real problem is more related to how browsers fire input events on date inputs though (as described previously), so Angular might have to work around that. @Narretz, wdyt about simulating input on keyup (or something) for date-family inputs?

@gkalpak Yeah, let's give it a shot. Do you want to open a PR? And I wonder if we should restrict it to Chrome, because FF and IE don't have datepickers (at least on desktop).

@Narretz, sure, PR is on the way. Good idea restricting it to browsers that support input[type="date/..."] (currently Chrome and Edge - hopefully more in the future). Date inputs report type === 'text' in non-supporting browsers.

@gkalpak to add to your comment of 11 Sep 2015, you said: "when adding data into the input, the change event is only fired after all fields (day, month, year) have been filled." This is partially true; it depends which keys you use to edit the date. If I type the following (wrong) date in one flow then no validator is triggered: 31112015. Even if I leave the field it is still not triggered. When I return to the field and change the year it is not triggered. When I change the day or month then it gets triggered. 
@TimoPot, indeed, I was referring to filling the fields with valid data. If not, the input remains empty (aka unchanged).

Ah, missed that. Well, actually the input is not empty: after all, I've just entered a value ;-) Am I correct to notice that this issue is not solved yet? Also, manually invoking $validate() or $setTouched() or $setDirty() on the date field does not help to get the date validated. Is there a known workaround?

@TimoPot The problem is that for Angular the input is actually empty, because the browser does not set the value property of the input when invalid data is entered (for date and number). I think we need to listen to key events in addition to the input / change event. I believe @gkalpak worked on a PR for this; we should try to get it in after 1.5.

The workaround I've used (and is more or less what is included in the PR) is to trigger an input event whenever a keyup event is fired on date fields. [Demo pen] I'll resume work on the PR, so we can get this fixed soon.

OK. I assume the Demo pen will follow. Thanks.

@TimoPot, btw, it's not a proper solution, because one needs to take into account keys that do not change the input (e.g. modifiers etc.) and possibly other stuff. But it's good enough for a quick workaround.
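The keyup-to-input workaround discussed in the thread can be sketched as follows. The function name is illustrative; as noted above, a proper fix would also filter out keys (modifiers etc.) that cannot change the input, and restrict itself to browsers with native date inputs:

```javascript
// Re-emit 'input' on every keyup so the framework re-runs validation
// on date-family inputs even when the browser withholds input events
// for partial dates. `el` can be any EventTarget (a DOM input in practice).
function forwardKeyupAsInput(el) {
  el.addEventListener('keyup', function () {
    el.dispatchEvent(new Event('input'));
  });
}
```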
Standalone STM32F030CC minimal configuration

I'm making a PCB with the STM32F030CC microcontroller and have been reading up on the minimal configuration in the datasheet. They don't include a minimal setup diagram, but I found some information on how to connect the power with filter capacitors. I will be using all 6 UARTs, but nothing else. Do I need to have the ADC filter capacitors as I won't be using the ADC? Is there anything essential for it to work that I have missed? I will be using the internal oscillator, so I won't need any external crystal.

There should be a hardware development starting guide on ST's website.

"I'm making a PCB with the STM32F030CC microcontroller and have been reading up on the minimal configuration in the datasheet. They don't include a minimal setup diagram [...]" ST typically provide the minimal configuration information in a separate document for each of their MCU families, not in the datasheet. That document usually has "Getting started" as part of the title. For your MCU that document is "AN4325 - Getting started with STM32F030xx and STM32F070xx series hardware development". "I will be using all 6 UARTs, [...] I will be using the internal oscillator so I won't need any external crystal." I hope you've considered the HSI (internal oscillator) accuracy limits, temperature drift, and potential need for calibration. "Do I need to have the ADC filter capacitors as I won't be using the ADC?" I believe you are referring to the capacitors across VDDA and VSSA, and yes, it still makes sense to fit them, because those pins provide power to other parts of the MCU too (e.g. the "Reset block" and the internal oscillators), not only the ADC. Unless you are producing something that is extremely cost-sensitive, you reduce your risk by following the ST guidelines. Note that their reference design (in the document linked above) lists a 1 uF VDDA capacitor in the "mandatory" section. 
Since the VDDA and VSSA pins are next to each other, it's really easy to add their recommended capacitor there. "Is there anything essential for it to work that I have missed?" I haven't done a full design review, but one point is that your BOOT0 pin has no pull-up. As long as it is tied to Gnd for booting from the main Flash, that's fine. However, if you ever want BOOT0 to be "high" (e.g. to use the built-in bootloader), then you should not assume a floating BOOT0 pin will be treated as "high".

I have tested the internal oscillator, and it works fine for my use (I'm only running the UARTs at 250 kbaud). The BOOT0 pin is supposed to have an internal pull-up according to the datasheet, so I think I will be fine. Edit: no it doesn't, I confused it with the reset pin.

"I have tested the internal oscillator" - understood, but while they quote 1% accuracy at 25 degrees C (IIRC), I believe it could be up to 5% across the whole temperature range. If you are OK with that, or are sure to only use it at room temperature, then great!

I guess I'll have to bring out my hair dryer for a test ;) I should have room to add the traces for an external oscillator and not use it unless I need to.

"I should have room to add the traces for an external oscillator." That would be my approach. When I design a board, I always lay down more than I would otherwise need so the board becomes more generic: if I need a part, I can simply populate the board with it, as the traces are already there. Having options is a good thing, especially if it doesn't cost you much.

@Pownyan - "add the traces for an external oscillator" - yes, I would do that; I would add a 3-pin footprint (centre pin gnd) where a resonator or xtal can be fitted, as well as footprints for the two xtal caps. "hair dryer for a test" - I would not do that, as that test doesn't give you the reassurance you might think. You don't know if that IC will show max freq deviation or not, so you might not provoke worst-case behaviour. 
Instead I would use a scope to check the initial UART freq, then program BRR to cause worst-case (5%?) deviation; then test the overall system behaviour with ... [cont'd] whatever devices are attached to your STM32F0. Also note that the receiver's clocks could be at the opposite end of the frequency tolerance, i.e. the Tx could be fast and the Rx could be slow (or vice versa). That reduces the maximum UART freq deviation allowed at each end of the link. If the UART receivers are also STM32F0, then all of STM32F0 Reference Manual section 23.4.5 "Tolerance of the USART receiver to clock deviation" applies. If the receiver is not STM32F0 then you need to adapt its formulas and advice. A typical UART receiver is unlikely to cope with that worst-case 5% (or more - e.g. if the Tx is fast and Rx is slow) HSI frequency deviation from nominal, unless it is doing auto-baud rate detection often enough to follow changes in the Tx frequency. However, that's all part of the overall system design, and not part of the original question. I just wanted to mention why I originally highlighted this point in my answer.

I have used STM32F030F on an adapter, absolutely nothing else, and it worked like a charm. However, what is minimal is quite subjective and application-specific. I would check the datasheet and see what's required for powering up the chip. Typically Vdd/GND + decoupling, and then boot0/1 pins, reset, AVdd/AVREF if you want to do ADC, a programming header and an LED - always invaluable.
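The clock-tolerance point raised above reduces to simple arithmetic: if the transmitter's clock can be fast while the receiver's is slow (or vice versa), their deviations add, to first order. A tiny sketch of that reasoning (the percentages are the thread's examples; check STM32F0 Reference Manual section 23.4.5 for the receiver's actual tolerance budget):

```javascript
// First-order worst-case baud mismatch between two UARTs whose clocks
// may drift in opposite directions (Tx fast and Rx slow, or vice versa).
// Tolerances are fractions, e.g. 0.05 for a ±5% HSI over temperature.
function worstCaseBaudMismatch(txTolerance, rxTolerance) {
  return txTolerance + rxTolerance;
}
```

With ±1% on each end the mismatch stays near 2%, which typical UART receivers tolerate; at ±5% on each end the worst case approaches 10%, well beyond a typical receiver's budget unless it performs auto-baud detection.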
STACK_EXCHANGE
Just out of curiosity, I decided to sniff around and see how Regsvcs.exe does its work to register an assembly in a COM+ application. While this information is probably not very useful to anyone, I'm posting here a few interesting snippets of what I found: - Regsvcs is just a driver around the System.EnterpriseServices.RegistrationHelper class. This class is essentially a wrapper that ensures that all registration is done from a thread in an appropriate apartment, while the actual registration work is done inside the RegistrationDriver class (the entry point to this class is the - The first thing it does is create a TLB if it's needed, which is done through the RegistrationDriver::GenerateTypeLibrary() method, which is, of course, done through the TypeLibConverter class in the - All custom attributes implemented in System.EnterpriseServices for COM+ registration have private implementations of the System.EnterpriseServices.IConfigurationAttribute interface. All the basic catalog registration is driven through this interface. In essence, a great deal of the work of the RegistrationDriver is simply to iterate through collections of types to extract these attributes and call the IConfigurationAttribute methods on them at the appropriate time. - The interface has the following declaration: bool IsValidTarget(string s); bool Apply(Hashtable info); bool AfterSaveChanges(Hashtable info); - The first method called is IsValidTarget(), which should return true if the registration attribute can deal with the specified kind of object, which is identified in COM+ terms with a string. So for example, here are some possible values: - "Application": The object representing the COM+ application being created. - "Component": The object representing the COM+ component being registered. - "Method": The method of the component/interface being processed.
- Both of the other two methods of the IConfigurationAttribute interface take a Hashtable as an argument, in which keys are strings and values are instances of types that implement the ICatalogObject interface. As you've probably guessed by now, common keys in the Hashtable are the three strings presented above ("Application", "Component", "Method"). The Apply() method is where most of the work actually occurs. Most attribute classes simply get the appropriate ICatalogObject instance out of the Hashtable and call it to configure a particular property in the COM+ Catalog, so it's actually quite straightforward. The AfterSaveChanges() method is, from what I can tell, basically a way to provide for two-step registrations. Most attributes simply don't do anything in this method (in fact, I couldn't find any attribute that actually takes advantage of this method in v1.0 of the framework). Keep in mind, of course, that all of this stuff is completely undocumented, and very likely to change in future versions of the framework...
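The attribute-driven flow described above (IsValidTarget, then Apply with a table of catalog objects, then AfterSaveChanges) can be sketched in miniature. This is purely illustrative Python, not framework code: the class and method names mirror the .NET interface, and a plain dict stands in for the Hashtable of ICatalogObject instances.

```python
# Illustrative sketch of the registration flow described above: the driver
# probes each attribute with IsValidTarget, then calls Apply with a dict
# keyed by "Application", "Component", "Method" (standing in for the
# Hashtable of ICatalogObject instances).

class CatalogObject:
    """Stands in for an ICatalogObject wrapper over a COM+ catalog entry."""
    def __init__(self):
        self.properties = {}

class TransactionAttribute:
    """Toy analogue of a COM+ registration attribute."""
    def is_valid_target(self, target: str) -> bool:
        return target == "Component"        # only configures components

    def apply(self, info: dict) -> bool:
        info["Component"].properties["Transaction"] = "Required"
        return True

    def after_save_changes(self, info: dict) -> bool:
        return False                        # most attributes do nothing here

def registration_driver(attributes, info):
    """Iterate attributes, calling each one at the appropriate time."""
    for attr in attributes:
        for target in ("Application", "Component", "Method"):
            if attr.is_valid_target(target):
                attr.apply(info)
    # ... catalog changes would be saved here ...
    for attr in attributes:
        attr.after_save_changes(info)

info = {"Application": CatalogObject(), "Component": CatalogObject()}
registration_driver([TransactionAttribute()], info)
print(info["Component"].properties)  # {'Transaction': 'Required'}
```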
OPCFW_CODE
I have decided to start a separate thread about Artificial Intelligence in AEC. I hope here we will be able to collect as much information as possible about what happens today in the AI world, show examples of using AI in BIM programs and programs driven by AI, and discuss what architects and other AEC professionals would like to expect from it. I have a very strong opinion that these technologies will come to us very, very soon (because AI already exists in many things we are using every day - like search engines or digital photo applications), and this might be a very interesting subject to review. Please find below a scheme I have prepared, showing how AI for BIM might look. Building elements (as we know tools in ArchiCAD) are controlled by placement algorithms that come from building classification databases. For example - placing partition walls in the office with the right chosen sound insulation, fire ratings, correct corridor lengths, fire escapes, etc. An I/O engine is responsible for Input / Output - but in architectural terms - automatic drawing generation and publishing, remote communication, including communication via e-mails, teamwork, IFC exchange. It is something like a secretary-robot that supervises the BIM project. Physical simulations help to improve correct element placement. Simulations shall be based 100% on physics (more like physics engines in 3D animation software). This includes loads, earthquakes, heat distribution and loss, fire spread, wind load, radiosity, and photon tracing, similar to Monte Carlo methods. An additional block called construction simulations helps to represent the construction process, including delivery, animation of cranes and the installation process, the construction timeline, and similar. Each building element has two additional layers - assembly (if it's a wall, then it might be studs, cover, insulation, and brackets) and behaviour. Behaviour is connected to both physical and construction simulations.
These days, I tried https://chat.openai.com/auth/login to get some code for GDL. It told me it's capable of GDL, but that was not right - it made code for something like Python and pretended it was GDL. But it was interesting for me to chat with the AI bot itself. The developers did not give the AI full access to the information on the Internet, so I could not make it "learn" GDL from existing information on the internet. Then I tried to get it to write a children's story about "Räuber Hotzenplotz" - a famous children's story - with my own parameters. It's astonishing what results it gave me back - really! And all in German! Thinking of the future - this will change the workflow of an architect in a way we can't really predict now. Maybe the only work left for a human will be to design the outer geometry and the room-function list and what kind of building material should be used... Who knows... Is this the world we want? Do we have a choice? Isaac Asimov's rules for robots should really be implemented in AIs - and also in us, by the way. They're not implemented yet - I asked about it; it knew them and told me that they weren't. And it suggested implementing them... Let's hope someone is using AI to build better software rather than dishing out design solutions - it sure would be tragic if we never get to see the full potential of Computer Aided Design, instead jumping straight to Computer Driven/Generated Design. I was wondering about this also, thanks for trying! I suspect it has everything to do with the volume and type of training data and seed data it can receive. When GPT-4 is open to the internet - there may be greater possibilities. .... Andy Thomson, M.Arch, OAA, MRAIC Director Thomson Architecture, Inc. Instructor/Lecturer, Toronto Metropolitan University Faculty of Engineering & Architectural Science AC26/iMacPro/MPB Silicon M2Pro
OPCFW_CODE
Nested TAILQ double free error

The following code is a minimal example of a problem in a bigger codebase. It can be compiled via a simple gcc -g main.c. I have a list of items, each of which holds a list of attributes. The goal is to parse a list from some input and, in order to only ever have completely parsed items in the list, it uses a temporary item that is added to the list once fully parsed. Since the parsing happens periodically, the list of items must be free'd at some point. The code, as it stands, produces "double free or corruption" in line 49. My question, simply put, is: why does this happen / what am I doing wrong and how can I fix it? I can't seem to wrap my head around where memory is corrupted.

main.c

#include <sys/queue.h>
#include <stdlib.h>
#include <string.h>

typedef struct attribute_t {
    char *name;
    TAILQ_ENTRY(attribute_t) attributes;
} attribute_t;

typedef struct item_t {
    char *name;
    TAILQ_HEAD(attributes_head, attribute_t) attributes;
    TAILQ_ENTRY(item_t) items;
} item_t;

TAILQ_HEAD(items_head, item_t) items = TAILQ_HEAD_INITIALIZER(items);

static item_t item_builder;

static void read_item_start(const char *name) {
    TAILQ_INIT(&(item_builder.attributes));
    item_builder.name = strdup(name);
}

static void read_item_end() {
    item_t *new_item = calloc(1, sizeof(item_t));
    memcpy(new_item, &item_builder, sizeof(item_t));
    TAILQ_INSERT_TAIL(&items, new_item, items);
    memset(&item_builder, '\0', sizeof(item_t));
}

static void read_item_attribute(const char *name) {
    attribute_t *new = calloc(1, sizeof(attribute_t));
    new->name = strdup(name);
    TAILQ_INSERT_TAIL(&(item_builder.attributes), new, attributes);
}

static void free_items() {
    item_t *item;
    while (!TAILQ_EMPTY(&items)) {
        item = TAILQ_FIRST(&items);
        free(item->name); // <-- crash
        attribute_t *attribute;
        while (!TAILQ_EMPTY(&(item->attributes))) {
            attribute = TAILQ_FIRST(&(item->attributes));
            free(attribute->name);
            TAILQ_REMOVE(&(item->attributes), attribute, attributes);
            free(attribute);
        }
        TAILQ_REMOVE(&items, item, items);
        free(item);
    }
}

int main() {
    read_item_start("first item");
    read_item_attribute("first attribute");
    read_item_attribute("second attribute");
    read_item_end();
    read_item_start("second item");
    read_item_attribute("first attribute");
    read_item_attribute("second attribute");
    read_item_end();
    free_items();
    return 0;
}

I think the problem is that TAILQ_INIT initializes tqh_last to be a pointer to tqh_first:

#define TAILQ_INIT(head) do { \
    (head)->tqh_first = NULL; \
    (head)->tqh_last = &(head)->tqh_first; \
} while (0)

(&(head)->tqh_first is equivalent to the slightly clearer &((head)->tqh_first).) So, when you use memcpy, you violate the invariant that tqh_last points to the memory location of tqh_first. I think the clean solution would be to initialize a new TAILQ in the copied structure and move all elements from the old list into the new TAILQ. A slightly less clean but more efficient solution would be to fix the pointer directly.

Yeah, that makes sense (we do the same for status blocks [code that I wrote {a long time ago}]). I'll accept the answer once I actually fixed it and can verify it.

I've inserted printfs after each allocation and free; this is the result:

strdup = 0x800e15000
read_item_attribute: calloc = 0x800e20000
read_item_attribute: calloc = 0x800e20020
read_item_end: calloc = 0x800e21000
strdup = 0x800e15020
read_item_attribute: calloc = 0x800e20060
read_item_attribute: calloc = 0x800e20080
read_item_end: calloc = 0x800e21030
free_items(name): 0x800e15000
free_items(attribute->name): 0x800e15010
free_items(attribute): 0x800e20000
free_items(attribute->name): 0x5a5a5a5a5a5a5a5a

Observe that free_items(attribute->name): 0x800e15010 is already trying to free something that wasn't allocated. Apparently, attribute = TAILQ_FIRST(&(item->attributes)); doesn't give you a struct with an allocated name member.
That explains why it crashes (thanks!), but unfortunately doesn't help me understand what I'm doing wrong / how to fix it. Any ideas? If I printf the memory address of attribute inside the loop right after assigning it, I can see that it's the same both times. But I can't explain that.
STACK_EXCHANGE
I would like to put a shortcut on my desktop that will start Microsoft Outlook with a specific address or list of addresses entered in the To field. Is there any way to do this? Yes, you can create a shortcut to launch a new e-mail message that has the recipient and various other fields preloaded with the data of your choice. We'll start with a simple shortcut that just opens a blank new e-mail message in your default e-mail client. Right-click on the desktop and choose New Shortcut. The Create Shortcut wizard will prompt you for the location of the item—enter mailto: and click on Next. Enter a name for the shortcut and click on Finish. Double-click on the new shortcut to make sure it correctly launches a new, blank e-mail message. Now we'll define a recipient. Right-click on the shortcut you just created, choose Properties from the menu, and edit the URL field. Add two e-mail addresses separated by a comma, and click on OK (for example mailto:email@example.com, firstname.lastname@example.org) . Launch the shortcut and check whether your e-mail program accepts the addresses. If it doesn't, change the separator between the addresses from a comma to a semicolon. The comma is technically correct, per the RFC 2368 document that defines the mailto: protocol, but some clients (including Outlook) require a semicolon instead. You can add other fields to the URL using the form name=value. The first such field must be preceded by a question mark, and any additional fields must be preceded by an ampersand. The cc= and bcc= fields can be used to add recipients, the subject= field defines the subject, and you can even include a little text for the message itself with the body= field. The subject= and body= fields are tricky, though, because many special characters must be encoded. Every space has to be replaced by %20, and any new lines in the message body are represented by %0D%0A. 
Beyond that, it's easiest to avoid any characters other than letters, numbers, and these few safe punctuation marks: $ - _ . + ! * ' ( ) , (dollar sign, hyphen, underscore, period, plus, exclamation mark, asterisk, single quote, open parenthesis, close parenthesis, and comma). Here's a sample mailto: URL that uses all the elements described above. It creates an e-mail with a particular subject and body and sends it to several recipients: mailto:email@example.com;boss_elf@northpole.com?cc=rudolph@northpole.com&subject=This%20week&body=Been%20good%20again.%0D%0AReally! Some e-mail clients, particularly older ones, have trouble with complex mailto: URLs. Outlook 97 didn't properly handle multiple fields in the URL, nor did versions of Outlook Express before 5.0. That's why it's important to start simple the first time you try.
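The %20 / %0D%0A encoding rules described above can also be produced mechanically. Here is a small sketch using Python's standard urllib.parse.quote (the recipient addresses are the placeholder ones from the article); note that quote() is stricter than the article's safe-character list, so it also escapes characters like "!", which is harmless.

```python
# Build a mailto: URL with percent-encoded subject and body, per the rules
# above: spaces become %20, CR+LF newlines become %0D%0A.
from urllib.parse import quote

def build_mailto(to, subject, body):
    # Use ';' between recipients, which Outlook and some other clients
    # require even though RFC 2368 specifies ','.
    return (f"mailto:{';'.join(to)}"
            f"?subject={quote(subject, safe='')}"
            f"&body={quote(body, safe='')}")

url = build_mailto(["email@example.com", "boss_elf@northpole.com"],
                   "This week",
                   "Been good again.\r\nReally!")
print(url)
# mailto:email@example.com;boss_elf@northpole.com?subject=This%20week
#   &body=Been%20good%20again.%0D%0AReally%21   (printed on one line)
```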
OPCFW_CODE
In a couple of days we will be saying goodbye to 2014 and ringing in the New Year 2015. Simple math should show you that if you are still running Windows Server 2003, it is long since time to upgrade. However, here's more: When I was a Microsoft MVP, and then when I was a Virtual Technical Evangelist with Microsoft Canada, you might remember my tweeting the countdown to #EndOfDaysXP. While we had some pushback from people who were not going to migrate, I think we were all thrilled by the positive response and the overwhelming success we had in getting people migrated onto either Windows 8, or at least Windows 7. We did this not only by tweeting, but also with blog articles and in-person events (including a number of national tours) helping people understand a) the benefits of the modern operating system, and b) how to plan for and implement a deployment solution that would facilitate the transition. All of us who were on the team during those days - Pierre, Anthony, Damir, Ruth, and I - were thrilled by your response. Shortly after I left Microsoft Canada, I started hearing from people that I should begin a countdown to #EndOfDaysW2K3. Of course, Windows Server 2003 was over a decade old, and while it would outlast Windows XP, support for that hugely popular platform would end on July 14th, 2015 (I have long wondered if it was a coincidence that it would end on Bastille Day). Depending on when you read this article it might be different, but as of right now the countdown is around 197 days. You can keep track yourself by checking out the website here. It should be said that with Windows 7 there was an #EndOfDaysXP Countdown Gadget for the desktop, and when I migrated to Windows 8 I used a third-party app that sat in my Start Menu. One friend suggested I create a PowerShell script, but that was not necessary.
I don't remember exactly which countdown timer I used, but it would work just as well for Windows Server 2003 - just enter the date you are counting down to, and it tells you every day how much time is left. The point is, while I think that migrating off of Server 2003 is important, it was not at that point (nor is it now) an endeavour that I wanted to take on. To put things in perspective, I was nearing the end of a 1,400-day countdown during which I tweeted almost every day. I was no longer an Evangelist, and I was burnt out. Despite what you may have heard, I am still happy to help the Evangelism Team at Microsoft Canada (although I think they go by a different name now). So when I got an e-mail on the subject from Pierre Roman, I felt it important enough to share with you. As such, here is the gist of that e-mail: 1) On July 14, 2015, support for Windows Server 2003 will come to an end. It is vital that companies be aware of this, as there are serious dangers inherent in running unsupported platforms in the datacenter, especially in production. As of that date there will be no more support and no more security updates. 2) The CanITPro team has written (or re-posted) several articles that will help you understand how to migrate off your legacy servers onto a modern Server OS platform, including: - Step-By-Step: Migrating The Active Directory Certificate Service From Windows Server 2003 to 2012 R2 (by Dishan Francis, Microsoft MVP) - Step-By-Step: Migrating DHCP From Windows Server 2003 to 2012 R2 (by Dishan Francis, Microsoft MVP) - Step-By-Step: Active Directory Migration from Windows Server 2003 to Windows Server 2012 R2 (by Anthony Bartolo) 3) The Microsoft Virtual Academy (www.microsoftvirtualacademy.com) also has great educational resources to help you modernize your infrastructure and prepare for Windows Server 2003 End of Support, including: 4) Independent researchers have come to the same conclusion (IDC Whitepaper: Why You Should Get Current).
5) Even though time is running out, the Evangelism team is there to help you. You can e-mail them at firstname.lastname@example.org if you have any questions or concerns surrounding Windows Server 2003 End of Support. Of course, these are all from them. If you want my help, just reach out to me and if I can, I will be glad to help! (Of course, as I am no longer with Microsoft or a Microsoft MVP, there might be a cost associated with engaging me.) Good luck, and all the best in 2015!
OPCFW_CODE
Rumored Buzz on digital signage education If you click the plus button in error, select any executable file in Installer Path; a Cancel button will then become available, allowing you to complete the provisioning package without an application. You can use Settings to quickly configure one or a few devices as a kiosk. (Using Settings isn't really practical for configuring a lot of devices, but it will work.) After you set up a kiosk (also called assigned access Shell Launcher does not support a custom shell with an application that launches a different process and exits. For example, you cannot specify write.exe in Shell Launcher. Shell Launcher launches a custom shell and monitors the process to identify when the custom shell exits. Universities often depend on donations from alumni and business partners to help enhance programs. Interactive displays allow universities to display media, such as photos or videos, to tell stories about the donor's affiliation with the campus. Serious situations demand serious technology. That's why our software comes equipped with the ability to deliver dynamic Emergency Alerts. The Emergency Alert feature lets administrators push wayfinding and timely messages right when students need them most. Modify the following PowerShell script as appropriate. The comments in the sample script explain the purpose of each section and tell you where you will want to change the script for your own purposes. Learn how NEC replaced aging equipment and standardized classroom technology across a large school district. If you press Ctrl + Alt + Del and don't sign in to a different account, after a set time, assigned access will resume.
The default time is 30 seconds, but you can change that in the following registry key: Digital menu boards display more than just menu items; operators can serve dynamic pictures and even videos depicting clips of the restaurant's best-selling items. A web-based touch-screen kiosk that can be equipped with interactive content for nearly any purpose. For kiosks in public-facing environments with auto sign-in enabled, you should use a user account with least privilege, such as a local standard user account. This month Digital Signage Today is focusing on digital signage in the education vertical. Digital signage performs a number of tasks in the education vertical, from wayfinding to interactive solutions. You can create a local standard user account that will be used to run the kiosk app. If you toggle No, make sure that you have an existing user account to run the kiosk app. Using Shell Launcher, you can configure a kiosk device that runs a Classic Windows application as the user interface. The application that you specify replaces the default shell (explorer.exe) that typically runs when a user signs in. Find out more about how education institutions are making the most of digital signage. Why not try our software for free?
OPCFW_CODE
from typing import Iterable import torch from uncertainties import ufloat import math from estimators.base import Estimator from models.base import ProbingModel, ValueModel from readers.base import Reader class FixedSamplesEstimator(Estimator): def __init__(self, probing_model: ProbingModel, reader: Reader, attribute: str, select_dimensions: Iterable[int], value_model: ValueModel): self._reader = reader self._attribute = attribute self._select_dimensions = list(select_dimensions) super().__init__(probing_model, value_model=value_model) def estimate_integral(self, value_name: str) -> ufloat: """ Estimates the integral we need to compute """ # Let samples be the ones we have value = self._probing_model._value_model.get_value_ids([value_name]).cpu().tolist()[0] filter = lambda x: x.has_attribute(self._attribute) and x.get_attribute(self._attribute) == value_name embeddings = self._reader.get_embeddings_with_filter_from_cache( f"{value_name}_{value}", filter)[:, self._select_dimensions] num_samples = embeddings.shape[0] attribute_values = value * torch.ones(num_samples).to(self._probing_model.get_device()) model_samples = embeddings.to(self._probing_model.get_device()) # Compute probabilities log_prob = self._probing_model.log_prob_conditional(model_samples, attribute_values) log_prob_normalizer = self._probing_model.log_prob(model_samples) sampled_log_prob = log_prob - log_prob_normalizer mean_log_prob = sampled_log_prob.mean().item() std_log_prob = sampled_log_prob.std().item() / math.sqrt(num_samples) # Return estimated mean with 95% confidence bound return ufloat(mean_log_prob, 2 * std_log_prob)
STACK_EDU
TERMS and Conditions of REXPO No tables will be reserved before payment is received. REXPO will sell out of vendor space. You are welcome to be added to a waiting list or purchase table space for the next event. Only show personnel will be allowed during set-up and break-down hours. Only paid vendors are allowed to display animals for sale at the event. The expo ends at 4pm; please plan on staying for the duration. All venomous reptiles and the following species CANNOT be sold or displayed at a REXPO event: Burmese Python (Python m. bivittatus), Reticulated Python (Python reticulatus), African Rock Python (Python sebae), Green Anaconda (Eunectes murinus), Yellow Anaconda (Eunectes notaeus), Australian Amethystine Python (Morelia amethistina kinghorni), Indian Python (Python molurus), Asiatic (water) Monitor (V. salvator), Nile Monitor (V. niloticus), White Throat Monitor (V. albigularis), Black Throat Monitor (V. albigularis ionides), and Crocodile Monitor (V. salvadorii), and any hybrid thereof, and all Crocodilia; all are illegal under NYS law. Native species are not permitted for sale. Turtles and tortoises must have a minimum carapace length of 4 inches to be offered for sale. All animals must be displayed in clean quarters without visible feces or urates. Multiple animals per enclosure are permitted as long as they aren't overcrowded. This will be at the discretion of show hosts. Any animals that are deemed unhealthy cannot be displayed for sale. Animals with mites or ticks will be removed from the sales area and cannot be displayed. Animals must be identified to the buyer as captive bred, captive born, farm raised, or wild caught. The buyer has the right to know, and it is just good business practice. New York State requires a tax certificate to buy and sell in the state. Please have a NYS Tax Certificate available. Vendors assume all liability for their animals.
Vendors will assume all liability for allowing a customer to handle their animals or remove them from their secure enclosures. It is recommended that each vendor carries his/her own liability insurance. REXPO is fully insured but will not accept liability for bites, scratches, or anything harmful to patrons of the event; this is the vendor's responsibility for their specific animals. Any injury to vendor or customer is the sole responsibility of your organization. REXPO and all promoters, vendors, volunteers, and anyone affiliated with the REXPO team maintain no liability. REXPO reserves the right to remove vendors that don't comply with the rules, guidelines, and state regulations. REXPO reserves the right to discontinue doing business with any vendor. Each individual show contract is for a single show and does not imply that you are entitled to an exact location at each REXPO. Each individual show contract does not imply that you will be allowed to return every expo. Each individual show contract is exactly that: payment for a table(s) for a specified date and time to sell one's product. Exhibitors are solely responsible for all needed permits, licensing, sales, transportation, importation, exportation, taxes, or anything unmentioned required to conduct business. Vending at REXPO constitutes agreement to these terms and conditions.
OPCFW_CODE
RedHat has issued an advisory today (July 15): As Fedora 20 was the last with this package and is EOL, we'll have to sync with RHEL 6 or 7 for this update. I haven't seen any upstream announcements about new IcedTea versions yet. Steps to Reproduce: Corresponding Oracle CPU: Updated package uploaded for Mageia 4. See https://bugs.mageia.org/show_bug.cgi?id=14051#c4 for useful links to test java. Updated java-1.7.0-openjdk packages fix security vulnerabilities: Multiple flaws were discovered in the 2D, CORBA, JMX, Libraries and RMI components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions (CVE-2015-4760, CVE-2015-2628, CVE-2015-4731, CVE-2015-2590, CVE-2015-4732, CVE-2015-4733). A flaw was found in the way the Libraries component of OpenJDK verified Online Certificate Status Protocol (OCSP) responses. An OCSP response with no nextUpdate date specified was incorrectly handled as having unlimited validity, possibly causing a revoked X.509 certificate to be interpreted as valid. It was discovered that the JCE component in OpenJDK failed to use constant-time comparisons in multiple cases. An attacker could possibly use these flaws to disclose sensitive information by measuring the time used to perform operations using these non-constant-time comparisons. A flaw was found in the RC4 encryption algorithm. When using certain keys for RC4 encryption, an attacker could obtain portions of the plain text from the cipher text without knowledge of the encryption key. Note: With this update, OpenJDK now disables RC4 TLS/SSL cipher suites by default to address the CVE-2015-2808 issue. Refer to Red Hat Bugzilla bug 1207101, linked to in the References section, for additional details. A flaw was found in the way the TLS protocol composed the Diffie-Hellman (DH) key exchange.
A man-in-the-middle attacker could use this flaw to force the use of weak 512-bit export-grade keys during the key exchange, allowing them to decrypt all traffic (CVE-2015-4000). Note: This update forces the TLS/SSL client implementation in OpenJDK to reject DH key sizes below 768 bits, which prevents sessions from being downgraded to export-grade keys. Refer to Red Hat Bugzilla bug 1223211, linked to in the References section, for additional details about this. It was discovered that the JNDI component in OpenJDK did not handle DNS resolutions correctly. An attacker able to trigger such DNS errors could cause a Java application using JNDI to consume memory and CPU time, and possibly block further DNS resolution (CVE-2015-4749). Multiple information leak flaws were found in the JMX and 2D components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass certain Java sandbox restrictions (CVE-2015-2621, CVE-2015-2632). A flaw was found in the way the JSSE component in OpenJDK performed X.509 certificate identity verification when establishing a TLS/SSL connection to a host identified by an IP address. In certain cases, the certificate was accepted as valid if it was issued for a host name to which the IP address resolves rather than for the IP address (CVE-2015-2625). Updated packages in core/updates_testing: This apparently adds a requires on lib64sctp1 (libjavasctp.so) from Core Release, is that wanted? (In reply to Samuel VERSCHELDE from comment #3) > This apparently adds a requires on lib64sctp1 (libjavasctp.so) from Core > Release, is that wanted? Yes, it's dlopen()'d by sun.nio.ch.sctp, so RedHat added that requires to make that work. Testing java-1.7.0-openjdk-126.96.36.199-188.8.131.52.mga4 and java-1.7.0-openjdk-headless-184.108.40.206-220.127.116.11.mga4: Playing minecraft (java -jar minecraft.jar): OK. Running eclipse: OK.
From the procedure linked above, with icedtea-web installed: http://www.w3.org/People/mimasa/test/object/java/ tests run. http://www.java.com/en/download/installed.jsp indicates "Version 7 Update 85". http://www.addictinggames.com/action-games/potty-racers-4-game.jsp runs, but it's a flash game so that doesn't prove anything at all :P Testing complete on Mageia 4 i586 as well. has_procedure MGA4-64-OK => has_procedure MGA4-32-OK MGA4-64-OK advisory CC: An update for this issue has been pushed to the Mageia Updates repository.
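The JCE flaw in the advisory above comes from comparing secrets with an early-exit comparison whose running time leaks how many leading bytes match. A minimal Python illustration of the idea (hmac.compare_digest is the stdlib's constant-time comparison; this is not the OpenJDK code, just a sketch of the class of bug):

```python
# Why non-constant-time comparison of secrets is a problem: a naive
# byte-by-byte compare returns at the first mismatch, so its running time
# reveals the length of the matching prefix. hmac.compare_digest examines
# every byte regardless, giving the attacker no timing signal.
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):   # early exit leaks the matching-prefix length
        if x != y:
            return False
    return True

secret = b"s3cret-mac-value"
assert naive_equal(secret, b"s3cret-mac-value")
assert not naive_equal(secret, b"x3cret-mac-value")   # fails on byte 0: fast
assert not naive_equal(secret, b"s3cret-mac-valuX")   # fails on last byte: slow

# Constant-time replacement: same answers, no data-dependent timing.
assert hmac.compare_digest(secret, b"s3cret-mac-value")
assert not hmac.compare_digest(secret, b"x3cret-mac-value")
```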
OPCFW_CODE
Format input text content when you are typing… http://nosir.github.io/cleave.js Cleave.js Cleave.js has a simple purpose: to help you format input text content automatically. Features Credit card number formatting Phone number formatting (i18n js lib separated for each country to reduce size) Date formatting Numeral formatting Custom delimiter, prefix and blocks pattern CommonJS / AMD mode ReactJS […] Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards. The System Design Primer Motivation Learn how to design large-scale systems. Prep for the system design interview. Learn how to design large-scale systems Learning how to design scalable systems will help you become a better engineer. System design is a […] Compile a Node.js project into a single file. Supports TypeScript, binary addons, dynamic requires. ncc Simple CLI for compiling a Node.js module into a single file, together with all its dependencies, gcc-style. Motivation Publish minimal packages to npm Only ship relevant app code to serverless environments Don’t waste time configuring bundlers Generally faster bootup time […] React Native APIs turned into React Hooks for use in functional React components React Native Hooks React Native APIs turned into React Hooks allowing you to access asynchronous APIs directly in your functional components. Note: This is an experimental library. As of this time React Native does not yet support React version 16.7 out of […] A GraphQL Directive For Rate Limiting Your Resolvers 💂♀️ 💂♀️ GraphQL Rate Limit 💂♂️ A GraphQL directive to add basic but granular rate limiting to your Queries or Mutations. Features 💂♀️ Add rate limits to queries or mutations 🔑 Add filters to rate limits based on the query or mutation args ❌ Custom error messaging ⏰ Configure using a simple max per window arguments 💼 Custom stores, […] SVG icons for popular brands https://simpleicons.org Simple Icons Free SVG icons for popular brands. 
See them all on one page at SimpleIcons.org. Contributions, corrections & requests can be made on GitHub. Started by Dan Leech. Usage General Usage Icons can be downloaded as SVGs directly from our website – simply click the icon you want, and the download should start […] 🎚 React beautiful input range slider https://react-smooth-range-input.now.sh/ 🎚 React Smooth Range Input Butter smooth input range Beautiful animation interaction Tiny size Install $ npm install react-smooth-range-input Example Navigate into example folder and install yarn && yarn start || npm install && npm run start 😍 Check it out. Quickstart import React from 'react'; import Slider from 'react-smooth-range-input'; export default () => […] A React component for building Web forms from JSON Schema. https://mozilla-services.github.io/re… react-jsonschema-form A simple React component capable of building HTML forms out of a JSON schema and using Bootstrap semantics by default. Testing powered by BrowserStack Documentation Documentation is hosted on: https://react-jsonschema-form.readthedocs.io/ Live Playground A live playground is hosted on gh-pages. Contributing Read our contributors' guide to get started.
OPCFW_CODE
Queries using EntityGraph and maxResults don't return all items in joined collection Hello, This might be the case of misusing API perhaps, but some of our tests using EntityGraph started failing with hibernate reactive 2 alpha. We have these example entities @Entity public class Author { @Id @GeneratedValue private Long id; private String name; @OneToMany(cascade = CascadeType.ALL, mappedBy = "author") private Set<Book> books; ... } @Entity public class Book { @Id @GeneratedValue private Long id; private String title; private int pages; @ManyToOne(fetch = FetchType.LAZY) private Author author; @OneToMany(cascade = CascadeType.ALL, mappedBy = "book") private List<Chapter> chapters = new ArrayList<>(); ... } @Entity public class Chapter { @Id @GeneratedValue private Long id; private String name; @ManyToOne private Book book; ... } and method in AuthorRepository private static final String FIND_AUTHOR_BY_NAME_LIKE_QUERY = "SELECT a FROM org.example.domain.Author AS a WHERE (a.name LIKE :name)" + " ORDER BY a.name"; public Mono<Author> findByNameLike(String name) { return Mono.fromCompletionStage(sessionFactory.withTransaction(session -> { Stage.Query query = session.createQuery(FIND_AUTHOR_BY_NAME_LIKE_QUERY, Author.class) .setParameter("name", name); query.setHint("jakarta.persistence.fetchgraph", createEntityGraph(session)); query.setMaxResults(1); return query.getSingleResult(); })); } private EntityGraph<Author> createEntityGraph(Stage.Session session) { RootGraph<Author> rootGraph = (RootGraph<Author>) session.createEntityGraph(Author.class); rootGraph.addAttributeNode("books"); Graph<?> graph = rootGraph.addSubGraph("books"); graph.addAttributeNode("chapters"); return rootGraph; } If we have author and 2 books, this method will return only one book. Most likely because of setMaxResults(1) and query adding fetch first $2 rows only that worked differently in hibernate reactive 1.1.9. 
Also, a method with explicit joins and setMaxResults returns the correct joined collection. It works correctly with non-reactive Hibernate. Like I said, maybe setMaxResults is not meant to be used in this case, but we have it used in some generic API and can't work around it currently. Attaching hibernate 2 and 1.1.9 examples to reproduce the behavior if needed. hib-entitygraph-reactive2.zip hib-entitygraph-reactive1.zip Thanks, Radovan Having a similar or maybe the same issue with the Mutiny.SessionFactory, where you can't even do query.setHint. Ah, this issue slipped under the radar. I will have a look soon. I've tested this with the latest Hibernate Reactive and I think it's a bug in Hibernate ORM. Basically, if the HQL query looks like this: FROM Author a WHERE a.name LIKE :name ORDER BY a.name it will add fetch first ? rows only to the end of the SQL query: select a1_0.id,b1_0.author_id,b1_0.id,c1_0.book_id,c1_0.id,c1_0.name,b1_0.pages,b1_0.title,a1_0.name from Author a1_0 left join Book b1_0 on a1_0.id=b1_0.author_id left join Chapter c1_0 on b1_0.id=c1_0.book_id where a1_0.name like ? escape '' order by a1_0.name fetch first ? rows only This means that it won't return all the associated elements. But, if I change the HQL and fetch the collections, it works as expected. This HQL: FROM Author a left join fetch a.books b left join fetch b.chapters WHERE a.name LIKE :name ORDER BY a.name the SQL becomes: select a1_0.id,b1_0.author_id,b1_0.id,c1_0.book_id,c1_0.id,c1_0.name,b1_0.pages,b1_0.title,a1_0.name from Author a1_0 left join Book b1_0 on a1_0.id=b1_0.author_id left join Chapter c1_0 on b1_0.id=c1_0.book_id where a1_0.name like ? escape '' order by a1_0.name without limiting the results in the SQL. But I don't see any difference between Hibernate ORM or Reactive in terms of behaviour in the latest version.
I've created a test case that uses ORM and Reactive: https://github.com/DavideD/hibernate-reactive/commit/1e5e9c0109dc3b2e97d1a03da05d1e5b195de259#diff-8e8a62954bbe01f50f87992c231ad3bd081e1b5cc4b063c5a4317d2b8d55f30fR143 @gavinking, I think this is a bug in Hibernate ORM. Or am I missing something? [There's a big difference between Hibernate 5 and 6 here.] But the issue description is extremely unclear, and I'm not sure what it is that the user is claiming is wrong. The issue is about the fact that the user expects to retrieve all the books from the selected author. But when setting .setMaxResults(1), the books association in author only contains one book (instead of the whole collection). If I understand correctly, this happens because collections aren't eagerly fetched. The issue is about the fact that the user expects to retrieve all the books from the selected author. But when setting .setMaxResults(1), the books association in author only contains one book (instead of the whole collection). If I understand correctly, this describes the legacy behavior of Hibernate prior to H6. In H6, this should not occur. I've tested this with H6. I think it's still happening. You can check the test I wrote here: https://github.com/DavideD/hibernate-reactive/blob/1e5e9c0109dc3b2e97d1a03da05d1e5b195de259/hibernate-reactive-core/src/test/java/org/hibernate/reactive/types/EntityGraphTest.java#L111 Maybe I've missed something. I'm not really sure what the subtleties are, since I didn't work on this. Better ask @sebersole OK, now I see what's going on. So this is very specific to the use of Query.setEntityGraph() and does not affect the same query written with join fetch. And it's indeed a bug in core, so it should be reported there. I've created an issue for Hibernate ORM: https://hibernate.atlassian.net/browse/HHH-17698
GITHUB_ARCHIVE
How Caché Supports OAuth 2.0 and OpenID Connect This chapter introduces Caché support for OAuth 2.0 and OpenID Connect. With Caché support for OAuth 2.0 and OpenID Connect, you can do any or all of the following: Use a Caché web application as a client Use a Caché web application as a resource server Use a Caché instance as an authorization server For example, you can use a Caché web application as a client of an authorization server that uses third-party technology. Or you can use third-party clients with an authorization server that is built on Caché. The resource server or resource servers could be implemented in Caché or in a different technology. In all cases, the authorization server is the most complex element and is generally created first. You create clients later. When you create a client, it is generally necessary to understand the capabilities and requirements of the authorization server, such as the scopes it supports. Caché Support for OAuth 2.0 and OpenID Connect The Caché support for OAuth 2.0 and OpenID Connect consists of the following elements: Configuration pages in the Management Portal. If you configure a client (or a resource server), use the options at System Administration > Security > OAuth 2.0 > Client Configuration. If you configure an authorization server, use the options at System Administration > Security > OAuth 2.0 > Server Configuration. Classes in the %SYS.OAuth2 package. These classes are the client API. If you define a Caché web application as an OAuth 2.0 client, your client uses methods in these classes. Classes in the %OAuth2 package. If you use a Caché instance as an OAuth 2.0 authorization server, you customize the server by subclassing one or more of the classes in the package %OAuth2.Server. Other classes in %OAuth2 provide utility methods for your code to call. Classes in the OAuth2 package (in the CACHESYS database). These include persistent classes for internal use by Caché, and you can ignore most of them.
However, if you want to create configuration items programmatically, you would use a subset of the classes in this package. The following subsections provide an overview of the configuration items. Configuration Items on a Client Within a Caché instance that is acting as an OAuth 2.0 client, it is necessary to define two connected configuration items for a given client application: a server description (which describes the authorization server) and a client configuration (which configures the client). A given Caché instance can have any number of server descriptions. Each server description has multiple client configurations, as shown in the following figure, which also indicates some of the information stored in these configuration items: This architecture is intended to simplify configuration, because it enables you to define multiple client configurations that use the same authorization server without needing to repeat the details of the authorization server. You can create these items via the Management Portal, as described in the chapter “Using a Caché Web Application as an OAuth 2.0 Client.” Or you can create them programmatically, as described in the appendix “Creating Configuration Items Programmatically.” Configuration Items on the Server Within a Caché instance that is acting as an OAuth 2.0 authorization server, it is necessary to define a server configuration (which configures the authorization server) and a number of client descriptions. The following figure indicates some of the information stored in these configuration items. A given Caché instance can have at most one server configuration and can have many client descriptions. One client description is necessary for each client application. A client description is also necessary for each resource server that uses any endpoints of the authorization server. If a resource server does not use any endpoints of the authorization server, there is no need to create a client description for it.
You can create these items via the Management Portal, as described in the chapter “Using Caché as an OAuth 2.0 Authorization Server.” Or you can create them programmatically, as described in the appendix “Creating Configuration Items Programmatically.” Standards Supported in Caché This section lists the standards that Caché supports for OAuth 2.0 and OpenID Connect: The OAuth 2.0 Authorization Framework (RFC 6749) — See https://datatracker.ietf.org/doc/rfc6749 The OAuth 2.0 Authorization Framework: Bearer Token Usage (RFC 6750) — See https://datatracker.ietf.org/doc/rfc6750 OAuth 2.0 Token Revocation (RFC 7009) — See https://datatracker.ietf.org/doc/rfc7009 JSON Web Token (JWT) (RFC 7519) — See https://datatracker.ietf.org/doc/rfc7519 OAuth 2.0 Token Introspection (RFC 7662) — See https://datatracker.ietf.org/doc/rfc7662 OpenID Connect Core 1.0 — See http://openid.net/specs/openid-connect-core-1_0.html OAuth 2.0 Form Post Response Mode — See http://openid.net/specs/oauth-v2-form-post-response-mode-1_0.html JSON Web Key (JWK) (RFC 7517) — See https://datatracker.ietf.org/doc/rfc7517 OpenID Connect Discovery 1.0 — See https://openid.net/specs/openid-connect-discovery-1_0.html OpenID Connect Dynamic Client Registration — See http://openid.net/specs/openid-connect-registration-1_0-19.html
OPCFW_CODE
The Open Jobs Observatory was created by Nesta, in partnership with the Department for Education. This first article describes how we identified jobs in green industries. The second article compares green job definitions, presents preliminary results from applying our methodology, and discusses the current policy climate surrounding the transition to a green economy. In parallel with government intervention to stimulate the green economy, we have developed one of the first open methodologies for automatically identifying job advertisements in green industries. This effort has come during a time of busy policy action: the UK Government has recently committed to creating and supporting two million jobs in green industries by 2030, in a new Ten Point Plan for a Green Industrial Revolution report. They have also created a Green Jobs Taskforce to facilitate this goal. While there has been considerable effort to generate additional jobs in green industries, making those jobs tangible, for example by identifying common job titles and the places where green jobs are concentrated, is a critical next step in the green transition. Our methodology for identifying job advertisements as ‘green’ serves to address this lack of tangibility. At the highest level, we took a supervised machine learning approach to identifying jobs in green industries. This meant that we manually labelled jobs as either green or not green and trained a classifier to label unseen jobs as belonging to either of those categories. We chose to operationalise one official definition of jobs in green industries: the United Nations System of Environmental Accounting’s Environmental Goods and Services Sector (EGSS). The EGSS is made up of areas of the economy engaged in producing goods and services for environmental protection purposes, as well as those engaged in conserving and maintaining natural resources.
There are 17 different UK specific activities associated with EGSS, including (but not limited to): wastewater management, forest management, environmental consulting and in-house business activities that include waste and recycling. Our methodology identifies both critical roles (e.g. a renewable energy engineer) and general roles (e.g. an accountant for a green energy company) within these sectors. The set of job adverts which was used to train our model and then identify jobs in green industries comes from Nesta’s Open Jobs Observatory. The Observatory, which is in partnership with the Department for Education, provides free and up-to-date information on the skills requested in UK job adverts. The collection began in January 2021, and the Observatory now contains several million job adverts. While our pipeline does a reasonable job of identifying jobs in green industries within our evaluation set, there are invariably some limitations to bear in mind. Primarily, by using this approach we will not capture jobs in green industries where their adverts contain descriptions that are vague and lack green-specific terminology. Secondly, our pipeline relies on the assumption that our training data is representative of all 17 different EGSS activities. In the event that there are too few labelled jobs in specific EGSS activities (such as environmental construction), the model will be less effective at identifying these jobs. The article will now walk through, in greater detail, the steps that we took to identify jobs in green industries, within the Observatory, using a supervised approach. Our approach to identifying jobs in green industries can be broken down into three steps: Before following the methodology shown above, we first generated labelled data to train our classifier. We did this by manually reviewing a random sample of the job adverts in the database and labelling these jobs as green or not green, accordingly. 
Jobs were labelled ‘green’ if they fell into any of the 17 different EGSS activities, while jobs were labelled ‘not green’ if they did not fall into any of the categories. After we labelled the random sample of job adverts as green or not green, we manually generated a list of key phrases that mapped onto the 17 different EGSS activities. For example, EGSS activity number 6 - ‘production of renewable energy’ - and its associated description - were summarised as ‘renewable energy production’, ‘renewable heat’ and ‘biofuels’. After developing the list of key phrases that mapped onto all EGSS activities, we ‘expanded’ those queries by identifying similar phrases. We did so by using Word2Vec’s word embeddings. Word embeddings are a learned representation of text where words that have similar meaning will be similarly represented numerically. This representation allowed us to conduct mathematical operations (such as distance calculation) to identify similar words in an embedding space. This process was helpful because we were able to generate additional key phrases that have similar representations and are related to EGSS activities beyond the initial keyword list. Following this process, we generated approximately 230 key phrases and terms. This keyword list acted as part of a ‘feature’ to input into our classifier. Once we generated our expanded list of key phrases and terms associated with EGSS activities, we turned our attention to the job adverts, namely to the raw job title and description text. We ‘preprocessed’ our text data so that the text was in a predictable and analysable form for the task at hand. The preprocessing steps we took included removing punctuation from the text, converting all text to lowercase, removing ‘stop words’ (i.e. uninformative words such as ‘the’, ‘a’ or ‘your’) and lemmatising terms. Lemmatization is the simplification of inflected words by converting them to their canonical, dictionary forms. 
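As a minimal sketch of those preprocessing steps, the following Python snippet lowercases, strips punctuation and drops stop words. The function name and the tiny stop-word list are our own illustrative choices, not the Observatory's actual code, and a real pipeline would use a fuller stop-word list and a proper lemmatiser (e.g. from NLTK or spaCy):

```python
import re

# Tiny illustrative stop-word list; the real pipeline would use a fuller one.
STOP_WORDS = {"the", "a", "an", "your", "our", "and", "or", "to", "of", "in", "as"}

def preprocess(text):
    """Lowercase, strip punctuation, and drop stop words (lemmatisation omitted)."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)  # replace punctuation with spaces
    tokens = text.split()
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("Join our team as a Renewable Energy Engineer!"))
# ['join', 'team', 'renewable', 'energy', 'engineer']
```

The output of this step feeds directly into the feature-extraction stage described next.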
Once we had preprocessed our text data, we wanted to identify useful features that the model could use to determine whether a job was green or not. We identified two groups of features that could be helpful: keyword counts and the ‘relevance’ of each word in the text of the job adverts. For keyword counts, we counted the number of expanded EGSS green terms or phrases that were present in the preprocessed job title and job description. We normalised this count by the total number of words in the title and description of the advert. Meanwhile, we captured word relevance in job texts by representing the text data as matrices of Term Frequency-Inverse Document Frequency (TF-IDF) features. TF-IDF is a common information retrieval technique that weighs the frequency of a word (or term) against the inverse document frequency. We also trimmed our text data to remove terms that appear in more than 60% or in fewer than 5% of all job advertisements. We did this to remove ‘noisy’ terms that were not helpful for distinguishing between jobs in green and non-green industries, such as ‘resume’, ‘seeking’, or ‘apply’. After representing our cleaned text data numerically, we were able to train a classifier using our labelled data to predict whether or not the job advert was likely to be for a position within a green industry. But first, we addressed the imbalance between the number of jobs in green versus non-green industries in the Observatory. As there were far fewer adverts in the Observatory for jobs in green industries than in non-green industries, we oversampled the incidence of jobs in green industries in our training data to generate an even class distribution. We did so by applying a data augmentation method called Synthetic Minority Oversampling Technique (SMOTE). At a high level, this technique works by 1) selecting a random vectorised green job, 2) finding its k nearest neighbours (kNN) among the other green jobs, and then 3) generating a synthetic green job at a randomly chosen point between the selected job and one of its neighbours.
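The SMOTE-style interpolation just outlined can be sketched in a few lines of pure Python. This is only a toy version to show the mechanics; the actual pipeline presumably used a library implementation such as imbalanced-learn, and all names here are illustrative:

```python
import random

def smote_sample(green_vecs, k=2, rng=random.Random(0)):
    """Generate one synthetic minority-class vector, SMOTE-style:
    pick a random green-job vector, find its k nearest neighbours among
    the other green jobs, and interpolate between the two."""
    base = rng.choice(green_vecs)
    # squared Euclidean distance from the chosen vector to another vector
    def dist2(v):
        return sum((a - b) ** 2 for a, b in zip(base, v))
    neighbours = sorted((v for v in green_vecs if v is not base), key=dist2)[:k]
    neighbour = rng.choice(neighbours)
    gap = rng.random()  # interpolation factor in [0, 1)
    return tuple(a + gap * (b - a) for a, b in zip(base, neighbour))

# Three toy "vectorised green jobs" in a 2-dimensional feature space.
greens = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
synthetic = smote_sample(greens)
```

Each synthetic point lies on the line segment between a real green job and one of its neighbours, which is what lets SMOTE even out the class distribution without duplicating rows verbatim.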
kNN is a simple supervised machine learning algorithm that calculates the proximity between a labelled point (or in our instance, a green job) and its k closest neighbouring points, where k is the number of neighbours. The key assumption kNN makes is that ‘similar’ points exist in close proximity to each other. Finally, once we had oversampled our training data, we were able to train our classifier. While we tested multiple different classifiers, we ultimately chose to deploy an Extreme Gradient Boosting (XGBoost) model, owing to its superior performance on our evaluation set. The ‘gradient boosting’ element of the model refers to the fact that it is based on a series (or ensemble) of weak classification and regression decision tree models. The model differs from other gradient boosting algorithms as it uses a different mathematical formula to build the decision trees. Whereas other gradient boosting algorithms typically use the mean squared error or gini impurity as a splitting criterion for building trees, XGBoost uses its own splitting criterion formula with stronger ‘regularisation’. This means that XGBoost typically does a better job of not overfitting to the training data. Once we trained our tuned XGBoost model to identify job adverts as green or not green, we used it to assess job adverts that the model had not yet seen. When we ran our pipeline on an evaluation set, we were able to achieve a precision score of 93% and a recall score of 94% for the green class. In this instance, precision refers to the percentage of adverts that the model labelled as green which genuinely were for jobs in green industries, while recall refers to the percentage of all adverts for jobs in green industries that the model correctly labelled as green. Ultimately, our methodology (when applied to the last three months of adverts collected) estimated that 3% of the job adverts were for positions in green industries.
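To make the two evaluation metrics concrete, here is a small sketch of how precision and recall for the green class are computed. The labels below are made up for illustration and are not drawn from the actual evaluation set:

```python
def precision_recall(y_true, y_pred, positive="green"):
    """Precision: of the adverts predicted green, the fraction that truly are.
    Recall: of the genuinely green adverts, the fraction the model found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = ["green", "green", "not", "not", "green"]
y_pred = ["green", "not", "not", "green", "green"]
print(precision_recall(y_true, y_pred))  # precision = recall = 2/3 here
```

In practice one would use sklearn.metrics.precision_score and recall_score, but the arithmetic is exactly this.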
After we identified job adverts that were likely to sit within green industries, we applied a clustering algorithm to embedded representations of unique green job titles in an effort to identify groups of common job titles. We labelled the groups of job titles by identifying job titles above a probability threshold of cluster membership and then deriving the top n phrases associated with these job titles using TF-IDF. While our model does a reasonable job of classifying jobs within the repository as green or not, there are a number of methodological improvements to consider for future development: we could a) label more training data, b) treat the task as a multi-class problem and c) change our representation of the text. Firstly, increasing the amount of data the classifier is trained on could provide additional information and improve the overall fit of the model. This is especially the case for jobs within EGSS activities that may be underrepresented in the labelled training data. Secondly, while we treated the problem as a binary classification task, there are 17 different activities (or ‘classes’) in the EGSS. We could therefore treat this problem as a multi-class task and train the model to predict which activity (or activities) are connected to a given job. This would provide additional specificity and more granular insight into the green economy. Finally, we could represent our text data in alternative ways beyond TF-IDF, such as using transformer models to embed our job descriptions. The aims of this work have been two-fold: to demonstrate the types of analysis that are possible using job adverts from the Open Jobs Observatory, and to start exploring the green economy via data science methodologies. Click here to read about the results of our methodology and how it relates to the current policy climate surrounding the transition to a green economy. HM Government, The Ten Point Plan for a Green Industrial Revolution, 2020, London, UK. 
HM Government, Green Jobs Taskforce, 2020, London, UK.
Eurostat, Environmental Goods and Services Sector Accounts - Practical Guide (United Nations System of Environmental Economic Accounting: 2016).
Office for National Statistics, UK Environmental Goods and Services Sector (EGSS) Methodology Annex (ONS: 2021).
Wikipedia, “tf-idf”, https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Definition
Wikipedia, “Data augmentation”, https://en.wikipedia.org/wiki/Data_augmentation#Synthetic_oversampling_techniques_for_traditional_machine_learning
XGBoost, “XGBoost Documentation”, https://xgboost.readthedocs.io/en/latest/index.html
Wikipedia, “Mean squared error”, https://en.wikipedia.org/wiki/Mean_squared_error
Wikipedia, “Decision tree learning”, https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity
Wikipedia, “Hyperparameter optimisation”, https://en.wikipedia.org/wiki/Hyperparameter_optimization
Hugging Face, “Sentence Transformers”, https://huggingface.co/sentence-transformers
OPCFW_CODE
HDFS stands for Hadoop Distributed File System. It was designed to store and manage huge volumes of data in an efficient manner, and it is one of the major components of Apache Hadoop, the others being MapReduce and YARN; Hadoop applications use HDFS as their primary storage system. Apache Hadoop (/həˈduːp/) is a collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation. The two main elements of Hadoop are MapReduce, a batch (distributed) data processing module responsible for executing tasks and built following Google's MapReduce algorithm, and HDFS, responsible for maintaining data. In this article, we will talk about the second of the two modules. (Note that the acronym has other expansions too: Acronym Finder has 7 verified definitions for HDFS, our 'Attic' has 6 unverified meanings, and it can also stand for, among other things, Human Development Foundation of Sikkim.)

HDFS is a highly distributed, very scalable, safe and fault-tolerant file system with a high level of performance: it must deliver high data bandwidth and be able to scale to hundreds of nodes in a single cluster. It is part of the Apache project sponsored by the Apache Software Foundation, provides high-performance access to data across Hadoop clusters, and is usually deployed on low-cost commodity hardware. In HDFS, the standard size of a file ranges from gigabytes to terabytes. The original version is also known as HDFS V1, as it is part of Hadoop 1.x. Some statistics show the scale involved: in 2010, Facebook claimed to have one of the largest HDFS clusters, storing 21 petabytes of data, and in 2012 Facebook declared that they had the largest single HDFS cluster, with more than 100 PB of data.

Architecturally, an HDFS cluster runs two kinds of daemons: one for the master node (the NameNode) and one for the slave nodes (the DataNodes). HDFS breaks files, such as our CSV files, into 128 MB chunks spread across hard drives throughout the cluster. As an example of consuming such chunks, dask.distributed workers each read the chunks of bytes local to them and call the pandas.read_csv function on those bytes, producing (in one run) 391 separate Pandas DataFrame objects spread throughout the memory of eight worker nodes.

A few practical notes: hdfs dfs -pwd does not exist because there is no "working directory" concept in HDFS when you run commands from the command line; your home directory is always the prefix of the path, unless the path starts from /. You also cannot execute hdfs dfs -cd and then run commands "from there", since neither an HDFS shell nor an hdfs dfs -cd command exists, which makes the idea of a working directory redundant. Otherwise, HDFS commands are very much identical to Unix FS commands. Note also that concat() will throw IOException if files are mixed with different erasure coding policies or with replicated files. HDFS has been designed to be easily portable from one platform to another, and newer versions of Hadoop come preloaded with support for many other file systems such as HFTP FS and S3 FS.

Several related projects build on this foundation. Apache Hive is an open source data warehouse software for reading, writing and managing large data set files that are stored directly in either HDFS or other data storage systems such as Apache HBase; Hive enables SQL developers to write Hive Query Language (HQL) statements that are similar to standard SQL statements for data query and analysis. HBase is key-value storage, good for reading and writing in near real time. Pig is a data flow language for creating ETL pipelines.
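As a back-of-the-envelope illustration of the 128 MB block size mentioned above, the sketch below (the file size is made up) computes how many HDFS blocks a file occupies:

```python
import math

BLOCK_SIZE = 128 * 1024 * 1024  # 128 MiB HDFS block size

def num_blocks(file_size_bytes):
    """Number of HDFS blocks a file of the given size occupies;
    the last block may be only partially filled."""
    return math.ceil(file_size_bytes / BLOCK_SIZE)

# A hypothetical 1 GB CSV file is split into 8 chunks of up to 128 MiB each.
print(num_blocks(1_000_000_000))  # 8
```

Unlike a local filesystem, a file smaller than a block does not waste the rest of the block's space on disk, but every file still occupies at least one block entry in the NameNode's metadata.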
OPCFW_CODE
[e-lang] E-on-CL progress, and request for information kpreid at attglobal.net Sat Jun 25 19:31:50 EDT 2005 On Jun 25, 2005, at 15:20, Mark Miller wrote: > Kevin Reid wrote: >> Incomplete listing of significant changes since the first source I forgot to mention: for anyone who missed the original announcement, there is no web page yet, but the source is at: > Very cool! I just updated, and noticed from one of your update > comments that you're also reporting all your progress to the CIA. That > is, the CIA Open Source Notification System ;) Yes. I did it in order to have their IRC bot deliver commit notifications to <irc://irc.freenode.net/erights>, but it seems that the person who configures that is on vacation. > I hadn't known about this system. Any idea why they call it "CIA"? >> Is there anything you'd like to use E for that you can't do with any >> existing implementation due to lack of features? > For me, when I think about using E-on-CL, the most pressing lack is > the joint absence of a tamed gui library and Pluribus. I've been experimenting with getting E-on-CL and CLIM (the "Common Lisp Interface Manager" UI framework) to cooperate. It mostly works, but I haven't done much yet, and I haven't committed the relevant code. I suspect that CLIM is not very suitable for taming, but I haven't looked carefully yet. I also have not investigated other Lisp GUI frameworks/libraries. I picked CLIM because it looked technically interesting and is supposed to be cross-platform. > That's why I've bumped up my priority for integrating Tyler's > VatTP-on-TLS changes, and for redoing CapTP in E using Data-E. > Without these, E-on-CL doesn't have a gui library, and can't > communicate to E-on-Java in order to use its gui library. Todo for my end: * Make sure the Data-E emakers will run. * Find an appropriate encryption library. (I know very little about this area. What protocol names should I look for?)
(Interoperation issue: What should happen if VatA sends a non-mobile-code Selfless object (say, a Twine) to VatB, which doesn't have an implementation for it? Should this cause the message send to break, or deliver the object as a broken reference?) > If your E-on-ABCL-on-Java experiment works out, if we can get a > boot-comm system working between an E-on-ABCL-on-Java vat and an > E-on-Java vat in the same JVM process, then we could use E-on-Java's > gui well before we have a working implementation of Pluribus. E-on-CL, if/when it gets intra-process multi-vat operation, will need some equivalent of the boot-comm system. Not having looked at how boot-comm works in E-on-Java: would a reasonable implementation approach be hooking up a deSubgraphKit builder and recognizer across a more-trivial comm system that can pass (Also, I've concluded that calling it "CL-E" is a bad idea. I haven't renamed anything yet, though.) Kevin Reid <http://homepage.mac.com/kpreid/> More information about the e-lang mailing list
<?php declare(strict_types = 1);

namespace Dms\Core\Form;

use Dms\Core\Exception\BaseException;
use Dms\Core\Exception\InvalidArgumentException;
use Dms\Core\Language\Message;

/**
 * Exception for an invalid form submission.
 *
 * @author Elliot Levin <elliot@aanet.com.au>
 */
class InvalidFormSubmissionException extends BaseException
{
    /**
     * @var IForm
     */
    private $form;

    /**
     * @var array
     */
    private $input;

    /**
     * @var InvalidInputException[]
     */
    private $invalidInputExceptions = [];

    /**
     * @var InvalidInnerFormSubmissionException[]
     */
    private $invalidInnerFormSubmissionExceptions = [];

    /**
     * @var UnmetConstraintException[]
     */
    private $unmetConstraintExceptions = [];

    /**
     * @param IForm $form
     * @param array $input
     * @param InvalidInputException[] $invalidInputExceptions
     * @param InvalidInnerFormSubmissionException[] $invalidInnerFormSubmissionExceptions
     * @param UnmetConstraintException[] $unmetConstraintExceptions
     */
    public function __construct(
        IForm $form,
        array $input,
        array $invalidInputExceptions,
        array $invalidInnerFormSubmissionExceptions,
        array $unmetConstraintExceptions
    ) {
        InvalidArgumentException::verifyAllInstanceOf(__METHOD__, 'invalidInputExceptions', $invalidInputExceptions, InvalidInputException::class);
        InvalidArgumentException::verifyAllInstanceOf(__METHOD__, 'invalidInnerFormSubmissionExceptions', $invalidInnerFormSubmissionExceptions, InvalidInnerFormSubmissionException::class);
        InvalidArgumentException::verifyAllInstanceOf(__METHOD__, 'unmetConstraintExceptions', $unmetConstraintExceptions, UnmetConstraintException::class);

        $messages = [];

        foreach ($invalidInputExceptions as $exception) {
            foreach ($exception->getMessages() as $message) {
                $messages[] = $exception->getField()->getName() . ' => ' . $message->getId();
            }
        }

        foreach ($invalidInnerFormSubmissionExceptions as $exception) {
            $messages[] = $exception->getField()->getName() . ' => { ' . $exception->getMessage() . ' }';
        }

        foreach ($unmetConstraintExceptions as $exception) {
            foreach ($exception->getMessages() as $message) {
                $messages[] = $message->getId();
            }
        }

        parent::__construct(implode(', ', $messages));

        $this->form  = $form;
        $this->input = $input;

        foreach ($invalidInputExceptions as $exception) {
            $this->invalidInputExceptions[$exception->getField()->getName()] = $exception;
        }

        foreach ($invalidInnerFormSubmissionExceptions as $exception) {
            $this->invalidInnerFormSubmissionExceptions[$exception->getField()->getName()] = $exception;
        }

        $this->unmetConstraintExceptions = $unmetConstraintExceptions;
    }

    /**
     * @return IForm
     */
    public function getForm() : IForm
    {
        return $this->form;
    }

    /**
     * @return array
     */
    public function getInput() : array
    {
        return $this->input;
    }

    /**
     * @return InvalidInputException[]
     */
    public function getInvalidInputExceptions() : array
    {
        return $this->invalidInputExceptions;
    }

    /**
     * @param string $fieldName
     *
     * @return InvalidInputException|null
     */
    public function getInvalidInputExceptionFor(string $fieldName)
    {
        return isset($this->invalidInputExceptions[$fieldName])
            ? $this->invalidInputExceptions[$fieldName]
            : null;
    }

    /**
     * @return InvalidInnerFormSubmissionException[]
     */
    public function getInvalidInnerFormSubmissionExceptions() : array
    {
        return $this->invalidInnerFormSubmissionExceptions;
    }

    /**
     * @param string $fieldName
     *
     * @return InvalidInnerFormSubmissionException|null
     */
    public function getInnerFormSubmissionExceptionFor(string $fieldName)
    {
        return isset($this->invalidInnerFormSubmissionExceptions[$fieldName])
            ? $this->invalidInnerFormSubmissionExceptions[$fieldName]
            : null;
    }

    /**
     * @param string $fieldName
     *
     * @return Message[]
     */
    public function getMessagesFor(string $fieldName) : array
    {
        $invalidInputException = $this->getInvalidInputExceptionFor($fieldName);

        if ($invalidInputException) {
            return $invalidInputException->getMessages();
        }

        $invalidFormSubmission = $this->getInnerFormSubmissionExceptionFor($fieldName);

        if ($invalidFormSubmission) {
            return $invalidFormSubmission->getFieldMessageMap();
        }

        return [];
    }

    /**
     * @return Message[][]
     */
    public function getFieldMessageMap() : array
    {
        $messages = [];

        foreach ($this->form->getFields() as $field) {
            $messages[$field->getName()] = $this->getMessagesFor($field->getName());
        }

        return $messages;
    }

    /**
     * @return UnmetConstraintException[]
     */
    public function getUnmetConstraintExceptions() : array
    {
        return $this->unmetConstraintExceptions;
    }

    /**
     * @return Message[]
     */
    public function getAllConstraintMessages() : array
    {
        $messages = [];

        foreach ($this->unmetConstraintExceptions as $exception) {
            foreach ($exception->getMessages() as $message) {
                $messages[] = $message;
            }
        }

        return $messages;
    }

    /**
     * @return Message[]
     */
    public function getAllMessages() : array
    {
        $messages = [];

        foreach ($this->invalidInputExceptions as $inputException) {
            $messages = array_merge($messages, $inputException->getMessages());
        }

        $messages = array_merge($messages, $this->getAllConstraintMessages());

        foreach ($this->invalidInnerFormSubmissionExceptions as $e) {
            $innerFormMessages = [];

            foreach ($e->getAllMessages() as $message) {
                $parameters = $message->getParameters();

                if (isset($parameters['field'])) {
                    $parameters['field'] = $e->getField()->getLabel() . ' > ' . $parameters['field'];
                }

                $innerFormMessages[] = $message->withParameters($parameters);
            }

            $messages = array_merge($messages, $innerFormMessages);
        }

        return $messages;
    }
}
Laravel: Best way to implement dynamic routing in routes.php based on environment variable?

My aim is to roll out a big re-theming / re-skinning (including new URL routing) for a Laravel v5 project without touching the existing business logic (as much as possible, that is). This is my current approach:

I placed an APP_SKIN=v2 entry in my .env file. My app\Http\routes.php file has been changed as follows:

if (env('APP_SKIN') === "v2") {
    # Point to the v2 controllers
    Route::get('/', 'v2\GeneralController@home' );
    ... all other v2 controllers here ...
} else {
    # Point to the original controllers
    Route::get('/', 'GeneralController@home' );
    ... all other controllers
}

All v2 controllers have been placed in app/Http/Controllers/v2 and namespaced accordingly. All v2 blade templates have been placed in resources/views/v2. The rest of the business logic remains exactly the same and is shared between the "skins".

My question: Is there a "better" way to achieve the above? Please note that the idea here is to affect as few files as possible when doing the migration, as well as to ensure that the admin can simply change an environment variable and "roll back" to the previous skin if there are problems.

Within app/Providers/RouteServiceProvider.php you can define your routes and namespaces, etc. This is where I would put the logic you talked about (rather than in the routes file):

protected function mapWebRoutes()
{
    if (env('APP_SKIN') === 'v2') {
        Route::group([
            'middleware' => 'web',
            'namespace' => $this->namespace,
        ], function ($router) {
            require base_path('routes/web_v2.php');
        });
    } else {
        // ...
    }
}

This way, you can create separate route files to make it a bit cleaner. 
Aside from that, I personally can't see a better solution for your situation than what you described, as I'm guessing your templates want to vary in the data that they provide. If that is the case then you will need new controllers - otherwise you could set a variable in a middleware which is then retrieved by your current controllers, which could then determine which views, css and js are included. This would mean you would only need to update your existing controllers, but depending upon your current code, this could mean doing just as much work as your current solution.

Yup, that's what I was after, thanks! This is a better approach in the sense that the existing routes.php will not be affected or changed at all, but rather a routes_v2.php will be created just for the new skin.

Routes pass through middleware. Thus you can achieve this with a before middleware as follows:

public function handle($request, Closure $next)
{
    // Get path and append v2 if env is v2
    $path = $request->path();
    $page = $str = str_replace('', '', $path); // You can replace if necessary

    // Before middleware
    if (env('APP_SKIN') === "v2") {
        return $next($request);
    } else {
    }
}

Thanks! But please note that I'm not after appending or changing the url in any way. But rather, I want the same routes to use different controllers (and views) when the environment variable is set.
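The general pattern discussed in this thread (selecting handlers from an environment variable, so that rolling back is just a config change) is not Laravel-specific. Here is a minimal Python sketch of the same idea; the handler names are invented for the example:

```python
import os

# Hypothetical "controllers" for each skin; in Laravel these would be
# controller classes, here they are plain functions.
def home_v1():
    return "original home page"

def home_v2():
    return "re-skinned home page"

def resolve_home():
    # Read the skin from the environment, defaulting to the original,
    # so an admin can roll back simply by unsetting APP_SKIN.
    skin = os.environ.get("APP_SKIN", "v1")
    return home_v2 if skin == "v2" else home_v1

os.environ["APP_SKIN"] = "v2"
print(resolve_home()())  # the v2 handler is chosen
```

The key design property is that the dispatch decision is made in exactly one place, so a rollback touches configuration only, never code.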
Looks like there could be default themes in 2.81: What happened to the themes for 2.80

Finally someone who appreciates dark themes. Only real "Pros" prefer dark theme, just saying

Color does make it better. I've added color to the icons and matched the node colors to Blender's default theme. And I've updated the theme on GitHub. Thanks for requesting this Alperen!

The theme is good, dark colors soothe the eyes, especially if you use Blender for a long time. But I noticed an annoyance in the current version of the Blender Pro theme. In Wireframe mode: vertices are clearly visible, no problem at all; edges are ok but slightly less visible; faces are ok, no problem. But imo, the real problem is with the Solid mode, it's hard to see vertices and edges. If you test with the default Blender Dark theme, they are more visible (there is a stronger contrast between the wireframe color and the selected elements). This contrast would even be stronger if the selected elements were yellow, instead of orange. I know it's difficult to find a good color that would work with the Wireframe AND the Solid mode, but I just wanted to report this small annoyance.

Thank you xan2622! Your feedback really helps to make this theme better. And I clearly see your point. Maybe adding color to the unselected elements will help visibility. I'll let you know as soon as I've updated the repo! Check it out: 3d-viewport-update And please let me know if this solves this small annoyance

I don't have much to offer this thread, other than I'm following it, and really enjoying your work! Thanks!

Thank you AlphaVDP2, I really appreciate it.

hi! i was exactly going to tell about that edit mode colors issue but good to see you solved it fast! another thing i noticed is: i can't see bevel weights on edges. first is blender dark, second is blender pro.

Thnx Alperen! Now I know where those colors are. New edge-weight colors in Blender Pro theme: Check it out: Download Blender Pro theme. 
And let me know if this update works for you.

hmm… I am sorry guys but for me, it's not solved. I couldn't test your recent changes earlier but now that I am back from vacation, I have been able to test the latest version from your github repository. And here is what I see. I don't think it's better, at all. IMO, the goal is to have nice colors (not hurting eyes) but also to be able to clearly see the selected elements (vertices, edges, faces), and I think that the new color for the Wire Edit is not really better. BTW, these changes don't look good with Edge Seams, Edge Sharps, Edge Creases, which are not really visible anymore with this new Wire Edit color (CC9C53). blender_pro_modified.xml (42.2 KB) Objects unselected, then selected: Edge Seams, Edge Sharps, Edge Crease: What do you think? I have tried to create a good compromise between the Wireframe and Solid modes. There are certainly some colors to check (I haven't tested these changes in all conditions) but I think these colors work pretty well. Or, since your theme is supposed to have a bluish main hue, what about these colors: blender_pro_modified_2.xml (42.2 KB)

Hi xan2622, Welcome back, and thanks for the feedback! And I agree, usability first, not hurting eyes a close second. I like your modifications, just a few things we have to consider:
- with white being the selected color you're unable to tell the last selected/active vert/edge/face.
- the unselected wireframes on wire/x-ray are hard to see.
- the edge weights hues are based on Blender's default hues for recognizability. I'm not much of a modeler so I'm not sure how important those hues are.
I'll have another go at the 3d-viewport colors. And if you quickly like to test theme settings, I'm working on this other project: testblends

Have you ever heard of this add-on? It seems useful but $15… There is a $5 lite version though.

Yeah, I've seen it passing by at devtalk. Seems useful and I'm pretty sure it would save me quite some time. 
But as for paid add-ons… when I start making money using Blender, the Blender Foundation will be the first entity I'll pay

Hi xan2622! Thanks again for your feedback and modifications. I've modified your modified version and updated the 3d-viewport-branch. I've also updated the outliner to reflect the new object selection colors. Please let me know if this update works for you. Check it out: 3d-viewport-update

in terms of usability, I'm doing some tests, and I think it's one of the best. P.S. it updates the color of the selected object in the outliner; they have probably made some small changes in development and it is a color that has nothing to do with the theme.

Thank you noki paike. And I've changed the new active color in the outliner for 2.81 and updated the 3d-viewport branch.

hi Paul! I just wondered why a different branch for updates? Is master no longer getting updates?
Use the table above and diagram reference to connect your i2c LCD screen to the Raspberry Pi. Note: I'm using a backpack module to make the process a little easier. You can connect the LED to either the LCD itself or the Pi for custom control options. To use our LCD screen, we'll need to enable i2c. Access the Raspberry Pi configuration menu: Under Interfacing Options, select the option to enable I2C. Confirm the change and restart your Raspberry Pi. We need to see which I2C address our LCD is using. To do this, we'll be installing a package called I2C Tools. Run the following command:

sudo apt-get install i2c-tools

Install the following SMBUS Python library:

sudo apt-get install python-smbus

Restart the Pi and run this command to find the I2C address:

i2cdetect -y 1

This will return a table full of addresses. Jot down the number used by your LCD screen; mine happens to be 27. Update i2c_driver.py with the address number your screen is using on line 6. Next we'll need to install this Python i2c driver. On the GitHub repository for this project, we've added a driver that you can use! It's basically just a refactor of a driver provided by DenisFromHR on GitHub. Installing the driver is pretty simple. Just make sure you're in your home directory and use wget:

cd ~
wget https://raw.githubusercontent.com/Howchoo/smart-alarm-clock/master/i2c_driver.py

Now it's time to set up the RPLCD library. Begin by installing PIP:

sudo apt-get install python-pip

Once complete, install the RPLCD library package. Run the following:

sudo pip install RPLCD

It's time for the fun part! We need our LCD screen to give us some kind of output. The following script will display the time and date on our LCD screen. You can get creative with output settings. 
#!/usr/bin/env python
import I2C_LCD_driver
import time

mylcd = I2C_LCD_driver.lcd()

while True:
    mylcd.lcd_display_string(time.strftime('%I:%M:%S %p'), 1)
    mylcd.lcd_display_string(time.strftime('%a %b %d, 20%y'), 2)

Save this python script to a file and drop it in /home/pi. I've named my file display.py. Note: I'm from America so I prefer my date with the Month in front. However, you can use this opportunity to adjust the output however you like. Visit this Ubuntu page for more information on clock output customization. In the display.py script, Python's time module will output the current time in whatever timezone is set on the Raspberry Pi. So, in order to see the time in our local timezone, we'll need to configure the timezone. We've written a detailed guide showing you how to set the timezone on your Raspberry Pi. Test your script by running the following:

sudo python display.py

Note: Replace display.py with the file name you chose for your python script. Find a sound to use for your alarm clock. I'm using a file labeled alarm.wav. Drop it in the /home/pi folder. The alarm script requires a scheduler to work properly. Run the following command to install the scheduler:

pip install schedule

Create a python script to initiate our sound file at a specific time. Below is the script I used to schedule my alarm (for 7:00 AM). Note that newer versions of the schedule library expect a zero-padded HH:MM time string.

#!/usr/bin/env python
import schedule
import subprocess
import time

def job():
    subprocess.call(['aplay /home/pi/alarm.wav'], shell=True)

schedule.every().day.at('07:00').do(job)

while True:
    schedule.run_pending()
    time.sleep(1)

Make sure the alarm script is running the alarm when scheduled. Plug in your speaker and run the following. Replace alarm.py with the name of your alarm script file.

sudo python alarm.py

Note: If you have trouble when testing your alarm, try scheduling it for at least 5-10 minutes into the future. Sometimes the Pi needs a few minutes before it will work. 
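A note on the strftime format strings used in the display script above: they can be rearranged freely. A quick illustration using a fixed timestamp, so the result is deterministic no matter when you run it:

```python
import time

# A fixed point in time (2021-03-05 07:00:00 UTC) so the formatting
# below is reproducible regardless of the current clock or timezone.
t = time.gmtime(1614927600)

# 12-hour clock with AM/PM, as used on line 1 of the LCD.
line1 = time.strftime('%I:%M:%S %p', t)

# Day-first layout, for readers who prefer it over month-first.
line2 = time.strftime('%d %b %Y', t)

print(line1)
print(line2)
```

Swap the directives around (%H for 24-hour time, %A for the full weekday name, and so on) until the two 16-character LCD lines show what you want.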
Open the crontab file with the following command:

crontab -e

At the end of the file, add the following two lines of code. Be sure to replace 'display.py' and 'alarm.py' with your custom display and alarm scripts.

@reboot nohup python display.py &
@reboot nohup python alarm.py &

Save and close the file. Congratulations! Every time your Pi restarts, it's going to tell you the time and schedule your custom alarm. Now let's get to the good stuff. To make our alarm clock a smart alarm clock, we'll be using AVS. Visit our guide here to learn how to set up AVS on the Raspberry Pi. Using Alexa, you can create commands and even trigger custom python scripts! Here are a few example ideas to get you started:
- Set alarms using voice commands
- Check the weather
- Program reminders for future events

If your Pi has an internet connection (see our guide on setting up WiFi on your Pi), the time should remain perfectly in sync. Where you take the project from here is up to you. Congratulations! Your Raspberry Pi is now a completely functional alarm clock. If you want to get really creative, incorporate it with our awesome Vinyl Record Clock project.
The ComboBox properties include:

Allow Multi Selection - This property allows you to select multiple values from the LOV or ComboBox.
Data Source - It specifies the datasource name from which the values should be filled in the ComboBox.
Max Height - It sets the dropdown list's maximum height. Eg: Max Height = 20.
Min List Width - It sets the minimum width of the dropdown list in pixels, based on the value that you provide.
Type Ahead - Populates and auto-selects the remainder of the text being typed, after a configurable delay.

The default value for 'Type' is LOV; check Active and click the icon. Once the object mapping has been created, you need to do the mapping columns configuration. Click the icon to define mapping columns. If you want to create an object mapping, you need to define at least two mapping column entries. Select Attribute, Reference Attribute, and Join Type from the respective dropdown windows. As a second entry for object mappings, you need to provide the reference attribute and the return value for the Join ...

Steps to create a Dependent ComboBox on a Form. Note: a Dependent ComboBox won't work on a Grid. Use a LOV if you need dependent fields within a Grid. What is a Dependent ComboBox? The values listed in a ComboBox are based on the value selected in another ComboBox. Steps to create a Dependent ComboBox: let us consider two dependent ComboBoxes, say Risk-Type level1 and Risk-Type level2. Before you configure the dependent ComboBoxes, you need to set up the settings as follows. A popup box will be displayed with a Page Name dropdown box and an Item Name LOV. Here you have to specify the Page Name. Then specify the Item Name by clicking on the Search option. This will provide you with all the Items that are in the above selected Page. Then click on Select. Then click on Copy and the item will be automatically copied to the destination page, as a child of the corresponding Item selected. 
Now you can see that the Component is copied to the destination page and ... Change the theme of your application: go to File → My Profile. Click on the My Profile option and you will be directed to its detail page. Now click on search and all the profile values will be displayed. Go to theme and click on the LOV field. A popup with the list of themes will be displayed. Select your theme and click on Select. A confirmation message appears. Now click on OK, then right-click and click on Reload Profiles. The selected theme will be applied.

In the property palette you will be provided with the attributes of the corresponding component. Select the Item Type dropdown and select the required Field Type. Once the Field Type is selected, its properties are displayed below it. Here an Allow Blank checkbox is provided. To make the field mandatory, disable the checkbox. There are different Field Types to which this property is associated: Attachments, ComboBox, Date Field, Email, LOV Field, Number Field, Phone, Rich Text, Text Area, and Text Field.

Validate Row: VALIDATE_ROW is a PL/SQL procedure that gets called when the "Server Validate on Change" property is set on an editable field (TextField, LOV, ComboBox, CheckBox, NumberField, DateField etc). When do we use it? It can be used to check whether the value changed by the user on the UI is a valid selection or not. It can be used to fetch some extra columns from the DB to show in the UI. Basically, if a value change in the field requires interaction with the database, then VALIDATE ...

Assuming that you are already in the Roles page, click on the Role to which you want to assign the user. Go to the Users Assigned tab just below the Roles grid. Click on the Add New User button. A new row will be populated. Click on the LOV and a popup arises with the list of users. Select the user you wish to add to the current role and click on the Select button. This is how a user is added to the current role. 
[quote=“therealdb, post:119, topic:185261”]You have to set the variables as per the original plugin. I’m busy finishing a new book, but I’ll take a look and write a guide in the next weeks.[/quote]

Thanks again. Anything I can do to help, let me know. In the meantime, I have managed to get Vera to operate the OS controller (Disable/Enable) and its outputs (Valves) (On/Off), but unfortunately there is still no feedback on the status of the OS controller or any of the outputs. To do this I had to completely delete the OS add-on and follow the first post on this thread. I ended up with the following, and I can now enable and disable the OS controller and turn on and off any of the valves connected to that controller. Remember the valves will turn on and then automatically off after the predetermined time set in the Variables tab called “ManualMaxMinutes”, or you can turn them off before that time via a scene or manually.

Great. I forgot to commit a new version with local icons. I’ll do next day since I’m travelling again. If you are able to write a small manual, feel free to make a pull request or send it to me directly and I’ll publish it.

[quote=“therealdb, post:122, topic:185261”]Great. I forgot to commit a new version with local icons. I’ll do next day since I’m travelling again. If you are able to write a small manual, feel free to make a pull request or send it to me directly and I’ll publish it.[/quote]

Everything is already documented in the first post of this thread, or the link I provided. If there is anything I can further do, please let me know. Thank you for your effort here.

Yep, I’ve been struggling trying to get this working using the instructions from the first post and well… no bueno. I tried installing the original app, overwriting the files mentioned in the first post, and then uploading the two files from this most recent modification. Nothing works however. 
As a final resort, I uploaded all the files in the most recent update and that just about killed Vera. Luckily I was able to uninstall the app and reinstall it. Still no luck getting this working.

yeah it killed my vera as well === it loads about 35 valves / zones, and that kills the vera

I found you’re supposed to put the password (not sure if it’s the hashed or clear text) into the I_OpenSprinkler1.xml file. Problem is I did that and it wouldn’t work. I did get it working once after I disabled the password on the controller, but then I had to deny internet access to the controller. When I went to start modifying the sprinkler names that killed it, and now I can’t get it to communicate again. Been struggling with it since without making any file changes.

Edit: Got it working in UI7 on my Veraplus, with md5 hashed password and 6 zones
- Install the opensprinkler app and let it install the controller
- Add your IP to the device. Under Advanced - Variables, add your zones (2 are added by default)
- If yours works, abort. Proceed no further! For most though, it won’t work.
- Upload the 3 files from the first post in this thread.
- Note, the I_OpenSprinkler1.xml file will be uploaded again later but with the password modification
- Ensure your OS controller has “Ignore Password” enabled. At this point,
- Open the 2 files from therealdb’s github (D_OpenSprinkler1_UI7.xml and D_OpenSprinkler1_UI7.json)
- In Vera, go to the controller’s property and click Advanced. Change the device_file to D_OpenSprinkler1_UI7.xml
- You should be ready to rock at this point. 
To restore security
- Uncheck “Ignore Password” on your controller
- Open up “I_OpenSprinkler1.xml” in notepad or a true xml editor (this tripped me up till i just used a plain text editor)
- Look for the line “HASHEDPW = md5(PW)” and change md5(PW) to your hashed password in quotes - for example: HASHEDPW = “bj289hgkadjfjashggjio2”
- To generate a hash I’d recommend doing it offline using an app but there are online generators too
- Save and upload the file

Now I just wish I could use these as a light switch so I could integrate them into Alexa, Google, or Homekit. I’ve got the older OS2.2 which doesn’t support IFTTT. Although, now that it’s in Vera…

I haven’t tried it with my Alexa, but it should work if you change category_num (3) and subcategory_num (3) on your device and do a rediscovery again. Remember not to use all lights on/off, because you will change your sprinklers’ status as well.

Will this be updated in the app section or do we have to manually install it?

I don’t understand this one… “Open the 2 files from therealdb’s github (D_OpenSprinkler1_UI7.xml and D_OpenSprinkler1_UI7.json)”

As I don’t own the code and no license was attached, I simply fixed it and uploaded it on my GitHub with no guarantee it’ll work, etc. Unfortunately the original author seems to not reply to questions about this plugin, so I cannot “officially” take ownership of it.

[quote=“Viruta57, post:129, topic:185261”]I don’t understand this one… “Open the 2 files from therealdb’s github (D_OpenSprinkler1_UI7.xml and D_OpenSprinkler1_UI7.json)”[/quote]

this is not necessary, if you set the proper variable via UI/LUA.

doesn’t work on my veraplus, crashed and had to do a reboot

for some reason my vera plus won’t accept it and it just crashes

I didn’t test it with the latest firmware, but it should work OK. I moved away from using it, since I have invested in my own automation middleware, taking care of notifications and automations related to Open Sprinkler, but this should work OK out of the box. 
the setup is a bit convoluted, since you have to patch the official plugin. I’m not sure if @Sorin has any opinion about dead plugins in the store and an external dev taking over to fix things when no license is attached to the original code, but I’m eventually available to submit the fixes to the original plugin if that’s allowed.

it’s not that it doesn’t work, it’s just that it’s not reading the status codes properly, so when I use it, it produces the task handler message "OpenSprinkler : Error manually controlling valve Connection Issue". This only happens when resp in the code reads nil:

if (resp == nil) then
    return "ERROR", nil, "Connection Issue"
end

local lul_base_cmd = "http://192.168.15.130/cm?pw=a6d82bced638de3def1e9bbb4983225c&sid=0&en=1&t=1800"

if lul_settings.NewModeTarget == "Disable" then
    lul_cmd = lul_base_cmd .. 'en=0'
elseif lul_settings.NewModeTarget == "Enable" then
    lul_cmd = lul_base_cmd .. 'en=1'
end

local isOk, resp, err = os_http_call(lul_cmd)
local taskHandle = TASK_HANDLE or -1

if (isOk ~= "OK") then
    taskHandle = luup.task("Error enabling/disabling controller: " .. err, luupTaskReturns["Error"], MSG_CLASS, taskHandle)
else
    taskHandle = luup.task("", luupTaskReturns["Success"], "", taskHandle)
    luup.variable_set("urn:fowler-cc:serviceId:OpenSprinkler1", "ModeStatus", lul_settings.NewModeTarget, lul_device)
end

if (TASK_HANDLE == nil) then
    TASK_HANDLE = taskHandle
end

debug("OpenSprinkler : SetModeTarget : Exit")
return 4

the http response is Status Code: 200 OK, and the text on the page displays

by looking at this code, you seem to use the old plugin files. you should upload them as per the previous posts, then try again. you can grab them here: https://github.com/dbochicchio/vera then go to Apps, Develop Apps, Upload Files and upload them all. if you have them correctly installed, you should see a water valve icon, instead of the original one. make sure to also save your MD5 password in the device variables. 
did that a few times, did not work… it crashed my vera. even up to tonight. im gonna try again… is there any other help for this plugin?

after about 2 years i was able to fix it using some of therealdb’s code. all you have to do is upload the I_OpenSprinkler1.xml implementation file
GlobalCapture uses an OCR engine to extract indexing data from document images. GlobalCapture Templates define the areas where extraction should take place as Zones. Zones can be configured for a Template in a variety of ways, from simply defining a set area for structured extraction to complex, dynamic, multi-Zone unstructured extraction. Each Template can have one or more Zones. Zones can be applied to one or more pages or multi-page documents. The options available for Zones in your Template design depend upon the GlobalCapture licensing available. Note that licensing is not enforced at the time of Template design. Licensed extraction features are only enforced by the GlobalCapture Engine at run time because the Template Designer may be used by multiple Engines with different licensing in the same installation. Be sure you understand the licenses available to your production installation prior to creating Templates with features you may not be able to leverage. Some key points about Zones include: Structured Data Extraction – Use Structured Data Extraction for standardized documents (like forms and certificates) with data in specific locations. Extraction areas are defined by their coordinates on the document page. The Marker and Positional Zones are Structured Data Extraction Zones. Unstructured Data Extraction – Unstructured Data Extraction uses advanced capture technology to evaluate documents in a far more dynamic manner. Use it, for example, to determine how to separate a batch scan into its various documents, extract line-item data, find keywords in a document, and find values in proximity to keywords. Unstructured Data Extraction Zones include Pattern Match, Directional, and Data Lookup Zones. Hierarchical Relationships Between Zones – Zones can be bound to parent Zones, for as many nested levels as you need. 
For example, you could set a parent Zone to find the text “PO Number” and a child Zone to look to the right of that text and extract the number found in that location. Map to Fields – When you need to track the data extracted from Zones (such as vendor name or invoice number), you can map the Zones to indexing fields in GlobalCapture and GlobalSearch (if installed). Create whatever fields you need in the Field Catalog and then map them to your Zones. Regular Expressions – You can use regular expressions (Regex) to precisely define the variables of your Zone's search string. As you create and test your Zones in the Template Designer, two rectangular areas will appear on the sample document image. These are the Search Region and the Results Region. The Search Region is the area of the document to be searched for data to extract. It can be as large as an entire page or as small as a single character. Search Regions can be defined by coordinates on the page (in pixels) or by their relationship to other Search Regions in the Template. The position of Zone coordinates or the start of a search string match goes from the top-left corner of the document image or parent Zone to the bottom-right corner. If the OCR engine returns results from searching the Search Region, the Results Region appears. This is usually, but not always, a subset of the Search Region. Since the Search Region may be configured to extract overlapping data or dynamically extract data from other Zones, the Results Region may not be in exactly the same place on the page as the assigned Search Region. To create a Zone, in the Template Designer menu bar, click the Add icon and in the drop-down menu, click the Zone icon. Configure the settings for the new Zone. The Zone parameters are contextual, depending upon which Zone type you have selected. Note that if a configuration error exists after applying Zone Properties, a validation message displays in red under the Zone in the Zones Pane.
Click the Apply icon to save the Zone configuration. To edit a Zone, select it in the Zones Pane. Its properties will display in the Properties Pane. Reconfigure the settings for the Zone. Click the Apply icon. To delete a Zone in a Template, in the Zones Pane, select the Zone. In the Properties Pane for that Zone, click the Delete icon. Name Zones for Future Use When you name each Zone, you should clearly indicate what it is for, especially if using several Zones in the Template. Indicate the Type and possibly the parent/child relationship. For example, you could begin all of your Marker Zones with “M_” to make it easy to select a Marker Zone when you are building a parent/child relationship with a Zone.
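To make the Regular Expressions point concrete, here is a generic sketch of the kind of pattern a Zone search string might use for an invoice number. This uses Python's `re` syntax, which may differ slightly from GlobalCapture's Regex dialect, and the pattern and sample OCR text are invented for illustration:

```python
import re

# Hypothetical pattern for a Zone that extracts an invoice number
# of the form "INV-" followed by 5 to 7 digits, e.g. "INV-0042317".
invoice_pattern = re.compile(r"INV-\d{5,7}")

ocr_text = "Ship To: ACME Corp   Invoice INV-0042317   Date: 03/01/2024"
match = invoice_pattern.search(ocr_text)
if match:
    # The matched value is what would be mapped to an indexing field.
    print(match.group(0))
```

A tight pattern like this keeps the Zone from matching stray digit runs elsewhere in the Search Region.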
I'm a fan of static code analysis. With the use of fancy scanner tools we can get detailed reports about source code mishaps and quite decently pinpoint which source code is suspicious and may contain bugs. In the old days we used different lint versions, but they were all annoying and very often just puked out far too many warnings and errors to be really useful. By coincidence I ended up getting analyses done (by helpful volunteers) on the curl 7.26.0 source base with three different tools. An excellent opportunity for me to compare them all and to share the outcome and my insights of this with you, my friends. Perhaps I should add that the analyzed code base is 100% pure C89 compatible C code.

Some general observations

First out, each of the three tools detected several issues the other two didn't spot. I would say this indicates that these tools still have a lot to improve and also that it actually is worth it to run multiple tools against the same source code as an extra precaution. Secondly, the libcurl source code has some known peculiarities that admittedly are hard for static analyzers to figure out without alerting with false positives. For example, we have several macros that look like functions and that on several platforms and build combinations evaluate to nothing, which causes dead code to be generated. Another example is that we have several cases of vararg-style functions, and these functions are documented to work in ways that the analyzers don't always figure out (both clang-analyzer and Coverity show problems with these). Thirdly, the same lesson we knew from the lint days is still true: tools that generate too many false positives are really hard to work with, since going through hundreds of issues that after analysis turn out to be nothing makes your eyes sore and your head hurt.

The first report I got was done with Fortify. I had heard about this commercial tool before, but I had never seen any results from a run. Now I did.
The report I got was a PDF containing 629 pages, listing 1924 possible issues among the 130,000 lines of code in the project. Fortify claimed 843 possible buffer overflows. I quickly got bored trying to find even one that could lead to a problem. It turns out Fortify has a very short attention span and warns very easily in lots of places where a very quick glance by a human tells us there's nothing to be worried about. Having hundreds and hundreds of these is really tedious and hard to work with. If we're kind, we call them all false positives. But sometimes it is more than so; some of the alerts are plain bugs, like when it warns of a buffer overflow on this line, claiming that it may write beyond the buffer. All variables are 'int', and as we know sscanf() writes an integer to the passed-in variable for each %d instance.

sscanf(ptr, "%d.%d.%d.%d", &int1, &int2, &int3, &int4);

I ended up finding and correcting two flaws detected with Fortify; both were cases where memory allocation failures weren't handled properly.

Given the exact same code base, clang-analyzer reported 62 potential issues. clang is an awesome and free tool. It really stands out in the way it clearly and very descriptively explains exactly how the code is executed and which code paths are selected when it reaches the passage it thinks might be problematic. The reports from clang-analyzer are in HTML; there's a single file for each issue, and it generates a nice-looking source code view with embedded comments about which flow was followed all the way down to the problem. A little snippet from a genuine issue in the curl code is shown in the screenshot I include above.

Given the exact same code base, Coverity detected and reported 118 issues. In this case I got the report from a friend as a text file, which I'm sure is just one output version. Similar to Fortify, this is a proprietary tool.
As you can see in the example screenshot, it does provide a rather fancy and descriptive analysis of the exact code flow that leads to the problem it suggests exists in the code. The function referenced in this shot is a very large function with a state machine featuring many states. Out of the 118 issues, many of them were actually the same error but with different code paths leading to them. The report made me fix at least 4 accurate problems, but those fixes will probably silence over 20 warnings. From this test of a single source base, I rank them in this order:
- Coverity – very accurate reports and few false positives
- clang-analyzer – awesome reports, missed slightly too many issues and reported slightly too many false positives
- Fortify – the good parts drown in all those hundreds of false positives
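The observation that each tool caught issues the others missed can be made concrete: if you collect each tool's findings as sets of (file, line) locations, the difference between the union and any single tool's set is what you would miss by relying on that tool alone. The numbers below are invented for illustration, not the actual curl reports:

```python
# Hypothetical findings from three analyzers, keyed by (file, line).
fortify = {("url.c", 120), ("ftp.c", 88), ("http.c", 310)}
clang = {("url.c", 120), ("ssl.c", 45)}
coverity = {("ftp.c", 88), ("ssl.c", 45), ("multi.c", 12)}

# Union of everything any tool reported.
all_findings = fortify | clang | coverity

# What running only clang-analyzer would have missed.
missed_by_clang = all_findings - clang
print(sorted(missed_by_clang))
```

The same set arithmetic also makes it easy to see the overlap (intersection) where two tools agree, which is usually where the real bugs hide.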
Move to RecyclerView

For the moment there's nothing wrong with the ListViews in the app, but it would be nice and would open up some new opportunities if we switched to RecyclerView. Though there are some things to bear in mind:
Headers and footers are required. (Sticky headers are a plus)
Needs to handle multiple view types in the same list.
(If there's more, do tell me)
I've had a look at Groupie and I like the way you can easily build the lists. This is what I'd prefer we implement.

I've been using AdapterDelegates in my own app for a while and it has been a delight to use, plus it is very lightweight (only 6 classes). I haven't used Groupie before, but from a quick glance at the README it seems like AdapterDelegates is less invasive as you don't have to extend your models with any classes.

You don't have to extend your models with anything. You do have to create a class binding the data to the view (since it uses view binding), but that also means less textView.setText(...) in the code. I'm gonna be looking more at AdapterDelegates but also waiting to see what happens to Groupie, since it's a young project. Someone is implementing sticky headers on top of it, so that would be kind of nice to have.

Sorry, I must have misunderstood the README on the first pass. As Groupie uses data binding (and it appears that you aren't suggesting to just use the generated view binder of data binding without the other aspects of data binding), does this mean that the architecture of PocketHub is migrating towards MVVM? From the looks of it, Groupie seems to have a lot of features whereas AdapterDelegates is more bare bones, so I guess it depends on your slant. Do you have any more information about the implementation of the sticky headers, e.g. will it be a fork of Groupie or will it be part of the library? Is there currently a GitHub issue for it if so?

The guy has only made a PR for a small change to allow him to add it, but it's implemented in his fork.
here, and the diff is here

Looks good, do you think it is best to wait for the PR to be merged before implementing the RecyclerView, in case there are any changes to the Groupie API if it is merged?

Well, I'm still thinking about which I prefer, AdapterDelegates or Groupie. Will take a day or two to decide completely.

Gonna go for AdapterDelegates, for three reasons:
It's more mature than Groupie and will probably give us fewer problems when implementing.
When I last checked we were very close to the 64k DEX limit, and databinding/Groupie would probably push us over that.
There will be less code to refactor since we don't have to move everything to XML.

There are some things we should remember to do when we migrate to RecyclerView:
Headers and footers are important for now. We can simply extend the adapter from AdapterDelegates and add some logic above it to handle headers and footers.
Try to split up IconAndViewTextManager into multiple adapters (feels like a huge class to have).

Headers and footers are important for now. We can simply extend the adapter from AdapterDelegates and add some logic above it to handle headers and footers.

Couldn't we just have two separate AdapterDelegates, one which is a header and one which is a footer? Or is that what you meant by the above?

Well, we could, but that still means we need to pass a list with the headers/footers inside it. What I mean is that the actual adapter should handle the headers and footers. Like this:
We set a list of comments on the adapter.
We call addHeader on the adapter.
The adapter itself adds the header above all other items.
When replacing/updating the comments the header stays the same.
This way the activity/fragment doesn't need to handle any header work. Unless the headers are inside the list as:
Header
Notification
Notification
Header
Notification
Notification
Header

Is fast scroll on the list view still a desired feature?
RecyclerView doesn't natively support this, and a third-party library would have to be pulled in (or we'd implement it ourselves). Personally, I think the fast scroller should just be removed, as I can't see a use case where you would want to scroll to random parts of the list at a moment's notice. This also eases migration to using a RecyclerView.
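The header/footer bookkeeping described in the thread (the adapter owns the headers and footers; the activity/fragment only sets items) can be sketched language-agnostically. This is an illustrative Python model of the position arithmetic, not PocketHub's actual Java adapter code, and the class and method names are made up:

```python
class HeaderFooterList:
    """Models an adapter that keeps headers above and footers below items.

    Mirrors the idea that the adapter, not the activity/fragment,
    is responsible for header/footer positions.
    """

    def __init__(self):
        self.headers = []
        self.footers = []
        self.items = []

    def add_header(self, header):
        self.headers.append(header)

    def add_footer(self, footer):
        self.footers.append(footer)

    def set_items(self, items):
        # Replacing items leaves headers/footers untouched.
        self.items = list(items)

    def count(self):
        return len(self.headers) + len(self.items) + len(self.footers)

    def get(self, position):
        # Positions map: headers first, then items, then footers.
        if position < len(self.headers):
            return self.headers[position]
        position -= len(self.headers)
        if position < len(self.items):
            return self.items[position]
        return self.footers[position - len(self.items)]


adapter = HeaderFooterList()
adapter.add_header("issue header")
adapter.set_items(["comment 1", "comment 2"])
print(adapter.get(0))  # the header, above all other items
```

Updating the items via `set_items` leaves the header in place, which is exactly why the activity/fragment never needs to touch header positions.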
I recently read a paper on the trimming of random effect structure by Bates, Kliegl, Vasishth and Baayen (2015). My understanding is that the Parsimonious Mixed Model they proposed mainly follows the principle of progressively excluding random slopes that account for almost no amount of variability (i.e., their proportion of variance is almost 0). I want to follow the Parsimonious Mixed Model approach to prune my random effect structure in a paper that I am currently writing for publication, and want to summarise the principles that I use to trim the random effect structure in an accurate and precise manner. Any correction of my misunderstanding would be appreciated.

Yes, your understanding is correct, but it is probably a good idea to understand the background of that paper. Following the publication of the "Keep it Maximal" paper by Barr et al. (2013), which is referenced substantially by Bates, practitioners were increasingly confronted with models that converged with a singular fit, due to a hopelessly over-parameterised random effects structure. Just see the number of posts on here about singular fits as some evidence for that. Bates et al. (2015) were specifically attempting to address this problem, and I wrote an answer based on their recommendations here:

However, I don't think it is correct to say that Bates recommends starting with a maximal model and simplifying. This is the recommendation for the people who think a maximal model is a good idea in the first place. It clearly isn't when the number of estimated variance components becomes close to the number of observations, but it might be a good idea when this is not the case. For example, in many observational studies it is perfectly reasonable to allow the main exposure(s) to vary by subject. But the same can't as easily be said for competing exposures and confounders.
It might very well be the case that models with random slopes for these have a better fit to the data than ones without, but starting out with a fully maximal model and pruning it according to p-value thresholds of likelihood ratio tests is, in my opinion, the wrong thing to do. I would start with a parsimonious model only including random slopes that I believe a priori should be allowed to vary by subject, based on domain knowledge and theory - and this would not normally include confounders and competing exposures. If that model had a singular fit then I would use the approach outlined in my answer above, but if it didn't then I would not seek to make the random structure any more complex.

Bates, D., Kliegl, R., Vasishth, S. and Baayen, H., 2015. Parsimonious mixed models. arXiv preprint arXiv:1506.04967.
Barr, D.J., Levy, R., Scheepers, C. and Tily, H.J., 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), pp.255-278.
import pandas as pd
from pandas.core.dtypes.common import is_object_dtype
import xlwings as xw

__version__ = '0.1.1'


def show_in_excel(df, return_df=False, activate_excel=True, print_info=False, max_cols=None):
    """Show a DataFrame/Series in Excel."""
    if print_info:
        # df.info() prints directly and returns None, so call it rather than print it.
        if isinstance(df, pd.DataFrame):
            df.info(max_cols=max_cols)
        else:
            df.info()
    if not isinstance(df, pd.DataFrame):
        tmp_df = df.to_frame()
    else:
        # Copy so the str conversion below doesn't mutate the caller's DataFrame.
        tmp_df = df.copy()
    book = xw.Book()
    # Object (mixed-type) columns are converted to str so Excel can render them.
    for col, dtype in tmp_df.dtypes.items():  # iteritems() was removed in pandas 2.0
        if is_object_dtype(dtype):
            tmp_df[col] = tmp_df[col].astype(str)
    book.sheets[0].range("A1").value = tmp_df
    if activate_excel:
        book.activate(steal_focus=True)
    if return_df:
        return df


def via_excel(df):
    """Show DataFrame/Series in Excel and return the DataFrame/Series.

    This can be added to existing scripts without any change to the
    script's behaviour:

    >>> from excellentpandas import via_excel
    >>> result = df.do_something().pipe(via_excel)
    """
    return show_in_excel(df, return_df=True)


def via_info_excel(df, max_cols=None):
    """Like ``via_excel``, but also prints ``df.info()`` to the console.

    >>> from excellentpandas import via_info_excel
    >>> result = df.do_something().pipe(via_info_excel)
    """
    return show_in_excel(df, return_df=True, print_info=True, max_cols=max_cols)
I'm still looking for a way to implement translations in an easier way. For now, it's still done by hand. That's not usable :P Anyway, if you want to translate, please do so But do translate the BETA instead of the latest full version. Instructions can be find in post #1 and #2. I added many functions (so text), and it's not worth it to translate twice Well, maybe I'll wait for a while. Don't see so many Danes here, so once you get an easier translate going, I'll be more than happy to help. But for now, I might just have to re partition my hard drive, so I can take this for a spin. Look really awesome. Sorry for the typos - Better safe than sorry. Samsung Galaxy Nexus [maguro] Official CM11 nightly, or KitKang. Asus Nexus 7 [grouper] Official CM11 M-release If everything else fails, trust your common sense. Squirrels are nice [^] Implemented my own APK Tool (Compile, Decompile, Zipalign, Sign, Extract, Repackage) [^] Implemented Smali, Baksmali, Odex, De-Odex, etc [>] Merged Compile and Decompile. quite a job. [>] Merged Extract and Repackage. [^] Added Aroma menu. Not yet usable, hold on. [^] You can now choose your key when signing [^] A HELL LOT of bug-fixes. I wasn't using the functions, so did not find out they were not working anymore. [^] JDKInstaller now supports building ICS. [>] When Building from source, you can now choose your device [>>] Build from Source big-bug fixed. [>>] Switch BUILD-Mode big-bug fixed [>] Completely re-designed Resize, it was confusing for some users. Clear now? [>] Updated SAI script for myself, first every changelog had to have 11 lines in total :P [>] Switched to Build* instead of Source*. Build* now also includes Kernel building (not fully defined yet!) [>] Redesigned the tool, again :P Every time more and more options to come. Request please! [>] Implemented red text. That will highlight a warning [>] When using "SA" instead of "./SA", the script will now also copy all arguments. 
[ ] Every piece of the script is now much cleaner. I updated the "case" situation again.
Still, there are plenty of people around complaining about static background noise (distorting/clipping sounds) in numerous racing games, and P&G v2.0 has been affected too (I've received numerous PMs reporting this problem, with positive feedback after these suggestions). ...the culprits here are, most likely, the settings in the Creative X-Fi drivers, not the sound samples! I have a Creative X-Fi XtremeGamer myself (vanilla, not Fatality) and it's pretty damn good once the driver settings are "tweaked", so I presume this will work for you with the same or different Creative X-Fi soundcards as well. I lost countless hours finding the best settings for it, as I was initially cursing the damn soundcard and blaming myself for its purchase because of too much "coloring" in the sound (I already had a Focusrite Saffire LE professional soundcard, completely different ballgame) and then... *BLING* ...found it's because too much crap is ON (and wrongly so!) by default in the Creative drivers, messing with the sound reproduction. Ok, so starting with the Creative X-Fi driver settings, I presume yours will use similar drivers (be it for Win 7, Vista or XP) compared to those that I have for my soundcard, so here's something that might help you there. Note, I'm using the X-Fi control panel in "Game Mode"... (click the thumbnail pics to enlarge to full size, beware it may take a while to load)

GENERAL SETTINGS (ones that I use no matter the speakers/headphones settings used):
(NOTE: I like to use Crystalizer "OFF", as it enhances the high frequencies too much for me (this setting, when on, makes the sounds "brighter", with plenty more treble). If for any reason you decide to use it "ON", I would definitely not recommend going over 50%).
(NOTE: I use "flat" equalization - same as not used/unchecked - because that's how I prefer it, being "true" to the sound being reproduced. Try that one first, before changing anything there).
(NOTE: two settings found here, SVM and EAX, are perhaps the biggest culprits of crappy sounds... SVM and EAX should be "OFF" (unticked); SVM and EAX force effects, coloring the sound when and where it shouldn't be used).

BassBoost (set for headphones):
(NOTE: this option is very useful to adjust the amount of "bass" if using headphones (therefore, if placed "ON", adjust to suit your taste) but, if you're using a speakers+subwoofer system, it's definitely better to leave this always "OFF". In that case, instead of using this setting, adjust the bass level within your speakers+sub system (usually there's a knob for that in the back of the subwoofer))

5.1 (true) Surround Speakers
(NOTE: same settings used in any speakers system, 2.1, 5.1, 7.1, etc)

Finally, two things that are VERY important... First, no matter what speakers/headphones settings are used, never, ever go over 80% on the "Master Volume" (3/4 turn on the chrome knob adjuster, as you see in the screenshots), as this seems to be the max limit for quality sound volume on Creative cards (at least on mine it definitely is); from there on the signal gets distorted/clipping, and crappy sound reproduction is almost guaranteed. Second, you should have your subwoofer connected directly to its respective plug-hole on the soundcard (check the soundcard manual if not sure) and the bass intensity at about 50% (there's usually a button/knob in the back of the subwoofer), so if we imagine that knob as a clock, it should be set at 12 o'clock. Adjust only to suit taste from there on (quality of subwoofer/speakers is obviously important too)... Last suggestion, just in case you use headphones AND speakers, and if you really have to connect more than one jack (one for left/right speakers, another for headphones) to the same plug-hole (the left/right one, usually the green one) on the soundcard, then use a (stereo) jack splitter instead. Alright... try P&G or other racing games again and tell us later if it's any better!
Edited by DucFreak, Mar 03 2012 - 06:55 AM.
Deep Dive under the Bridge

Connect your instrument to the Bridge
Connect Elk Bridge to your router
Connect to your bandmates
Play live together!

How does it actually work? The main problem that Elk LIVE fights is called latency. Latency is the time it takes for the sound produced to reach the ears of the listener. As sound travels 1 meter per 3 milliseconds in air, technically latency always exists, even when playing in the same room. But it starts becoming a problem when you introduce factors like distance and digital processing in the audio path.

The Elk LIVE Bridge

Our solution to the problem is the dedicated Elk LIVE Bridge, designed from the ground up to cut out that pesky latency. The Bridge is powered by the award-winning Elk Audio OS and delivers unmatched performance with less than 1 ms internal roundtrip. This, combined with our unique latency perception tools, cuts out all overhead latency, leaving you to play with just the latency of your internet connection. The latency you'll get from the internet depends on distance and your specific internet connection, but in general you get about 100 km (62 miles) per millisecond on an average fiber connection (approx. 50% of what you can technically get from a pure fiber connection). This means that playing with someone 1000 km (621 miles) away will give you an overall latency of 10 ms. About the same latency you will get from being 3 meters apart in the same room.
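The back-of-the-envelope numbers above can be checked in a few lines. The constants come straight from the text (3 ms per meter for sound in air, 100 km per millisecond over an average fiber connection); the function names are just for illustration:

```python
MS_PER_METER_IN_AIR = 3    # sound travels ~1 meter per 3 ms
KM_PER_MS_IN_FIBER = 100   # ~50% of the theoretical fiber speed


def air_latency_ms(meters):
    """One-way acoustic latency between players in the same room."""
    return meters * MS_PER_METER_IN_AIR


def fiber_latency_ms(km):
    """One-way network latency over an average fiber connection."""
    return km / KM_PER_MS_IN_FIBER


print(fiber_latency_ms(1000))  # 10.0 ms for bandmates 1000 km apart
print(air_latency_ms(3))       # 9 ms standing 3 meters apart
```

So 1000 km over fiber and 3 meters of air come out almost identical, which is the comparison the text is making.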
2 analog inputs (XLR / TRS combo connector):
- Line (balanced and unbalanced, +22 dBu max input level)
- Microphone preamp (-12 dBu max input level)
- Instrument (unbalanced, on TRS/TS connector only, +13 dBu max input level)
- Selectable 48V phantom power per port (XLR only)
4 analog outputs:
- 2 unbalanced 1/4″ line outputs, +2 dBu maximum output level
- Stereo headphone output (selectable 1/4″ or 3.5 mm), +4 dBu maximum output level
- Digital IN/OUT: ADAT, S/PDIF
- USB class-compliant (UAC-2) Audio/MIDI device
- A/D and D/A conversion: 24-bit, up to 192 kHz
- Network – Gigabit (1000BASE-T)
- Power – 5 V, 3 A (15 W), USB type C
- Dimensions – 140x140x45 mm
- Weight – 483 g
- Audio OS: Elk Audio OS – Ultra Low-Latency Audio Operating System
Cmake Error: Board is not supported: vendor.board ESP-IDF

Describe the bug
When following the ESP32-DevkitC getting started guide (https://docs.aws.amazon.com/freertos/latest/userguide/getting_started_espressif.html), CMake fails when I run idf.py build from the amazon-freertos directory.

System information
Espressif ESP32 DevkitC
Linux (Ubuntu 20.04)
Version of FreeRTOS: 202012.00-242-g441e02e37
Project: MQTT demo

Expected behavior
As stated in the getting started guide, I would expect running this command to build the project without errors and produce .bin files for the firmware.

Screenshots or console output
Executing action: all (aliases: build)
Running cmake in directory /home/walkershanks/Documents/PlantBusiness/ESP32-test/amazon-freertos/build
Executing "cmake -G Ninja -DPYTHON_DEPS_CHECKED=1 -DESP_PLATFORM=1 -DCCACHE_ENABLE=0 /home/walkershanks/Documents/PlantBusiness/ESP32-test/amazon-freertos"...
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working C compiler: /bin/cc
-- Check for working C compiler: /bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /bin/c++
-- Check for working CXX compiler: /bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /bin/git (found version "2.25.1")
-- Submodule update
Skipping submodule '../../../libraries/abstractions/backoff_algorithm/'
Skipping submodule '../../../../libraries/abstractions/pkcs11/corePKCS11/'
Skipping submodule '../../../libraries/abstractions/pkcs11/corePKCS11/'
Skipping submodule '../../../libraries/abstractions/pkcs11/corePKCS11/'
Skipping submodule '../../../libraries/abstractions/pkcs11/corePKCS11/'
Skipping submodule '../../../libraries/coreHTTP/'
Skipping submodule '../../../libraries/coreHTTP/'
Skipping submodule '../../../libraries/coreHTTP/'
Skipping submodule '../../../libraries/coreJSON/'
Skipping submodule '../../../libraries/coreJSON/'
Skipping submodule '../../../libraries/coreJSON/'
Skipping submodule '../../../libraries/coreMQTT/'
Skipping submodule '../../../libraries/coreMQTT/'
Skipping submodule '../../../libraries/coreMQTT/'
Skipping submodule '../../../libraries/device_defender_for_aws/'
Skipping submodule '../../../libraries/device_defender_for_aws/'
Skipping submodule '../../../libraries/device_defender_for_aws/'
Skipping submodule '../../../libraries/device_shadow_for_aws/'
Skipping submodule '../../../libraries/device_shadow_for_aws/'
Skipping submodule '../../../libraries/device_shadow_for_aws/'
Skipping submodule '../../libraries/freertos_plus/standard/freertos_plus_tcp/'
Skipping submodule '../../libraries/freertos_plus/standard/freertos_plus_tcp/'
Skipping submodule '../../libraries/freertos_plus/standard/freertos_plus_tcp/'
Skipping submodule '../../../libraries/jobs_for_aws/'
Skipping submodule '../../../libraries/jobs_for_aws/'
Skipping submodule '../../../libraries/jobs_for_aws/'
CMake Error at CMakeLists.txt:53 (message):
Board is not supported: vendor.board
-- Configuring incomplete, errors occurred!
See also "/home/walkershanks/Documents/PlantBusiness/ESP32-test/amazon-freertos/build/CMakeFiles/CMakeOutput.log".
cmake failed with exit code 1

To reproduce
Steps to reproduce the behavior:
Follow the getting started guide for the Espressif ESP32 DevKitC
Try building the project using idf.py build

Additional context
Using Python 3.8
There are a couple of things that I think may be contributing and that confuse me:
The getting started guide does not mention installing and exporting the tools inside the vendors/espressif/esp-idf folder, but the only way I was able to get `idf.py build` to run properly at all was to run both `install.sh` and `export.sh`.
The "Standard Setup of Toolchain for Linux" page that is linked by the getting started guide says to install the prerequisites using `sudo apt-get install git wget flex bison gperf python python-pip python-setuptools cmake ninja-build ccache libffi-dev libssl-dev dfu-util`. However, to get this to run I have to change `python` to `python3`.
I actually had this demo working a few days ago using a downloaded version of FreeRTOS from the FreeRTOS console (I think it uses version 4.0 or 4.1 of the ESP-IDF and an older version of the xtensa toolchain), but I noticed the getting started guide has just been updated to reflect the newest changes to ESP-IDF, so I decided to redo running the demo. Thank you! Any help will be greatly appreciated :)

Hi @walkershanks, thank you for reporting the problem. We recently upgraded the ESP-IDF version to v4.2, which requires slightly different build and flash steps. We are working to update the Getting Started Guide to reflect the instructions required for using the new ESP-IDF version. For the issue you have reported, the problem is caused by an incomplete cmake command that is issued by the idf.py tool when running the idf.py build command. To resolve the issue, please run the following CMake configuration command from the repository root directory before running the idf.py build command: cmake -DVENDOR=espressif -DBOARD=esp32_wrover_kit -DCOMPILER=xtensa-esp32 -S . -B build -GNinja With the above command, the vendor and board configurations are passed, which is missed by the idf.py build command. There are more differences from the current version of the Getting Started Guide for building and flashing with the ESP-IDF v4.2 SDK. I have summarized them in this reply of another issue post. To re-iterate the information in the post, make sure that: You have installed xtensa-esp32-elf-gcc 8.2.0, as the new IDF version does not support the older xtensa-esp32-elf-gcc 5.2.0 version.
Your environment contains the required python dependencies for the new IDF version, by running the `python -m pip install -r ./vendors/espressif/esp-idf/requirements.txt` command. You are using build as the build directory, as the idf.py tool does not support the -B build-directory flag in the recent version.

The getting started guide does not mention installing and exporting the tools inside the vendors/espressif/esp-idf folder, but the only way I was able to get `idf.py build` to run properly at all was to run both `install.sh` and `export.sh`.

Thank you for pointing it out. We will look to incorporate it in the Getting Started Guide document.

The "Standard Setup of Toolchain for Linux" page that is linked by the getting started guide says to install the prerequisites using `sudo apt-get install git wget flex bison gperf python python-pip python-setuptools cmake ninja-build ccache libffi-dev libssl-dev dfu-util`. However, to get this to run I have to change `python` to `python3`.

As we have upgraded to using the ESP-IDF v4.2 version, we will update the link to point to https://docs.espressif.com/projects/esp-idf/en/v4.2/esp32/get-started/linux-setup.html which mentions using Python3.

Hi @aggarw13, after running the cmake command you provided I was able to build the project properly. Thank you for your help!

Glad that your issue has been resolved. 🙂 If you have any further questions, feel free to reopen the issue.
GITHUB_ARCHIVE
Generate backlog data from created and modified timestamps

I have a dataset that looks like:

Invoice Id  Created Date         Modified Date
107736      2019-01-28 02:05:07  2019-01-28 02:10:34
107737      2019-01-28 02:10:09  2019-01-28 02:15:50
107738      2019-01-28 03:16:28  2019-01-28 03:20:41
107739      2019-01-28 03:16:28  2019-01-28 03:20:54
107740      2019-01-28 05:57:04  2019-01-28 06:00:52
107741      2019-01-28 06:02:07  2019-01-28 06:05:54
107742      2019-01-28 06:27:14  2019-01-28 06:31:21
107743      2019-01-28 06:27:15  2019-01-28 06:30:51
107744      2019-01-28 06:27:15  2019-01-28 06:32:07
107745      2019-01-28 06:27:15  2019-01-28 06:31:46
107746      2019-01-28 06:27:15  2019-01-28 06:31:06
107747      2019-01-28 06:32:19  2019-01-28 06:36:17
107748      2019-01-28 06:32:19  2019-01-28 06:36:02
107749      2019-01-28 06:32:19  2019-01-28 06:35:43
107750      2019-01-28 06:37:22  2019-01-28 06:41:58
107751      2019-01-28 06:37:24  2019-01-28 06:40:48
107752      2019-01-28 06:37:25  2019-01-28 06:41:40
107753      2019-01-28 06:37:25  2019-01-28 06:41:02
107754      2019-01-28 06:37:25  2019-01-28 06:42:21
107755      2019-01-28 06:42:29  2019-01-28 06:47:04

I want to generate a dataset that tells me the backlog at every 5-minute interval. E.g.:

At 2019-01-28 02:05:00, backlog = 0, since no invoice exists yet.
At 2019-01-28 02:10:00, backlog = 1, since the first invoice has been created but not yet modified.
At 2019-01-28 06:30:00, backlog = 5, since five invoices (107742–107746) have been created but not yet modified.

How do I generate this with pandas?
A better definition of backlog at time t:

backlog = ((df['Created Date'] < t) & (df['Modified Date'] > t)).sum()

If you can assume that no invoice can be modified before it is created, then you can group 'Created Date' by 5 minutes, subtract the same grouping of 'Modified Date', and then show the cumsum(), e.g.:

In []: df1 = df.groupby(pd.Grouper(key='Created Date', freq='5Min'))['Invoice Id'].count()
       df2 = df.groupby(pd.Grouper(key='Modified Date', freq='5Min'))['Invoice Id'].count()
       df1.subtract(df2, fill_value=0).rename('Backlog').astype(int).cumsum()
Out[]:
2019-01-28 02:05:00    1
2019-01-28 02:10:00    1
2019-01-28 02:15:00    0
2019-01-28 02:20:00    0
2019-01-28 02:25:00    0
... snip ...
2019-01-28 06:25:00    5
2019-01-28 06:30:00    3
2019-01-28 06:35:00    5
2019-01-28 06:40:00    1
2019-01-28 06:45:00    0
Freq: 5T, Name: Backlog, dtype: int64

Note: this is 5 minutes off your example because it shows the beginning of each time interval, e.g. 02:05 - 02:10 = 1. You can extend your index to include 02:00 - 02:05 = 0 if you want.
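For the stricter definition at the top of this answer, a brute-force sweep over a 5-minute grid is also easy to write. A minimal, self-contained sketch using only the first three rows from the question (the grid bounds via floor/ceil are my own choice, not part of the question):

```python
# For each 5-minute tick t, count invoices created before t whose
# modification happens after t (the "better definition" above).
import pandas as pd

df = pd.DataFrame(
    {
        "Invoice Id": [107736, 107737, 107738],
        "Created Date": pd.to_datetime(
            ["2019-01-28 02:05:07", "2019-01-28 02:10:09", "2019-01-28 03:16:28"]
        ),
        "Modified Date": pd.to_datetime(
            ["2019-01-28 02:10:34", "2019-01-28 02:15:50", "2019-01-28 03:20:41"]
        ),
    }
)

# Build the 5-minute grid spanning the data.
ticks = pd.date_range(
    df["Created Date"].min().floor("5min"),
    df["Modified Date"].max().ceil("5min"),
    freq="5min",
)

backlog = pd.Series(
    [((df["Created Date"] < t) & (df["Modified Date"] > t)).sum() for t in ticks],
    index=ticks,
    name="Backlog",
)
print(backlog)
```

Unlike the groupby version, this evaluates the state exactly at each tick, so it matches the question's examples at 02:05 (backlog 0) and 02:10 (backlog 1) directly; the trade-off is O(ticks × rows) work instead of two grouped counts.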
STACK_EXCHANGE
The inventory pane on the left shows a hierarchical list of infrastructure objects. The buttons at the bottom of the inventory pane allow you to switch between Veeam ONE Client views. Each node in the hierarchy tree reflects the state of the corresponding infrastructure object. If unresolved alarms exist for an object, Veeam ONE Client displays on its node the icon of the alarm with the highest severity. Veeam ONE reflects the state of child objects on parent nodes to let you easily find problematic objects. For example, if an error alarm was triggered for a host, the error icon is displayed on the host node. In addition, a red downward arrow is shown on the parent cluster node and on the parent management server node to indicate that an error has occurred on the child host. If necessary, you can change Veeam ONE Client settings to display icons next to affected objects only.

- To search for a Veeam Backup & Replication, Veeam Backup for Microsoft 365, virtual infrastructure, VMware Cloud Director or Business View infrastructure component, use the search field at the top of the inventory tree. Search results depend on the selected view.
- To expand/collapse all tree nodes, right-click the root node in the inventory pane and choose Expand all/Collapse all from the shortcut menu.
- To show all objects with errors and warnings in the hierarchy, right-click the root node in the inventory pane and choose Show all error objects from the shortcut menu. Veeam ONE Client expands all nodes that have child objects with registered errors or warnings.
- To hide and show the inventory pane, use the collapse/expand arrow at the top right corner of the inventory pane.
- To change the inventory view display settings, click the ellipsis at the bottom right corner of the inventory pane and select the necessary options. For more information on changing display settings, see Other Settings.
The Veeam Backup & Replication tree displays a hierarchical list of connected Veeam Backup Enterprise Manager servers, Veeam Backup & Replication servers, and components of the backup infrastructure — backup proxies, backup repositories, WAN Accelerators, tape servers, cloud repositories, and cloud gateways. The Veeam Backup for Microsoft 365 tree displays a hierarchical list of connected Veeam Backup for Microsoft 365 servers and components of the backup infrastructure — backup proxies and backup repositories. The Virtual Infrastructure tree displays a hierarchical list of virtual infrastructure objects — vCenter Servers/SCVMM servers, clusters, hosts, folders, VMs, storage objects and so on. It shows the virtual infrastructure in inventory terms, similar to vCenter Server/SCVMM topology presentation. If you connect a VMware Cloud Director server to Veeam ONE, the Virtual Infrastructure inventory tree displays vCenter Servers attached to VMware Cloud Director and VMware Cloud Director VMs. To hide VMware Cloud Director VMs from the Virtual Infrastructure inventory, enable the Hide VMware Cloud Director VMs from Virtual Infrastructure tree option in Veeam ONE server settings. For more information on the VMware Cloud Director display settings, see Other Settings. The VMware Cloud Director tree displays a hierarchical list of VMware Cloud Director objects — provider VDCs, organizations, organization VDCs, vApps, and VMs. The Business View tree displays a hierarchical list of categorization groups configured in Business View. It presents the infrastructure topology in business terms and allows you to monitor, alert and report on custom categorization units in your environment. By default, Veeam ONE Client hides the Uncategorized group for all Business View categories in the inventory tree. To make it available in the Business View hierarchy, disable the Hide ungrouped objects from Business View tree option in Veeam ONE Client server settings. 
For more information on changing Business View display settings, see Hiding Ungrouped Objects. The Alarm Management tree displays the list of available alarm types. Use the Alarm Management view to manage predefined alarms or create new alarms.
OPCFW_CODE
Is Kite for Python good? Kite provides good data storage and integration features with database servers, and more features for coding in Python. It can be integrated with many popular IDEs like PyCharm, VS Code, IntelliJ, Sublime, Spyder, etc. It significantly reduces time when coding redundant programs.

Which is better, Spyder or PyCharm? Spyder is lighter than PyCharm simply because PyCharm has many more plugins that are downloaded by default. Spyder comes with a larger library that you download when you install the program with Anaconda. But PyCharm can be slightly more user-friendly because its user interface is customizable from top to bottom.

Is it safe to fly a kite? Keep in mind these common safety precautions: don't fly near people, especially young children. Don't fly near airports. Don't fly your kite in winds stronger than recommended. Never fly in stormy weather.

Can a kite go to space? There is no air in space, so the forces needed to fly a kite cannot act on it. Even if a kite were carried up by a rocket, it has no engines to propel it forward and maintain its elevation, so it is not possible to fly a kite in space.

Can you fly a kite in a thunderstorm? No, it isn't safe – though it does have its place in the history of science. In June 1752, the American polymath Benjamin Franklin flew a kite during a storm, using it to investigate his theory that lightning is a form of electricity.

Can you get electrocuted by a kite? Every year in this country, children are electrocuted when their kite strings come in contact with a power line. Even though kite string is not a conductor of electricity, it can easily become contaminated with dirt and sweat, which will conduct the electrical current down the kite string.

What happens if lightning hits a kite?
As soon as any of the Thunder Clouds come over the Kite, the pointed Wire will draw the Electric Fire from them, and the Kite, with all the Twine, will be electrified, and the loose Filaments of the Twine will stand out every Way, and be attracted by an approaching Finger.

Is kite flying illegal in India? In many cities, kites are also flown on the day of Makar Sankranti, and people fly kites in Delhi on August 15. But did you know that flying a kite can get you into trouble? Yes, kite flying is a crime according to the law of India!

Is cursing illegal in India? Section 294 of the Indian Penal Code lays down the punishment for obscene acts or words in public. The law does not clearly define what constitutes an obscene act, but it enters the domain of the state only when it takes place in a public place to the annoyance of others. …

Is shaving your eyebrows illegal in India? There is no law in India that forbids a person from shaving his/her eyebrows. But if you shave a person's eyebrows without their consent, it may amount to assault.

Is dating legal in India? Cohabitation in India is legal. It is prevalent mostly among people living in metro cities in India.

Is it OK to kiss before marriage in India? In India, most marriages are still arranged, and the rate of sex before marriage is low, according to a government survey, so passionate kissing among the unmarried has long been discouraged.
OPCFW_CODE
Performance of AES-CTR + HMAC-SHA-1

I'm doing a performance test on AES in CTR mode with HMAC-SHA-1 for message authentication, and found the openssl speed tool for that. I run multiple tests with openssl speed -evp sha1 aes-128-ctr aes-128-gcm because I want to compare it to GCM mode, which does encryption and message authentication in a single mode. Now my question: how can I compare these values? Is the value of the sha1 result the "speed" of the combination of CTR + HMAC-SHA-1 because it's the bottleneck, or do I have to subtract the two values to get the combined speed of CTR + HMAC-SHA-1?

HMAC-SHA-1 uses two calls of SHA-1: one long (hashing almost as much data as the message itself) and one short, a single SHA-1 block (512 bits for SHA-1). Short messages suffer from initialization overhead, so you might need to consider your real use case. For the interpretation of the output, see How can I interpret openssl speed output? from [so].

This is mostly off-topic. There aren't any AES-CTR cipher suites. Although CTR is used as the underlying tech for GCM, the MAC-authenticated cipher suites are all based on CBC, unfortunately using MAC-then-encrypt. Of course, as CTR and CBC use the same number of block encryptions, the speed difference should be negligible for sane implementations (but there is a lot of insanity in this world).

I'm voting to close this question because it is about interpreting the result of a cryptographic library. There is already HMAC speed on [so].

Is the value of the sha1 result the "speed" of the combination of ctr + hmac sha1 because it's the bottleneck or do I have to subtract these two values to get the combined speed of ctr + hmac sha1?

First of all, you should use AES-CBC just to be sure that you are using the right combination of algorithms. AES-CBC is much slower than AES-CTR on my machine. This is probably due to buffering; it is possible to precalculate large parts of the key stream for AES-CTR.
TLS packets are usually about 1.5 KB in size, so you could use the 1024-byte blocks as the best indicator. I've done the speed test with SHA-1, AES-CBC as well as AES-GCM. So let's use the following values:

SHA-1 (representing HMAC-SHA-1): 1467708 kB/s
AES-CBC: 1330523 kB/s
AES-GCM: 3346640 kB/s

Instead of kB/s you should really be looking at ns per kB; that way you can add the processing times together and then calculate the result back to kB/s as you want. Now we can calculate the speed of SHA-1 + CBC by performing:

$$T_{SHA-1\&CBC} = {1 \over {1 \over T_{SHA-1}} + {1 \over T_{AES-CBC}}}$$

where $T$ is the transfer speed in bytes per second. This results in 697876 kB/s for SHA-1 + AES-CBC. That means that AES-GCM is about 4.8 times faster than SHA-1 + AES-CBC for a normal, unthreaded implementation. This assumes that HMAC has the same speed as SHA-1 (which is approximately true in all probability, even though it has to process a little more data).

Thanks a lot for your detailed answer. The only thing I don't get is why you chose CBC mode? I chose CTR + HMAC-SHA-1 because that's the default for the SRTP protocol.

Ah, sorry, I thought you'd use TLS, because you were referencing OpenSSL. I don't have much time right now; can you get along with the given answer to calculate the aggregate result? I presume that most protocols will not multi-thread one specific connection, so it should in that case be a valid way of calculating the speed.
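The arithmetic behind the formula above is easy to check: per-kilobyte processing times add, so the combined throughput is the harmonic combination of the two throughputs. A quick sketch in Python using the kB/s figures quoted in this answer:

```python
# Combining sequential processing stages: time per kB adds, so
# 1/T_combined = 1/T_sha1 + 1/T_cbc. Figures are the kB/s values above.
t_sha1 = 1_467_708  # SHA-1 throughput, kB/s
t_cbc = 1_330_523   # AES-CBC throughput, kB/s
t_gcm = 3_346_640   # AES-GCM throughput, kB/s

t_combined = 1 / (1 / t_sha1 + 1 / t_cbc)
print(round(t_combined))             # ≈ 697876 kB/s for SHA-1 + AES-CBC
print(round(t_gcm / t_combined, 1))  # ≈ 4.8x speedup for AES-GCM
```

The same rule generalizes to any pipeline of sequential stages; a bottleneck-only estimate (taking the slower of the two speeds) would overstate the combined throughput by almost a factor of two here.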
STACK_EXCHANGE
Thumbs up for late night uploads 😀 x *As I mention in this video!!! This is Not for everyone. Do your research and try at your own risk. If you have very thick dark hair I would ask someone else who has done it before trying. IT IS A MYTH that your hair will grow back as a beard. Don’t always believe what you hear and do your research as I previously said! FALL MAKEUP LOOKS! *BLUE/GREEN SMOKE FALL MAKEUP: https://youtu.be/togd_wYjD5Y *COPPER & PLUM FALL MAKEUP: https://youtu.be/TbQribnXNQI *PURPLE SMOKE FALL MAKEUP: https://youtu.be/cQhRtFhC-YQ *MAUVE FALL MAKEUP: https://youtu.be/TeIdsBi1GbA For Pics & More check out my blog: http://www.thebeautybybel.com/2015/11/how-to-shave-your-face.html Hey beauties! Ever since I mentioned this in my September favorites video, I have seen a lot of requests for a tutorial! As I mention in the video – the past two days I couldn’t wear makeup. So instead of taking the day off I figured – Perfect time to teach this! Make sure you are doing this on dry clean skin! Don’t use lotion beforehand either. AGAIN do your homework and make sure to ask permission if you are younger!! I personally never did this before a few months ago. So don’t feel like you HAVE to. This is for all the ladies who asked!! I know its a weird topic but IDC thats what i’m here for! Any tricks I learn I will always share with you 🙂 & Thanks to my muffin NIC For always teaching me fun new tricks!!! Goodnight guys!! XO Carli *Face Razors: http://amzn.to/1JN8bD3 *Image Skincare Vital C Repair Cream http://amzn.to/1PtTNq3 *MAC Boldly bare lip liner: http://bit.ly/1tyF59w *I’m not sure my nail color it is gel! I will find a Dupe! x *Headband is Asos: http://bit.ly/1Sd7yt7 *For my upper lip i’ve always used this: http://amzn.to/1iIY2Be It doesn’t work on everybody but I am allergic to wax. 🙁 That hair is different than the rest of your face, so if you have darker hair I would recommend waxing there! 
*I do this 1 – 2 times a month *I wouldn’t use a regular razor for this- single blade is better -Removes dead skin cells -Removes Peach Fuzz -Gives your skin a Glow -Anti aging treatment -Products absorb better on your skin -Makeup goes on smoother Shout out to my cute pimple on my chin starring in this video. He had to say hi. *ALSO it IS a MYTH that your hair will grow back darker and thicker. For your face at least 😀 *Send me LETTERS!!! PO Box 6639 Bridgewater NJ 08807 *SUBSCRIBE to Brett’s FITNESS & HEALTH channel! Follow me on Instagram: http://www.instagram.com/TheFashionBybel – My Fashion Page for OOTD Follow me on SNAPCHAT! CarliPenguin5 Come check out my beauty page for quotes & love! Disclaimer: This video is not sponsored. Some links may be affiliate links.
OPCFW_CODE
The Basic Principles of SQL Assignment Help

This can also be a good idea if you change the structure of an object and old versions of it are still in some users' cookies. With server-side session storage you can simply clear out the sessions, but with client-side storage this is hard to mitigate.

User comments in this section are, as the title implies, provided by MySQL users. The MySQL documentation team is not responsible for, nor do they endorse, any of the information provided here. Posted by Misha B on April 21, 2011.

All your current filters are displayed in individual boxes on the right side of the screen. Each filter adds to the last, so a record must satisfy all of the filter criteria to be included in your results. You can click the close

By default, Rails logs all requests made to the web application. But log files can be a huge security problem, as they may contain login credentials, credit card numbers, et cetera. When designing a web application security concept, you should also consider what will happen if an attacker gains (full) access to the web server.

Each area of the site includes a small help icon linking you directly to details about that area, with in-depth explanations of the contents and features provided.

This assumes you posted the subtraction backward; it subtracts the numbers in that order from your balance, which makes the most sense without knowing more about your tables. Just swap those two to change it if I was wrong:

Description: If you want to learn how to gain insights from data but are too intimidated by databases to know where to start, then this course is for you.
This system is a mild but in depth introduction to MySQL, one of the most extremely in-desire competencies while in the small business sector right now. In the event you roll your very own, make sure to expire the session soon after your sign in motion (in the event the session is produced). This tends to get rid of values through the session, therefore you will have to transfer them to The brand new session and declare the aged a person invalid immediately after a successful login. That way, an attacker simply cannot make use of the fastened session identifier. This is the good countermeasure versus session hijacking, as well. Here's how to produce a new session in Rails: . An attacker can synchronously begin image file uploads from many pcs which increases the server load and should ultimately crash or stall the server. The sanitized versions on the variables in the 2nd Element of the array change the concern marks. Or you can pass a hash for a similar result: 2008 Update - For a whole therapy of The subject of Oracle safety on the internet, see these guides and means: You should be able to use circumstance statements and complete this module by talking about facts governance and profiling. You will also manage to utilize elementary concepts when making use of SQL for facts science. You can use suggestions and methods to apply SQL in a knowledge science context. The very first set of statements displays 3 ways to assign customers to consumer groups. The statements are executed from the person masteruser, which isn't a member of the user group outlined in any WLM queue. No query team is ready, so the statements are routed towards the default queue. The consumer masteruser can be a superuser as well as the query group is set to 'superuser', Therefore check that the query is assigned to your superuser queue. The user admin1 is usually a member of the user group mentioned in queue 1, Therefore the query is assigned to queue one.
OPCFW_CODE
import { createTag } from '../../scripts/scripts.js';
import {
  attachNextAction,
  updateField,
} from '../../../templates/orderbytext/orderbytext.js';

export default function decorate(block) {
  const selectorSection = block.closest('.selector-container');
  const contentContainers = selectorSection.querySelectorAll('.default-content-wrapper');
  const itemsContainer = contentContainers[1];
  itemsContainer.classList.add('gallery');

  const items = createTag('div', { class: 'gallery_scroller' });
  let item;
  // Iterate element children only (text nodes have no querySelector), and take
  // a static copy so clearing the container later is safe.
  const children = [...itemsContainer.children];
  for (const itemContent of children) {
    const picture = itemContent.querySelector('picture');
    if (picture) {
      // We're at the beginning of an item
      if (item) {
        items.append(extractAltText(item));
      }
      item = createTag('div', { class: 'gallery_item' }); // Reset
      item.addEventListener('click', function () {
        handleItemSelect(this);
      });
    }
    if (!item) continue; // skip any content that appears before the first picture
    item.append(itemContent.cloneNode(true));
    const title = extractItemTitle(item);
    if (title) {
      item.id = title;
    }
  }
  items.append(extractAltText(item));

  itemsContainer.innerHTML = '';
  itemsContainer.append(items);

  const nextButton = createTag('button', {}, 'Next');
  itemsContainer.append(nextButton);
  attachNextAction(nextButton);
  updateField('Product', 'some Flowers');
}

function handleItemSelect(item) {
  const selected = item.parentNode.querySelector('.selected');
  selected?.classList.remove('selected');
  item.classList.add('selected');
  const title = extractItemTitle(item);
  updateField('Product', `the ${title}`, 'some Flowers');
}

function extractItemTitle(item) {
  const title = item.querySelector('h3');
  return title?.textContent;
}

function extractAltText(item) {
  item?.querySelector('img')?.setAttribute('alt', item?.id);
  return item;
}
STACK_EDU
I need some simple corrections to this Silverlight wireframe: [url removed, login to view] The corrections are presented here as questions/suggestions provided by a reviewer. I will rephrase them into more concrete, less ambiguous actions to be taken. The quote given should cover implementing these corrections.

a. Inter-page linkages and a startup page can help, like a dummy entry to the application (a simple login form leading to a dummy dashboard page can help). Also, when I go to the detailed guest view there is no way to come out of there… an exit button can tell us a lot during usability tests, particularly indicating that the user got confused and left without doing the task or after doing the task.

b. Since we will devise scenarios based on each of the modules of the software (e.g. the Event component or the Guests list component), we may need to explicitly indicate these components on the pages, by titles at the top.

c. The links on the left-hand side of the SketchFlow prototype can be distracting to the user. The idea is that in a prototype we should try to show the software as close to the real environment or scenario as possible. So just as we are using a web page and hence the web environment can be identified, we can also remove/hide the navigation panel on the left, make the top links active, and make it feel (if not look) like a real application.

d. Can we avoid the handwriting font in the prototype to make it more readable, since testing the labels becomes easier if we do so.

e. Mocking up some dummy functionality can also help; for example, when you add a new event, nothing happens currently. Can we have some dummy form displayed for the user to fill? Also, deleting or removing an event or a guest does not show any visual change currently. Some change can help communicate to the user that the task is done. Note: there are loads of places in which the functionality is missing; however, adding those which we plan to test will be useful.
We don't want to frustrate the volunteer during testing.

f. Although I see that SketchFlow has a good tool to let users give feedback, I would recommend that you take printouts of the screens to the users; at the end of the tests the users can mark on them what they think should improve. Drawing on screen is not easy for all users, and pen/pencil is the most basic interaction people are accustomed to for expressing themselves.

Now for some detailed input on each component:

- The two lists, Upcoming events and Past events, have only one set of Edit/Delete/Create buttons. Obviously the user can get confused by the mapping. Suggestion: we can use tabs for "upcoming events" and "past events". This can help in using more screen space for the complete list of events.
- The calendar on the right needs to be tested. We can make better use of the calendar by having today's date at the top, along with displaying events on the calendar, etc. We can also make a calendar-based view for the interface (something along the lines of Google Calendar). Need to see some possibilities for the same.
- In the spec the following interaction is specified: when clicking the title of an event, edit event for that event is opened. Alternatively, the user can select an event and click "Edit selected". It is only possible to edit one event at a time. If multiple events are selected, there should be a popup warning saying "Please select only one event". Such interactions can be irritating to the users; if the user cannot edit more than one event at a time, then the possibility of selecting more than one event at a time should be avoided. I understand that the problem arises when the user has to delete more than one event. This can be solved using the following pattern.

a. Keep the event title in the list as a hyperlink, along with the same checkbox structure. That way one can click on an event to view/edit it, and still multi-select while deleting.
This avoids displaying the error "please select only one event". A similar pattern is implemented in the Gmail inbox, if you observe it in detail: the mails are the events, and a delete button exists but no view button exists. We need to answer the question: what is the goal of the user in the events panel, and what are the scenarios in which it may be used?

- I have some questions here. Are the Guestlist page and the Event details page the same? If so, then we should change the title, since it can confuse the user further on.
- The "Import from Excel or previous events" interaction needs to be detailed out in the prototype, since the interaction of the user adding CSV contacts will be very different from that of adding from previous events.
- The add/edit guests interaction is a bit different from the edit guests interaction in the list above. There needs to be some consistency in the interaction for better usability. We can keep the add guestlist entry as a hidden-div-type button which, on click, expands on the same page. So the user uses that section only when he needs it, and the rest of the time the page remains clean.

How does the admin choose a particular event to enter spend? Can he see previous expenditure across events? Can we have graphs here to display spend by various users in the past… Basically, instead of just keeping this section as "Enter Spend", how about making it "Spend Details", hence including input as well as a view of all the spend in an event or by a specific guest across events.

7 freelancers are bidding on average $772 for this job

Hello :) I have lots of experience programming in Silverlight and WPF/RIA webservices. I can make your requested modifications in 10 days. Regards, Alexey

Hi, I have around 3 years of experience in WPF and Silverlight. I have a good knowledge of SketchFlow and will deliver the updated wireframes within a week. The amount is also negotiable as I am in need of wor

Dear Mr./Mrs., I have worked with a world-class ergonomics expert on a few different projects to make their GUIs perfectly pleasant and easy to operate for users having an IQ of maximum 90. So I have good experience in crea

Hi, We are ISTQ Solutions Pvt Ltd, an ISO 9001-2000 certified company, and are a premier custom software development and testing consulting company committed to developing effective software solutions with tangible
OPCFW_CODE
Over the past few semesters I have been involved in a mentoring group at ASU called Sundial. In Sundial, older undergrads and graduate students come together to mentor incoming freshmen. We share our own college experiences, provide resources on things from finding tutoring to research opportunities, and in general provide a friendly space for everyone to feel accepted and welcomed. This year, as we were getting to know the mentees, I was asked by a freshman if I would be willing to help her with a project for another class. She was required to interview someone in her major so that she could learn more about the direction she wants to take through college. I was happy to be involved and ended up responding to the questions via email. As I finished going through the questions I realized these were fun questions that others may enjoy reading my answers to! So, here they are 🙂

Why did you choose to pursue this major? I liked being able to problem solve. I felt that I was good at finding solutions or thinking critically about a problem. It allowed me to apply my skills in math to something that felt more practical or realistic, and I wanted to keep learning new things. Also – I think it's cool 🙂

What made you major in Astrophysics? Really early on I knew I liked science… I grew up being fascinated when I learned how the world around me worked, I enjoyed reading biographies about physicists like Isaac Newton and Albert Einstein… and I always thought I would be a middle school or high school science teacher. It took a while before I realized I could envision myself as the person who does science rather than just the person who teaches it. I had some really great teachers who pointed me in that direction 🙂 In my high school science classes I learned that I was much more inclined to physics than other sciences. I liked that I could use equations/math to predict things. I got to college with the intention to major in physics.
I took classes in astronomy because I had always thought it was cool, with no real thought to whether or not it was practical. My physics degree required that I take a computer science class, and with just the most introductory glimpse, I was enthralled. Remember how I liked using equations to predict things? Well, now I could use computers to do that for me! I could write programs to recreate the world around me, and I could tweak things just to test what would happen. It was like I could do experiments on the universe. After just a little searching I realized I could combine all of my passions into one thing: Computational Astrophysics.

Who do you work with during research?

I work with Evan Scannapieco, an astrophysics professor at ASU, as well as Marcus Bruggen and Wladimir Banda-Barragan at the Hamburg Observatory. Marcus is an astronomer who has previously collaborated with Evan on work very similar to mine. Wladimir is a post-doc at the observatory who has had the most experience with the types of simulations I am doing.

Who did you talk to about research?

In my undergrad, I talked to one of my professors about working with her. I had taken one of her classes in my first semester, I liked her, and I wanted to have some kind of research experience. She was very accommodating and gave me resources to learn about what she studied. After that first meeting, we would get together to discuss what I had read and how I could build a project to do my own kind of investigation. After working with her for a while, she recommended me for a summer research position with another faculty member who became my mentor as I applied to graduate school. For graduate school, I talked to a lot of people in the process of applying. I had mentors who helped arrange meetings, and I was encouraged to go to talks given by people who might be doing what I was interested in. I talked to graduate students who were working with advisors I was considering.
Ultimately, when I applied to a school I had to have a list of people I was interested in working with as part of my application. Upon being accepted to ASU, it was already decided that I would be mentored by one of the people I had mentioned. Not all graduate schools work this way; sometimes you need to discuss working with potential advisors in your first semester before deciding on one. Some programs, like ASU's physics department, facilitate this by having new graduate students work in several labs in their first year as a requirement of the degree.

What type of research do you do?

I study the gas escaping from and interacting with galaxies. As stars are born, live, and eventually die, they cause gas to flow out of the galaxy like a wind. This gas impacts the evolution of the galaxy and others that may form nearby. I am learning about what this gas looks like, what it is made of, and what kinds of physics affect its behavior, so that we can understand these winds' roles in the greater evolution of galaxies. I do this by running computer simulations of this gas and studying how it evolves; this makes me a computational astrophysicist. My work primarily involves what are called hydrodynamic simulations, which means they predict the way fluids (the hydro part) move (the dynamic part). My most recent simulations also consider what magnetic fields do and how they change the way the gas moves; these are called magnetohydrodynamic simulations. My simulations are run on supercomputers and take several days to complete. Most of my time is spent making plots of the simulations and turning all the data created into something we can understand and make conclusions from.

Would taking a coding class help with this major?

Yes! Being familiar with programming would be very helpful in astrophysics. Even if you don't do computational work like I do, most data analysis is done by writing code to understand the data.
Coding can help you make plots, calculate important values, and even automate things like searching through images. Most people who end up in astrophysics without a previous coding background find that they are teaching themselves programming in order to do their best work.

Would taking classes during the summer be good for this major?

It depends on what else you are trying to accomplish. In general, if you are able to just spend your time on the major, you should be able to get the classes done without taking any over the summer. However, if you want to double major, or you want to minor in something not science related, summer classes can be a good way to spread out your load so that you have more time to devote to each class. If you're not pressed for time to complete all the necessary classes, I would recommend leaving your summers open for internships, research experiences, and summer jobs. Being well rounded makes you a good scientist too 🙂

What advice do you have for someone who is going into this major?

I would advise taking the time to learn about all the different topics included in the major: instrument design, planets, galaxies, cosmology. If there's a talk you're vaguely interested in, go to it! There are a lot of things under the astrophysics umbrella 🙂 I would also advise staying on top of your math classes. A lot of physics hinges on being able to intuitively understand the equations that describe it. You will inevitably run into something that stumps you (for me it was using linear algebra to do quantum mechanics), but having the ability to work through and understand the math will give you an easier time of figuring things out! Finally, I would always recommend finding people in the major to relate to: to bounce ideas off of for homework, to commiserate with over a tough test, or to share your excitement with when you learn something new 🙂

What are the options for jobs for people with this major?
When considering just astrophysics, you have somewhat limited options for a job – mostly staying in academia to work as a professor or research scientist. These jobs will let you continue to do research in astrophysics and will allow you to apply most of your expertise. However, there are also a lot of options that don't involve astrophysics directly. You can work for commercial companies looking into space, or for national labs. You can also apply your skills in data analysis, critical thinking, and problem solving to other industries: tech companies, medical fields, environmental causes.

How is astrophysics going to help me in the job field?

If you aim to work for a research group or as a professor, all of your astrophysics experience is going to be very helpful. If you end up looking for jobs outside of astrophysics directly, you will have to leverage your experience into something more applicable to the job. This means you'll have to highlight the other skills you've learned along the way over your knowledge of how the universe works. Pursuing a degree in astrophysics gives you great problem-solving skills; it teaches you how to look at problems from different perspectives and how to interpret data to make a conclusion. You learn how to communicate complex topics to others and how to reduce data down into something useful or relevant. These are the kinds of skills that you'll pick up while also learning about something really fascinating at the same time 🙂
My name is Jorn and I am a student at Fontys in Eindhoven, Netherlands. Please excuse my English. For a school project we are using 4 Pololu 24v23 Simple Motor Controllers that are being controlled by a Parallax proto board. We use a 24V/8.2A power supply for testing two 24v23s attached to two DC motors with encoders. The wheels are floating in mid air. We can drive the wheels in all directions and it works, but one of the two controllers is getting an unusual number of errors. The other one is getting almost 0 errors. But when we disconnect the "0-error" 24v23 controller, the other one gets fewer errors than before, but still a lot. At the moment we are clueless as to why this is happening. I hope this can be solved easily.

I am sorry you are having problems with your Simple Motor Controllers. It sounds like you might have connection issues. Could you post photos of your setup and close-ups of the problematic board? Could you also tell me what errors are occurring (you can get information about the errors by checking the "Errors" box under the "Status" tab of the Simple Motor Control Center)?

Thank you for your fast response. Wednesday is our project day, the only day in the week we work on this project. The errors are serial errors. Most of them are noise errors, some are format errors. We will test the system with a battery today; the problem may be caused by the power supply.

Which unit produces the errors? If it is the one closer to the power supply, your theory that the power supply is the cause of the faults seems reasonable, and changing from the power supply to a battery might help. By the way, from your photos, it is hard to tell if the through-hole capacitors are soldered in. If they are not, you should definitely solder them on.

The capacitors are soldered in. The one that produces the errors is the one closer to the power supply. We have not tested with a battery yet.

From the photograph, it appears that you are attempting to use serial communications in parallel.
That should work for one TX pin on one device communicating with two RX pins on two other devices, but it probably will not work for two TX pins in parallel attempting to communicate with one RX pin. If the two TX pins don't function with identical timing, they will conflict, and that would lead to communication errors.

Nice catch, Jim! Jorn, if you want to communicate with both SMCs, you might consider using the TXIN pin to daisy-chain the two controllers. You can read more about this in the "Daisy Chaining" section of the Simple Motor Controller user's guide.
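Daisy-chaining works because of how the Pololu protocol frames commands: each frame begins with the byte 0xAA, then the target device number, then the command byte with its most significant bit cleared, so several controllers can share one serial line and ignore frames addressed to other devices. A rough sketch of that framing in Python (the 0x85 "motor forward" command byte, data bytes, and device numbers below are illustrative assumptions; check the SMC user's guide for the real values):

```python
def pololu_frame(device_number, command, data=()):
    """Build a Pololu-protocol frame: 0xAA start byte, device number,
    command byte with its most significant bit cleared, then data bytes.
    (Framing as described in the Simple Motor Controller user's guide;
    the specific command/data values used below are assumptions.)"""
    return bytes([0xAA, device_number & 0x7F, command & 0x7F]) + bytes(data)

# Hypothetical example: address a 'motor forward' command (compact-protocol
# byte 0x85) to two daisy-chained controllers, device numbers 13 and 14.
frame_a = pololu_frame(13, 0x85, [0x00, 0x40])
frame_b = pololu_frame(14, 0x85, [0x00, 0x40])
```

Because each frame names its target device, both controllers can listen on the same line and only act on their own frames, which avoids the parallel-TX conflict described above.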
As I've said earlier, I was looking into a new library for implementing IRC support. I now have a very basic client working. It currently does not handle any mode changes, except for a few simple ones... but you can send messages and receive them.

That already sounds great! How did you go about the license problem?

Well, the easy way: use a different library (http://code.google.com/p/irc-api/). This one is Apache License 2.0-licensed, which should be liberal enough if I understand it correctly. The other library already had a question open regarding the GPL licensing "issue", but since there are many contributions and it's already a fork of another project, it's really hard to legitimately relicense it. I've included a text file irc-api-1.0-NOTICE that describes the additional modification I have made to the library. This is also reported upstream but wasn't included yet in the latest version. During the implementation I've also fixed a few minor issues, some with respect to tab-completion of user names. Now, I am actually wondering how to proceed.

1. What is the preferred way to deliver changes: a bunch of smaller patches for these fixes and then IRC support as a big patch when a "first version" is done? (In case of small patches, it might be interesting to send some of these already ...)

Either post them here on the dev-list or create a pull request on GitHub.

Okay, so I'll post a few small fixes up front. The remaining code should be as closely related to the IRC implementation as possible, since other patches are already submitted.

2. How can I best approach unit testing? I get the impression from the Developers documentation that I would have to create a bunch of accounts (10+, if I am not exaggerating ...)

Well... I guess most of us just don't run them, I definitely don't. What you can do to run only some of them is to change the net.java.sip.communicator.slick.runner.TEST_LIST property in lib/testing.properties. The SlicklessTests should be able to run without any.

Ah right.
I did think of that, but it felt like cheating.

Depending on how many unit tests you want to write for IRC, maybe try to create them with as little server interaction as possible. This way they wouldn't need so much of the OSGi Slick setup around and wouldn't be so prone to errors due to network circumstances... test/protocol/sip/TestAutoProxyDetection.java is one of those plain-JUnit ones, as an example.

I'll keep that in mind. I understood that there are some tests that are somewhat flaky. At least, judging by some of the posts that recently came by here.

PS: In case people are curious, I use GitHub to back up this code during development. It can be found at https://github.com/cobratbq/jitsi/tree/ircapi (branch 'ircapi'). Also note that, currently, I am shamelessly rebasing the code upon the HEAD of jitsi, so expect history to change for this repo ...

I love rebasing

On 12/02/2013 10:43 PM, Ingo Bauersachs wrote:
# Introduction

This page describes how to set up NetBSD to use the Linux LVM tools and libdevmapper. For now, my work was done on the haad-dm branch in the main NetBSD repository. I want to merge my branch back to the main repository as soon as possible.

LVM support has been merged to NetBSD-current on the 23rd of December 2008.

# Details

Tasks needed to get LVM working on NetBSD:

* Get the latest sources
* Compile a new kernel and tools
* Create PVs, VGs and LVs and enjoy :)

## How to update/checkout sources

You can check out the latest sources with these commands:

    $ export CVS_RSH="ssh"
    $ export CVSROOT="firstname.lastname@example.org:/cvsroot"
    $ cvs checkout -dfP src

Or update with:

    $ export CVS_RSH="ssh"
    $ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
    $ cvs update -dfP

Only 3 directories were changed in my branch:

1. sys/dev/dm
2. external/gpl2/libdevmapper
3. external/gpl2/lvm2tools

## How to set up a NetBSD system to use LVM

The easiest way is to build the distribution and sets. You need to build with the flag MKLVM set to yes:

    $ cd /usr/src
    $ ./build.sh -u -U -V MKLVM=yes tools distribution sets

After a successful build, update your system with them. You can also run make install (as root) in src/external/gpl2/lvm2 to install the userland part of LVM into an existing NetBSD system. There is also a simple driver used by the LVM2 tools in our kernel, in src/sys/modules/dm; you have to install and load this driver before you test LVM. The NetBSD LVM uses the same tools as Linux and therefore has the same user interface as is used in many common Linux distributions.
### How to compile a new kernel

The kernel compilation procedure is described at <http://www.netbsd.org/docs/guide/en/chap-kernel.html#chap-kernel-build.sh>. To get the device-mapper compiled into the kernel, you have to add this option to the kernel config file:

    pseudo-device dm

### Using new MODULAR modules

There are two versions of modules in NetBSD now: old LKM and new MODULAR modules. New modules are built in sys/modules/. All GENERIC kernels are compiled with support for new modules. There is a reachover makefile for the dm driver in sys/modules/dm; you can use it to build the dm module.

For loading new-style modules, the new module utilities are needed. You need to add ... to /etc/mk.conf and build *modload*, *modstat* and *modunload*.

### Compile lvm2tools and libdevmapper

To get LVM working, you need to compile and install the Linux LVM tools. They are located in:

* external/gpl2/libdevmapper
* external/gpl2/lvm2tools

Only make/make install is needed to build/install the tools on a machine. The tools are not integrated into the lists and build system now; therefore it is possible that if you try to add them to the build process it will fail (with a "too many files in destdir" error).

## Using LVM on NetBSD

lvm2tools are used to manage your LVM devices:

    lvm pvcreate /dev/raw_disk_device       # create Physical Volume
    lvm vgcreate vg00 /dev/raw_disk_device  # create Volume Group -> pool of available disk space
    lvm lvcreate -L20M -n lv1 vg00          # create Logical Volume aka logical disk device
    newfs /dev/vg00/rlv1                    # newfs without -F and -s doesn't work
    mount /dev/vg00/lv1 /mnt                # Enjoy

After reboot, you can activate all existing Logical Volumes (LVs) in the system with the command:

    lvm vgchange -a y

You can use:

    lvm lvdisplay
    lvm vgdisplay
    lvm pvdisplay

to see the status of your LVM devices.

You can also use lvm lvextend/lvreduce to change the size of an LV.
I haven't tested my driver on an SMP system; there are probably some bugs in locking, so be aware of it :) and do not try to extend/reduce a partition during I/O (something bad can happen).

* Review locking - I will probably allow only one ioctl call to be inside the dm driver at a time. *DONE*
* Write snapshot driver - only a skeleton was written yet.
* Add ioctls needed for correct newfs functionality. I tried to implement them in device-mapper.c::dmgetdisklabel but it doesn't work yet. *DONE*
* Write an lvm rc.d script to enable LVM before disk mount so we can use LVM for system partitions. *DONE*
import torch
from torch import nn

from losses.cross_entropy_label_smooth import CrossEntropyLabelSmooth
from losses.triplet_loss import TripletLoss
from losses.contrastive_loss import ContrastiveLoss
from losses.arcface_loss import ArcFaceLoss
from losses.cosface_loss import CosFaceLoss, PairwiseCosFaceLoss
from losses.circle_loss import CircleLoss, PairwiseCircleLoss


def build_losses(config):
    # Build classification loss
    if config.LOSS.CLA_LOSS == 'crossentropy':
        criterion_cla = nn.CrossEntropyLoss()
    elif config.LOSS.CLA_LOSS == 'crossentropylabelsmooth':
        criterion_cla = CrossEntropyLabelSmooth()
    elif config.LOSS.CLA_LOSS == 'arcface':
        criterion_cla = ArcFaceLoss(scale=config.LOSS.CLA_S, margin=config.LOSS.CLA_M)
    elif config.LOSS.CLA_LOSS == 'cosface':
        criterion_cla = CosFaceLoss(scale=config.LOSS.CLA_S, margin=config.LOSS.CLA_M)
    elif config.LOSS.CLA_LOSS == 'circle':
        criterion_cla = CircleLoss(scale=config.LOSS.CLA_S, margin=config.LOSS.CLA_M)
    else:
        raise KeyError("Invalid classification loss: '{}'".format(config.LOSS.CLA_LOSS))

    # Build pairwise loss
    if config.LOSS.PAIR_LOSS == 'triplet':
        criterion_pair = TripletLoss(margin=config.LOSS.PAIR_M, distance=config.TEST.DISTANCE)
    elif config.LOSS.PAIR_LOSS == 'contrastive':
        criterion_pair = ContrastiveLoss(scale=config.LOSS.PAIR_S)
    elif config.LOSS.PAIR_LOSS == 'cosface':
        criterion_pair = PairwiseCosFaceLoss(scale=config.LOSS.PAIR_S, margin=config.LOSS.PAIR_M)
    elif config.LOSS.PAIR_LOSS == 'circle':
        criterion_pair = PairwiseCircleLoss(scale=config.LOSS.PAIR_S, margin=config.LOSS.PAIR_M)
    else:
        raise KeyError("Invalid pairwise loss: '{}'".format(config.LOSS.PAIR_LOSS))

    return criterion_cla, criterion_pair


def DeepSupervision(criterion, xs, y):
    """Apply `criterion` to each element of `xs` and sum the losses.

    Args:
        criterion: loss function
        xs: tuple of inputs
        y: ground truth
    """
    loss = 0.
    for x in xs:
        loss += criterion(x, y)
    # loss /= len(xs)
    return loss
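The build_losses function above is essentially a name-to-constructor dispatch driven by the config. The same pattern can be sketched without the torch and losses dependencies using a plain dict registry (the loss names and classes below are simplified stand-ins for illustration, not the real implementations):

```python
# A torch-free sketch of the dispatch pattern used in build_losses:
# map each configuration name to a factory, and raise a helpful
# KeyError for unknown names. Classes here are illustrative stubs.

class CrossEntropy:
    pass

class ArcFace:
    def __init__(self, scale, margin):
        self.scale, self.margin = scale, margin

CLA_LOSSES = {
    'crossentropy': lambda cfg: CrossEntropy(),
    'arcface': lambda cfg: ArcFace(scale=cfg['scale'], margin=cfg['margin']),
}

def build_cla_loss(cfg):
    try:
        factory = CLA_LOSSES[cfg['name']]
    except KeyError:
        raise KeyError("Invalid classification loss: '{}'".format(cfg['name']))
    return factory(cfg)

loss = build_cla_loss({'name': 'arcface', 'scale': 16, 'margin': 0.1})
```

A registry like this makes it easy to add new losses without growing an if/elif chain, which may be worth considering if the list of supported losses keeps growing.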
If you are upgrading from Release 1.1 or 1.2 to Release 1.3, please refer to the ICA-AtoM 1.3 upgrading guide. Otherwise, continue reading below. These instructions are intended for your system administrator and require some basic experience with Linux command-line tools.

Save current data

Dumping your data must be done before updating to the latest version of the application. To migrate your existing data to the new version of the application you will need to create a YAML backup file (dump). Type,

$ php example/path/to/symfony propel:data-dump /exampleFileName.yml

As an extra precaution against data loss, create a MySQL backup of your current data as well,

$ mysqldump -u exampleusername -p yourqubitdatabasename > mysqldump_20100517.sql

See mysqldump for more information on the mysqldump command.

Upgrade application code

See installation for more information on the differences between installing from a tarball archive or a Subversion repository.

Move existing application directory

If you wish to install to the same directory as your previous version, you must remove the old installation directory first. Moving the existing directory, rather than deleting it, preserves any digital objects, custom files or translations for transfer to the new version of the application. If you are installing the new version of the application to a new directory, this step is unnecessary. E.g., moving the previous /var/www/dcb directory to /var/www/old_dcb,

$ mv /var/www/dcb /var/www/old_dcb

Upgrade from a tarball

For example, to extract the dcb-1.0.9.tgz tarball to the /var/www/dcb-1.0.9 directory, type,

$ tar xzf dcb-1.0.9.tgz -C /var/www

See man tar for more information on the tar command.

Checkout from Subversion

Because there may be changes to the configuration files and directory structure between releases, we highly recommend checking out a fresh working copy of the application; you may experience application errors if you choose to update an existing working copy. E.g.,
to check out a new copy of ICA-AtoM to the directory /var/www/icaatom-1.0.8, type,

$ svn checkout http://qubit-toolkit.googlecode.com/svn/branches/ica-atom /var/www/icaatom-1.0.8

See svn checkout for more information on the Subversion "checkout" command.

Run installer

See installation for instructions on running the web-based application installer.

Migrate uploads

Copy your uploaded images, videos, PDFs, etc. from your old application folder to your new application folder by copying the contents of your uploads directory,

$ cp -r old_application_dir/uploads new_application_dir/uploads

See the special instructions below for #Upgrading from 1.0.8 to 1.0.9.

Migrate data

The migration task provides an automated command-line tool for moving data from one version of Qubit or ICA-AtoM to the newest version of the application. It implements the Symfony Task (CLI) framework. To run the migration task, open a terminal window and enter the following command,

$ php example/path/to/symfony propel:migrate exampleFileName.yml

See the special cases section below for additional instructions for particular releases.

Load migrated data

Because the data load process can be very memory and CPU intensive, we recommend disabling the search index when loading any medium to large database (more than 1,000 descriptions). First, clear any existing data out of your database,

$ php example/path/to/symfony propel:insert-sql

Then load the new, migrated data,

$ php example/path/to/symfony propel:data-load migrated_data_20100517141011.yml

Migrate translations

If you have done any custom translations of the user interface, they will be stored in the apps/qubit/i18n directory. Copy them from your old application directory to the new,

$ cp -r old_application_dir/apps/qubit/i18n new_application_dir/apps/qubit/i18n

Your translated metadata (archival descriptions, authority records, archival institutions, etc.) is stored in the database, and will be restored from your data dump.
Rebuild search index

Due to data changes during the upgrade process, you will need to rebuild the search index after upgrading.

Clear cache

Clear your cache to remove any out-of-date data from the application,

$ php example/path/to/symfony cc

See clear cache for more detailed instructions.

Special cases

Patch Releases

These minor releases do not require an update to the data model, which makes upgrading considerably easier. Please see the links below for instructions on upgrading.

From 1.0.8 to 1.0.9

$ cp -r old_application_dir/web/uploads new_application_dir/uploads

From 1.0.7 or earlier

The "aclGroup" module must be specifically enabled to allow administrating user groups in release 1.0.8 and later. Go to file apps/qubit/config/settings.yml,

# Activated modules from plugins or from the symfony core
enabled_modules: [default, aclGroup]

See also

- Migrations - developer documentation describing the mechanics of the migration task.
Hello, and welcome back to my blog. This week I will be sharing what I have been learning about path testing. Path testing is a white-box testing method that uses the source code of a program to design test cases covering every possible executable path. The underlying bug assumption in path testing is that the program has gone wrong in some way, causing it to follow a different path than desired. Path testing uses cyclomatic complexity to establish the number of paths, and then test cases for each path are generated. Developers can choose to execute some or all paths when performing this testing. Path testing techniques are perhaps the oldest of all structural test methods. Path testing is most useful for unit testing new applications. It provides full branch coverage, but does so without covering all possible control flow graph paths.

The four-part path testing process begins with drawing a control flow graph of the software being tested. Next, the cyclomatic complexity of the program is calculated from the number of edges, the number of vertices, and the number of connected components. Then we use the data calculated in the first two steps to find a set of paths to test; the computed cyclomatic complexity equals the set's cardinality. Finally, we develop test cases for each of the paths determined in the previous steps.

Path testing is beneficial because it focuses test cases on program logic. It helps to find faults within the code and reduces redundant tests. In path testing, all program statements are executed at least once. Thank you for taking the time to visit my blog and join me on my growth as a software developer.

As we continue to learn about unit testing and using mocks for unit testing, I wanted to get a better understanding of the subject. I decided to look further into the popular mock framework, Mockito, that we've been learning about.
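To make the cyclomatic complexity calculation from the path-testing discussion above concrete: it is conventionally computed as M = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of the control flow graph. A small sketch (representing the graph as an adjacency list is my own choice for the example):

```python
def cyclomatic_complexity(cfg):
    """Compute M = E - N + 2P for a control flow graph given as
    {node: [successor, ...]}. P is found by union-find over the
    undirected version of the graph."""
    nodes = set(cfg) | {s for succs in cfg.values() for s in succs}
    edges = sum(len(succs) for succs in cfg.values())

    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    for a, succs in cfg.items():
        for b in succs:
            parent[find(a)] = find(b)  # union the two components

    components = len({find(n) for n in nodes})
    return edges - len(nodes) + 2 * components

# A simple if/else: entry -> (then | else) -> exit.
# E=4, N=4, P=1, so M = 4 - 4 + 2 = 2, matching the two
# independent paths through the branch.
graph = {'entry': ['then', 'else'], 'then': ['exit'], 'else': ['exit']}
```

The set of test paths whose cardinality the post mentions is then a basis set of M linearly independent paths through this graph.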
I was able to find a few great articles on mocking and even a few great tutorials on using Mockito. On the Vogella Mockito tutorial page, there is a brief overview and explanation of what mocking is and what a mock object is. There is a simple diagram showing the sequence of what typically happens when you use Mockito in your tests. This tutorial also mentions that when using the Mockito libraries, it is best to do so with either Maven or Gradle, which are supported by all modern IDEs. Here you can also find code examples, and even where to put them in your code. This tutorial is packed with visual representations of the information given, and I find that to be extremely helpful. I would say this particular article/tutorial can be very helpful in bettering one's understanding of how to use Mockito. It's filled with tips and simple-to-understand diagrams, explanations, and code examples. It even dives into using the spy() method to wrap Java objects, mocking static methods, creating mock objects (in the exercise provided, you can create a sample Twitter API), and testing an API using Mockito. I have included in this blog post two other articles I found interesting and informative on the subject of mocks and mocking.

Hello and welcome back to my blog. Today I am going to be sharing about Static and Dynamic Testing.

What is Static Testing?

Static Testing is a method of software testing in which software is tested without executing the code. Static Testing's main objective is to find errors early in the design process in order to improve the software's quality. This form of testing reduces the time spent finding bugs and therefore reduces the cost of paying developers to find them. This is advantageous because we get fewer defects as we near deployment. Static Testing also increases communication among teams. Below, I will give a brief overview of Review and Static Analysis, the two main ways in which Static Testing is performed.
Review is a process that is performed to find errors and defects in the software requirement specification. Developers inspect the documents and sort out errors and ambiguities. In an Informal review, the developer shares their documents and code design with colleagues to get their thoughts and find defects early. After it has passed the Informal review, it moves on to the Walkthrough. Walkthroughs are performed by more experienced devs to look for defects. Next, a Peer review is performed with colleagues to check for defects. Below is a list of free, open-source Static Testing tools:

What is Dynamic Testing?

Dynamic Testing is a software testing method that is performed while the code is executed. It examines the behavior of the software in relation to performance (e.g. RAM, CPU usage). In dynamic testing, the input and output are examined to check for errors and bugs. One common technique for performing dynamic testing is Unit Testing. In Unit Testing, code is analyzed in units or modules, which you may know as JUnit testing. Another common approach to Dynamic Testing is Integration Testing. Integration Testing is the process of testing multiple pieces of code together to ensure that they work together correctly. Below is a list of open-source Dynamic Testing tools:

I hope you find as much value as I do in learning about these testing methods.
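Mockito itself is a Java framework, but the core idea discussed in the mocking section above — replace a collaborator with a programmable stand-in, exercise the code, then verify the interaction — can be sketched with Python's standard unittest.mock (the Notifier class and its client dependency are made up for illustration):

```python
from unittest.mock import Mock

# System under test: a notifier that delegates to some client object.
# The client stands in for a real dependency (network API, database, ...).
class Notifier:
    def __init__(self, client):
        self.client = client

    def notify(self, user, message):
        return self.client.send(user, message)

# In a test, replace the client with a Mock: program its return value,
# exercise the code under test, then verify how the mock was called.
client = Mock()
client.send.return_value = 'ok'

result = Notifier(client).notify('alice', 'build passed')

assert result == 'ok'
client.send.assert_called_once_with('alice', 'build passed')
```

This is the same stub-then-verify flow the Vogella tutorial's Mockito diagram shows, just expressed with a different library.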
1. How can you use entities from two different objects to perform a single query?

You can create two separate queries using the WebiClient and then use the option to merge these queries. Performing set functions like UNION, MINUS or INTERSECTION between the results of the different queries can also give useful data.

2. What are detail objects related to dimensions?

Detail objects are pieces of information attached to a dimension. Assume the company name is a dimension; then the detail objects may include the company address, contact details, status of the order, etc.

3. Can we have many security domains?

No. All the other types of domains can be multiple, but the security domain is always maintained as a single entity.

4. What are the different domains involved in the basic setup? What identifies the relationship between these different domains?

There are three types of domains in the basic setup, namely the Secure, Universe and Document domains. The relationship between the various domains is indicated by the domain model.

5. What is a Broadcast Agent?

The Broadcast Agent (BCA) is used to publish and update documents at specified intervals of time. The scheduled update and publishing can optionally include exporting the document in specific formats and sending it to the users involved.

6. What is a universe? What is it used for?

A universe refers to a subset of the relational database. The universe offers a view of classes and objects over the database, thus simplifying the logic of data access. It offers metadata information regarding the entire database, thus acting as an intermediate layer of business logic.

7. What information is contained in the universe file?

The universe file contains the following information regarding the database:

● The connection configuration and other parameters for all the databases involved.
● Information regarding the class->object relationships as seen at the higher level.
● The data extraction information (for example, the various combinations and joins that are applied to tables before getting the final object as seen by the user).

The file system is used for the storage of the universe details. This makes universe shifting, replacement and report generation easier.

8. What are the ways in which you can check the integrity of universes and reports?

There are buttons available for checking the integrity and consistency of the joins, data manipulations, etc. This is at the BusinessObjects level. The check can also be performed at the SQL level, and reports can also be tested by testing the underlying universe they use.

9. What is analysis in BO? How is it done?

Analysis of the data in an object helps us manage our documents and reports. We can do it in the following ways:

● Drill mode: This is where we can view the data at different dimensions and levels. This is helpful when dealing with a database that is based on a few aspects interlinked across various dimensions.
● Slice and dice: This is where we analyse the data in the documents and reports and add ways to organize them. We can sort or rank the various documents. We can also create a master report that holds detailed information on the generation of other reports.

10. How does the use of aggregate tables improve performance?

Aggregate tables help us keep frequently calculated data readily available in the database in the form of interlinked columns. Such a table depends on the values and their updates in various other tables. During report generation or general data access, the calculation and computation time is reduced, as the data is already rendered during updates.

11. What is the use of the BO SDK?

The BO SDK allows us to customize the various aspects of, and interactions between, Business Objects. This can come in handy when we need additional uses and optimizations to be done on the Business Objects.
The programming languages like .NET/VB/Java can be used and the APIs that are made available in SDK give us more features than what is readily available. 12. How are metrics useful to a process? The metrics are usually the quantitative results and numbers that we use to determine the operational statistics of a business. By have the parameters rightly measured, the calculation of metrics can lead to analytical data which can give insight into how the specific business process has performed over the specific duration of time. This is essential for the understanding the trend and coming up with adaptable solutions and alterations. 13. Is there any way to limit access to specific rows of a table in Oracle DB? The Oracle 9 and above has the option for the row-level security. The other versions can also be implemented with row level access permissions by configuring the supervisor accordingly. 14. What are the different types of object qualifiers? There are 3 different types of object qualifiers: ● Detailed: Which are the objects that give additional information regarding the dimension. ● Measure: These are the quantitative numbers or related information for inferring about the performance and metrics. ● Dimension qualifier: This is the main identifier that gives the reference to certain data set. 15. What are aliases and contexts used for? The Aliases and contexts are the references to the different tables that we are dealing with the databases. These help us to avoid the circular loops and also in better logical picturing of the data when queries are implemented. 16. What are the various ways by which you get the necessary data out from the entire data? Obtaining the necessary data from the database can be done by use of the following markers and classification mechanism: ● Alerters: This will give the scope of view to the specific set of data that can have the necessary data. ● Breaks: These are used to group and classify data according to certain parameters. 
● Condition and filters: These are the ones that will separate out the data from the entire set. The condition provides the exact requirement and the filters may be a range of different aspects that we expect the data to qualify for. 17. How can you define Linked universes? The data when is obtained by the combination of accesses from different data providers will have inter-related objects and such universes are called as Linked universes. 18. What are the different joins that are used over the database tables? How are they useful? The joins refers to the combinatorial values that are obtained by combing the rows and columns from different tables according to specified condition. According to the condition that we specify the joins can be classified as inner join, outer join, right join, left join or full join. These are used when we want the data from multiple tables. For eg, if we are having a customer table with customer information and sales table with sales per customer information, we can join both to get the information regarding sales per area. 19. What is the difference between master-detail and break? The Breaks just group the different rows in the table according to certain value of data. It does not change the format of the table involved. The Master-detail is the metadata information regarding the data in various tables. The attribute that is defined as the master is the center of operation. In this case the outcome or report is involving the changed format to the table that is involved. 20. What is the difference between the class and an object? The class is more like a definition of what is grouped for access at the logical level. The object is the run time entity that is created as an instance of the specified class. The class is more abstract specification which is used for just the logical structuring of data and it acts as the template to be used by the application through object instances. 21. What is a category? 
The grouping of the entities which can be objects or data sets, according to a specific characteristic or aspect is called a category. 22. What is repository? The repository is the accumulated information of all the databases and other configurations that are used for each of them. It represents the underlying framework over which the entire database is built over. It is usually installed during the database software installation. Whenever you add or delete a table, the repository information changes accordingly. 23. What is RDBMS? RDBMS refers to Relational Database Management System. It is the concept of storing the data as tables with each of the different data having a realtionship between them stored in turn as a table. This comes in handy in storing huge chunks of information in a logically interpretable form. Most of the database systems use this relational model based database management techniques for logical and storage simplicity.
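The join behaviour described in question 18 can be sketched concretely. The following is a minimal, self-contained illustration using Python's sqlite3 module; the customer/sales tables and all column names are hypothetical, invented for this example, and are not taken from any BusinessObjects universe.

```python
import sqlite3

# Hypothetical customer/sales schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT, area TEXT);
    CREATE TABLE sales    (sale_id INTEGER PRIMARY KEY, cust_id INTEGER, amount REAL);
    INSERT INTO customer VALUES (1, 'Acme', 'North'), (2, 'Bolt', 'South');
    INSERT INTO sales    VALUES (10, 1, 250.0), (11, 1, 100.0);
""")

# Inner join: only customers that have matching sales rows appear.
inner = conn.execute("""
    SELECT c.area, SUM(s.amount)
    FROM customer c
    INNER JOIN sales s ON s.cust_id = c.cust_id
    GROUP BY c.area
""").fetchall()
print(inner)   # the 'South' area, which has no sales, is absent

# Left (outer) join: every customer appears; areas without sales get NULL.
left = conn.execute("""
    SELECT c.area, SUM(s.amount)
    FROM customer c
    LEFT JOIN sales s ON s.cust_id = c.cust_id
    GROUP BY c.area
""").fetchall()
print(left)    # 'South' appears with a NULL total
```

The inner join answers "sales per area for areas that sold something", while the left join keeps every area in the result, which is the usual choice for reporting.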
Can't build on Mac OS X

Following the build instructions at https://github.com/docker/swarm#2---alternative-download-and-install-from-source

go get -u github.com/docker/swarm
# github.com/docker/swarm/cluster
src/github.com/docker/swarm/cluster/node.go:368: not enough arguments in call to n.client.RemoveContainer

pat-2:gocode pat$ go version
go version go1.4.2 darwin/amd64
pat-2:gocode pat$ uname -a
Darwin pat-2.local 14.1.0 Darwin Kernel Version 14.1.0: Mon Dec 22 23:10:38 PST 2014; root:xnu-2782.10.72~2/RELEASE_X86_64 x86_64
pat-2:gocode pat$ git version
git version 1.9.3 (Apple Git-50)
pat-2:gocode pat$ docker --version
Docker version 1.5.0, build a8a31ef

@chanezon your dockerclient seems outdated

go get -u ? Not sure I understand. Do I need to build docker from tip in order for the swarm build to work? I'm using the latest docker client release: https://github.com/boot2docker/osx-installer/releases/tag/v1.5.0

@chanezon the dockerclient is hosted here: https://github.com/samalba/dockerclient - either you update your dockerclient, or you install godep and do a godep go build to build using the vendored dependencies.

Thanks, that worked :-)

2 - Alternative: Download and install from source. Alternatively, you can download and install from source instead of using the Docker image. Ensure you have golang and a git client installed (e.g. apt-get install golang git on Ubuntu). You may need to set $GOPATH, e.g. mkdir ~/gocode; export GOPATH=~/gocode. This installs the swarm binary to your $GOPATH directory: go get -u github.com/docker/swarm :-)

Getting the same issue. I built it some time ago and it was fine. I upgraded docker (from the docker repo) and now swarm fails with: ERRO[0000] Node <IP_ADDRESS>:2376 is running an unsupported version of Docker Engine. Please upgrade.
I tried to rebuild swarm but I get the same error message:

ldaptop:~/Development/NOSAVE/gocode$ go get -u github.com/docker/swarm
# github.com/docker/swarm/cluster
src/github.com/docker/swarm/cluster/node.go:368: not enough arguments in call to n.client.RemoveContainer
ldaptop:~/Development/NOSAVE/gocode$ go version
go version go1.2.1 linux/amd64

@osallou I ran into this yesterday as well. The problem is that swarm currently isn't compatible with the latest version of https://github.com/samalba/dockerclient (which takes in 3 arguments for RemoveContainer). If you look at https://github.com/docker/swarm/blob/master/Godeps/Godeps.json, you'll see it uses an older version of dockerclient.

Workaround on Mac OS X for now, as suggested by @vieux (go get -u github.com/docker/swarm fails):
go get github.com/tools/godep
add $GOPATH/bin to PATH
cd src/github.com/docker/swarm
godep go build

I'm going to update swarm's godep to use the last version of dockerclient. /cc @therealprologic
By Harry Reynolds This is the book you need in order to prepare for the hands-on JNCIP examination, CERT-JNCIP-M, from Juniper Networks. Written by the Juniper Networks instructor who helped develop the examination, this study guide provides the knowledge and insights you need to approach the demanding JNCIP hands-on lab examination with confidence. Authoritative coverage of all exam objectives, including: Monitoring and troubleshooting router operation Upgrading and backing up JUNOS software Configuring Ethernet, Frame Relay, ATM, and HDLC Monitoring traffic loads Configuring, monitoring, and troubleshooting OSPF Working with IS-IS Manipulating IBGP routing Monitoring EBGP operation Read Online or Download JNCIP: Juniper Networks Certified Internet Professional Study Guide (Exam CERT-JNCIP-M) PDF Best networking: internet books This book shows how to publish LaTeX documents on the Web. LaTeX was born of the scientist's need to prepare well-formatted documents, particularly with graphics and mathematics included; the Web was born of the scientist's need to communicate information electronically. This book describes tools and techniques for transforming LaTeX sources into Web formats for electronic publication, and for transforming Web sources into LaTeX documents for optimal printing. Angiostatin protein is a proteolytic fragment of plasminogen that has been shown to inhibit endothelial cell proliferation and migration in vitro and tumor angiogenesis in vivo. 
- Where Code and Content Meet: Design Patterns for Web Content Management and Delivery, Personalisation and User Participation (Wiley Software Patterns Series) - Net Loss: Internet Prophets, Private Profits, and the Costs to Community - JSP(TM) and XML: Integrating XML and Web Services in Your JSP Application - How the Internet Works Extra info for JNCIP: Juniper Networks Certified Internet Professional Study Guide (Exam CERT-JNCIP-M) Note After verifying the trap, be sure to remove any arbitrary addressing that you have assigned to the lo0 interface. Neglecting to do so could cause problems in a subsequent lab scenario. 1 for the addressing specifics needed to complete this task. In this example, you will need to configure your router as a unicast NTP client, because the NTP server is not directly attached to your OoB management network and the lack of multicast/broadcast forwarding on the firewall router will prevent the use of multicast or broadcast client modes. Set RE0 to be the primary, and configure RE failover in the event of routing daemon failure. You may assume that the configuration files have already been mirrored on the two REs for this task. • Ensure that failure of the router's flash will not affect the operation of your initial configuration. Configure alarms Alarms are configured at the [edit chassis alarm] configuration hierarchy. The following command is used to configure a yellow alarm upon detection of an fxp0 link-down event: [edit chassis alarm] lab@r2# set management-ethernet link-down yellow Configure redundancy System redundancy is configured at the [edit chassis redundancy] configuration hierarchy. The following commands are used to explicitly configure RE0 as the primary RE (which is the default), and to evoke a switchover to RE1 in the event of routing daemon (rpd) failure. 
The following commands were issued on an M20 router, because the M5 platform does not support RE redundancy: Chapter 1: Initial Configuration and Platform Troubleshooting lab@m20# set chassis redundancy routing-engine 0 master lab@m20# set system processes routing failover other-routing-engine Perform a system snapshot To ensure that a failure of the router's flash will not cause the loss of your initial system configuration, you must perform a system snapshot to mirror the contents of the router's flash onto the router's hard drive: lab@r1> request system snapshot umount: /altroot: not currently mounted Copying / to /altroot ..
'Architectural realization is a way of thinking and not a concrete technology implementation.' It is, however, supported by frameworks, patterns and best practices that complement the mindset. It is the role of the enterprise architect (EA) and his team to both define and deliver a technical roadmap that supports the short- and long-term strategic objectives of the business. Thus, the EA must be one who understands all the architectural forces and, more importantly, the technology components that allow him/her, or them, to deliver a transitional and target architecture. Over the past decade a new function / job title has risen in prominence: that of the 'architect'. In many cases the title architect has become a generic name for a specific systems domain expert, and this has resulted in a situation where, today, we have infrastructure, data, applications, network and solutions architects. In fact, recent adverts on various recruitment websites have attached the term 'architect' as a trailer to almost all senior technical job functions. At the top of the hierarchy is the enterprise architect - one who is required to ensure that he or she encapsulates all areas of business systems architectural knowledge. The first question one asks is: what is an EA? There are numerous definitions of the EA. The definition I like to use is that of 'an individual or team who are deemed to understand the end-to-end horizontal and vertical technology components, processes and flows which make up the enterprise's business information systems'. This definition does not clarify the day-to-day role of the EA, but it does provide the scope of the role. EAs spend a majority of their time actively involved in what I call the 'architectural process', i.e. the rules that encapsulate the creation, maintenance and governance - where governance relates to the enforcement and policing - of the conceptual models (abstraction layers) and systems maps. 
The tasks associated with creating and policing target architectures require a set of individuals with a wide array of skill sets that have evolved and matured over several years. A question that often arises, especially when the architectural team has focused purely on a conceptual target architecture (discussed below), is: 'Why, as an organisation, do we require an enterprise architecture team?' If this question is raised, then both the EA and his team are on a downhill slope. How can it best be avoided? The answer is simple - ensure that the enterprise architecture is clearly defined, documented and marketed to the organisation, and that it is constantly aligned to the strategic goals and objectives of the organisation. So we now know what an EA is, and we know the scope of his / her role. The next question to address is: what are the attributes that the team should have or acquire? Hence, the question we address in this paper is: 'What are the contributing characteristics and attributes that make for a good EA?' It can be argued that an EA must possess the same notional qualities as a master chef, i.e. one who understands individual ingredients (technology components) together with the impact and use of these ingredients in varying quantities - an individual who can align ingredients to exploit a wide variety of tried and tested recipes / cookbooks (frameworks). As an EA, my focus in the early stages of a project is on producing conceptual models that illustrate complex problems by filtering out non-essential details, in essence making the problem easier to understand. This is assisted by exploiting various notational modelling tools (UML, BPML, etc.). Practicing EAs seek to segment complex problems into manageable components and exploit various architectural principles, for example that of the 'separation of concerns', and not just at the software component layer. 
This segmentation is supported by the concept of architectural views, where the initial focus tends to be on four vertical strands of decomposition (infrastructure, data, application and business process modelling). Where: - Infrastructure relates to the physical components that make up the enterprise systems, i.e. the tangible assets (data centre components, servers, routers, network devices and topology, protocols, etc.) - Data encapsulates elements such as repositories of information held within relational/object data management systems and the extract, transform and load mechanisms deployed within the architecture. - Application refers to the executable software that performs the necessary functions and process conversions on flows of information. The application domain also focuses on concerns such as the interaction, integration and interfaces between systems in the enterprise, and attributes such as state-management components. - Business process modelling seeks to automate business processes through the creation, maintenance and validation of conceptual models using various tools and execution languages. The use of architectural patterns, e.g. the Model View Controller pattern (used in Web presentation), provides an example of how architects can reuse components and thus reduce effort. All of the above views can be decomposed and elaborated yet further; however, the goal of the EA is to understand both the concepts and, in many cases, the supporting technologies. A simple analogy that has been used on numerous occasions is that of geologists and the ecosystem - as an architect one must maintain a macro view (look at the planet as a whole). However, you should be in a position to drill down to micro components: continents, countries, cities and yes, even the motorway network of a country. Figure 1 - The Macro and Micro skillsets. Figure 1 highlights one set of views that the EA must understand. 
Emphasis must be placed on maintaining a macro view while ensuring that micro components are fully understood by the team as a whole. But this is not the end of the story. Above we have advocated that the EA must maintain a macro view of the technology deployments and business processes, i.e. a deep understanding of the current components of the production architecture, but he/she must also understand both the 'transitional' and the 'target' architecture. The transitional architecture (TArc) can be defined as the intermediate state between the current and 'to be' target architectural system states. Understanding of the TArc can only be achieved by working with the business and project managers to understand the systems that will be deployed in the short to medium term and that may affect the systems landscape. Examples include Commercial Off-the-Shelf (COTS) packages that exploit specific systems frameworks. Understanding the TArc allows for the provision of non-compliant products: if your organisation wishes to adopt a J2EE-compliant target framework, but there is a business requirement to purchase a COTS package that exploits the .NET framework, then you as an EA must ensure that this is captured and reflected in your technical road map. Understanding the current production environment, the current and future business processes, and the TArc are some of the qualities that an EA must possess; however, he/she must also have the ability to define the target architecture (TArg) and to ensure that all of the above remains aligned to it. Figure 2 - EA Matrix. For illustration purposes, Figure 2 highlights what I like to call the horizontal strands: an architectural grid which seeks to align architectural entities (grey clouds) onto a simple matrix. This matrix can be used in architectural workshops to plot a snapshot at a high level. 
The matrix in Figure 2 can also be used to determine the strengths and weaknesses of the EA in a simple graphical way during an evaluation period. EAs must possess many qualities. These qualities are not constrained to technical abilities; however, it is important that an EA has a solid understanding of enterprise technical components. EAs must encapsulate many attributes, as on a daily basis their role will be subjected to many architectural forces, a sample of which is depicted in Figure 3. Figure 3 - Architectural Forces. The EA must consider the architectural forces which encapsulate the current and transitional system states, to ensure he/she delivers a structured, agile framework to support the target goals of the enterprise. In summary, as end-to-end technology becomes more and more distributed and complex in nature, a special team or individual is required to manage the 'technical road map' for organisations. This team, which we refer to as the enterprise architects, will become more and more prominent in the organisation. EAs must be hybrids who have evolved over a period of time and have worked in different operational environments, giving them a wide range of technical exposure. The EA must have both the (political) strength and the ability to police and realize a technical transitional and target architecture, which can only be done by understanding the macro and micro technology components together with the architectural frameworks, best practices and patterns that enable a smoother transition from concept to reality. Questions at an EA interview When interviewing EAs I have found that they are biased towards either application or infrastructure components. So, to help you understand your EA's capabilities, I have listed four simple questions you may want to ask a potential EA at their interview: - When mapping a relational database to an object-relational database, why is the object ID significant? 
- In the simple network management protocol (SNMP) configuration, what is a management information base (MIB)? - What is an architectural pattern? - How would you capture, define and manage a target EA in a turbulent environment?
multi table content selection I am working on an Exam System. I have student, std_test, test and departments tables, with relationships between them. When a student logs in, there is a link called "take test" which redirects them to the take-test page. Question: How can I get, from the test table, all the tests available for the student's department that the student has not yet attended - in other words, how do I get all the tests from the test table that the student hasn't taken? $select=$connection->query("SELECT student.std_id, student.department, test.depart_id, test.test_id, test.test_name, test.test_from, std_test.stdid, std_test.std_test_id FROM student INNER JOIN test ON student.department = test.depart_id INNER JOIN std_test ON std_test.std_test_id <> test.test_id I tried this code as well, but no results:
SELECT student.std_id, student.department, test.depart_id, test.test_id, test.test_name, test.test_from, std_test.stdid, std_test.std_test_id
FROM student
INNER JOIN test ON student.department = test.depart_id
INNER JOIN std_test ON std_test.std_test_id <> test.test_id
WHERE test.test_id <> std_test.std_test_id
AND student.std_id <> std_test.stdid
Schema Script
CREATE TABLE student (
  std_id int(11) NOT NULL AUTO_INCREMENT,
  name varchar(50) NOT NULL,
  f_name varchar(50) NOT NULL,
  department int(11) NOT NULL,
  semester varchar(255) NOT NULL,
  pass varchar(255) NOT NULL,
  email varchar(60) NOT NULL,
  rollnumber varchar(20) NOT NULL,
  PRIMARY KEY (std_id),
  INDEX department (department),
  UNIQUE INDEX email (email),
  CONSTRAINT student_ibfk_1 FOREIGN KEY (department) REFERENCES departments (dep_id) ON DELETE RESTRICT ON UPDATE RESTRICT
) ENGINE = INNODB AUTO_INCREMENT = 8 AVG_ROW_LENGTH = 8192 CHARACTER SET latin1 COLLATE latin1_swedish_ci;
CREATE TABLE test (
  test_id int(11) NOT NULL AUTO_INCREMENT,
  test_name varchar(80) NOT NULL,
  test_date varchar(30) NOT NULL,
  test_from datetime NOT NULL,
  test_to datetime NOT NULL,
  test_code varchar(30) NOT NULL,
  test_conducter varchar(30) NOT NULL,
  test_duration int(11) NOT NULL,
  total_question int(11) NOT NULL,
  session varchar(50) DEFAULT NULL,
  subject_id int(11) NOT NULL,
  semester_id int(11) NOT NULL,
  depart_id int(11) NOT NULL,
  status varchar(50) NOT NULL,
  PRIMARY KEY (test_id),
  INDEX depart_id (depart_id),
  INDEX semester_id (semester_id),
  INDEX subject_id (subject_id, semester_id, depart_id),
  CONSTRAINT test_ibfk_1 FOREIGN KEY (subject_id) REFERENCES subjects (id) ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT test_ibfk_2 FOREIGN KEY (semester_id) REFERENCES semester (sem_id) ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT test_ibfk_3 FOREIGN KEY (depart_id) REFERENCES departments (dep_id) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE = INNODB AUTO_INCREMENT = 6 AVG_ROW_LENGTH = 16384 CHARACTER SET latin1 COLLATE latin1_swedish_ci;
CREATE TABLE std_test (
  stdid int(11) NOT NULL,
  std_test_id int(11) NOT NULL,
  starttime timestamp DEFAULT CURRENT_TIMESTAMP,
  endtime timestamp DEFAULT '0000-00-00 00:00:00',
  progress enum ('over', 'inprogress') NOT NULL,
  PRIMARY KEY (std_test_id),
  INDEX stdid (stdid),
  INDEX tstid (std_test_id),
  CONSTRAINT std_test_ibfk_2 FOREIGN KEY (std_test_id) REFERENCES test (test_id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT std_test_ibfk_3 FOREIGN KEY (stdid) REFERENCES student (std_id) ON DELETE RESTRICT ON UPDATE RESTRICT
) ENGINE = INNODB AVG_ROW_LENGTH = 16384 CHARACTER SET latin1 COLLATE latin1_swedish_ci;
Can you show me your table structure? I have added the schema script. Untested and shortened, but this is what I understand when I read your question:
SELECT ....
FROM student s
INNER JOIN test t ON t.depart_id = s.department
LEFT JOIN std_test st ON t.test_id = st.std_test_id
WHERE st.stdid IS NULL
should work now
SELECT student.std_id, student.department, test.depart_id, test.test_id, test.test_name, test.test_from, std_test.stdid, std_test.std_test_id
FROM student
LEFT JOIN test ON student.department = test.depart_id
INNER JOIN std_test ON std_test.std_test_id <> test.test_id
WHERE test.test_id IS NULL
NOT WORKING. Thank you for your response, but your code will hide the test for everyone who hasn't taken it. What I want is to get, based on the student id, all the tests from the test table that this student hasn't taken yet, and to hide all the tests that they have already attended. Edited again - hope I understood your question now ;)
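For completeness, the LEFT JOIN / IS NULL anti-join pattern from the answer can be sketched on a trimmed-down version of the schema, using Python's sqlite3 module. Note that this sketch also adds the stdid match to the join condition (my own addition, not part of the posted answer), so the anti-join is evaluated per student rather than across all students; all inserted rows are invented sample data.

```python
import sqlite3

# Simplified version of the thread's schema (columns trimmed for brevity).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student  (std_id INTEGER PRIMARY KEY, department INTEGER);
    CREATE TABLE test     (test_id INTEGER PRIMARY KEY, depart_id INTEGER, test_name TEXT);
    CREATE TABLE std_test (stdid INTEGER, std_test_id INTEGER);
    INSERT INTO student  VALUES (1, 7);
    INSERT INTO test     VALUES (100, 7, 'Maths'), (101, 7, 'Physics'), (102, 8, 'Other dept');
    INSERT INTO std_test VALUES (1, 100);   -- student 1 already took test 100
""")

# Tests in the student's department that THIS student has not taken.
# Putting the stdid condition in the JOIN (not the WHERE) keeps rows
# with no match for this particular student, which then fail IS NULL.
rows = conn.execute("""
    SELECT t.test_id, t.test_name
    FROM student s
    JOIN test t           ON t.depart_id = s.department
    LEFT JOIN std_test st ON st.std_test_id = t.test_id AND st.stdid = s.std_id
    WHERE s.std_id = ? AND st.stdid IS NULL
""", (1,)).fetchall()
print(rows)   # only the untaken test in the student's own department
```

With std_test_id as the primary key of std_test, a test can only ever have one taker, so the posted answer also works on the original schema; the per-student join condition just makes the query robust if that constraint is ever relaxed.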
AI, Big Data, Edge and Cloud Computing technologies are transforming the retail industry. Thundercomm, as a leading IoT product and solution provider, is combining these disruptive technologies to provide an immersive shopping experience for consumers, while keeping their merchandise competitively positioned to stay ahead of the curve. A building is a space where we spend more than 80% of our lives: in the office, at home, in restaurants, in shopping malls, in cinemas, in sports centers, etc., and we encounter scenarios including access control, security and safety, and energy and environment management in almost all kinds of buildings. Thundercomm combines the user experience with fusion technologies and the integrated planning and construction of the building to transform buildings into "high IQ" smart buildings with thinking and perceiving abilities, via a Connection-Integration-Ecology approach. Thundercomm is a leading XR solution provider, with a professional team focused on end-to-end AR/VR HMD/glasses development, cost-effectively and efficiently. Over 40 AR/VR products based on Qualcomm platforms have been developed over the years, during which we have seen increasing demand for low-latency rendering and 6DoF tracking. This tremendous experience and our expertise will contribute to the building of innovative AR/VR products. Partnering with Thundercomm in this field will give you more than you expected. Thundercomm has been involved in robotic applications for years: be it a robot vacuum cleaner that can navigate itself and recognize objects; a service robot that can interact with visitors and provide help; or an AGV that assists people in warehouses and factories. In every situation, Thundercomm enables robots not only by giving them a smart brain, but also by providing world-class end-to-end robotic solutions. Packing up the most advanced computational platform, customized OS, dependable software baseline, cutting-edge AI algorithms, and 5G connectivity, Thundercomm is your best wingman to speed up your development and make your robots the shiniest star on the stage. A smart speaker product is a bit different from other smart devices: it acts as a hub between human and machine, collecting a request and sending it to a server, where the information is analysed using machine learning and inference technologies; it then responds to the request - play a song, answer a question, make a call, etc. Leveraging these emerging technologies along with hardware and software development and manufacturing, Thundercomm is able to provide a turnkey smart speaker solution, covering application development and integration. Building a smart camera product is a sophisticated art, which requires expertise in both imaging and audio solutions beyond general hardware design and software development. From platform selection, operating system optimization, driver development and algorithm model migration to hardware design, prototyping, testing and manufacturing, Thundercomm leverages years of in-house expertise in turnkey smart camera solutions to enable a faster time to market for smart camera products with unique features.
What we're about Upcoming events (3) RHoK is a community of people who are passionate about using technology to solve problems for the greater good. Intrigued? If you're thinking of coming along to this year's RHoK Sydney Winter Hackathon, or just want to find out what RHoK is all about then this is your chance to find out who we are, what we do and how we can all do our little bit of hacking for humanity! We'll also be introducing our new changemakers, and giving people a chance to meet each other over a drink and some food and begin forming their teams for the Hackathon weekend. We'll start with food and drinks at 6pm and pitches should kick off around 6:30pm. See you there! Make sure you also RSVP to the Hackathon and the Ideation Evening! Ideation Evening: https://www.meetup.com/rhok-sydney/events/259836246/ Hackathon: https://www.meetup.com/rhok-sydney/events/259835734/ Ideation night is all about working out how we can create a realistic scope for each changemaker's idea - something that can be designed and implemented in a weekend. Together with small teams of skilled technologists, designers or business experts, ideas are thrown around in a really informal and casual environment to create a rough plan for a minimum viable product. We will provide food, drinks and all the stuff you need to make the ideas happen. All we need from you is to come down and contribute your brainpower to getting some brilliant ideas launched. ;) As this an ideation session only, there will be no coding at this event. Note: We recommend attendees of this event to attend the Team Forming night on Wed, 12th June beforehand. This will ensure you have all the information you need from our changemakers to get right into ideating for the hackathon weekend ahead. 
The Team Forming night is where we'll actually explain how RHoK works and where our new change-makers give a proper pitch on what they are about and what their goals are :) Link to the Team Forming night event page: https://www.meetup.com/rhok-sydney/events/259836195/ Link to the Hackathon weekend event page: https://www.meetup.com/rhok-sydney/events/259835734/. Challenge your thinking and hack for humanity at the next RHoK summer hackathon! As part of a national movement, we will be gathering together on the 22nd and 23rd to work with change-makers from the community to solve problems that have a positive social impact. Who can 'hack'? We're calling for individuals who want to make a difference and are passionate about using technology for the greater good. Our past hackers have come from various backgrounds including design (UI/UX/product), software development, business analytics and more. So if that's you, join us for a packed and rewarding weekend. Collaborate with new people. Get excited about an idea. Put all your effort behind it. Run up against a wall, then climb over it. Make it work in a weekend. Present the solution to the judges. Walk away from the event knowing you made a difference. While it's not mandatory, we recommend that you come to the lead-up events too: Team Forming Night (12 June): https://www.meetup.com/rhok-sydney/events/259836195/ Ideation (19 June): https://www.meetup.com/rhok-sydney/events/259836246/ *** We are currently looking for changemakers! Apply at https://www.rhokaustralia.org/become-a-changemaker ** • What to bring We provide stationery, powerboards, wifi, food and drink - please bring your own laptop or other devices that you need. • Important to know Please RSVP so we have an idea of numbers :)
v0.3.2-beta.1 - Better handle *.sh scripts and the Overload game directory

Recent *.sh pull requests all have various issues related to starting the game properly. This issue is to come up with a solution that will satisfy all use cases of the olmod.sh script. I think this hits all cases, and provides helpful error messages to boot.

#!/bin/bash
args=()
gamedir="$(cd $(dirname "$0"); pwd)"
olmoddir="$(cd $(dirname "$0"); pwd)"

while [[ $# -gt 0 ]]
do
  key="$1"
  case $key in
    -gamedir)
      gamedir="$2"
      shift
      shift
      ;;
    *)
      args+=("$1")
      shift
      ;;
  esac
done
olargs=${args[@]}

if [[ -z "$OSTYPE" || "$OSTYPE" != "darwin"* ]]; then
  olmodso="${olmoddir}/olmod.so"
  overload="${gamedir}/Overload.x86_64"
  if [[ -f "${olmodso}" ]]; then
    if [[ -f "${overload}" ]]; then
      LD_PRELOAD="${LD_PRELOAD:+$LD_PRELOAD:}${olmodso}" exec "${overload}" "${olargs}"
    else
      echo "Error: Overload.x86_64 not found."
      echo "Looked in ${gamedir}"
      echo "Use the -gamedir switch to specify the path that Overload.x86_64 resides in."
    fi
  else
    echo "Error: olmod.so not found."
    echo "Looked in ${olmoddir}"
    echo "You must run olmod.sh while it's in the same directory as olmod.so."
  fi
else
  olmoddylib="${olmoddir}/olmod.dylib"
  overload="${gamedir}/Overload.app/Contents/MacOS/Overload"
  if [[ -f "${olmoddylib}" ]]; then
    if [[ -f "${overload}" ]]; then
      DYLD_INSERT_LIBRARIES="${DYLD_INSERT_LIBRARIES:+$DYLD_INSERT_LIBRARIES:}${olmoddylib}" exec "${overload}" "${olargs}"
    else
      echo "Error: Overload.app not found."
      echo "Looked in ${gamedir}"
      echo "Use the -gamedir switch to specify the path that Overload.app resides in."
    fi
  else
    echo "Error: olmod.dylib not found."
    echo "Looked in ${olmoddir}"
    echo "You must run olmod.sh while it's in the same directory as olmod.dylib."
  fi
fi

CC: @sjackso @rucker I have some folks testing on servers, but let me know if this is a better script for your needs as well.

PuDLeZ found an issue with directories with spaces in their names, which is common if you have Overload through GoG.
Here's an updated script which handles that. cat olmod.sh:

#!/bin/bash
args=()
gamedir="$(cd "$(dirname "$0")"; pwd)"
olmoddir="$(cd "$(dirname "$0")"; pwd)"

while [[ $# -gt 0 ]]
do
  key="$1"
  case $key in
    -gamedir)
      gamedir="$2"
      shift
      shift
      ;;
    *)
      args+=("$1")
      shift
      ;;
  esac
done
olargs=${args[@]}

if [[ -z "$OSTYPE" || "$OSTYPE" != "darwin"* ]]; then
  olmodso="${olmoddir}/olmod.so"
  overload="${gamedir}/Overload.x86_64"
  if [[ -f "${olmodso}" ]]; then
    if [[ -f "${overload}" ]]; then
      cd "${olmoddir}"
      LD_PRELOAD="${LD_PRELOAD:+$LD_PRELOAD:}./olmod.so" "${overload}" ${olargs}
    else
      echo "Error: Overload.x86_64 not found."
      echo "Looked in ${gamedir}"
      echo "Use the -gamedir switch to specify the path that Overload.x86_64 resides in."
      exit 1
    fi
  else
    echo "Error: olmod.so not found."
    echo "Looked in ${olmoddir}"
    echo "You must run olmod.sh while it's in the same directory as olmod.so."
    exit 1
  fi
else
  olmoddylib="${olmoddir}/olmod.dylib"
  overload="${gamedir}/Overload.app/Contents/MacOS/Overload"
  if [[ -f "${olmoddylib}" ]]; then
    if [[ -f "${overload}" ]]; then
      cd "${olmoddir}"
      DYLD_INSERT_LIBRARIES="${DYLD_INSERT_LIBRARIES:+$DYLD_INSERT_LIBRARIES:}./olmod.dylib" "${overload}" ${olargs}
    else
      echo "Error: Overload.app not found."
      echo "Looked in ${gamedir}"
      echo "Use the -gamedir switch to specify the path that Overload.app resides in."
      exit 1
    fi
  else
    echo "Error: olmod.dylib not found."
    echo "Looked in ${olmoddir}"
    echo "You must run olmod.sh while it's in the same directory as olmod.dylib."
    exit 1
  fi
fi

0.3.2-beta.2 will have this fixed script, and I also had to fix olmod.c for a new olmod.dylib. Once released, you should be able to unzip the release binaries anywhere and run olmod.sh -gamedir /path/to/Overload.app.
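The `-gamedir` handling in the scripts above follows a standard bash argument-peeling pattern. Here is a minimal, self-contained sketch of just that loop; the sample arguments, including the path with a space (the GoG case mentioned above) and the `--frametime` flag, are hypothetical and only illustrate the mechanics:

```shell
#!/bin/bash
# Minimal sketch of the olmod.sh argument parsing: peel off
# "-gamedir <path>" and collect all remaining arguments for the game.
args=()
gamedir="$PWD"

# Hypothetical sample invocation, including a path with a space:
set -- -gamedir "/tmp/My Games/Overload" --frametime

while [[ $# -gt 0 ]]; do
  case "$1" in
    -gamedir)
      gamedir="$2"
      shift 2
      ;;
    *)
      args+=("$1")
      shift
      ;;
  esac
done

echo "gamedir=${gamedir}"
echo "game args: ${args[@]}"
```

Note that quoting the path everywhere it is expanded (as the updated script does) is what makes the spaces case work.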
How does relativity affect the view of different observers

I understand how the Lorentz contraction works, but I was thinking about how it would manifest itself in the perspective of a distant object seen by two different observers. One is traveling at high velocity towards, say, a distant planet, and the other is stationary with respect to the planet. At the moment of observation both observers are the same distance from the planet. Would the moving observer see a larger planet compared to the stationary observer? The consistency between time slowing down for the moving observer, as perceived by the stationary observer, and the length contraction experienced by the moving observer would have them both agreeing on how long the journey to the distant planet took (in their respective reference frames), but would the observers see different sized planets, since the angle subtended by the planet would be different at different distances?

If the observers are at the same place at the same time, what they will actually see depends only on the light rays arriving at that place and time. This must of course be the same for both observers. What they will infer about the distance those light rays must have traveled, the angles between them, and how long the journey took is another matter. Are you asking about the immediate perception or about the inference?

Assuming you are describing the following situation: two reference frames related by a boost along an axis (the x-axis for simplicity), with a spherical object at rest in one of the two frames (say in frame $K_1$). Then the observer of frame $K_2$ will see lengths contracted along the x-direction: he will see an ellipsoid, not a sphere, as the diameter of the sphere parallel to the x-axis is contracted.

@WillO Yes, what they actually perceive at the same instant when both are at the same location. Does one perceive a larger planet than the other?
@Luthien Yes, an ellipsoid, which he sees as a circle when traveling directly towards it; but because the distance to it is contracted, he would infer a larger subtended angle and a larger circle. Is there an explanation for why this would or wouldn't be the case?

https://en.wikipedia.org/wiki/Relativistic_aberration. Also https://www.spacetimetravel.org/bewegung/bewegung5.html (and similar) for some generated images.

Given your clarification in the comments, the first part of my comment is the relevant answer: if the observers are at the same place at the same time, what they will actually see depends only on the light rays arriving at that place and time. This must of course be the same for both observers.

Intuitively that makes a lot of sense. With that thought, how do we make a light diagram work for the moving observer? Could the explanation be that the observer's eyeball has contracted in the direction of travel, so that the angle subtended by the light from the planet on his retina would be identical to that of the stationary observer? Does that sound reasonable?

@philh: In his own frame, the "traveler" is stationary and his retina is not contracted.

@WillO, the image formed depends not only on the light arriving at the same point at the same time, but also on the observer's interpretation of the direction of that light. I don't believe the direction will be the same for a moving and a non-moving observer.

@BowlOfRed: I am taking each observer to be spatially zero-dimensional. To interpret the direction of the light, it seems to me you'd need a retina with some spatial extent. But if we allow that, then it becomes impossible to interpret the question, since the observers cannot agree that all parts of A's retina were in the same place at the same time as all the corresponding parts of B's retina.
So I think that any fair interpretation of the question (with A and B unambiguously co-present at a single event) requires zero dimensionality and therefore prohibits directional interpretation. @BowlofRed: Of course I might be overlooking a possible alternative interpretation, and would be happy to have one.

That's a legitimate concern, but to throw out the direction of light seems to remove the question of "viewing a planet" entirely.

@BowlofRed: That too is a legitimate concern, and I suspect that the upshot is that there is no way to interpret the question that addresses both of our concerns.

@BowlofRed I think I understand the concept from the math formula for relativistic aberration. In the reference frame of the moving observer, light from the periphery of the approaching planet will subtend a smaller angle, as the light received by the observer is from the planet when it was a greater distance away. From a classical perspective this is calculable, but there is a further reduction in the subtended angle due to the relativistic effect. The difference between the classical and relativistic angles is consistent with the Lorentz-contracted distance to the planet. Thanks.

Here is my understanding using a specific example. There is consistency in the math between the classical and relativistic corrections in the image received by A, with the difference due to the Lorentz contraction. In both cases, as stated in Albert's previous answer, the moving observer sees a smaller image of the planet compared to the stationary observer. In summary, there is an increase in the image size of an approaching planet at relativistic velocities due to Lorentz contraction, but this is overshadowed by the classical consideration that a moving observer receives the image of the planet from when it was at a much greater distance. The overall effect is that a moving observer sees a smaller approaching planet than a stationary observer.
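For reference, the relativistic aberration formula from the linked Wikipedia article, with angles measured from the direction of motion and $\beta = v/c$, is:

```latex
\cos\theta' = \frac{\cos\theta - \beta}{1 - \beta\cos\theta}
```

Applied to the incoming limb rays of a planet dead ahead (propagation angles near 180°, i.e. $\theta = 180^{\circ} - \alpha$ with $\alpha$ the half-angle subtended), this gives $\alpha' < \alpha$ for $\beta > 0$: the rays are compressed toward the direction of motion, so the moving observer sees a smaller disc, consistent with the conclusion reached above.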
WebMethods.io B2B Environment Management & Promoting Assets Across Environments/Stages

This article explains how to promote B2B assets to a higher environment or stage. It is assumed that readers of this article know how to set up B2B partners on the webMethods.io B2B platform:
- A B2B enterprise profile is set up.
- Partner profiles, business documents, channels, and processing rules are already created.
- The environment to which the assets need to be published from the current environment is provisioned (source and target environments).

First, we need to register the environment to which assets will be deployed from the current environment:
- Click on Settings and select Environment.
- Currently, we can see the default environment.
- Click on Register environment.
- Assign a name to the higher environment, such as UAT or PROD.
- Select the environment to which we need to deploy the assets. In our case, we are deploying to the PROD stage.
- Provide the username and click on Register.

Next, select the assets to export:
- Click on the Asset migration icon and select the Export tab within it.
- We can see all partner profiles, business documents, processing rules, and document attributes.
- Select the partner profiles, business documents, processing rules, and other dependent artifacts which need to be deployed.
- After selecting all the artifacts, click on Add to export list.

B2B assets can be moved to a higher environment in two ways: using the Export option, all the components can be exported in zip format and then imported into another tenant (note that channels and certificates won't get exported); or by creating a new deployment.

Create a new deployment:
- Select the Create a new deployment option.
- Provide a name for the deployment.
- Select the target environment.
- Provide a username and password.
- Click Finish. Once we click Finish, the assets are published to the target environment.
- We can see the deployment logs from the source system under the Outbound tab.
- Deployment logs provide details such as event name, source environment, etc.

To import the assets on the target side:
- Connect to the environment (PROD, UAT) to which we have published the assets.
- Click on Asset migration and select the Deployments tab.
- Click on Available deployments.
- Select the available deployment and click Next.
- Click Proceed.

Deployment logs from the target environment:
- We can see the deployment logs from the target system under the Inbound tab.

If we want to deploy the same assets to the higher environment again:
- Go to Asset Migration --> Deployments.
- Click on Available deployments.
- Select the desired deployments.
- Click on Publish deployment.
- Provide the password and click Publish.
- This will publish the deployable assets to the target environment.
- Deployment logs for it can be viewed on both the source and target environments.
There's a new revision to the Eclipse Platform Project Draft 3.2 Plan. The latest one was published August 5th (Friday). It's a good idea to pay attention to these when they come out, especially with regard to the plugin APIs. The first milestone (M1) should hit the streets on or about August 12th. According to the draft plan, version 3.2 won't be finally released until sometime in 2Q2006. Since they only have four milestone releases listed so far, and the fourth is targeted for mid-December, I have to wonder what is going to happen in the 3-6 months after the December milestone release. I hope it comes out a lot sooner. Here are some of the proposed items for 3.2 that I'm most interested in.

Support logical model integration. The Eclipse Platform supports a strong physical view of projects containing resources (files and folders) in the workspace. However, there are many situations where plug-in developers would prefer to work with a logical model that is appropriate to their domain. Eclipse should ease the task for plug-in developers who want to make logical model-aware plug-ins. To do this, Eclipse should provide more flexible mechanisms for contributing actions on models that do not have a one-to-one mapping to files on the user's hard disk. This would, for example, allow a team provider's repository operations to be made available on logical artifacts. In addition, existing views like the navigator and problems view should be generalized to handle logical artifacts and, in general, there should be better control over what is displayed in views and editors based on the logical models that the end user is working on. [Platform Core, UI, Team] (37723) [Theme: Design for Extensibility]

Provide more flexible workspaces. Currently in Eclipse there is a direct connection between IResources and files and directories on the local file system.
Eclipse should loosen this connection, by abstracting out its dependency on java.io.File, and allowing for alternative implementations. This would enable, for example, uses where the workspace is located on a remote server, or accessed via a non-file-based API, or has a non-trivial mapping between the resources and the physical layout of the files. [Platform Resources, Text] (106176) [Theme: Design for Extensibility, Enterprise Ready]

Update Enhancements. As the number and range of Eclipse plug-ins continues to grow, it becomes increasingly important to have a powerful update/install story. For instance, if more information about an Eclipse install was available earlier, it would be possible to pre-validate that it would be a compatible location to install particular new features and plug-ins. This information could also be used to deal with conflicting contributions. Update should also be improved to reduce the volume of data that is transferred for a given update, and PDE should provide better tools for creating and deploying updates. [Update, Platform Runtime, PDE] (106185) [Theme: Enterprise Ready]

I'm going to pay close attention to update enhancements. Keeping track of plugins and being able to manage them effectively is going to be crucial for Eclipse and NetBeans if they are to continue to evolve and grow.

Provide more support for large scale workspaces. In some situations, users have workspaces containing hundreds of projects and hundreds of thousands of files. Scoped builds and working sets have become important tools for these users, and the performance optimizations done in Eclipse 3.1 have proven helpful. Eclipse should have even more support for dealing with very large workspaces, including improved searching, filtering, working sets, and bookmarks. This goes hand-in-hand with ongoing work to discover and address performance issues. [Platform UI, Resources] (106192) [Theme: Scaling Up, Enterprise Ready]

Enhance the text editor.
The Eclipse Text component provides a powerful editing framework that is useful in many contexts, but some of its capabilities are currently only available in the Java editor. The Java editor should be opened up to allow more general access to the reconciling, code assist, and template proposal mechanisms. Other enhancements to the look and feel of the editor should also be considered in areas such as the Find/Replace dialog, showing change history and comments in the editor, and annotation roll-overs. [Platform Text] (106194) [Theme: Design for Extensibility]

Improve UI Forms. UI Forms are increasingly being used in the Eclipse user interface. The UI Form support should be improved to allow for more pervasive use in 3.2. Critical widget functionality should be moved to SWT to ensure quality and native look and feel. The remaining UI Forms code (minus FormEditor) should be pushed down into JFace so that it is available in the Eclipse workbench. [SWT, UI, Help] (106203) [Theme: Simple to Use, Design for Extensibility]

Add support for Java SE 6 features. Java SE 6 (aka "Mustang") will likely contain improvements to javadoc tags (like @category), classfile specification updates, pluggable annotation processing APIs, and new compiler APIs, all of which will require specific support. [JDT Core, JDT UI, JDT Text, JDT Debug] (106206) [Theme: Appealing to the Broader Community]

Improve refactoring. Refactoring currently relies on a closed world assumption. This means that all of the code to be refactored must be available in the workspace when the refactoring is triggered. However, for large distributed teams the closed world approach isn't really feasible. The same is true for clients which use binary versions of libraries whose API changes from one version to another. In 3.2 the closed world assumptions will be relaxed in such a way that a refactoring executed in workspace A can be "reapplied" on workspace B to refactor any remaining references to the refactored element.
Furthermore, existing refactorings will be improved to preserve API compatibility where possible (for example, when renaming a method, a deprecated stub with the old signature will be generated that forwards to the new method). [JDT Core/UI] (106207) [Theme: Scaling Up]

API-aware tools. Given the importance that the Eclipse community places on stable, robust APIs, having good support for their implementation is critical. The support within Eclipse for describing APIs should be improved, along with better tools for helping developers stick to APIs provided by other plug-ins. [PDE, JDT] (106213) [Theme: Enterprise Ready]

Improve PDE build. PDE Build is fundamental to how the Eclipse Platform releases are produced. It is also increasingly being used by other Eclipse projects and in the wider community. Potential improvements to PDE build include parallel cross-building, incremental building of plug-ins, increased integration with the workspace model, and support for additional repository providers. [PDE] (106214) [Theme: Enterprise Ready]

There's a lot more to read, and there is a link on the page to API change tracking. If they deliver most or all of what is targeted (and not just my interest list), then Eclipse 3.2 is going to be a powerful challenger. It will be interesting to see how NetBeans responds.
Zend capture layout and view content as variable

I have a controller Mycontroller with a simple example action:

public function exempleAction()
{
    // Using layout "mail"
    $this->_helper->layout()->setLayout("mail");
}

I want to get the HTML content of the view (to use it later as email content) using:

$view_helper = new Zend_View_Helper_Action();
$html_content = $view_helper->action('exemple', 'Mycontroller', 'mymodule');

This successfully allows me to get the view content but WITHOUT the layout content. All the HTML code of the layout "mail" is not included in $html_content. How can I capture the whole content, including the layout part?

If I'm not mistaken, it is normal that you do not have the layout after $view_helper->action('exemple', 'Mycontroller', 'mymodule'); Indeed, the layout is called in the postDispatch() of the Zend_Layout_Controller_Plugin_Layout plugin. You can still try this. In your layout 'mail.phtml' put this:

echo $this->layout()->content;

In your method:

$view_helper = new Zend_View_Helper_Action();
$html_content = $view_helper->action('exemple', 'Mycontroller', 'mymodule');

$layout_path = $this->_helper->layout()->getLayoutPath();
$layout_mail = new Zend_Layout();
// Assuming your layouts are in the same directory, otherwise change the path
$layout_mail->setLayoutPath($layout_path)
            ->setLayout('mail');

// Fill the layout
$layout_mail->content = $html_content;

// Retrieve the rendered layout
$mail_content = $layout_mail->render();
var_dump($mail_content);

Try this:

// This will get the current layout instance.
// Clone it so you won't see any effects when changing variables.
$layout = clone(Zend_Layout::getMvcInstance());

// If you want to use another layout script at another location:
//$path = realpath(APPLICATION_PATH . '/../emails/');
//$layout->setLayoutPath($path);

// Set the layout file (layout.phtml):
//$layout->setLayout('layout');

// Prevent this layout from being the base layout for your application
$layout->disableLayout();

// Get your view instance (or create a new Zend_View() object)
$view = $this->view; //new Zend_View();

// Set the path to view scripts if newly created, and add the path to the view helpers:
//$view->setBasePath(realpath(APPLICATION_PATH.'/../application/emails')."/");
//$view->addHelperPath(realpath(APPLICATION_PATH.'/../application/layouts/helpers/')."/", 'Application_Layout_Helper');

// Set some view variables if a new view is used (used in the view script $this->test)
$view->assign('test', 'this can be your value or object');

// Set the content of your layout to the rendered view
$template = 'index/index.phtml';
$layout->content = $view->render($template);

$bodyHtml = $layout->render();
Zend_Debug::dump($bodyHtml);

Or get the response body in your action. This will store the HTML that normally would be sent to the browser as the response:

// First disable output if needed
$this->view->layout()->disableLayout();

// Get the response object
$response = $this->getResponse();
$bodyHtml = $response->getBody();
Zend_Debug::dump($bodyHtml);

Have fun!
Software design myths

If a software product is to be developed further over a longer period of time, the costs, which grow with increasing complexity, often become one of the most important constraints. Various software design practices and principles that are intended to keep the cost of new functionality fairly constant help to manage complexity. Unfortunately, there are also misunderstandings and myths around some of these design principles; they often spread faster than the intended interpretation and can thereby make the complexity and cost problems even worse. By taking a closer look at the original sources, these myths can be easily debunked. Below I will introduce you to three of these principles and their most common myths.

DRY – Don't Repeat Yourself

This principle states that every piece of information or "knowledge" should have a single, unambiguous representation - especially in the code, but also, for example, in the documentation and in the automated tests. The common misinterpretation, which is quite understandable based on the name alone: each piece of code may only exist once - even if it occurs in different parts of the code that shouldn't know anything about each other. Anyone who only knows this incorrect interpretation should forget it as quickly as possible, because it leads to increased coupling in the program code. This not only makes changes more complex, but also makes it easier for errors to occur.

Single Responsibility Principle (SRP)

This principle is probably the most misunderstood of all the well-known SOLID principles. According to Robert C. Martin, who formulated the principle, this is probably due to its unfortunate name: many software developers interpret it as meaning that every module or class is only supposed to do one thing. There is indeed a principle that every function should only do one thing, but it has nothing to do with the SRP. On the contrary: one goal of the principle is cohesion, i.e.
bringing together functionality that belongs together from a business or technical point of view and is changed for the same group of (business or technical) "users". Anyone who separates related code based on this misunderstanding, so that each class or module supposedly only "does one thing", achieves exactly the opposite - and violates several other principles in addition to the SRP. The "responsibility" does not lie in what a class does, but in whom it does it for.

Open-Closed Principle (OCP)

The aim of this principle (the O in SOLID) is to allow new functionality to be added without having to adapt large amounts of existing code. The common mistake: "guessing" possible future functionality and making the code as flexible as possible in advance. Because of this overengineering, the code often becomes more difficult to understand, and much of this flexibility may never be needed. Instead, Robert C. Martin speaks of a strategic closure of certain parts of the code that should not be affected by certain kinds of code changes. This keeps code changes local instead of letting them propagate through the code base.

The telephone-game effect can often be observed with software design principles: information is lost, people add their own interpretations, and so misunderstandings often spread faster than correct information. That's why, when you hear about new principles, it's advisable to occasionally read the original source to check your understanding - for example, timeless classics like The Pragmatic Programmer. And as with any skill, software design requires sensitivity and a goal-oriented approach - blindly following subjective "principles" is rarely helpful. The best software designs are created through close collaboration in diverse teams.
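The DRY myth can be made concrete with a small, entirely hypothetical Python sketch: two rules whose code happens to be textually identical today, but which encode different pieces of knowledge and will change for different reasons, so "deduplicating" them would couple unrelated concerns.

```python
# Two validation rules that happen to look identical today, but encode
# different pieces of knowledge owned by different stakeholders
# (hypothetical example).

def valid_username(name: str) -> bool:
    # Product rule: usernames are 3-20 characters.
    # Product may later shorten the minimum to 2.
    return 3 <= len(name) <= 20

def valid_project_name(name: str) -> bool:
    # Ops rule: project names are 3-20 characters.
    # Ops may later tighten this for DNS-label compatibility.
    return 3 <= len(name) <= 20

# Merging both into one shared helper would be mere deduplication, not
# DRY: DRY asks for a single representation of each piece of
# *knowledge*, and these are two distinct pieces of knowledge that
# merely coincide right now.
```

When one rule later changes, the two functions diverge naturally; with a shared helper, the change would ripple into code it has nothing to do with.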
Pedagogical star modeling using Web technology. ishort "at" ap.smu.ca

- To provide university and high school instructors and students with physics-based pedagogical web apps for demonstrating and investigating stellar astronomy and astrophysics, and exoplanetary systems, at a wide range of educational levels, that are accessible on any common computational device as is.
- To provide astrophysics students with public domain open source stellar astrophysical modeling codes with which they can experiment, and to which they can contribute, using the developer console of any web browser on any device.
- To popularize and normalize astronomy by deploying intuitive and engaging interactive credible simulations of astronomical objects and systems in a way that can be seamlessly integrated with web culture, including web-based gaming, massive multi-player virtual worlds, and social media.

Collaboration and supervision opportunities in astrophysics, web development, and pedagogy. Projects range from Undergrad to M.Sc. level:

Exoplanetary atmosphere and surface modeling:
- Improve the habitable zone model and provide for greater planet customization

Stellar atmospheric modeling and spectrum synthesis: Develop any aspect of ChromaStar (CS)
- Implement the line lists as SQL databases to be queried by CSS - Done
- Re-do the flux integration as 2D longitude & latitude tiles and provide for customized star spots - First part done
- Add molecular bands in the JOLA approximation - Done, EVEN MORE for 5 TiO bands - CH g band next if I can find data!
- Add and test additional molecules (beyond current TiO) - Done, by integrating Phil Bennett's GAS package for CSS and CSPy
- Add H2 Rayleigh scattering opacity - Done for CSPy and CSS
- Add proper atomic energy level data structure, with link to atomic transitions, for proper line broadening
- Add approximate line blanketing for lambda < 400 nm as a pseudocontinuous "just-overlapping-atomic-line" opacity source for radiative equilibrium temperature corrections
- Improve atomic partition function treatment - see data of Barklem & Collet 2016 - Done
- Add important metal bound-free and Rayleigh scattering opacities to improve blue/near-UV opacity
- Work on temperature correction and overall structure convergence (or at least add low log(g) template models for the current T_kin(tau) re-scaling)
- Add interactive periodic table to CSS interface to allow non-solar abundance distribution - begin with simple alpha-enhancement
- Add radial-tangential macro-turbulent broadening (now possible) and spectral line bisector visualization (see David Gray Lectures on FGK stars)

Stellar interior structure modeling: Develop PolyStar
- Add solvents other than water - ammonia, carbon dioxide, methane - Done
- In situ Exo-planet transit light-curve modelling - Done Spring 2020
- Provide for isochrone generation
- Port to Python3 with visualization in a Jupyter notebook - Done and done (May 2017)
- Port to Julia with visualization in Jupyter - faster than python
- Add H, J, K AND ugriz filters and colors - spectrum now goes to 2600 nm - Done
- Add faster spectrum-synthesis-only mode to CSS and CSDB
- Accelerate computation of CS with WebGL
- Accelerate computation of CSS, CSDB, and CSPy with multi-threading in Java and python

Web interface development: Develop the OpenStars UIs by incorporating the latest web programming libraries and methods
User interface (UI) development: Improve the OpenStars UIs by basing them on sound pedagogical design principles
Web development: Game-ify and/or puzzle-fy any OpenStars apps
- Connection with
BGO twitter interface?
- Port as app to Android and iOS stores
- Re-do plots in HTML5 SVG and incorporate d3 - First part done for CS

Any of these can be cited when referring to any OpenStars application:
Short, C. Ian & Bayer, Jason, H.T., 2018, ChromaStarAtlas: Browser-based visualization of the ATLAS9 stellar structure and spectrum grid
Short, C. Ian, Bayer, Jason, H.T. & Burns, Lindsey M., 2018, ChromaStarPy: A Stellar Atmosphere and Spectrum Modeling and Visualization Lab in Python, ApJ, 854, 82, 5 pp.
Short, C. Ian, 2017, ChromaStarDB: SQL database-driven spectrum synthesis, and more, PASP, 129, 094504, 11 pp.
Short, C. Ian, 2016, GrayStarServer: Server-side spectrum synthesis with a browser-based client-side user interface, PASP, 128, 104503 (arXiv:1605.09368)
Short, C. Ian, 2015, grayStar3 - gray no more: More physical realism and a more intuitive interface - all still in a WWW browser (arXiv:1509.06775)
Short, C. Ian, 2014, GrayStar: A Web Application For Pedagogical Stellar Atmosphere and Spectral-Line Modelling and Visualization, JRASC 108, 230 (arXiv:1409.1891)
Short, C. Ian, 2014, GrayStar: A Web application for pedagogical stellar atmosphere and spectral line modelling and visualisation II: Methods, arXiv:1409.1893

Friends of OpenStars
Atmospheric modeling + spectrum synthesis: ATLAS9 + SYNTHE9
NLTE Spectral line
There are some important details about handling signals in a Python program that uses threads, especially if those threads perform tasks in an infinite loop. I realized it today while making some improvements to a script I use for system monitoring, as I ran into various problems with the proper handling of SIGINT signals, which should normally result in the termination of all the running threads. After a thorough read of the documentation and some research on the web, I finally made it work and thought it would be a good idea to write a post that points out these important details using a sample code snippet.

Set signal handlers in the main thread

The first and most important thing to remember is that all signal handler functions must be set in the main thread, as this is the one that receives the signals.

def signal_handler_function(signum, frame):
    # ...

def main_function():
    signal.signal(signal.SIGTERM, signal_handler_function)
    # ...

Registering signal handlers within the thread objects is wrong and doesn't work. The documentation of the signal module has a very informative note:

Some care must be taken if both signals and threads are used in the same program. The fundamental thing to remember in using signals and threads simultaneously is: always perform signal() operations in the main thread of execution. Any thread can perform an alarm(), getsignal(), pause(), setitimer() or getitimer(); only the main thread can set a new signal handler, and the main thread will be the only one to receive signals (this is enforced by the Python signal module, even if the underlying thread implementation supports sending signals to individual threads). This means that signals can't be used as a means of inter-thread communication. Use locks instead.

Keep the main thread running

This is actually a crucial step, otherwise all signals sent to your program will be ignored.
Adding an infinite loop using time.sleep() after the threads have been started will do the trick:

```python
thread_1.start()
# ...
thread_N.start()

while True:
    time.sleep(0.5)
```

Note that simply calling the thread's .join() method is not going to work.

Example code snippet

Here is a basic Python program to demonstrate the functionality. The main thread starts two threads (jobs) that perform their task in an infinite loop. There is a registered handler for the TERM and INT signals, which gives all running threads the opportunity to shut down cleanly. Note that pressing Ctrl-C on your keyboard is interpreted as a SIGINT, so this is an easy way to terminate both the running threads and the main program. For more information about how it works, please refer to the next section containing some useful remarks.

```python
import time
import threading
import signal


class Job(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)

        # The shutdown_flag is a threading.Event object that
        # indicates whether the thread should be terminated.
        self.shutdown_flag = threading.Event()

        # ... Other thread setup code here ...

    def run(self):
        print('Thread #%s started' % self.ident)

        while not self.shutdown_flag.is_set():
            # ... Job code here ...
            time.sleep(0.5)

        # ... Clean shutdown code here ...
        print('Thread #%s stopped' % self.ident)


class ServiceExit(Exception):
    """
    Custom exception which is used to trigger the clean exit
    of all running threads and the main program.
    """
    pass


def service_shutdown(signum, frame):
    print('Caught signal %d' % signum)
    raise ServiceExit


def main():
    # Register the signal handlers
    signal.signal(signal.SIGTERM, service_shutdown)
    signal.signal(signal.SIGINT, service_shutdown)

    print('Starting main program')

    # Start the job threads
    try:
        j1 = Job()
        j2 = Job()
        j1.start()
        j2.start()

        # Keep the main thread running, otherwise signals are ignored.
        while True:
            time.sleep(0.5)

    except ServiceExit:
        # Terminate the running threads: set the shutdown flag on each
        # thread to trigger a clean shutdown of each thread.
        j1.shutdown_flag.set()
        j2.shutdown_flag.set()
        # Wait for the threads to close...
        j1.join()
        j2.join()

    print('Exiting main program')


if __name__ == '__main__':
    main()
```

Run the above code and press Ctrl-C to terminate it. Below are some remarks which aim to help you better understand how the code snippet above works:

- As mentioned previously, each Job object performs its task in its own thread using an infinite loop. Each Job object has a shutdown_flag attribute (a threading.Event object). On each cycle, the status of the shutdown_flag is checked. As long as the shutdown flag is not set, the thread continues doing its job; when set, the job thread shuts down cleanly.
- ServiceExit is a custom exception. When raised, it triggers the termination of the running job threads.
- In the main() function, the service_shutdown function is registered as the handler for the SIGTERM and SIGINT signals.
- Whenever a SIGTERM or SIGINT is received, the signal handler (the service_shutdown function) raises the ServiceExit exception. When this happens, we handle the exception by setting the shutdown flag of each job thread, which leads to the clean shutdown of each running thread. When all job threads have stopped, the main thread exits cleanly as well.

Using signals to terminate or generally control a Python script that does its work using threads running in a never-ending cycle is very useful. Learning to do it right gives you the opportunity to easily create a single service that performs various tasks simultaneously, for instance system monitoring, and to control it by sending signals externally from the system.

How to terminate running Python threads using signals by George Notaras is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Copyright © 2016 - Some Rights Reserved
Cleanup Settings for Virtual Machine Backups

In this article we discuss the Traditional Incremental Backup Scheme, an important concept in backup engineering, and how it relates to the cleanup of virtual machine backups. The first section of this article discusses the theory behind the incremental backup scheme, and the last few slides show the cleanup in action, depicted over several backup cycles.

What is the Traditional Incremental Backup Scheme?

The traditional incremental backup scheme is a popular scheme because it prioritizes speed over storage usage. It's popular because speed is generally, but not always, more important to the business, and because storage costs are decreasing every year. It's generally cheaper and easier to add more storage than to have backup processes spend additional time reorganizing archives unnecessarily. A major problem with so-called 'forever-incremental' or 'synthetic forever' schemes is that at some point they require a reorganization, or a merge operation of some sort. This merging substantially increases the processing time required to finish a backup cycle. Forever-incremental strategies also rely on the backup pieces already stored in the backup storage, which may have become corrupt or damaged over time.

What is a Backup Chain?

The traditional incremental scheme does not require any additional processing in the backup folder. A full backup, generally compressed, is taken first, followed by a certain number of incremental or differential backups. This forms a backup chain (hence the product name BackupChain). Once the first backup chain is complete, a new one is begun and the process continues in this fashion indefinitely. At some point, to prevent the storage from filling up, a cleanup needs to take place. The cleanup is achieved simply by removing the oldest backup chain in its entirety from the storage. This is a very quick and efficient process.
Pros and Cons of the Traditional Incremental Backup Scheme

The main advantages of the traditional incremental backup scheme are the following. Backups are always written fresh: if, for any reason, files in the backup storage become corrupt, through any means such as bit rot or even vandalism, the software will at some point start a new backup chain automatically. The second main advantage is that the cleanup operation is instantaneous. There is no post-processing required: no merging, no reading, no writing. This is ideal for remote and slow storage, but it also saves considerable amounts of time even with high-speed local backups.

The main disadvantage is that cleanup cannot occur at each backup cycle, because a backup chain can only be deleted in its entirety. While the first backup chain is growing, no cleanup is possible, because items in the chain cannot be individually removed; they are interdependent. In order for a cleanup to occur, there must be at least two backup chains present, and the number of backups remaining after the deletion must be at or above the limit specified in BackupChain. For example, if you wish to keep at least 10 backups (the last 10 backup cycles) in your backup storage, then deleting the first backup chain must leave behind at least 10 backups. Hence, there will have to be far more than 10 backups in the storage before a cleanup can take place and reduce the total to 10 (or more). Please see the last few slides for a depiction of the cleanup process.

An additional caveat presented in the slide above is that very long backup chains are more efficient with storage but require a longer restore process. First there is a full compressed backup, which is then followed by N increments. If the last backup in the chain is to be restored, the entire chain needs to be restored, including all increments. Because each increment refers to the previous one, the storage used is minimal, as each increment only contains the changes since the last backup.
But the restore operation will take longer because it has to work through each increment. The exception here is a differential backup. In a differential backup the comparison is always made to the last full backup; hence, there are at most two steps required for a restore, which is faster. But because the differential backup looks back at the last full backup, it tends to use more and more backup storage as the total number of changed blocks since that full backup increases. Over time, differential backup chains tend to grow large because they are less efficient with storage, i.e. because the percentage of changed blocks between now and the last full backup tends to increase quickly.

For the above reasons, BackupChain allows users to configure the number of increments or differentials per backup chain. In certain situations it makes sense to create very long backup chains; in other scenarios, a short chain may be better. For example, if you know that your huge 4 TB VM has very little change per day, it makes perfect sense to create very long incremental backup chains, especially if the storage is also remote and slow. If the percentage of content change in each backup is high, this results in larger increments per day; if the increments for a VM are rather large, it makes sense to keep backup chains short. By breaking a backup chain, i.e. by starting a new backup chain, you can improve restore speeds and increase the frequency at which a cleanup may occur. The trade-off is that short backup chains are not as storage efficient and 'waste' more backup space, but they involve fewer restore steps and allow more frequent cleanups to occur.

In the above slide we see that even though four backups completed and we want to keep just two, it's still not possible to delete anything. The reason is simple: since a backup chain may only be deleted in its entirety, deleting it would result in just one backup being left over, but the limit clearly defines two backups as the minimum.
Hence, BackupChain waits for more backups to complete. Note that the above slide shows the cleanup occurring at the end of day 5. The fifth backup must complete first; then there are two more backups on top of the first backup chain (3 + 2 = 5). Once that's the case, BackupChain deletes the first backup chain and leaves you with two backups, which is the defined limit in this example.

Let's say you wanted a full backup once a week and your backups run daily. In our example below, we would define the following in the Deduplication tab. In our example above, we chose 6 increments: a full backup plus six increments equals 7, a week. We chose differential above because we know that this particular VM changes the same blocks over and over again (the VM contains a specific kind of database with internals well known to the user). In the File Versioning / Cleanup tab, we define 7 as the minimum because storage is limited. Note that we set the same limit for all types of files involved; in this case it's a VMware virtual machine backup.

Note that, based on the slides and other information mentioned above, the backup will actually keep up to 14 backups in the backup storage. In the long term, there will be a fluctuation between 7 and 14 backups in the backup folder. The reason is that the first backup chain will grow to 7 backups. BackupChain then starts a new backup chain. Once it also reaches 7 backups, the total is now 14, and a cleanup is initiated, because deleting the first 7 backups (the oldest chain) leaves 7 backups over. Before that, a cleanup is not allowed to occur, because deleting the first chain too early would leave you with fewer than 7 backups and would violate the "Minimum Number of File Versions" setting shown above, which is set to 7 in this example.

Delayed Deletion for Snapshots / Checkpoint Cleanup

In the special case of virtual machine backups, you may want to consider setting a 'delayed deletion period'.
Backups are done, among many other things, to prevent data loss caused by accidental deletion. In the case of VM checkpoints, however, the checkpoint may have been deleted purposely. If that's the case, it will be kept indefinitely in the backup storage unless you define a delayed deletion period. If you enter '30 days' (without the quotes) in the above cells that currently show 'never delete', then BackupChain will wait 30 days after it detects that the original files were deleted. In the case of Hyper-V, and because Hyper-V stores checkpoints as AVHD and AVHDX files, you would define rules for *.AVHD and *.AVHDX to handle the checkpoint cleanup after 30 days, or whatever time frame makes sense in your setting.

If you choose to use a delayed deletion period, it's very important that you define the period to be much longer than your regular cleanup triggered by 'minimum number of file versions'. From our above example we know that backups will be kept for 7 to 14 days. Hence, defining the delayed deletion period as 30 days (which adds some extra slack) is safe. Defining it as less than 14 days, say 7 days, would not be safe, as the oldest backups might have contained checkpoints and be over 7 days old.
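To make the retention arithmetic above concrete, the scheme can be simulated in a few lines. This is a Python sketch of the behavior as described in this article, not BackupChain's actual implementation:

```python
def simulate(days, chain_length, min_versions):
    """Simulate one backup per day under the traditional incremental scheme.

    A new chain starts once the current one holds chain_length backups.
    After each backup, the oldest chain is deleted in its entirety, but
    only if at least min_versions backups remain afterwards.
    Returns the backup count in storage right after each day's backup.
    """
    chains, totals = [], []
    for _ in range(days):
        if not chains or len(chains[-1]) == chain_length:
            chains.append([])              # a new chain starts with a full backup
        chains[-1].append("backup")
        totals.append(sum(len(c) for c in chains))
        # Cleanup: whole chains only, oldest first, respecting the minimum.
        while len(chains) > 1 and sum(len(c) for c in chains[1:]) >= min_versions:
            chains.pop(0)
    return totals

totals = simulate(days=30, chain_length=7, min_versions=7)
# Once the first chain has filled up, storage fluctuates between 7 and 14.
print(min(totals[6:]), max(totals[6:]))  # 7 14
```

With chain_length=7 and min_versions=7 the peak is 14 backups: the cleanup only fires when deleting the oldest 7-backup chain still leaves the required 7 behind, exactly as in the example above.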
Re: GTK frontend updated, please test unofficial miniiso

Eddy Petrişor wrote:

On 10/26/05, Attilio Fiandrotti <email@example.com> wrote:

- In partman's main menu, partitions could be displayed as children of the disk drives: this can be obtained by hacking the frontend a bit, so as not to have to change the partman templates.

For this exact reason (the interface must know the backend), debconf's specifications should be changed to support buffered questions. JoeyH, any chance of that happening?

My target is doing this without having to touch partman, just adding some "intelligence" to the frontend to let it understand which rows are related to a drive and which rows are a partition inside the drive (like I did for the question that asks you to choose a nation ("child"), where continents are "fathers").

I'll report those bugs to Mike Emmel, but since he is working on GTKDFB 2.8 I think he won't have the time to correct them in GTKDFB 2.0.9 :(

- You can't select in lists by typing the first letter (as in the newt frontend) on GTKDFB (any version; on GTKX it works), since this is an unimplemented feature in GTKDFB:

Gdk-DirectFB-WARNING **: gdk_display_request_selection_notification

This is a major setback compared to what the newt frontend offers.

- Testing on archs different from i386: at the moment this is the only arch where miniiso images are known to be built correctly. EddyP is known to have made some experiments on PPC, but it seems he's stuck with segfaults that prevent the frontend from working properly.

Well, actually I managed to build an image without library reduction, but it presented the flashing effect (FB is initiated and then crashes).
I didn't have time to investigate this matter to see what happened (I ran the installer from the tmp directory of the build and my system blocked - otoh, the mouse pointer was displayed and it moved).

My 2 cents' contribution on this: to establish whether the crashes are related to the GTKDFB libs or to the frontend only, try to compile a simple "ghello.c" application on your PPC and run it from inside the miniiso to see if it crashes. When debugging under i386, tests like this helped a lot.

- label_string = malloc(strlen(q_get_description(q)) + 8 );
- hpadbox = gtk_hbox_new (FALSE, 5);

I was always (since I grew some experience in programming) of the opinion that magic numbers are a thing to avoid as much as possible. Use #defines instead; it will make the code more legible.

You're right, those parts of code are too hacky.
UHC Voter Bug Fix

Description

This PR fixes the bug introduced when AHR updated its dataset, and it refactors and removes unneeded code. AHR's dataset has a typo, where the measure name for the 65+ category has an extra space at the end. This caused a bug because our code looks for the 65+ string without extra spacing. This is specific only to the 65+ voter participation line; the other measure names we use from AHR's dataset do not have this issue. This PR replaces the measure name on the data frame with the correct format.

Removed the get_average_determinate_value function. Suggest that we only display breakdowns for presidential elections so that our data is more uniform, since AHR no longer provides demographic breakdowns for midterm elections.

Overall issue: when AHR updates their dataset, sometimes they change the spacing in the names of the measures. When we check for matched rows, extra spacing in the measure name will cause the data frame to leave out those rows, because we check for string literals. This creates unexpected null values. We should introduce checks to counter this.

Motivation and Context

fixes #1732

How has this been tested?

I tested this locally using pytest and on gcloud.

Screenshots (if appropriate):

This created a null value specifically for the 65+ category:

No more demographic breakdowns for the Midterm section:

Types of changes

Bug fix (non-breaking change which fixes an issue)

@eriwarr Good work Eric. I think we still need the get_average_determinate_value function, as it is used to generate a single voter_participation value for each demographic group. The way the function works is that if there are two values it averages them; otherwise, as in the case of 65+, it only uses the one value that we have. The thing is, the tests should be failing; I am not really sure why they still passed without the function.

Thanks, Josh! I think the tests are still passing because the calculation isn't done until after the row is matched with the demographic group. The function was already bypassing missing midterm values, so I just removed the function altogether and only used the presidential values. I can add the function back to the All portion of the code, but I think it could still be left off the demographic breakdown section. However, I could see the argument for keeping it there if AHR decides to include demographic midterm breakdowns in the future.

I see what you mean. Yeah, I think we should add it back for the ALL calculation.
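The class of bug discussed here, where an exact string comparison silently drops rows whose measure names carry stray whitespace, is easy to demonstrate and to guard against. A minimal Python sketch with made-up measure names, not the project's actual code:

```python
# Hypothetical rows as they might arrive from an upstream dataset;
# note the trailing space in the second measure name.
rows = [
    {"measure": "Voter participation (presidential)", "value": 61.3},
    {"measure": "Voter participation (ages 65+) ", "value": 71.9},
]

def match_rows(rows, measure_name):
    # A string-literal comparison misses rows with extra spacing.
    return [r for r in rows if r["measure"] == measure_name]

def match_rows_normalized(rows, measure_name):
    # Stripping both sides makes the match robust to spacing changes.
    return [r for r in rows if r["measure"].strip() == measure_name.strip()]

print(len(match_rows(rows, "Voter participation (ages 65+)")))             # 0 -> unexpected nulls
print(len(match_rows_normalized(rows, "Voter participation (ages 65+)")))  # 1
```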
How Long do Video Games Take to Make?

Video games have become one of the most lucrative entertainment businesses all over the world. According to Forbes, the video game industry hit a record $139 billion in revenue in 2020. As the industry continues to grow, more people are interested in knowing how long it takes to develop a video game.

The Process of Making a Video Game

The development of video games is an extensive process that requires time and patience. The process typically involves five stages:

1. Defining and Conceptualizing

The first stage in developing a video game is brainstorming game ideas. Developers can come up with numerous ideas, but only a few get approved for development. Once an idea has been approved, the team begins developing concept art and storyboards for the game.

2. Design & Planning

After defining the concept, developers create a design document that outlines the gameplay mechanics and technical aspects of the game. They also develop a prototype with basic controls and gameplay loops to test the concept and ensure that it's feasible. Additionally, they establish technical requirements such as graphic quality, frame rate, and compatibility with different devices.

3. Production

In this stage, game developers create assets like characters, levels, and sound effects used in-game. They also write the code that enables players to interact with these assets while playing the game.

4. Quality Assurance

This stage involves testing and debugging different aspects of the game to identify issues that require fixing before the launch date.

5. Launch & Post-Launch

This final stage includes preparing for launch by marketing the game through trailers or demos. After launching, developers need to address any bugs or issues that players encounter while playing the game by releasing updates or patches.

Factors Which Affect the Timeframe for Game Development

Several factors can determine how long it takes to develop a video game.
These include: - Game Scope and Complexity: The more complex the game, the longer it takes to develop. An action-adventure game like Uncharted 4: A Thief’s End takes about three years to complete because of its elaborate storyline and advanced graphics. - Team Size and Experience: The size of a development team can also influence the time taken to create a video game. Large teams have more resources at their disposal and can divide tasks among their members to speed up production. Experience is equally important since an experienced team may work faster than inexperienced ones. - Technology/ Technical Requirements: New technology may take longer to incorporate into a game’s development process. For example, creating a VR game with sophisticated interactive gameplay experience requires significant technical expertise, taking much more time than traditional methods of gaming. - Collaboration with External Partners: Games with collaborations require extra time during the development process because of the need to negotiate agreements and merge resources between multiple internal and external studios. Case Studies: Types of Games and Their Development Timeframes The time frame for developing games differs across titles, genres, budgets, and teams involved in their development. Here are some examples of various types of games and their development timelines: The Elder Scrolls V: Skyrim Skyrim is an open-world role-playing game that took Bethesda Game Studios about three and a half years to develop. The game made its debut in November 2011 and has since gained immense popularity because of its stunning graphics, vast world-building, and captivating gameplay. Grand Theft Auto V Rockstar Games’ Grand Theft Auto V took about five years to develop before its release in 2013. The game offered various gameplay mechanics like multiplayer online gameplay, exploration of elaborate environments, and incredible attention to detail. 
The independent game Braid took two guys less than three years to create. Built with basic tools such as Adobe Flash and Photoshop, the game was successful because of its magnificent storytelling, challenging puzzles, and unique gameplay mechanics.

Celeste is an award-winning indie game developed by Matt Makes Games. It took the team four years to complete and was initially released on January 25, 2018. The game received positive reviews because of its mix of platforming challenges and emotional storyline.

Challenges/Mistakes in Game Development

The video game industry is fraught with several challenges that make it difficult for developers to meet deadlines or produce high-quality games.

- Communication Breakdowns Within Teams: Miscommunication between team members can lead to significant delays and mismanagement during the development process. For example, a mistake made during the design phase ended up costing Rare around £7m ($9m) during the development of Perfect Dark Zero.
- Insufficient Manpower: Developers may face manpower shortages or lack access to funding for their games, which also leads to mismanagement.
- Tight Deadlines: Developers may rush to meet tight deadlines, which can negatively affect a game's overall quality.
- Inadequate Quality Assurance: Inept quality assurance testing can lead to players encountering several bugs, and it takes time before updates or patches are released. This happened with many titles, like Fallout 76 and No Man's Sky.

In conclusion, the time taken to develop a video game varies due to several factors such as scope, team size, experience, technology, and collaboration. The development team should organize itself effectively throughout the entire process to ensure timely delivery of high-quality games. Video game development is a craft that requires precise teamwork, planning, and execution. Understanding the processes involved helps us appreciate the creation of these fantastic entertainment products that we all enjoy.
How Long do Video Games Take to Make: Frequently Asked Questions 1. What is the average length of time it takes to develop a video game? The length of time it takes to develop a video game largely depends on the size and complexity of the game being developed. However, on average, it can take anywhere from 18 months to 5 years or more. 2. Why does video game development take so long? The process of developing a video game involves many complex stages such as concept creation, game design and development, coding and programming, art and animation, sound effects, testing, and bug fixes. As such, all these phases require significant resources, time, effort, and coordination. 3. Does the size of the development team affect the timing of video game creation? Yes. The number of team members on a video game development project varies based on the project scope and complexity. Generally speaking, larger projects require bigger teams with expertise in coding, design, audio-visuals, etc. Thus having a smaller team will mean more time for each individual to cover each aspect. 4. Are indie games quicker to develop than AAA titles? Indie games are smaller in scope; however, there is no set formula for how long it takes to build them since the individual workload may vary widely depending on how experienced your team is or which technologies you work with. AAA games are typically produced faster than indie games because they tend to have larger budgets and more established teams with vast experience. 5. Is there any way to speed up video game development timelines? While developing a video game according to its scope is required; working smartly instead of harder can usually save a lot of time. Utilizing pre-made game engines such as Unity or Unreal Engine could save a lot of time and effort. Moreover, these engines also provide a vast collection of assets, animation tools, effects, and sound quality that your project requires. 6. 
Can video game development schedules change during the production process? Yes. Like any other project deadline, game development schedules may sometimes change during production based on unforeseen circumstances such as changes in technology, budget cuts, health crises, or unpredictable market shifts. All team members have to work together to manage timelines in the best way possible.

7. Are there any factors that contribute to longer turnaround times for video game development?

1. The complexity and scope of the game
2. Budget constraints
3. Technical hurdles and difficulties in the programming or design process
4. Resource imbalances such as inadequate manpower or hardware requirements
5. Poor communication and coordination among team members
6. Changes in direction and scope at various phases of the project life cycle
7. Working remotely due to global pandemics

We hope that these FAQs shed some light on how long video games take to make!

How Long Does it Take to Make a Video Game?

Creating a video game is a complex process that can take years to complete. Here are 4 key takeaways on how long it takes to make a video game:

- Size and Scope: The size and scope of a game can greatly impact development time. Small, indie games may take a few months while large, AAA titles can take several years.
- Team Size: The number of people working on a game plays an important role in its development time. A small team of developers may take longer than a larger team with more resources.
- Technology: The technology used to create a game can help speed up development time. Using advanced toolsets and software can make development faster.
- Experience: Experienced developers with knowledge of the game development process can help speed up production time. Teams that have worked together before may also be able to develop games faster due to efficiency and familiarity with each other's work processes.
In conclusion, the time it takes to create a video game depends on several factors such as the project’s size and scope, team size, technology used, and the experience of the developers working on it.
<?php

namespace Fnash\GraphqlOnRestBundle\GraphQL\Type;

use GraphQL\Type\Definition\InterfaceType;
use GraphQL\Type\Definition\ObjectType;
use GraphQL\Type\Definition\UnionType;

/**
 * Class TypeRegistry.
 */
class TypeRegistry
{
    /**
     * @var array
     */
    private static $types = [];

    /**
     * @var array
     */
    private static $interfaces = [];

    /**
     * @var array
     */
    private static $unions = [];

    /**
     * @param $fqcnType
     * @param array $arguments
     *
     * @return ObjectType
     */
    public static function get($fqcnType, array $arguments = []): ObjectType
    {
        if (!array_key_exists($fqcnType, static::$types)) {
            if (!class_exists($fqcnType)) {
                throw new \BadMethodCallException(sprintf('%s is not a defined ObjectType', $fqcnType));
            }

            static::$types[$fqcnType] = new $fqcnType(...$arguments);
        }

        return static::$types[$fqcnType];
    }

    /**
     * @param $fqcnType
     * @param array $arguments
     *
     * @return InterfaceType
     */
    public static function getInterface($fqcnType, array $arguments = []): InterfaceType
    {
        if (!array_key_exists($fqcnType, static::$interfaces)) {
            if (!class_exists($fqcnType)) {
                throw new \BadMethodCallException(sprintf('%s is not a defined InterfaceType', $fqcnType));
            }

            static::$interfaces[$fqcnType] = new $fqcnType(...$arguments);
        }

        return static::$interfaces[$fqcnType];
    }

    /**
     * @param $fqcnType
     * @param array $arguments
     *
     * @return UnionType
     */
    public static function getUnion($fqcnType, array $arguments = []): UnionType
    {
        if (!array_key_exists($fqcnType, static::$unions)) {
            if (!class_exists($fqcnType)) {
                throw new \BadMethodCallException(sprintf('%s is not a defined UnionType', $fqcnType));
            }

            static::$unions[$fqcnType] = new $fqcnType(...$arguments);
        }

        return static::$unions[$fqcnType];
    }
}
A transaction (task) may execute several programs in the course of completing its work. The program definition contains one entry for every program used by any application in the CICS® system. Each entry holds, among other things, the language in which the program is written. The transaction definition has an entry for every transaction identifier in the system, and the important information kept about each transaction is the identifier and the name of the first program to be executed on behalf of the transaction.

You can see how these two sets of definitions, transaction and program, work in concert:

- The user types in a transaction identifier at the terminal (or the previous transaction determined it).
- CICS looks up this identifier in the list of installed transaction definitions. This tells CICS which program to invoke first.
- CICS looks up this program in the list of installed program definitions, finds out where it is, and loads it (if it isn't already in main storage).
- CICS builds the control blocks necessary for this particular combination of transaction and terminal, using information from both sets of definitions. For programs in command-level COBOL, this includes making a private copy of working storage for this particular execution of the program.
- CICS passes control to the program, which begins running using the control blocks for this terminal. This program may pass control to any other program in the list of installed program definitions, if necessary, in the course of completing the transaction.

There are two CICS commands for passing control from one program to another. One is the LINK command, which is similar to a CALL statement in COBOL. The other is the XCTL (transfer control) command, which has no COBOL counterpart. When one program links to another, the first program stays in main storage. When the second (linked-to) program finishes and gives up control, the first program resumes at the point after the LINK.
The linked-to program is considered to be operating at one logical level lower than the program that does the linking. In contrast, when one program transfers control to another, the first program is considered terminated, and the second operates at the same level as the first. When the second program finishes, control is returned not to the first program, but to whatever program last issued a LINK command. Some people like to think of CICS itself as the highest program level in this process, with the first program in the transaction as the next level down, and so on. Figure 1 illustrates this concept. The LINK command looks like this:

EXEC CICS LINK PROGRAM(pgmname)
               COMMAREA(commarea)
               LENGTH(length)
END-EXEC.

where pgmname is the name of the program to which you wish to link, and commarea is the name of the area containing the data to be passed and/or the area to which results are to be returned. The COMMAREA interface is also an option for invoking CICS programs. A sound principle of CICS application design is to separate the presentation logic from the business logic; communication between the programs is achieved by using the LINK command, and data is passed between such programs in the COMMAREA. Such a modular design provides not only a separation of functions, but also much greater flexibility for the Web enablement of existing applications using new presentation methods.
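The difference between LINK and XCTL is easiest to see as a stack of logical levels. The following is a toy Python model of that control flow, not CICS code; all names are hypothetical. LINK pushes the caller so control returns to it later, while XCTL replaces the current program without pushing, so a later return goes to whatever program last issued a LINK.

```python
# Toy model of CICS logical levels: LINK pushes the caller onto a
# return stack; XCTL replaces the current program without pushing.
class Region:
    def __init__(self):
        self.stack = []      # programs awaiting a return, lowest level first
        self.current = None
        self.trace = []

    def start(self, program):
        self.current = program
        self.trace.append(f"start {program}")

    def link(self, program):
        # Caller stays in storage and resumes after the linked-to program ends.
        self.stack.append(self.current)
        self.current = program
        self.trace.append(f"link {program}")

    def xctl(self, program):
        # Caller is terminated; the new program runs at the same level.
        self.current = program
        self.trace.append(f"xctl {program}")

    def ret(self):
        # RETURN goes to whatever program last issued a LINK.
        self.current = self.stack.pop() if self.stack else None
        self.trace.append(f"return to {self.current}")

r = Region()
r.start("PROGA")
r.link("PROGB")   # PROGA is suspended one logical level up
r.xctl("PROGC")   # PROGB is terminated; PROGC runs at PROGB's level
r.ret()           # control returns to PROGA, not PROGB
print(r.current)  # PROGA
```

After the XCTL, PROGB is gone; the return from PROGC lands in PROGA, the last program to issue a LINK, which is exactly the behaviour the paragraph above describes.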
OPCFW_CODE
My arduino seems to have a stutter

Hi @rwaldron Again great work on etherport, I have been playing a little with some simple REPL and have noticed some "stuttering" on the arduino. Here is a little sample code:

var five = require("johnny-five");
var EtherPort = require("etherport");
var myBoard, led;

myBoard = new five.Board({
  port: new EtherPort(3030)
});

myBoard.on("ready", function() {
  led = new five.Led(5);

  // make led available as "led" in REPL
  this.repl.inject({
    led: led
  });
});

I first noticed a slight issue on led.strobe(200): the flashes seemed a little inconsistent. I then tried a pulse and noticed the fading again was a little inconsistent. Then I got the error in the screenshot. Again not sure if this is a direct result of running over ethernet or not, but I didn't notice this behaviour previously? Many thanks Andy

That error generally means that the IO plugin (in this case, just the built-in Firmata.js dependency) hasn't emitted a "ready" event in a reasonable amount of time, which, in this case, is usually caused by a problem on the board, more often than not due to missing StandardFirmata or whatever version of Firmata is on the board. How are you powering the board?

I first noticed a slight issue on led.strobe(200): the flashes seemed a little inconsistent.

My first guess would be transport lag over ethernet? @soundanalogous may know better

A transport lag is possible. There hasn't been a lot of work on Firmata over Ethernet so I'm not sure what all of the possible issues are yet. I'll keep a watch on this. The thing to do now (since StandardFirmataEthernet has not been 'officially' released yet) is to try to break it and report any issues. I've only tried blinking an LED and reading an analog input so far, but more complex cases need to be tested for sure. Also if anyone has an ENC28J60-based ethernet shield and can test it with StandardFirmataEthernet and EtherPort that would be helpful.
See Option C in the Network Configuration instructions in the StandardFirmataEthernet.ino file.

@soundanalogous I've got an enc board too, I'll give it a go and let you know. I know space is a real issue with that board too, can you recommend lines to comment out?

There are no lines to comment out in StandardFirmataEthernet. ENC28J60 is known to work with ConfigurableFirmata but has not yet been tested with StandardFirmataEthernet so I'm looking for confirmation that it works there.

Hi @soundanalogous The first thing I notice after commenting the correct lines for UIP is that the error check is not detecting the selection correctly:

StandardFirmataEthernet.ino:129:2: error: #error "you must uncomment the includes for your board configuration. See OPTIONS A, B and C in the NETWORK CONFIGURATION SECTION"

I commented this check out altogether and I got it to flash OK with the expected memory warning. However when I tried to run the same sample code to control an led I got the error below: Many thanks Andy

I had not compiled StandardFirmataEthernet with the UIPEthernet library yet. It turns out that library uses too much memory to use reliably with an Arduino Uno or similar board. It will have to be used with an Arduino Mega or other board with a good amount of RAM.

Hi @rwaldron / @soundanalogous Just an update: I have installed ConfigurableFirmata to see if there was any difference; the led functions still seem a little unreliable using the code sample I posted previously.
However I have tried this with a DS18B20 and had great results, this was my code:

var five = require("johnny-five");
var EtherPort = require("../");
var board = new five.Board({
  port: new EtherPort(3030)
});

board.on("ready", function() {
  // This requires OneWire support using the ConfigurableFirmata
  var temperature = new five.Temperature({
    controller: "DS18B20",
    pin: 2
  });

  temperature.on("data", function(err, data) {
    console.log(data.celsius + "°C", data.fahrenheit + "°F");
  });
});

I wonder is there something that could be causing issues specifically with the led implementation? I haven't had any timeouts like last time, and ConfigurableFirmata is using considerably more memory even with a few items removed e.g. servos etc. PS I am using the normal ethernet shield not the enc28j60, although I could try it again. Many thanks Andy

Interesting, I'm not sure what to make of that. I'll try to take a closer look sometime this week. Thanks for the update :)

Hello, any news? I have tried something remote, I tried ENC28J60, also ESP8266, all with Arduino, but no success. Well, Firmata is too large; Wire and Servo had to be commented out, 'cause I use the Arduino Leonardo
GITHUB_ARCHIVE
Re: Help with : send "hello world" ASCII text to Linux serial port and read into a text file in java virtual machine...

Hello. Excuse the fact that I'm a newbie. I have a linux v 2.4 box with a serial port, something like ttyS0. I have a java VM running. I want to be able to pass the phrase "hello world" that is sent to the serial port with hyperterminal from a PC to a .java program running inside the java virtual machine. I'm not sure if I'm on the right track with: FileInputStream, fis read() I guess the idea is to read 1 character at a time and build up a file like textfile.txt I can only use printable characters. Maybe I need some flags like: start_of_text I just want to be able to read the text file after it has been sent, and the "hello world" is now there! Any help with this would be greatly appreciated!!! Thanks in advance.

I think you should try opening the device special file and reading from it. See what happens. One of the fundamentals of UNIX/Linux devices is that they should appear to a program as any regular file, and a device driver provides the four fundamental operations of open/close/read/write to do this. Devices fall into two basic categories, character and block, and a tty is a character special file so should be read byte-by-byte. So, if the device /dev/ttyS0 behaves as it should, you ought to be able to open and read from it as though it were a plain text file. But don't rely on any other file operations beyond those basic four, though. I think you are on the right track with FileInputStream and its read() method. This should be fine for ASCII data. If you need to structure the data into some kind of "records" with start-of-text and end-of-text then you will need to define that protocol yourself. About the only control information you'll receive will be an EOF indication when whatever is writing to the device closes it. Have a look at the section "Basic I/O" in the Java Tutorial for some simple examples of how to do I/O.
As I said above, you ought to be able to substitute any device special file in UNIX/Linux for a regular file where basic read/write is concerned. There are, as always, exceptions and caveats to that rule. Whether a plain tty is one of them, only experience will tell. Nigel Wade, System Administrator, Space Plasma Physics Group, University of Leicester, Leicester, LE1 7RH, UK E-mail : email@example.com Phone : +44 (0)116 2523548, Fax : +44 (0)116 2523555
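The byte-by-byte read loop Nigel describes looks the same whether the source is a regular file or a character device. Here is a minimal Python sketch of the same pattern (standing in for the Java version); `io.BytesIO` stands in for `open("/dev/ttyS0", "rb")` so the sketch runs anywhere, and the loop is identical for the real device.

```python
import io

def read_until_eof(stream):
    """Read one byte at a time, as from a character device, until EOF."""
    chunks = []
    while True:
        b = stream.read(1)   # one byte per read, like a tty
        if not b:            # an empty read signals EOF
            break
        chunks.append(b)
    return b"".join(chunks)

# A BytesIO object stands in for the serial device here; on a real box
# you would pass open("/dev/ttyS0", "rb") instead.
fake_tty = io.BytesIO(b"hello world")
text = read_until_eof(fake_tty).decode("ascii")
print(text)  # hello world
```

Any record structure (start-of-text, end-of-text markers) would be layered on top of this loop by checking each byte as it arrives, exactly as the reply suggests.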
OPCFW_CODE
"You will love him to ruins."

You will love him to ruins. This is the first sentence of the book The Evolution of Mara Dyer by Michelle Hodkin. Could somebody tell me what that means? Here is the sentence in context:

YOU WILL LOVE HIM TO RUINS. The words echoed in my mind as I ran through clots of laughing people. Blinking lights and delighted screams bled together in a riot of sound and color. I knew Noah was behind me. I knew he would catch up. But my feet tried to do what my heart couldn’t; they tried to leave him behind. I finally ran out of breath beneath a leering clown that pointed to the entrance to the Hall of Mirrors. Noah caught up to me easily. He turned me to face him and I stood there, my wrist in his grasp, my cheeks wet with tears, my heart splintered by her words. If I truly loved him, she said, I would let him go. I wished I loved him enough.

Please check that you’ve spelled everything correctly. Please also edit into the question the research you have already done. It would probably also help to say where you found your sentence, because it's not exactly idiomatic English.

I had read it in a book.

It sounds like someone complaining to the parent of a child that they are giving the infant too much love, which will result in his becoming spoiled. This is not an unknown narrative in Anglo society. But most professionals will tell you that children are far more likely to be damaged by indifferent, neglectful or excessively strict parenting than by an over-reach of love.

OK. A book. Please say which book; and possibly a little more context -- a couple of sentences before and after that one. Give the community as much information as you possibly can. As I said, this is not idiomatic. (It was ruins, was it, and not ruin?) AND, what have you already done to find the answer?

It is the first sentence in the book, The Evolution of Mara Dyer. The preface. The second sentence: The words echoed in my mind as I ran through clots of laughing people.
Is it perhaps a too-literal translation of "love him to pieces"? An idiom meaning roughly "love him to an irrational degree".

This whole preface: YOU WILL LOVE HIM TO RUINS. The words echoed in my mind as I ran through clots of laughing people. Blinking lights and delighted screams bled together in a riot of sound and color. I knew Noah was behind me. I knew he would catch up. But my feet tried to do what my heart couldn’t; they tried to leave him behind. I finally ran out of breath beneath a leering clown that pointed to the entrance to the Hall of Mirrors. Noah caught up to me easily. He turned me to face him and I stood there, my wrist in his grasp, my cheeks wet with tears, my heart splintered by her words. If I truly loved him, she said, I would let him go. I wished I loved him enough.

Possible allusion to a Wilde line, from the first act of A Woman of No Importance: "Twenty years of romance make a woman look like a ruin; but twenty years of marriage make her something like a public building."

I googled the phrase and found that this is apparently a well-known, quite popular quote from a trilogy by Michelle Hodkin, which I saw described as a "Psychological Thriller with a Paranormal Twist." I am guessing your question comes from being a non-native speaker of English; forgive me if that's a wrong assumption. I haven't read the book, but I think I can answer this. The feeling that I get from the quote is: This isn't a healthy relationship, and I predict no good will come of it. Maybe she will end up damaged or destroyed; maybe he will; maybe they both will. Here's a definition of in ruins: The state of being extensively harmed or damaged: Our vacation plans are in ruins. See http://www.thefreedictionary.com/ruins (Happy to help, but please do include the author and title, or a link in future -- thanks.)

Thank you for the answer :) yes, I'm a non-native speaker of English.
I'm Hungarian :)

You will love him to ruins means that loving him will either ruin you, ruin him, or ruin you both.

I think this parallels a Scots usage in which people say colloquially "I love him to bits". It implies that the love is strong and will endure until things fall apart - that is, for a very long time.
STACK_EXCHANGE
[darcs-users] Patch Theory in action
yarcs at qualitycode.com Thu Nov 20 02:43:46 UTC 2003

I'm trying to figure out patch theory, and specifically at the moment, dependencies. I think the patch theory section of the manual would be far more understandable if it had concrete examples to augment the theory. So here's an email I sent to a friend/coworker, as I try to grasp the simple aspects of how darcs generates and merges patches. Something like this might be helpful in a section of the manual for people who want to know (basically) what darcs is doing internally. I would love to hear feedback on this. Hopefully my explanation is accurate. Perhaps it is even helpful :-) There are a couple things I don't understand:

1) How did it figure out that the Change patch depended on the Move patch?
2) As a darcs user, how would I know when I should manually specify a dependency?

I created an empty repository (repo), and did the following:

echo hello >hello
darcs mv hello goodbye
echo goodbye >goodbye

So now I have a file named goodbye that contains the word goodbye. There are three patches in my repository, which I'll call Add, Move, and Change. The Change patch appears as a Hunk diff of the file goodbye. Next, I create a second repo, and 'pull' the Add patch across. Fine. Next, I pull the Change patch. darcs first offers me the Move patch, but I decline it, and only take the Change patch. Now, in this second repository, I still have only a file named hello, but now it contains the text 'goodbye'. Very slick. And I think I actually understand how it did it. Here's part of the trick: The Change patch has the same filename (and hash) in both repos. In the first repo, it specifies a change to the 'goodbye' file. But in the second repo, the patch specifies the 'hello' file. Same semantic patch, but different representations based on the 'context' of the repository it is in. That's patch theory at work.
And that's why you can't include the patch contents (representation) in the hash that is used as the unique identifier of that patch. Any patch might be 'commuted' to fit its new context. It's kind of creepy, though. It means that a patch without context is meaningless (more or less). If I give you that single patch, without giving you access to my repository, it's just a command to modify a specific file. It may do what you want, or it may not. Or, you might receive the same patch from two different people, and it might look completely different, but would still "mean" the same thing (assuming you had access to the context that each of those patches came from). It's only by knowing that the file on my system used to have a different name, and by knowing that both our systems had the original patch that created the file in the first place, that it figures out what to do.
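The rename example above can be modelled in a few lines. This is a hypothetical Python sketch of the idea, not darcs's actual algorithm: when a hunk patch is pulled without the rename it normally follows, commuting it past the rename rewrites the filename in the hunk's representation while the patch's identity stays the same.

```python
# Hypothetical model: commuting a hunk patch backwards past a rename
# rewrites its representation (the filename) but not its identity.
def commute_past_rename(hunk, rename):
    """Re-express a hunk patch in a context where `rename` never happened."""
    patch_id, filename, content = hunk
    old_name, new_name = rename
    if filename == new_name:
        # In a repo without the Move patch, the file still has its old name.
        filename = old_name
    return (patch_id, filename, content)

# Repo 1: Add creates "hello", Move renames it, Change edits "goodbye".
# The identifier "change-7f3a" is made up for the example.
change = ("change-7f3a", "goodbye", "goodbye")
move = ("hello", "goodbye")

# Pulling Change into repo 2 (which declined Move) commutes it past Move:
change_in_repo2 = commute_past_rename(change, move)
print(change_in_repo2)  # ('change-7f3a', 'hello', 'goodbye')
```

Same identifier, different representation in the two repositories, which is exactly why the hash that names a patch cannot cover the patch's contents.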
OPCFW_CODE
As a developer, I am a massive fan of documentation and (as you can probably tell from my previous blog post) also a big fan of Storybook. If you’re interested in what Storybook is and how to set it up, or integrate it into your existing project, you can find out more about that here. However, in this post, I am going to be outlining why you should be using Storybook and each of its features and capabilities. This is in addition to (at the time of writing) some exciting new additions to the library. There is a reason so many big-name brands such as GitHub, Mozilla and Airbnb use Storybook to document their components. The first (and in my opinion the most redeeming) benefit of using the library is that it is the most extensive and detailed way of documenting your components. Within Storybook, components are thought of as ‘alive’. What I mean by this, is that the component within Storybook isn’t just a static snapshot of that component at a certain time, it is the component itself at any given time. So whenever changes are made to that component within the codebase, the story will reflect those changes too. You can also add the integration of ‘controls’ to make your component completely interactive as if it were on the live site. The best part of this is that these controls have a very user-friendly interface, so each user (no matter their technical capabilities) is able to see and interact with the component in every state that it has. I have found that this is a very useful tool to increase collaboration between both development and design & UX teams, as it acts as a single source of truth for all of the components within your codebase. The next benefit (from a developer's perspective) is the great way that it documents the technicalities of the component.
We’ve all been there: you’ve worked on a large codebase for a while and you come across a component that you haven’t seen in a long time and you’re not sure what the hell it even does. Similarly, you could be a developer that is new to a pre-existing codebase and need to familiarise yourself with the components available to you and how they work. This is where the ‘docs’ Storybook addon comes into play. With story docs, you can add a description of the element as well as being able to view it in a sandbox. The best part of this though, is that each of the props that are passed through to this component are laid out in a grid with descriptions, so you can easily view the data and variables that go into a component and how they affect that component if amended. Speaking about docs, in addition to being able to add documentation for each component on a per-story basis, with Storybook you can also add what are known as ‘doc blocks’. Doc blocks are separate pages that sit outside of the component folders. They are used to add much more generalised information about the project, such as an introduction page, or a theming page such as typography. It is completely up to you how extensive and in-depth you want these doc blocks to be, but in my opinion, every little (bit of documentation) helps. Now, moving away from documentation for a little bit, another significant benefit of integrating each of your components into Storybook is the testing capabilities that it gives you. With your components now in isolation, visual tests can be created to ensure no visual regression when you are writing your day-to-day code. This, to me, is a very powerful addition to any codebase and can work very well alongside any functional or end-to-end tests you may already have. This leads nicely on to a very exciting feature that (at the time of writing this blog post) is currently in beta mode.
Storybook Interaction Testing will elevate the current testing capabilities within Storybook and allow you to write UI tests within stories and (using the new test runner) execute them. This is a great addition to the visual tests as you will be able to verify both looks and logic at the same time. A great feature of Storybook that I personally benefit from when working day to day is the fact that each component is visible and can be worked on from the sandbox that it lives in within its story. This means that when you are creating a brand new component, instead of needing to add it to a page to work on its UI and styles, all you need to do is make a story for it. This is also fantastic for code review as well, as instead of having to search through a website to try and find the component you are reviewing, you will always be able to find it on your Storybook instance as long as a story has been created for it. There are also a multitude of addons that you can add to your Storybook integration in order to elevate its capabilities even further. I will list some of my favourites, but there are many more that you can explore here. The first (and my personal favourite) is an addition that both makes your components more accessible, as well as maintaining any accessibility standards you may already have. storybook-addon-a11y is really simple to install and adds another tab onto your story to allow you to run Axe to let you know where you are going wrong in terms of the component's accessibility. As well as running it on your existing components, you can also use it while you are coding and creating new components, to ensure high accessibility standards are being met at all times. storybook-addon-designs is another fantastic addon that improves collaboration between development and design teams. Using this addon gives you an extra panel on your story, where you can view the design of that particular element directly from Figma via a link.
Alternatively, if you use other design tools on your project (e.g. Adobe XD) there are other addons that work with those too. Another valuable addon for Storybook would be storybook-links-addon. This allows you to navigate between your stories by using links, instead of having to use the sidebar. This makes the user experience much nicer. In conclusion, Storybook itself is a super powerful addition to any project and you can use each of its capabilities to improve code quality & accessibility, increase collaboration between teams and ensure no regression of your existing components using the testing integrations. As mentioned before, if you are now interested in introducing Storybook into your project, as well as using the fantastic docs, you can also view my tutorial blog post to show you how to do that.
OPCFW_CODE
The Greatest Guide To python homework help

Python can also produce graphics easily using “Matplotlib” and “Seaborn”. Matplotlib is the most popular Python library for producing plots and other 2D data visualizations.

Movie Website: We will learn how to make an awesome webpage that lists your favorite movies and shows their trailers.

Once the first set of column values (vj) is known, find other routes of filled cells in these columns. Calculate the next ui (or vj) values using the above equation. In this way, for all rows and columns, ui and vj values are determined for a non-degenerate initial solution.

Jason Brownlee, Ph.D. is a machine learning expert who teaches developers how to get results with modern machine learning methods via hands-on tutorials. See all posts by Jason Brownlee →

Not the answer you're looking for? Browse other questions tagged python scipy or ask your own question.

Build a model on each set of features and compare the performance of each. Consider ensembling the models together to see if performance can be lifted.

The marketing department’s aim is to spread awareness about the hotel in terms of quality of service; hence the marketing strategy will be helpful in doing the same.

...In December 1989, I was looking for a "hobby" programming project that would keep me occupied during the week around Christmas. My office ... would be closed, but I had a home computer, and not much else on my hands.

After 20 hours of structured lectures, students are encouraged to work on an exploratory data analysis project based on their own interests. A project presentation demo will be arranged later.

The risk approach of the hotel clearly indicates that the cost of developing and using new technology is quite high. It needs great commitment on the part of staff to implement new technology and make full use of it properly.

You may embed different models in RFE and see if the results tell the same or different stories in terms of what features to select.

This is termed binding the name to the object. Since the name's storage location doesn't contain the indicated value, it is improper to call it a variable. Names may be subsequently rebound at any time to objects of greatly varying types, including strings, procedures, complex objects with data and methods, etc. Successive assignments of a common value to multiple names, e.g., x = 2; y = 2; z = 2, result in allocating storage to (at most) three names and one numeric object, to which all three names are bound. Since a name is a generic reference holder it is unreasonable to associate a fixed data type with it. However at a given time a name will be bound to some object, which will have a type; thus there is dynamic typing. (McIvor, R., Humphreys, P. & McAleer, W., 1997.)

Another legal issue is the termination of contract; here both the parties sign an agreement or a contract which mentions that in case the hotel does not like the work or the work ethics of the supplier, they have the right to terminate the services and would pay only for the approved work done by them. (Mulgan, Richard 1997). One of the legal issues is the ownership rights; here the owner, that is the supplier, is given the right to make some modifications in the end product and also to own the product; however, all the main usage rights are with the hotel. The supplier keeps the rights to use the work to showcase it in his portfolio.

Hence, add a “dummy destination” (say D5) with zero transportation cost and balance the demand, which is the difference in supply and demand (= 100 units).
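Balancing an unbalanced transportation problem with a dummy destination is mechanical. The sketch below uses made-up cost and supply/demand numbers for illustration; only the 100-unit supply surplus mirrors the text.

```python
def add_dummy_destination(cost, supply, demand):
    """If total supply exceeds total demand, append a zero-cost dummy
    destination column that absorbs the difference, balancing the problem."""
    gap = sum(supply) - sum(demand)
    if gap > 0:
        for row in cost:
            row.append(0)           # dummy destination D5: zero cost everywhere
        demand = demand + [gap]     # it absorbs the excess supply
    return cost, supply, demand

# Hypothetical data: supply outstrips demand by 100 units, as in the text.
cost = [[4, 6, 8], [6, 8, 6], [5, 7, 6]]
supply = [450, 400, 450]            # total 1300
demand = [400, 450, 350]            # total 1200

cost, supply, demand = add_dummy_destination(cost, supply, demand)
print(demand)  # [400, 450, 350, 100]
```

After balancing, total supply equals total demand and any standard method (northwest corner, Vogel's approximation, MODI) can be applied; units shipped to D5 simply stay at their origin.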
OPCFW_CODE
Why does feedforwardnet(int32(8)) raise an error while feedforwardnet(double(8)) doesn't?

In MATLAB, feedforwardnet(8) creates a feedforward network with one hidden layer containing 8 hidden neurons. MATLAB stores numeric data as double-precision floating point (double) by default. Therefore feedforwardnet(8) is equivalent to feedforwardnet(double(8)). However, feedforwardnet(int32(8)) will raise the following error:

Undefined function or variable 'ind'.
Error in network/subsasgn>setLayerSize (line 1170)
  err = sprintf('"layers{%g}.size" must be a positive integer.',ind);
Error in network/subsasgn>network_subsasgn (line 180)
  if isempty(err), [net,err] = setLayerSize(net,i,newSize); end
Error in network/subsasgn (line 13)
  net = network_subsasgn(net,subscripts,v,netname);
Error in feedforwardnet>create_network (line 116)
  net.layers{i}.size = param.hiddenSizes(i);
Error in feedforwardnet (line 69)
  net = create_network(param);

Why won't feedforwardnet() take an int32 as argument? Equivalently, why doesn't isposint() (in network/subsasgn.m, line 1169) return true when given int32(8) as argument? The code was tested with MATLAB 2011a, 2012a and 2012b.

Edit (at your own risk, not tested thoroughly) isposint.m on line 9:

% if ~isa(v,'double') | any(size(v) ~= [1 1]) | ...
if ~isnumeric(v) | any(size(v) ~= [1 1]) | ...

Both go through:

a = feedforwardnet(8);
b = feedforwardnet(int32(8));

Little change in size:

>> whos
  Name      Size      Bytes    Class      Attributes
  a         1x1       31224    network
  b         1x1       30968    network

Interesting, thanks! I guess the change in size is due to the fact that int32 is 4 bytes, while double is 8 bytes. Yes, the point here is that you actually don't gain much in terms of memory nor performance. Unless you are really constrained to use int I would go for double. There are no problems casting int to double, as you can check with intmax('uint64') and realmax('double'), but you incur a slight performance penalty.
Well, doing double(intmax('uint64')) you lose precision. But I agree this is not a big deal unless your network is the human brain ;-)

Right, got fooled (starts losing precision at 2^53 while uint64 is max at 2^64-1).
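The root cause generalizes beyond MATLAB: a validator that tests for one concrete type instead of "is a numeric scalar" rejects values that would work fine. Below is a rough Python analogue of the isposint check and the suggested fix; the function names are hypothetical and only mirror the idea of the edit described above, where float plays the role of MATLAB's default double.

```python
# Rough analogue of MATLAB's isposint: the strict version insists the
# value is a float (like MATLAB's default double); the fixed version
# accepts any real numeric scalar, mirroring the isnumeric-style check.
def isposint_strict(v):
    # like: ~isa(v,'double') -> reject anything that isn't a double
    return isinstance(v, float) and v > 0 and v == int(v)

def isposint_fixed(v):
    # like: ~isnumeric(v) -> accept any numeric scalar
    return isinstance(v, (int, float)) and not isinstance(v, bool) \
        and v > 0 and v == int(v)

print(isposint_strict(8.0))  # True  - like feedforwardnet(double(8))
print(isposint_strict(8))    # False - like feedforwardnet(int32(8)) failing
print(isposint_fixed(8))     # True  - the relaxed check accepts it
```

The value 8 is a perfectly good positive integer in both cases; only the type test differs, which is why the one-line edit to isposint.m makes feedforwardnet(int32(8)) go through.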
STACK_EXCHANGE
Cross compilation fails to configure gmp correctly

GMP's configure failed when attempting to cross-compile on amd64 targeting arm-linux-gnueabihf. It appears that the current make-based build system passes --host=arm-unknown-linux to gmp's configure script, whereas Hadrian passes the host triple. I believe the patch fixing this is simple enough,

diff --git a/src/Settings/Builders/Configure.hs b/src/Settings/Builders/Configure.hs
index 93225b5..8f88bfd 100644
--- a/src/Settings/Builders/Configure.hs
+++ b/src/Settings/Builders/Configure.hs
@@ -9,10 +9,10 @@ configureBuilderArgs = do
     gmpPath    <- expr gmpBuildPath
     libffiPath <- expr libffiBuildPath
     mconcat [ builder (Configure gmpPath) ? do
-                  hostPlatform   <- getSetting HostPlatform
                   buildPlatform  <- getSetting BuildPlatform
+                  targetPlatform <- getSetting TargetPlatform
                   pure [ "--enable-shared=no"
-                       , "--host=" ++ hostPlatform
+                       , "--host=" ++ targetPlatform
                        , "--build=" ++ buildPlatform ]
             , builder (Configure libffiPath) ? do

However, I quickly ran into trouble while trying to test this. I'll report this in another ticket.

@bgamari Thanks for the report and proposed fix. Hope we'll find a way to fix the linked issue!

@bgamari Could you send the PR with the suggested fix?
_build/stage0/bin/arm-linux-gnueabihf-ghc -Wall -hisuf hi -osuf o -hcsuf hc -static -hide-all-packages -no-user-package-db '-package-db _build/stage1/lib/package.conf.d' '-this-unit-id integer-gmp-<IP_ADDRESS>' '-package-id ghc-prim-<IP_ADDRESS>' -i -i_build/stage1/libraries/integer-gmp/build -i_build/stage1/libraries/integer-gmp/build/autogen -ilibraries/integer-gmp/src/ -Iincludes -I_build/generated -I_build/stage1/libraries/integer-gmp/build -I_build/stage1/libraries/integer-gmp/build/include -Ilibraries/integer-gmp/include -I/home/zhen/repos/ghc/_build/stage1/lib/arm-linux-ghc-8.5.20180425/rts-1.0/include -I_build/generated -optc-I_build/generated -optP-include -optP_build/stage1/libraries/integer-gmp/build/autogen/cabal_macros.h -optc-marm -optc-fno-stack-protector -odir _build/stage1/libraries/integer-gmp/build -hidir _build/stage1/libraries/integer-gmp/build -stubdir _build/stage1/libraries/integer-gmp/build -optc-std=c99 -optc-Wall -optc-marm -optc-fno-stack-protector -optc-Iincludes -optc-I_build/generated -optc-I_build/stage1/libraries/integer-gmp/build -optc-I_build/stage1/libraries/integer-gmp/build/include -optc-Ilibraries/integer-gmp/include -optc-I/home/zhen/repos/ghc/_build/stage1/lib/arm-linux-ghc-8.5.20180425/rts-1.0/include -Wnoncanonical-monad-instances -optc-Werror=unused-but-set-variable -optc-Wno-error=inline -c libraries/integer-gmp/cbits/wrappers.c -o _build/stage1/libraries/integer-gmp/build/c/cbits/wrappers.o -O0 -H64m -this-unit-id integer-gmp -Wall -XHaskell2010 -ghcversion-file=/home/zhen/repos/ghc/_build/generated/ghcversion.h -Wno-deprecated-flags GHC stack-space overflow: current limit is<PHONE_NUMBER> bytes. Use the `-K<size>' option to increase it. libraries/integer-gmp/cbits/wrappers.c:25:17: error: fatal error: gmp.h: No such file or directory | 25 | #include <gmp.h> | ^ compilation terminated. `arm-linux-gnueabihf-gcc' failed in phase `C Compiler'. 
(Exit code: 1)
shakeArgsWith     0.000s    0%
Function shake    0.224s    0%
Database read     0.122s    0%
With database     0.005s    0%
Running rules   497.560s   99%  =========================
Total           497.911s  100%
Error when running Shake build system:
* _build/stage1/lib/package.conf.d/integer-gmp-<IP_ADDRESS>.conf
* _build/stage1/libraries/integer-gmp/build/HSinteger-gmp-<IP_ADDRESS>.o
* _build/stage1/libraries/integer-gmp/build/c/cbits/wrappers.o
user error (Development.Shake.cmd, system command failed
Command: _build/stage0/bin/arm-linux-gnueabihf-ghc -Wall -hisuf hi -osuf o -hcsuf hc -static -hide-all-packages -no-user-package-db '-package-db _build/stage1/lib/package.conf.d' '-this-unit-id integer-gmp-<IP_ADDRESS>' '-package-id ghc-prim-<IP_ADDRESS>' -i -i_build/stage1/libraries/integer-gmp/build -i_build/stage1/libraries/integer-gmp/build/autogen -ilibraries/integer-gmp/src/ -Iincludes -I_build/generated -I_build/stage1/libraries/integer-gmp/build -I_build/stage1/libraries/integer-gmp/build/include -Ilibraries/integer-gmp/include -I/home/zhen/repos/ghc/_build/stage1/lib/arm-linux-ghc-8.5.20180425/rts-1.0/include -I_build/generated -optc-I_build/generated -optP-include -optP_build/stage1/libraries/integer-gmp/build/autogen/cabal_macros.h -optc-marm -optc-fno-stack-protector -odir _build/stage1/libraries/integer-gmp/build -hidir _build/stage1/libraries/integer-gmp/build -stubdir _build/stage1/libraries/integer-gmp/build -optc-std=c99 -optc-Wall -optc-marm -optc-fno-stack-protector -optc-Iincludes -optc-I_build/generated -optc-I_build/stage1/libraries/integer-gmp/build -optc-I_build/stage1/libraries/integer-gmp/build/include -optc-Ilibraries/integer-gmp/include -optc-I/home/zhen/repos/ghc/_build/stage1/lib/arm-linux-ghc-8.5.20180425/rts-1.0/include -Wnoncanonical-monad-instances -optc-Werror=unused-but-set-variable -optc-Wno-error=inline -c libraries/integer-gmp/cbits/wrappers.c -o _build/stage1/libraries/integer-gmp/build/c/cbits/wrappers.o -O0 -H64m -this-unit-id integer-gmp -Wall
-XHaskell2010 -ghcversion-file=/home/zhen/repos/ghc/_build/generated/ghcversion.h -Wno-deprecated-flags
Exit code: 1
Stderr:
GHC stack-space overflow: current limit is <PHONE_NUMBER> bytes.
Use the `-K<size>' option to increase it.
libraries/integer-gmp/cbits/wrappers.c:25:17: error:
     fatal error: gmp.h: No such file or directory
   |
25 | #include <gmp.h>
   |                 ^
compilation terminated.
`arm-linux-gnueabihf-gcc' failed in phase `C Compiler'. (Exit code: 1) )

These are the errors that I encountered under the new configure flags. I believe this has been fixed.
GITHUB_ARCHIVE
If you haven’t already, please see Getting Started for information on how to install, build and run the plugin in Unreal Engine 4. Whether you have installed trueSKY via the Git Repo or via the binary installer, start by creating and opening a new project. When you first open the scene you will probably see the default Sky and Fog folder in the World Outliner. If so, then it is important to remove the contents of this folder (this may require deleting them individually and not just deleting the folder itself), as the default Fog and Sky sphere can cause issues with trueSKY. After doing this, you should see the default atmospherics disappear, leaving a black sky. Now we can insert a trueSKY Sequence Actor. To do so, click Window -> Add Sequence to Scene. Find the TrueSkySequenceActor in the World Outliner, and in the Details Panel, click on Active Sequence. If there are no existing trueSKY sequences to load, we can create a new one by clicking the Create New Asset option. Name this whatever you wish and find a location in which to save it. Once done, it will be automatically assigned to the TrueSkySequenceActor. You can change the sequence currently in use at any time by clicking on the Active Sequence option. The sky in your scene should now appear blue, but will lack clouds or anything interesting. To liven up the scene, find your newly created sequence asset in the Content Browser and double click it. This will load the trueSKY Sequencer. If you haven’t already, input your license key into the sequencer to allow full access. Read more about the Sequencer here. For now however we will just add some basic clouds to the scene. To do this, right click in the 3D Clouds section of the timeline and click “New cloud keyframe”. Select the keyframe and try experimenting with the values. You can also use the Presets on the left hand side of the sequencer to select predefined arrangements. 
Be sure to set the Wind Speed value to something greater than 0 if you wish to see the clouds move (once time is being progressed). To further enhance the scene, you can also add a keyframe for 2D clouds, by right clicking on the 2D Clouds area and creating a keyframe in the same manner as before. Read more about both kinds of clouds here.

In this example I have set the Wind Speed to 1, raised the cloudbase to 5.0 and increased cloudiness to 0.6. I have also added a 2D cloud keyframe with default settings.

Though we can now see clouds, you will notice that they aren’t moving, even if the Wind Speed has been increased above 0. To get some movement in the scene we can use the Blueprint system to drive trueSKY. To do this we will want to progress the trueSKY time. Open up the level blueprint and create an Event Tick event. Next create a float variable to store the current time, then connect the Event Tick event to a Setter for this variable.

For framerate independence, we should use Delta Time to progress the trueSKY time. To do this, create a Getter for the Time variable and then a “Float + Float” function and a “Float / Float” function. Connect the Delta Seconds pin from the Event Tick event to the divide function, then connect the output from this to the addition function. Next connect the Time Getter to the other input pin of the addition function, and then connect the output pin of the addition function to the Time Setter. Lastly find the trueSKY Set Time function and connect the Time Setter to this, and pass the Time variable output pin to the Set Time function’s float input pin.

If you press Play/Simulate now you will see the clouds move and the day turn to night very quickly. This is because we are not scaling the raw Delta Time input just yet, so every real time second is progressing trueSKY by one whole day.
It is unlikely that you would want the days to move so quickly, so try replacing the second argument of the division function that is receiving the Delta Time input with a more suitable value. For example, a value of 60 will equate one trueSKY day to 60 seconds, a value of 3600 would equal an hour and a value of 86400 (60 x 60 x 24) would simulate a real day. In this example I am using 600, so a full day will pass in ten minutes. You may find that you need to change the Wind Speed variable to suit the rate at which you are changing time. Additionally, if you want the scene to start at a specific time, try changing the default value of the Time float variable (where 0.0 is the start of the first day, 0.5 is noon on the first day and 1.0 is the start of the second day).

Now that time is moving and the clouds are behaving properly, the next element of trueSKY to configure is the lighting. By default your scene should already have a directional light present. Ensure that its Mobility (Details Panel -> Transform) is set to Movable. Open up the TrueSkySequenceActor and assign the directional light under the Lighting group. You can also apply a multiplier to each to scale the brightness of sunlight and moonlight.

The default Unreal Engine Skylight is not dynamic. To get the most out of trueSKY it is advisable to replace this with a TrueSkyLight (Modes -> All Classes). Do not manually capture the cubemap when using this Skylight: it works automatically. You can adjust the update frequency (the default of 4 means the TrueSkyLight is updated every four frames), and the Diffuse and Specular brightness - it’s recommended not to change these dynamically, but to choose in advance the values that work best. The Blend property controls how smoothly (and thus, slowly) the light changes over time. It’s also worth remembering to set the Render Textures for Loss, Inscatter and Cloud Visibility (these are important when rendering transparent materials alongside trueSKY).
With that your scene is now fully configured and should be showing moving clouds and correct lighting. Of course this is a very basic scene, so to learn more and get the most out of trueSKY for Unreal, please see the further information section below.

To enable cloud shadows, connect the render texture “CloudShadowRT” to the “Cloud Shadow” slot of the trueSKY Sequence Actor. Assign the material function M_LightFunction to the “Light Function Material” slot of your Directional Light. To increase the range of the shadows, adjust the Directional Light’s Dynamic Shadow Distance, under Cascaded Shadow Maps.

To prevent rain from falling in covered areas, create a SceneCapture2D actor, and give it a texture target that contains only a red channel (e.g. RainDepthRT from the trueSKY content). Make the Capture Source “SceneDepth in R”. You don’t need to enable “Capture Every Frame” unless you expect the geometry to change. Rotate the Scene Capture 2D to face upwards. On the trueSKY Sequence Actor, assign the Scene Capture 2D actor to the Rain Mask SceneCapture property. Now, rain will only appear where there is no cover above the Scene Capture actor.

You can use trueSKY to create stationary skyboxes, to produce procedural weather states with no impact on GPU or CPU usage. To do this, add a trueSKY Sequence Actor as normal, and set its Visible flag to false. Add a TrueSkyLight also. In the TrueSky content folder, find the Blueprint Class, SM_SkySphere, and add an instance to the scene. This adds a background skybox using the BackgroundCube Mesh and the material StaticSky_M. The material uses a lookup into the cubemap as the Emissive colour. To update the skybox, call the trueSKY Blueprint function Render to Cubemap, shown here updating StaticSkyboxRT when Tab is pressed. The TrueSkyLightComponent input would be the component from your TrueSkyLight Actor in the scene.
The resulting skybox will be sampled from the same position as the TrueSkyLight ambient lighting, which will also be resampled, along with all other standard TrueSKY updates, when Render To Cubemap is called. To show the trueSKY debugging overlays, assign the Render Target Texture TrueSkyOverlayRT (under TrueSkyPlugin/Overlay content) to the Overlay RT slot in the trueSKY Sequence Actor. The RT texture is then used with the PostProcessVolume: under Rendering Features, add a Post Process Material, and set it to the TrueSkyOverlay material reference.
OPCFW_CODE
How to give access to other domains?

Hey! How do I get CORS access from other domains? For instance: I want to use api.site.com as the backend and site.com as the frontend. I'll send requests via jQuery for downloading YouTube videos, creating mixes (separating), and for getting URLs to files. But when I run JS code from site.com there is one problem:

Access to XMLHttpRequest at 'https://api.site.com/api/source-track/youtube/' from origin 'https://site.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

I tried to install django-cors-headers BUT it doesn't work, pls help me

and how to separate not to 4 stems (other, bass, drums and vocal) but to 2 stems (vocal and instrumental)?

Have you tried playing around with the CORS_ALLOWED_ORIGINS setting to allow requests from https://site.com?

and how to separate not to 4 stems (other, bass, drums and vocal) but to 2 stems (vocal and instrumental)?

This is not currently supported; only 4-stem separation works.

I installed django-cors-headers by running: python -m pip install django-cors-headers

I've added these lines to the file django_react/settings_docker.py:

INSTALLED_APPS = (
    ...
    'corsheaders',
    ...
)

MIDDLEWARE = [
    ...,
    'corsheaders.middleware.CorsMiddleware',
    'django.middleware.common.CommonMiddleware',
    ...,
]

ALLOWED_ORIGINS = [
    "*"
]

CORS_ALLOW_ALL_ORIGINS = True  # If this is used then `CORS_ALLOWED_ORIGINS` will not have any effect

CORS_ALLOW_CREDENTIALS = True

CORS_ALLOW_METHODS = [
    'DELETE',
    'GET',
    'OPTIONS',
    'PATCH',
    'POST',
    'PUT',
]

CORS_ALLOW_HEADERS = [
    'accept',
    'accept-encoding',
    'authorization',
    'content-type',
    'dnt',
    'origin',
    'user-agent',
    'x-csrftoken',
    'x-requested-with'
]

I don't see any problems with the above. Are you using Docker? And where did you run python -m pip install django-cors-headers? You may need to add django-cors-headers to requirements.txt.
root@e20135:~/karaoke# python3 -m pip install -r requirements.txt
Ignoring python-magic-bin: markers 'sys_platform != "linux"' don't match your environment
Requirement already satisfied: django-cors-headers in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 1)) (3.10.1)
Collecting demucs==3.0.3 (from -r requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/17/24/3b75e03243651603b95d8dbdb99eba104a464d3e8ca26d1ad54d95a985c5/demucs-3.0.3.tar.gz
Collecting absl-py==0.12.0 (from -r requirements.txt (line 3))
  Using cached https://files.pythonhosted.org/packages/92/c9/ef0fae29182d7a867d203f0eff8296b60da92098cc41db33a434f4be84bf/absl_py-0.12.0-py3-none-any.whl
Collecting amqp==5.0.6 (from -r requirements.txt (line 4))

Try adding corsheaders.middleware.CorsMiddleware as the first item in MIDDLEWARE?

Try adding corsheaders.middleware.CorsMiddleware as the first item in MIDDLEWARE?

INSTALLED_APPS = (
    'corsheaders',
    ...
)

MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',
    ...]

It was already the first item in MIDDLEWARE.

Hmm, maybe try checking the network requests and headers in your browser's Network dev tools. Are all requests missing that Access-Control-Allow-Origin header? Could also be something with the Nginx config.
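For reference, a minimal working django-cors-headers configuration looks roughly like the following. One detail worth double-checking in the config quoted above: the per-origin setting is named CORS_ALLOWED_ORIGINS, not ALLOWED_ORIGINS (the values below are illustrative):

```python
# Minimal django-cors-headers setup (sketch; app/origin values are examples).

INSTALLED_APPS = [
    # ... your other apps ...
    "corsheaders",
]

MIDDLEWARE = [
    # CorsMiddleware should be placed as high as possible,
    # in particular before CommonMiddleware.
    "corsheaders.middleware.CorsMiddleware",
    "django.middleware.common.CommonMiddleware",
    # ... your other middleware ...
]

# Either allow every origin (fine for debugging, not for production) ...
CORS_ALLOW_ALL_ORIGINS = True

# ... or, instead, list explicit origins:
CORS_ALLOWED_ORIGINS = [
    "https://site.com",
]
```

If the app runs inside Docker, the package also has to be installed in the container image (e.g. via requirements.txt), not just on the host.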
GITHUB_ARCHIVE
Maven release from a branch by Jenkins

I have to perform a Maven release from a branch with Jenkins and tag the RELEASE. At the same time, I need to update the version (new SNAPSHOT) on trunk. For example:

/trunk contains Module_1.0.0-SNAPSHOT
/branches/Module_1.0.0-SNAPSHOT

After performing a Maven release on /branches/Module_1.0.0-SNAPSHOT:

/trunk contains Module_1.0.1-SNAPSHOT
/tag/Module_1.0.0-RELEASE

With the maven-release-plugin and scm (url, connection, developerConnection) set, /tag/Module_1.0.0-RELEASE and the new version on the branch work correctly. But even with developerConnection pointing to trunk, it doesn't update the version on trunk. How could I achieve that? Thanks in advance.

If you run the Maven release plugin on your branch (/branches/Module_1.0.0-SNAPSHOT), it will:

Update the version number on this branch (1.0.0-SNAPSHOT --> 1.0.0)
Apply a tag on this branch (tag/1.0.0 or something like that)
Move your branch to 1.0.1-SNAPSHOT

Even if you set the developerConnection property, the Maven release plugin will not update the version number of your trunk branch. If you want to increase the version number on the trunk, you have to release from the trunk (or release from the branch + merge your branch to your trunk).
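To illustrate why developerConnection does not retarget the version bump, here is a sketch of the relevant POM section (repository URLs are placeholders). The release plugin commits through developerConnection, but it always edits the POM of the working copy it was invoked in, so pointing the SCM at trunk does not make it bump trunk's version:

```xml
<!-- Sketch: SCM configuration for releasing from the branch itself. -->
<scm>
  <connection>scm:svn:http://svn.example.com/repo/branches/Module_1.0.0-SNAPSHOT</connection>
  <developerConnection>scm:svn:https://svn.example.com/repo/branches/Module_1.0.0-SNAPSHOT</developerConnection>
  <url>http://svn.example.com/repo/branches/Module_1.0.0-SNAPSHOT</url>
</scm>
```

If you only need trunk's version bumped after the branch release, one option is a separate Jenkins step that checks out trunk and runs the versions-maven-plugin there, e.g. mvn versions:set -DnewVersion=1.0.1-SNAPSHOT, followed by a commit.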
OPCFW_CODE
January 2nd, 2013, 07:57 AM
Join Date: Sep 2010
Location: Xilinhot, China 中国锡林浩特
Device(s): Samsung Galaxy Win Duos, Lenovo P700i
Carrier: China Mobile, China Unicom.
Thanked 2,488 Times in 1,781 Posts

Hello and happy new year francismel, welcome to Android Forums.

Originally Posted by francismel
Its my first time in here. I finally decided to lookf or some help after how many months of purchasing an android tablet in spain and since being back in london. the stupid thing is not working as it should. I am unable to connect my wifi to this thing. The only way it works is if it is via my computer. The other problem is that it keeps sayign sd card is full, however, there is no sd card in th slot. So confused right about now. i even gave it to my brother to have as he wanted it and i could not use it, but he also cannto work it either. Is it that i have been cheated with from spain???? Someoen help me please!

I think so. What you describe sounds typical of an el-cheapo Chinese tablet. They often don't have much internal storage, which tends to fill up rather quickly, and I'm sure this is what it's referring to as "SD-card" or internal memory card. The best thing you can do here is put an SD-card in the slot, external memory card, and try and move apps and other content to it, and clear out any caches as well, like the internet browser.

The problematic WiFi is a frequent issue with cheapo tablets as well. You could try resetting your WiFi router, that can sometimes clear any connectivity issues. See if it works on open public WiFi. Unfortunately if you can't return it, i.e. going back to Spain, you'll have to put up with it. You might want to read the forum sticky about these things... Off-brand Phones and Tablets: Worth the Low Cost?

BTW is your tablet the "Sharpixels" you got listed in your profile > devices? I just Google'd "Sharpixels" and the second hit was another forum post, "I bought a sharpixels tablet on Tenerife", but apparently this one has bricked itself.
Which is another all too frequent problem with cheapo off-brand Android tablets. The People's Guide to Android in the People's Republic. Honorary Grand Poobah Shenzhen University English Corner. There are nine million bicycles in Beijing. There are nine million Androids in Shenzhen. Last edited by mikedt; January 2nd, 2013 at 08:36 AM.
OPCFW_CODE